Maintenance Love :)

This commit is contained in:
lovebird 2026-03-21 20:18:25 +01:00
parent d4197fe296
commit 0906b299f4
513 changed files with 67002 additions and 13015 deletions

@@ -1,39 +0,0 @@
# 🚀 Deploy Mux Proxy Function
The credentials are now hardcoded in the function. You need to redeploy it:
## Option 1: Via Supabase Dashboard (Easiest)
1. Go to: https://supabase.com/dashboard/project/ytoadlpbdguriiccjnip/functions
2. Find **mux-proxy** in the functions list
3. Click the **⋮** menu (three dots) next to it
4. Select **"Deploy"** or **"Deploy new version"**
5. Upload the updated function files from `supabase/functions/mux-proxy/`
## Option 2: Via Supabase CLI
If you have the CLI installed:
```bash
supabase functions deploy mux-proxy
```
## ✅ After Deployment
1. Go to http://localhost:5173/playground/video-player
2. Sign in
3. Try uploading a video
4. Should work now! 🎉
## ⚠️ Important
The credentials are currently HARDCODED in the function. This is for testing only!
**Before deploying to production:**
1. Set MUX_TOKEN_ID and MUX_TOKEN_SECRET as Supabase secrets
2. Uncomment the env loading lines in `supabase/functions/mux-proxy/index.ts`
3. Remove the hardcoded values
4. Redeploy
See `docs/SETUP_MUX_SECRETS.md` for proper setup instructions.

@@ -1,223 +0,0 @@
# Mux Video Quick Start Guide
## 🎯 What You Have Now
You now have a complete Mux video integration with:
1. **VideoCard Component** - Display videos with Vidstack player
2. **Mux Uploader Integration** - Upload videos directly to Mux
3. **Video Player Playground** - Test at `/playground/video-player`
4. **Supabase Edge Function** - Secure Mux API proxy
## ⚡ Quick Setup (5 minutes)
### Step 1: Get Mux Credentials
1. Go to https://dashboard.mux.com/signup
2. After signing up, go to **Settings** → **Access Tokens**
3. Click **Generate new token**
4. Name it "pm-pics" and enable **Mux Video** permissions
5. Copy the **Token ID** and **Token Secret**
### Step 2: Configure Supabase Secrets
You need to add your Mux credentials as secrets to your Supabase project:
```bash
# Using Supabase CLI
supabase secrets set MUX_TOKEN_ID=your_token_id_here
supabase secrets set MUX_TOKEN_SECRET=your_token_secret_here
```
Or via Supabase Dashboard:
1. Go to your Supabase project dashboard
2. Navigate to **Project Settings** → **Edge Functions** → **Secrets**
3. Add `MUX_TOKEN_ID` and `MUX_TOKEN_SECRET`
### Step 3: Deploy the Edge Function
```bash
supabase functions deploy mux-proxy
```
### Step 4: Test It Out
1. Start your dev server: `npm run dev`
2. Navigate to http://localhost:5173/playground/video-player
3. Sign in (required for uploads)
4. Go to the "Upload Video" tab
5. Drag & drop a video or click to select one
6. Watch it upload, process, and play!
## 📝 How It Works
### The Upload Flow
```
User Selects Video
  ↓
Frontend calls /functions/v1/mux-proxy (create-upload)
  ↓
Edge Function calls Mux API
  ↓
Returns signed upload URL
  ↓
MuxUploader uploads video directly to Mux
  ↓
Video processes on Mux servers
  ↓
Poll for asset creation
  ↓
Get playback ID
  ↓
Play video using Vidstack player
```
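The "poll for asset creation" step can be sketched as a generic retry loop. This is a hypothetical helper, not the playground's actual polling code; the `check` callback stands in for whatever call asks the edge function whether the asset is ready:

```typescript
// Hypothetical polling helper: repeatedly calls `check` until it yields a
// playback ID or the attempt budget is exhausted.
async function pollForPlaybackId(
  check: () => Promise<string | null>,
  { attempts = 30, intervalMs = 2000 }: { attempts?: number; intervalMs?: number } = {}
): Promise<string> {
  for (let i = 0; i < attempts; i++) {
    const id = await check();
    if (id) return id; // asset is ready, playback ID available
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("Mux asset not ready after polling");
}
```

In practice `check` would POST to the mux-proxy function with the upload ID and return `null` while the asset is still processing.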
### Key Concepts
- **Upload ID**: Temporary ID for tracking the upload
- **Asset ID**: Permanent ID for managing the video in Mux
- **Playback ID**: Public ID used to stream the video
### URLs You Get
After uploading, you get these URLs:
**HLS Stream (for playback):**
```
https://stream.mux.com/{PLAYBACK_ID}.m3u8
```
**Thumbnail:**
```
https://image.mux.com/{PLAYBACK_ID}/thumbnail.jpg
```
**MP4 Download (if enabled):**
```
https://stream.mux.com/{PLAYBACK_ID}/high.mp4
```
## 💾 Save to Database
The playground has a "Save to Database" button that stores:
```typescript
{
  user_id: current_user.id,
  title: "Video Title",
  description: "Description",
  video_url: "https://stream.mux.com/{playback_id}.m3u8",
  thumbnail_url: "https://image.mux.com/{playback_id}/thumbnail.jpg",
  meta: {
    mux_asset_id: "asset_abc123",
    mux_playback_id: "playback_xyz789"
  }
}
```
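The row above can be assembled from the Mux upload result with a small helper. This is a hypothetical sketch (the function and the `MuxUploadResult` shape are illustrative, not part of the playground code), using the URL patterns documented earlier:

```typescript
// Hypothetical helper mirroring the "Save to Database" payload above.
interface MuxUploadResult {
  assetId: string;
  playbackId: string;
}

function buildVideoRow(userId: string, title: string, description: string, upload: MuxUploadResult) {
  return {
    user_id: userId,
    title,
    description,
    video_url: `https://stream.mux.com/${upload.playbackId}.m3u8`,
    thumbnail_url: `https://image.mux.com/${upload.playbackId}/thumbnail.jpg`,
    meta: {
      mux_asset_id: upload.assetId,
      mux_playback_id: upload.playbackId,
    },
  };
}
```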
## 🎨 Using in Your App
### Upload Component
```tsx
import MuxUploader from "@mux/mux-uploader-react";
import { supabase } from "@/integrations/supabase/client";

function MyUploader() {
  const fetchUploadUrl = async () => {
    // Grab the current session so the edge function call is authorized
    const { data: { session } } = await supabase.auth.getSession();
    const response = await fetch(
      `${supabase.supabaseUrl}/functions/v1/mux-proxy`,
      {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${session?.access_token}`,
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({ action: 'create-upload' }),
      }
    );
    const { data } = await response.json();
    return data.url;
  };

  return (
    <MuxUploader
      endpoint={fetchUploadUrl}
      onSuccess={(e) => console.log('Done!', e.detail)}
    />
  );
}
```
### Video Player
```tsx
import VideoCard from "@/components/VideoCard";

function MyVideo({ video }) {
  return (
    <VideoCard
      videoId={video.id}
      videoUrl={video.video_url}
      thumbnailUrl={video.thumbnail_url}
      title={video.title}
      author={video.author_name}
      authorId={video.user_id}
      likes={video.likes_count || 0}
      comments={video.comments_count || 0}
      description={video.description}
    />
  );
}
```
## 🆓 Pricing
Mux offers **$20/month in free credits**, which includes:
- ~40 minutes of video encoding
- ~100 hours of video streaming
Perfect for testing and small projects!
## 🔧 Troubleshooting
**Upload button doesn't appear**
- Make sure you're signed in
- Check that edge function is deployed
**Upload fails**
- Verify Mux credentials are set as Supabase secrets
- Check browser console for errors
- Make sure edge function has correct environment variables
**Video stuck processing**
- Large videos take time (can be 5-10 minutes for HD)
- Check Mux dashboard: https://dashboard.mux.com
- Look for the asset in the "Assets" section
**Video won't play**
- Verify the HLS URL format: `https://stream.mux.com/{playback_id}.m3u8`
- Check that playback policy is "public"
- Try the URL directly in your browser
## 📚 More Info
See `docs/mux-integration.md` for detailed documentation including:
- Complete API reference
- Webhook setup
- Advanced configuration
- Production best practices
## 🎉 You're Done!
You now have:
- ✅ VideoCard component for displaying videos
- ✅ Mux upload integration
- ✅ Secure API proxy via Edge Functions
- ✅ Video player playground
- ✅ Database storage ready
Go to `/playground/video-player` and start uploading! 🎬

@@ -1,89 +0,0 @@
# 🔐 Setup Mux Secrets in Supabase
## Your Credentials
```
MUX_TOKEN_ID: 3ceb1723-1274-48ed-bc1d-0ab967f2dda5
MUX_TOKEN_SECRET: kYuAFBuOEiA+XZD8qRfgv6rcLVTJWdOLUTrLhiYagVej8UCRdjSzxOAFpvFQJHePcDd/KhqFXcE
```
## ⚡ Quick Setup (2 minutes)
### Step 1: Open Supabase Dashboard
Go to: https://supabase.com/dashboard/project/ytoadlpbdguriiccjnip/settings/functions
### Step 2: Add Secrets
Look for **"Secrets"** or **"Environment Variables"** section.
Click **"New secret"** or **"Add secret"** and add:
**First Secret:**
- Name: `MUX_TOKEN_ID`
- Value: `3ceb1723-1274-48ed-bc1d-0ab967f2dda5`
**Second Secret:**
- Name: `MUX_TOKEN_SECRET`
- Value: `kYuAFBuOEiA+XZD8qRfgv6rcLVTJWdOLUTrLhiYagVej8UCRdjSzxOAFpvFQJHePcDd/KhqFXcE`
### Step 3: Save & Verify
1. Click **Save** or **Add**
2. You should see both secrets listed (values will be hidden)
### Step 4: Redeploy Function (if needed)
If the function still doesn't work after adding secrets:
1. Go to: https://supabase.com/dashboard/project/ytoadlpbdguriiccjnip/functions
2. Find **mux-proxy** in the list
3. Click the **⋮** menu (three dots)
4. Select **"Redeploy"** or **"Deploy new version"**
## ✅ Test It
1. Go to http://localhost:5173/playground/video-player
2. Sign in
3. Try uploading a video
4. Should work now! 🎉
## 🔍 Troubleshooting
### Still getting "Mux credentials not configured"?
**Check #1: Are secrets set?**
- Dashboard → Settings → Functions → Secrets
- You should see `MUX_TOKEN_ID` and `MUX_TOKEN_SECRET` listed
**Check #2: Is function deployed?**
- Dashboard → Edge Functions
- `mux-proxy` should show as "Active" or "Deployed"
**Check #3: Redeploy**
- Sometimes secrets don't update until you redeploy
- Click the ⋮ menu next to mux-proxy → Redeploy
**Check #4: Browser console**
- Open DevTools (F12)
- Look for detailed error messages
### Different error?
Check the browser console and edge function logs:
- Dashboard → Edge Functions → mux-proxy → Logs
## 📝 Notes
- **Local .env file**: Only used for local development, NOT for edge functions
- **Edge function secrets**: Must be set in Supabase Dashboard
- **Security**: Secrets are encrypted and never exposed to the client
- **Updates**: If you change secrets, redeploy the function
## 🎯 Quick Links
- Mux Dashboard: https://dashboard.mux.com
- Supabase Project: https://supabase.com/dashboard/project/ytoadlpbdguriiccjnip
- Edge Functions: https://supabase.com/dashboard/project/ytoadlpbdguriiccjnip/functions
- Function Settings: https://supabase.com/dashboard/project/ytoadlpbdguriiccjnip/settings/functions

@@ -0,0 +1,10 @@
https://github.com/google/adk-samples/blob/main/typescript/agents/customer_service/README.md
https://github.com/google/adk-js.git
https://github.com/google/adk-samples
https://google.github.io/adk-docs/integrations/
https://google.github.io/adk-docs/integrations/stripe/#available-tools
https://google.github.io/adk-docs/sessions/session/

@@ -1,175 +0,0 @@
# Cache-EX: Per-Item Cache Invalidation Architecture
> Replaces the current "nuke everything" AppCache invalidation with surgical, per-item operations.
## Problem
A single post title update currently triggers **~50+ cache invalidation events** and SSE broadcasts:
- `flushPostsCache()` → `appCache.invalidate('posts')` → cascades to `feed`
- `flushPicturesCache()` → `appCache.invalidate('pictures')` → cascades to `posts → feed`, `pages → feed`
- Each cascade node emits an SSE event
- Unknown source fires `flushPicturesCache` 9 times (once per picture row)
- Client `StreamInvalidator` receives ~20 SSE events, most for types it doesn't know (`feed`, `system`)
## Design
### Core Principle: Separate Cache Clearing from Notification
| Concern | Current | Cache-EX |
|---|---|---|
| Cache clearing | `invalidate(type)` → blast everything + SSE + cascade | `invalidate(type)` → clear cache silently |
| SSE notification | Embedded in cache clearing (N events per cascade) | `notify(type, id, action)` → 1 explicit SSE per handler |
| Client invalidation | Broad type-based (`invalidateQueries(['posts'])`) | Per-item (`invalidateQueries(['post', id])`) |
### Server Architecture
```
┌──────────────────────────┐
│ Route Handler │
│ handleUpdatePost(id) │
├──────────────────────────┤
│ 1. DB write │
│ 2. invalidate('posts') │ ← silent, no SSE
│ 3. notify('post',id, │ ← exactly 1 SSE event
│ 'update') │
└──────────────────────────┘
▼ SSE
┌──────────────────────────┐
│ StreamInvalidator │
│ event: post:abc:update │
├──────────────────────────┤
│ invalidateQueries( │
│ ['post', 'abc'] │ ← surgical
│ ) │
└──────────────────────────┘
```
### AppCache Changes
```ts
// BEFORE: invalidate() does cache clearing + SSE + cascade
invalidate(type: string): Promise<void>
// AFTER: invalidate() is silent cache clearing only
invalidate(type: string): Promise<void> // clears cache + cascade, NO SSE
notify(type: string, id: string | null, action: 'create' | 'update' | 'delete'): void // 1 SSE
```
### Event Shape
```ts
// BEFORE: { kind:'cache', type:'posts', action:'delete', data:{type:'posts'} }
// AFTER:
interface CacheEvent {
  kind: 'cache';
  type: string;                             // 'post', 'page', 'category', 'picture'
  action: 'create' | 'update' | 'delete';
  id: string | null;                        // entity ID (null = list-level change)
  data?: any;                               // optional payload for optimistic updates
}
```
### Client StreamInvalidator
```ts
// Per-item invalidation with dependency awareness
const INVALIDATION_RULES: Record<string, (id: string | null, qc: QueryClient) => void> = {
  'post': (id, qc) => {
    if (id) qc.invalidateQueries({ queryKey: ['post', id] });
    qc.invalidateQueries({ queryKey: ['posts'] });
    qc.invalidateQueries({ queryKey: ['feed'] });
  },
  'picture': (id, qc) => {
    if (id) qc.invalidateQueries({ queryKey: ['picture', id] });
    qc.invalidateQueries({ queryKey: ['pictures'] });
  },
  'page': (id, qc) => {
    if (id) qc.invalidateQueries({ queryKey: ['page', id] });
    qc.invalidateQueries({ queryKey: ['pages'] });
  },
  'category': (id, qc) => {
    qc.invalidateQueries({ queryKey: ['categories'] });
  },
};
```
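Dispatching an incoming event against such rules is then a single lookup. A self-contained sketch with a reduced rule set, where `QueryClientLike` is the QueryClient surface the rules actually use (the names here are illustrative):

```typescript
// Minimal dispatch sketch: look up the rule for the event type and apply it.
interface QueryClientLike {
  invalidateQueries(opts: { queryKey: (string | null)[] }): void;
}

interface IncomingCacheEvent {
  type: string;
  id: string | null;
  action: string;
}

const RULES: Record<string, (id: string | null, qc: QueryClientLike) => void> = {
  post: (id, qc) => {
    if (id) qc.invalidateQueries({ queryKey: ["post", id] }); // surgical
    qc.invalidateQueries({ queryKey: ["posts"] });            // list-level
  },
};

function dispatch(event: IncomingCacheEvent, qc: QueryClientLike): boolean {
  const rule = RULES[event.type];
  if (!rule) return false; // unknown type: ignore instead of a broad flush
  rule(event.id, qc);
  return true;
}
```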
---
## Milestones
### M0: Revert & Baseline ✦ prerequisite
- [ ] Revert debounce changes from `cache.ts` (return to pre-investigation state)
- [ ] Revert `StreamInvalidator.tsx` changes (remove feed/system mappings, keep logging)
### M1: Silent Invalidation ✦ server-only
- [ ] `AppCache.invalidate()` → remove all `appEvents.emitUpdate()` calls
- [ ] `AppCache.flush()` → remove SSE emission
- [ ] `AppCache.notify(type, id, action)` → new method, emits exactly 1 SSE
- [ ] Remove `_isRoot` parameter (no longer needed — no cascade SSE)
- [ ] Keep dependency cascade for cache clearing (still needed server-side)
- [ ] Adjust SSE event shape to include `id` field
### M2: Handler-Level Notification ✦ server-only
- [ ] `handleUpdatePost` → add `appCache.notify('post', postId, 'update')`
- [ ] `handleCreatePost` → add `appCache.notify('post', newId, 'create')`
- [ ] `handleDeletePost` → add `appCache.notify('post', postId, 'delete')`
- [ ] `handleUpsertPictures` → add `appCache.notify('post', postId, 'update')` (pictures belong to a post)
- [ ] `handleUnlinkPictures` → add `appCache.notify('pictures', null, 'update')`
- [ ] `handleUpdatePicture` → add `appCache.notify('picture', picId, 'update')`
- [ ] `handleCreatePicture` → add `appCache.notify('picture', picId, 'create')`
- [ ] `handleDeletePicture` → add `appCache.notify('picture', picId, 'delete')`
- [ ] Page handlers (`pages-crud.ts`) → add `appCache.notify('page', pageId, ...)`
- [ ] Category handlers → add `appCache.notify('category', catId, ...)`
- [ ] Type handlers → add `appCache.notify('type', typeId, ...)`
- [ ] Layout handlers → add `appCache.notify('layout', layoutId, ...)`
- [ ] Flush-all handler → add `appCache.notify('system', null, 'delete')`
### M3: Client StreamInvalidator ✦ client-only
- [ ] Replace `EVENT_TO_QUERY_KEY` map with `INVALIDATION_RULES` (function-based)
- [ ] Parse `event.id` from SSE payload
- [ ] Per-item `invalidateQueries` when `id` is present
- [ ] Fallback to list-level invalidation when `id` is null
### M4: E2E Test ✦ verification
- [ ] Create `cache-ex.e2e.test.ts` following existing test patterns
- [ ] Test: post update → SSE emits exactly 1 event (type='post', has id)
- [ ] Test: picture create → SSE emits exactly 1 event (type='picture', has id)
- [ ] Test: category update → SSE emits exactly 1 event (type='category', has id)
- [ ] Test: no cascade SSE events (verify `feed`, `pages` events are NOT emitted)
- [ ] Test: cache is still cleared correctly (type + dependents)
- [ ] Test: flush-all → exactly 1 SSE event (type='system')
- [ ] Add `test:cache-ex` script to `package.json`
---
## Files Changed
| File | Change |
|---|---|
| `server/src/cache.ts` | Rewrite `invalidate()`, add `notify()`, remove debounce |
| `server/src/events.ts` | Update `AppEvent` interface with `id` field |
| `server/src/products/serving/db/db-posts.ts` | Add `notify()` calls to handlers |
| `server/src/products/serving/db/db-pictures.ts` | Add `notify()` calls to handlers |
| `server/src/products/serving/pages/pages-crud.ts` | Add `notify()` calls to handlers |
| `server/src/products/serving/db/db-categories.ts` | Add `notify()` calls to handlers |
| `server/src/products/serving/db/db-types.ts` | Add `notify()` calls to handlers |
| `server/src/products/serving/index.ts` | Update flush-all handler |
| `src/components/StreamInvalidator.tsx` | Replace map with function-based rules |
| `server/src/products/serving/__tests__/cache-ex.e2e.test.ts` | New test file |
| `server/package.json` | Add `test:cache-ex` script |
## Verification
### Automated (E2E)
```bash
cd server
npm run test:cache-ex
```
### Manual
1. Start dev servers (`npm run dev` in both client/server)
2. Open browser console, filter for `[StreamInvalidator]`
3. Edit a post title → save
4. Expected: exactly **1-2 SSE log lines** (`post:xyz:update`), no cascade spam
5. Run `npm run build` in server to verify TypeScript compiles

packages/ui/docs/cache.md

@@ -0,0 +1,61 @@
# Polymech Cache Invalidation & Mirroring
This document outlines the architecture for caching data on the Polymech platform, specifically how mutations invalidate Server/Client state and mirror those invalidations to secondary servers (e.g. from local to production instances).
## Core Mechanisms
Cache logic splits into:
1. **Server-Side Cache** (`redis` or `memory` via `getCache()`): Used in API routes to hold heavy queries and `pages`.
2. **Client-Side Cache** (`@tanstack/react-query`): The frontend data store holding currently loaded application state.
3. **Server-Sent Events (SSE)**: Event streams that push real-time cache mutations from the backend to connected frontend clients.
4. **Cache Mirroring**: HTTP broadcasts that forward local invalidations outwards to secondary/production domains using service accounts.
## Flow of a Cache Invalidation
When a user modifies data (e.g. Updating a Post) locally:
1. **Mutation**: A request is sent to an API endpoint (e.g. `PATCH /api/posts/:id`).
2. **Local Invalidation**:
- The handler calls `appCache.invalidate('post')` inside [`server/src/cache.ts`](../server/src/cache.ts).
- The Cache clears the specific entities and walks the dependency tree (e.g., `post` depends on `pictures` and `categories`).
3. **SSE Broadcast**:
- The handler issues `appCache.notify('post', id, 'update')`.
- The backend pushes an SSE `app-update` down the wire.
4. **Client-Side Reset**:
- The React frontend listens to the stream in [`src/components/StreamInvalidator.tsx`](../src/components/StreamInvalidator.tsx) via [`src/contexts/StreamContext.tsx`](../src/contexts/StreamContext.tsx).
- Depending on the entity, `StreamInvalidator` resets the specific `post` ID in React Query, and forcefully refetches the overarching `posts`, `pages`, and `feed` caches.
5. **Mirror Broadcast**:
- During `appCache.invalidate()`, if mirroring isn't explicitly disabled, [`server/src/mirror.ts`](../server/src/mirror.ts) takes the `type` and broadcasts it to the domains listed in `.env` under `CACHE_MIRRORS`.
- It acquires a JWT token utilizing `SERVICE_EMAIL` and `SERVICE_PASSWORD` (or `TEST_EMAIL`).
- A `POST` goes out to `https://service.polymech.info/api/cache/invalidate` including `mirror: false` in the payload (to prevent an infinite broadcast loop).
6. **Remote Revalidation**:
- The remote (e.g., Production) instance receives the `invalidate` request at [`server/src/products/serving/index.ts`](../server/src/products/serving/index.ts).
- It internally invalidates its own Redis/Memory cache utilizing `appCache.invalidate(type, true /* preventMirror */)`.
- It issues an SSE to all *its* connected clients `appCache.notify(type, null, 'update')`, which triggers Step 4 for public visitors!
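The loop guard in steps 5–6 boils down to one check before broadcasting. This is an illustrative sketch, not the actual `mirror.ts` API; `shouldMirror` and the payload shape are hypothetical names:

```typescript
// Illustrative only: a payload carrying `mirror: false` arrived via a mirror
// broadcast and must never be re-broadcast, or instances would loop forever.
interface InvalidatePayload {
  type: string;
  mirror: boolean;
}

function shouldMirror(payload: InvalidatePayload, mirrorDomains: string[]): string[] {
  if (!payload.mirror) return []; // came from another instance: stop the loop
  // Each outgoing request would carry { ...payload, mirror: false }.
  return mirrorDomains;
}
```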
## Dependency Graph Configuration
In [`server/src/cache.ts`](../server/src/cache.ts), dependencies orchestrate cascade invalidation:
```typescript
private static DEPENDENCIES: Record<string, string[]> = {
  'posts': ['categories', 'pictures'], // Changing a picture/category invalidates posts
  'pages': ['categories', 'pictures', 'translations'],
  'categories': ['types'],
  'translations': [],
  'feed': ['posts', 'pages', 'categories'],
  'auth': []
};
```
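The cascade over this graph amounts to a fixed-point walk over "who lists me as a dependency". A self-contained sketch (the `cascadeTargets` helper is illustrative, not the real `AppCache` code), using the same table:

```typescript
// Walks the DEPENDENCIES graph: invalidating `type` also invalidates every
// entry that lists it, directly or transitively, as a dependency.
const DEPENDENCIES: Record<string, string[]> = {
  posts: ["categories", "pictures"],
  pages: ["categories", "pictures", "translations"],
  categories: ["types"],
  translations: [],
  feed: ["posts", "pages", "categories"],
  auth: [],
};

function cascadeTargets(type: string): string[] {
  const hit = new Set<string>([type]);
  let grew = true;
  while (grew) {
    grew = false;
    for (const [parent, children] of Object.entries(DEPENDENCIES)) {
      if (!hit.has(parent) && children.some((c) => hit.has(c))) {
        hit.add(parent); // parent depends on something already invalidated
        grew = true;
      }
    }
  }
  hit.delete(type); // report only the extra entries cleared by the cascade
  return [...hit];
}
```

For example, invalidating `pictures` cascades to `posts`, `pages`, and (through `posts`) `feed`.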
## Adding a New Mirrored Cache Endpoint
1. If it represents a new domain model, add it to `DEPENDENCIES` in `AppCache`.
2. Add its alias/hook inside [`src/components/StreamInvalidator.tsx`](../src/components/StreamInvalidator.tsx).
Note: The mirror sends pluralized definitions (`pages`, `types`, `posts`) whereas standard CRUD routes may notify singularly (`page`, `type`, `post`). Map both aliases to the same React Query keys.
3. Utilize `appCache.invalidate('entityName')` inside your API logic. Mirroring happens automatically!
## Debugging
Mirroring successes and errors are not logged to `stdout`. All `mirror.ts` and `cache.ts` activity is written exclusively to [`server/logs/cache.json`](../server/logs/cache.json) via the internal pino `cacheLogger`.

@@ -1,49 +0,0 @@
# Caching Strategy
## 1. Server-Side Caching (The Fast Layer)
**Goal**: Reduce DB load by caching public reads (Feeds, Profiles).
### Cache Adapter Interface
We will use a platform-agnostic interface to support both Memory (Dev/Single-Node) and Redis (Prod/Cluster).
```typescript
// server/src/commons/cache/types.ts
export interface CacheAdapter {
  get<T>(key: string): Promise<T | null>;
  set<T>(key: string, value: T, ttl?: number): Promise<void>;
  del(key: string): Promise<void>;
  flush(pattern?: string): Promise<void>;
}
```
### Implementations
- [ ] **MemoryCache**: Use `lru-cache`. Default for local dev.
- [ ] **RedisCache**: Use `ioredis`. Enabled if `REDIS_URL` is present.
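A minimal in-memory implementation of the adapter above might look like the following. This is a sketch under the assumption of a plain `Map` with TTL timestamps; the planned **MemoryCache** would use `lru-cache` instead:

```typescript
interface CacheAdapter {
  get<T>(key: string): Promise<T | null>;
  set<T>(key: string, value: T, ttl?: number): Promise<void>;
  del(key: string): Promise<void>;
  flush(pattern?: string): Promise<void>;
}

// Sketch only: demonstrates the CacheAdapter contract, not the real MemoryCache.
class SimpleMemoryCache implements CacheAdapter {
  private store = new Map<string, { value: unknown; expiresAt: number | null }>();

  async get<T>(key: string): Promise<T | null> {
    const hit = this.store.get(key);
    if (!hit) return null;
    if (hit.expiresAt !== null && Date.now() > hit.expiresAt) {
      this.store.delete(key); // lazy expiry on read
      return null;
    }
    return hit.value as T;
  }

  async set<T>(key: string, value: T, ttl?: number): Promise<void> {
    // ttl is in seconds, matching the interface's optional TTL
    this.store.set(key, { value, expiresAt: ttl ? Date.now() + ttl * 1000 : null });
  }

  async del(key: string): Promise<void> {
    this.store.delete(key);
  }

  async flush(pattern?: string): Promise<void> {
    if (!pattern) {
      this.store.clear();
      return;
    }
    // Treat the pattern as a simple key prefix (Redis would use SCAN + MATCH)
    for (const key of [...this.store.keys()]) {
      if (key.startsWith(pattern)) this.store.delete(key);
    }
  }
}
```

The prefix-based `flush` keeps the dev adapter behaviorally close to a Redis pattern delete without pulling in glob matching.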
### Usage in `ServingProduct`
- **Feed**: Cache `home-feed` for 60 seconds (Target: [`server/src/products/serving/index.ts`](../server/src/products/serving/index.ts)).
- **Profile**: Cache `profile-{id}` for 5 minutes (Target: [`server/src/products/serving/index.ts`](../server/src/products/serving/index.ts)). Invalidate on profile update webhook.
---
## 2. Client-Side Caching (The Smart Layer)
**Goal**: Eliminate "Double Fetching" and provide instant navigation (Back/Forward).
### TanStack Query (React Query)
- [ ] **Config**: Set global `staleTime` to 5 minutes for "Content" (Posts, Pictures) in [`src/App.tsx`](../src/App.tsx).
- [ ] **Prefetching**:
- On hover of a User Link, `queryClient.prefetchQuery(['profile', id])`.
- On hover of a Post Card, `queryClient.prefetchQuery(['post', id])`.
- [ ] **Hydration**:
- Use `HydrationBoundary` to ingest `window.__INITIAL_STATE__` served by the optimized Server Injection in [`src/App.tsx`](../src/App.tsx).
### Optimistic Updates
- [ ] **Likes**: Update UI immediately. Rollback on error in [`src/components/LikeButton.tsx`](../src/components/LikeButton.tsx) (or relevant component).
- [ ] **Edits**: Update Local Cache immediately. Background sync.
---
## 3. CDN & Static Assets
- [ ] Ensure Supabase Storage bucket is behind a CDN (Cloudflare or Supabase built-in).
- [ ] **Thumbnails**: Use the Resizing Proxy (`/api/images/cache/...`) which natively caches processed images on disk/nginx.

packages/ui/docs/cad.md

@@ -0,0 +1,365 @@
# PolyMech CAD Tools (pm-cad)
**Automate your SolidWorks workflow from the command line.** PolyMech CAD Tools (aka **`pm-cad`**) is a Windows-first CLI and Node.js toolkit that **batch-converts SolidWorks assemblies, parts, and drawings** into downstream-friendly formats—**STEP, PDF, JPG, interactive HTML webviews, XLSX BOMs, and JSON metadata**—without repetitive GUI "Save As" clicks.
If you build products in SolidWorks and need **repeatable exports for manufacturing, quoting, documentation, release packages, or CI/CD**, `pm-cad` turns CAD deliverables into a scriptable, auditable pipeline.
> Keywords: SolidWorks CLI, batch convert SolidWorks, SolidWorks STEP export automation, SolidWorks BOM to Excel, eDrawings HTML export, CAD conversion tool, Pack and Go automation, Node.js CAD automation.
---
## Why pm-cad?
Engineering teams lose hours to manual export routines: opening models, switching configurations, rebuilding, exporting PDFs, STEP files, and BOMs—then doing it again after every change. `pm-cad` replaces all that with **one command** that can process **hundreds of files in parallel**, consistently.
| Pain point | pm-cad solution |
|---|---|
| Manual "Save As" for every file format | **One command** converts entire folders (glob patterns) into multiple outputs (e.g., `step\|pdf\|jpg`) |
| No easy way to export BOMs | Generates **XLSX bills of materials** directly from assemblies |
| HTML previews require eDrawings seats | Batch-exports **interactive HTML webviews** via the **eDrawings API** (eDrawings is free) |
| Configuration variants are tedious | Iterates **all configurations automatically** with `${CONFIGURATION}` |
| Hard to integrate into build pipelines | Use it as a **CLI**, **Node.js library**, or **Grunt task** |
---
## What you can do with PolyMech CAD Tools
### 1) Batch CAD conversion (SolidWorks → STEP/PDF/JPG)
Export manufacturing and documentation deliverables from **parts, assemblies, and drawings**:
- **STEP** for CAM, CNC, suppliers, and neutral exchange
- **PDF** for drawings, release packs, and approvals
- **JPG** for thumbnails, product pages, and catalogs
### 2) Interactive HTML webviews (via eDrawings)
Create **lightweight, shareable HTML previews** for assemblies and parts—ideal for:
- internal reviews
- sales/support handoffs
- web portals
- vendor communication
### 3) Excel BOM export (XLSX)
Generate structured **Bills of Materials in Excel** directly from assemblies, with support for:
- BOM type (Parts Only / Top Level / Indented)
- detail level
- BOM templates
- optional component images
### 4) Metadata & configuration export (JSON)
Extract rich assembly intelligence for PLM, QA, quoting, and analytics:
- custom properties (global + per-configuration)
- mass properties, materials, bounding boxes
- suppression state, equations, and more
### 5) Pack & Go (flatten references)
Collect an assembly and all referenced files into a clean folder for:
- supplier packages
- archiving
- controlled handoff
- reproducible builds
---
## Quick Start
```sh
# Install globally
npm i @polymech/cad -g
# Convert an assembly to STEP + PDF
pm-cad sw --src="./cad/Global*.SLDASM" --dst="${SRC_DIR}/${SRC_NAME}.+(step|pdf)"
# See all options
pm-cad --help
```
---
## Installation
### Via npm (recommended)
```sh
npm i @polymech/cad -g
```
### Windows Installer
Download and run `PolyMechCAD-Setup.exe`. The installer auto-detects privileges:
- **Double-click** → installs per-user to `%LOCALAPPDATA%`, no admin required
- **Right-click → Run as administrator** → installs system-wide to `Program Files`
Both modes add `pm-cad` to your **PATH** automatically.
---
## Core features (in detail)
### Format conversions (glob patterns + multi-output)
Convert between supported formats using glob patterns and multi-extension outputs.
```sh
# Assembly → STEP
pm-cad sw --src="./cad/**/*.SLDASM" --dst="${SRC_DIR}/${SRC_NAME}.step"
# Parts + Assemblies → PDF + JPG (recursive)
pm-cad sw --src="./cad/**/*.+(SLDASM|SLDPRT)" --dst="${SRC_DIR}/${SRC_NAME}.+(pdf|jpg)"
# Assembly → Interactive HTML webview (via eDrawings)
pm-cad sw --src="./cad/*.SLDASM" --dst="${SRC_DIR}/${SRC_NAME}.html"
# Draw.io diagrams → PNG
pm-cad sw --src="./docs/**/*.drawio" --dst="${SRC_DIR}/${SRC_NAME}.png"
```
### Bill of Materials (BOM) → Excel (XLSX)
Extract structured BOMs directly from assemblies.
```sh
pm-cad sw --src="./cad/**/*.SLDASM" --dst="${SRC_DIR}/${SRC_NAME}.xlsx"
```
### Metadata & configuration export (JSON)
Turn SolidWorks models into data you can use in automation, dashboards, and audits.
```sh
# Custom properties → JSON
pm-cad sw --src="./cad/*.SLDASM" --dst="${SRC_DIR}/${SRC_NAME}.json"
# All configurations → JSON
pm-cad sw --src="./cad/*.SLDASM" --dst="${SRC_DIR}/${SRC_NAME}-configs.json"
# Per-configuration STEP + HTML export
pm-cad sw --src="./cad/*.SLDASM" --dst="${SRC_DIR}/${SRC_NAME}-${CONFIGURATION}.+(step|html)"
```
### Pack & Go (flatten assembly references)
```sh
pm-cad pack --src="./cad/Global*.SLDASM" --dst="./packed"
```
---
## Path variables (dynamic, repeatable output paths)
Build deterministic output paths using built-in variables:
| Variable | Description |
|---|---|
| `${SRC_DIR}` | Directory of the current source file |
| `${SRC_NAME}` | Base name without extension |
| `${SRC_FILE_EXT}` | Source file extension |
| `${CONFIGURATION}` | Current model configuration name |
This makes it easy to keep exports adjacent to sources, mirror folder structures, or produce configuration-specific deliverables.
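The substitution behaves roughly like the sketch below. This is illustrative only (the `expandDst` name and implementation are assumptions, not pm-cad's actual code), but it shows how the variables in the table resolve for a given source file:

```typescript
import * as path from "node:path";

// Hypothetical sketch of ${VAR} expansion for the path variables above.
function expandDst(template: string, srcFile: string, configuration = "Default"): string {
  const ext = path.extname(srcFile);
  const vars: Record<string, string> = {
    SRC_DIR: path.dirname(srcFile),
    SRC_NAME: path.basename(srcFile, ext),
    SRC_FILE_EXT: ext.replace(/^\./, ""),
    CONFIGURATION: configuration,
  };
  // Leave unknown variables untouched rather than erasing them
  return template.replace(/\$\{(\w+)\}/g, (match, name) => vars[name] ?? match);
}
```

So `--dst="${SRC_DIR}/${SRC_NAME}.step"` applied to `/cad/Global.SLDASM` yields `/cad/Global.step`.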
---
## Glob pattern syntax
| Pattern | Matches |
|---|---|
| `*.SLDASM` | All assemblies in current directory |
| `**/*.SLDPRT` | All parts, recursively |
| `*.+(SLDASM\|SLDPRT)` | Assemblies and parts |
| `*.+(step\|pdf\|jpg)` | Multiple output formats |
---
## Integration options
### File manager integration (Altap Salamander)
Register `pm-cad` as a custom menu command (F9) for right-click conversions:
```
Command: pm-cad
Arguments: sw --src="$(FullName)" --dst="&{SRC_DIR}/&{SRC_NAME}.+(step)"
```
> Use `--alt=true` to switch variable prefix from `$` to `&` when the host app uses `$` for its own variables.
### Batch droplets (drag & drop)
Create a `.bat` file and drag-and-drop files onto it:
```batch
@echo off
pm-cad sw --src="%~1" --dst="${SRC_DIR}/${SRC_NAME}.+(html)"
```
For folders:
```batch
@echo off
pm-cad sw --src="%~1/**/*.+(SLDASM|SLDPRT)" --dst="${SRC_DIR}/${SRC_NAME}.+(html)"
```
### Node.js library (programmatic automation)
```js
import { convert } from '@polymech/cad/cad/sw-lib';

await convert({
  src: './cad/**/*.SLDASM',
  dst: '${SRC_DIR}/${SRC_NAME}.+(step|pdf)',
  verbose: true,
  skip: true
});
```
### Grunt task (build pipeline)
```js
const { convert } = require('@polymech/cad/cad/sw-lib');

grunt.registerMultiTask('cad-convert', 'Convert SW files', function () {
  const done = this.async();
  convert(this.data.items, { ...options }, this.data.output)
    .then(() => done());
});
```
```js
// Gruntfile config
'cad-convert': {
  step: { items: products, output: '${SRC_DIR}/${SRC_NAME}.+(step)' },
  html: { items: products, output: '${SRC_DIR}/../resources/${SRC_NAME}.+(html)' },
  bom: { items: products, output: '${SRC_DIR}/../resources/${SRC_NAME}.+(xlsx)' }
}
```
---
## Supported formats
### Input formats
| Format | Extension |
|---|---|
| SolidWorks Assembly | `.SLDASM` |
| SolidWorks Part | `.SLDPRT` |
| SolidWorks Drawing | `.SLDDRW` |
| STEP | `.step`, `.stp` |
### Output formats
| Format | Extension | Source | Engine |
|---|---|---|---|
| STEP | `.step` | Parts, Assemblies | `convert.exe` |
| PDF | `.pdf` | Parts, Assemblies, Drawings | `convert.exe` |
| JPEG | `.jpg` | Parts, Assemblies, Drawings | `convert.exe` |
| HTML (eDrawings) | `.html` | Parts, Assemblies | `ExportHTML.exe` |
| JSON (metadata) | `.json` | Assemblies | `model-reader.exe` |
| JSON (configs) | `-configs.json` | Assemblies | `getconfigs.exe` |
| Excel BOM | `.xlsx` | Assemblies | `bom.exe` |
---
## Native toolchain (how it works under the hood)
`pm-cad` orchestrates a high-performance native toolchain built in C# that talks directly to SolidWorks via **COM interop** (using [xCAD](https://xcad.xarial.com/) + SolidWorks Interop) and to eDrawings via the **eDrawings API**.
| Binary | Role |
|---|---|
| `convert.exe` | Opens models via COM and calls `SaveAs` for STEP/PDF/JPG. Supports PhotoView 360 ray-trace rendering with configurable quality, resolution, and camera view. Includes Pack & Go support for assembly flattening. |
| `model-reader.exe` | Traverses the full assembly tree and extracts per-component data. Outputs a flat properties table plus a hierarchical `.tree.json`. |
| `bom.exe` | Inserts a BOM table annotation via xCAD, then calls `SaveAsExcel` to produce XLSX output. Supports custom templates, BOM type (PartsOnly / TopLevel / Indented), detail level, and optional component images. |
| `ExportHTML.exe` | Headless eDrawings API export—opens a hidden WinForms host, loads the model, and saves interactive HTML. No SolidWorks license required (eDrawings is free). |
| `getconfigs.exe` | Enumerates all configurations in a model and serializes configuration names + properties to JSON. |
### Extracted data (model-reader)
For each component in the assembly tree, `model-reader.exe` can extract:
| Category | Fields |
|---|---|
| **Custom Properties** | All configuration-specific + global custom properties (evaluated values) |
| **Mass Properties** | Mass, Density, Volume, Surface Area, Center of Mass (X/Y/Z) |
| **Materials** | Material name + database per part |
| **Bounding Box** | Min/Max X/Y/Z per component |
| **Equations** | All equation names and evaluated values |
| **Model States** | What's Wrong count, error codes, warning flags, affected feature types |
| **Suppression** | Component suppression state |
---
## CLI options reference
### Rendering
| Flag | Default | Description |
|---|---|---|
| `--renderer` | `solidworks` | Render engine (`solidworks` or `photoview`) |
| `--quality` | `2` (Good) | Ray trace quality level |
| `--width` | `1024` | Output image width in pixels |
| `--height` | `1024` | Output image height in pixels |
| `--view` | `Render` | Camera view name |
| `--configuration` | `Default` | Model configuration to use |
### BOM options
| Flag | Default | Description |
|---|---|---|
| `--bom-config` | `Default` | BOM configuration |
| `--bom-type` | `2` | BOM type |
| `--bom-detail` | `1` | Detail level |
| `--bom-template` | — | Custom BOM table template |
| `--bom-images` | `false` | Include component images |
### Workflow
| Flag | Description |
|---|---|
| `--dry` | Preview operations without executing |
| `--cache` | Use file hash caching to skip unchanged files |
| `--save` | Save the model after processing |
| `--rebuild` | Force model rebuild before export |
| `--pack` | Pack and go mode |
| `--light` | Lightweight mode |
| `--close` | Close SolidWorks after each conversion |
| `--alt` | Use `&` instead of `$` for variable prefix |
| `--logLevel` | Minimum log level (`debug`, `info`, `warn`, `error`) |
---
## Requirements
- **Node.js** 18+
- **SolidWorks** 2020–2025 (auto-detected)
- **Windows** (SolidWorks COM interop)
---
## Who pm-cad is for (common use cases)
- **Manufacturing handoffs:** Export STEP + drawing PDFs for suppliers in one repeatable step.
- **Engineering release:** Generate a "release pack" folder with consistent naming and formats.
- **Sales/Support enablement:** Produce interactive **HTML previews** that anyone can open in a browser.
- **Automation & CI:** Create nightly exports (STEP/PDF/BOM) whenever CAD changes.
- **Data extraction:** Pull custom properties and mass/material data into JSON for quoting or audits.
---
## FAQ
### Does HTML export require a SolidWorks license?
No. HTML export is performed via the **eDrawings API**. eDrawings is free; `pm-cad` uses a headless host to export interactive HTML.
### Can I export all configurations automatically?
Yes. Use `${CONFIGURATION}` in the destination pattern and `pm-cad` will iterate configurations.
### Can I keep exports next to source files with matching names?
Yes. Use `${SRC_DIR}` and `${SRC_NAME}` for deterministic output paths.
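Destination-pattern expansion can be sketched as a simple `${VAR}` substitution. The variable names below come from the docs; the helper itself is a hypothetical illustration, not pm-cad's actual code.

```typescript
// Hypothetical sketch of destination-pattern expansion (not pm-cad source).
type PatternVars = Record<string, string>;

function expandPattern(pattern: string, vars: PatternVars): string {
  // Replace each ${NAME} with its value; leave unknown variables untouched.
  return pattern.replace(/\$\{(\w+)\}/g, (match, name: string) =>
    name in vars ? vars[name] : match
  );
}

// Example: one deterministic output path per configuration
const vars = { SRC_DIR: "C:/cad/pumps", SRC_NAME: "housing", CONFIGURATION: "Default" };
const out = expandPattern("${SRC_DIR}/${SRC_NAME}-${CONFIGURATION}.step", vars);
// out === "C:/cad/pumps/housing-Default.step"
```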
### Is it safe to run on large assemblies?
`pm-cad` is designed for batch processing and supports workflow controls like caching (`--cache`), rebuild control (`--rebuild`), lightweight mode (`--light`), and closing SolidWorks between jobs (`--close`) depending on your stability/performance needs.
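Content-hash caching like `--cache` can be sketched as: hash each source file, skip it when the hash matches the previous run. This is an assumed model of the behavior, not pm-cad's implementation.

```typescript
// Assumed model of --cache behavior: skip files whose content hash is unchanged.
import { createHash } from "node:crypto";

function sha256(content: string): string {
  return createHash("sha256").update(content).digest("hex");
}

function shouldProcess(path: string, content: string, cache: Map<string, string>): boolean {
  const hash = sha256(content);
  if (cache.get(path) === hash) return false; // unchanged → skip
  cache.set(path, hash);                      // record for the next run
  return true;
}

const cache = new Map<string, string>();
const first = shouldProcess("part.SLDPRT", "rev A", cache);  // true  (first sight)
const second = shouldProcess("part.SLDPRT", "rev A", cache); // false (unchanged)
const third = shouldProcess("part.SLDPRT", "rev B", cache);  // true  (changed)
```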
---
## Summary
**PolyMech CAD Tools** brings modern automation to SolidWorks deliverables: **batch exports**, **BOM to Excel**, **interactive HTML previews**, **metadata extraction**, and **Pack & Go**—all from a **scriptable CLI** or **Node.js API**. Standardize your exports, eliminate manual steps, and make CAD output a reliable part of your engineering pipeline.

# Campaigns
Email campaign management — create campaigns, pick an email page template, target contact groups, and track delivery.
## Architecture Overview
```
┌──────────────────────────────────────┐
│ Frontend │
│ CampaignsManager.tsx │
│ (MUI DataGrid, PagePicker, GroupPicker) │
│ │
│ client-campaigns.ts │
│ (fetch wrappers, bearer token) │
└──────────────┬───────────────────────┘
│ /api/campaigns/*
┌──────────────────────────────────────┐
│ Server CampaignsProduct │
│ products/campaigns/index.ts │
│ products/campaigns/routes.ts │
└──────────────┬───────────────────────┘
│ Supabase
┌──────────────────────────────────────┐
│ Tables │
│ campaigns │
│ (+ marketing_emails for sends) │
└──────────────────────────────────────┘
```
## Database
### `campaigns`
| Column | Type | Notes |
|--------|------|-------|
| `id` | uuid | PK, auto-generated |
| `owner_id` | uuid | FK → `auth.users`, not null |
| `name` | text | Campaign label, not null |
| `page_slug` | text | Page slug to render as email |
| `page_id` | text | Optional page UUID |
| `subject` | text | Email subject line |
| `group_ids` | uuid[] | Target contact group IDs |
| `lang` | text | Language tag (`en`, `de`, …) |
| `tracking_id` | text | Tracking param injected into URLs |
| `vars` | jsonb | Template variables (`--var-*` equivalent) |
| `status` | text | `draft` / `scheduled` / `sending` / `sent` / `failed` |
| `stats` | jsonb | `{ total, sent, failed, skipped }` |
| `scheduled_at` | timestamptz | When to send (null = manual) |
| `started_at` | timestamptz | When sending began |
| `completed_at` | timestamptz | When sending finished |
| `meta` | jsonb | Arbitrary metadata |
| `created_at` | timestamptz | — |
| `updated_at` | timestamptz | Auto-updated via trigger |
**Indexes:** `owner_id`, `status`, `page_slug`
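For reference, the row shape maps naturally onto a TypeScript type. Field names follow the table above; the exact client-side type in the codebase may differ.

```typescript
// Illustrative TypeScript mirror of a campaigns row (assumed shape).
type CampaignStatus = "draft" | "scheduled" | "sending" | "sent" | "failed";

interface Campaign {
  id: string;                    // uuid
  owner_id: string;              // uuid → auth.users
  name: string;
  page_slug?: string;
  page_id?: string;
  subject?: string;
  group_ids?: string[];          // target contact group IDs
  lang?: string;                 // "en", "de", …
  tracking_id?: string;
  vars?: Record<string, unknown>;
  status: CampaignStatus;
  stats?: { total: number; sent: number; failed: number; skipped: number };
  scheduled_at?: string | null;  // timestamptz (null = manual send)
  started_at?: string | null;
  completed_at?: string | null;
  meta?: Record<string, unknown>;
  created_at: string;
  updated_at: string;
}

const draft: Campaign = {
  id: "00000000-0000-0000-0000-000000000000",
  owner_id: "00000000-0000-0000-0000-000000000001",
  name: "Spring launch",
  status: "draft",
  created_at: new Date().toISOString(),
  updated_at: new Date().toISOString(),
};
```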
### RLS
- **Owners**: full CRUD on their own rows (`owner_id = auth.uid()`)
- **Admins** (`user_roles.role = 'admin'`): full access to all rows
## Server API Endpoints
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `GET` | `/api/campaigns` | Auth | List campaigns. Query: `?status=&q=&limit=&offset=` |
| `POST` | `/api/campaigns` | Auth | Create campaign |
| `GET` | `/api/campaigns/:id` | Auth | Get single campaign |
| `PATCH` | `/api/campaigns/:id` | Auth | Update campaign (partial) |
| `DELETE` | `/api/campaigns/:id` | Auth | Delete campaign |
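The list endpoint's query string (`?status=&q=&limit=&offset=`) can be assembled like this. The parameter names come from the table above; the helper name is hypothetical.

```typescript
// Sketch of building the GET /api/campaigns query string (helper is illustrative).
interface ListOptions { status?: string; q?: string; limit?: number; offset?: number }

function buildCampaignsQuery(opts: ListOptions = {}): string {
  const params = new URLSearchParams();
  if (opts.status) params.set("status", opts.status);
  if (opts.q) params.set("q", opts.q);
  if (opts.limit != null) params.set("limit", String(opts.limit));
  if (opts.offset != null) params.set("offset", String(opts.offset));
  const qs = params.toString();
  return qs ? `/api/campaigns?${qs}` : "/api/campaigns";
}

buildCampaignsQuery({ status: "draft", limit: 20 });
// → "/api/campaigns?status=draft&limit=20"
```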
## Frontend Client
`src/modules/campaigns/client-campaigns.ts` — all functions inject Supabase bearer token automatically.
```ts
fetchCampaigns(options?) → Campaign[] // options: { status?, q?, limit?, offset? }
getCampaign(id) → Campaign
createCampaign(data) → Campaign
updateCampaign(id, data) → Campaign
deleteCampaign(id) → void
```
## Frontend UI — `CampaignsManager`
Mounted at `/user/:id/campaigns` via `UserProfile.tsx`.
| Feature | Detail |
|---------|--------|
| **DataGrid** | Columns: name, page, status, groups, stats, created_at, actions |
| **URL state sync** | Filter, sort, visibility, pagination persisted in search params |
| **Toolbar** | Search, status filter, "New Campaign" button |
| **Campaign dialog** | Name, subject, language, tracking ID, PagePickerDialog for email page, ContactsPicker for groups |
| **Status workflow** | `draft` → `scheduled` → `sending` → `sent` / `failed` |
## Relationship to Email System
The `campaigns` table is the **parent** that defines what to send and to whom. The existing `marketing_emails` table continues to track **individual sends** per recipient. A future "Send Campaign" action will resolve group members, render the email page, and use the email tracking flow (`POST /api/email/track` → send → `PATCH /api/email/track/:id`) — the same pipeline currently used by the CLI `email-send` command.
## Environment Variables
Inherits same Supabase env as the rest of the server — no additional variables required.
## Source Files
| File | Description |
|------|-------------|
| [campaigns.md](campaigns.md) | This document |
| [migration](../supabase/migrations/20260306130000_create_campaigns.sql) | DB schema, RLS, indexes |
| [routes.ts](../server/src/products/campaigns/routes.ts) | Zod-OpenAPI route definitions |
| [index.ts](../server/src/products/campaigns/index.ts) | CampaignsProduct handlers |
| [client-campaigns.ts](../src/modules/campaigns/client-campaigns.ts) | Frontend fetch wrappers |
| [CampaignsManager.tsx](../src/components/CampaignsManager.tsx) | Main UI component (DataGrid, dialogs) |
| [ContactsPicker.tsx](../src/components/ContactsPicker.tsx) | Group selection component |
| [PagePickerDialog.tsx](../src/modules/pages/PagePickerDialog.tsx) | Page selection dialog |

# Category ACL — Design & Implementation Plan
## Overview
Per-category access control — reusing the existing **resource-agnostic ACL system** (`resource_acl` table + `IAclBackend` + `AclEditor`).
**Roles:** `admin`, `authenticated` (logged-in users), `anonymous`
**Phase 1:** Only admins can manage category permissions
**Phase 2:** Users can create categories and manage permissions for their own categories
---
## ✅ Completed
### Backend registration
`serving/index.ts` — registered the `'category'` ACL backend:
```ts
import { DbAclBackend } from './db/db-acl-db.js';
registerAclBackend('category', new DbAclBackend('category'));
```
This enables the full ACL API for categories:
- `GET /api/acl/category/:categoryId` — read ACL
- `POST /api/acl/category/:categoryId/grant` — grant permission
- `POST /api/acl/category/:categoryId/revoke` — revoke permission
### Virtual user ID fix
`db-acl-db.ts` — fixed `DbAclBackend` to handle virtual/sentinel user IDs (`anonymous`, `authenticated`). These aren't valid UUIDs and can't be stored in the `user_id` FK column, so they're transparently mapped to/from `group_name`:
| AclEntry.userId | DB column | Value |
|-----------------|-----------|-------|
| `anonymous` | `group_name` | `'anonymous'` |
| `authenticated` | `group_name` | `'authenticated'` |
| `<real-uuid>` | `user_id` | UUID FK → auth.users |
| _(group grant)_ | `group_name` | e.g. `'editors'` |
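The mapping in the table above can be sketched as a pair of pure functions. The sentinel names come from the docs; the function names and row shape are assumptions (see `db-acl-db.ts` for the real implementation).

```typescript
// Sketch of the virtual-ID ↔ DB-column mapping (assumed shape, not the real code).
const VIRTUAL_IDS = new Set(["anonymous", "authenticated"]);

interface AclRow { user_id: string | null; group_name: string | null }

function toDbColumns(userId: string): AclRow {
  return VIRTUAL_IDS.has(userId)
    ? { user_id: null, group_name: userId }   // sentinel → group_name
    : { user_id: userId, group_name: null };  // real UUID → FK column
}

function fromDbColumns(row: AclRow): string {
  // Reverse mapping: prefer the FK, fall back to the group name.
  return row.user_id ?? row.group_name ?? "";
}
```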
### E2E tests
`category-acl.e2e.test.ts` — 19 tests, all passing:
| Group | Tests | Coverage |
|-------|-------|----------|
| ACL CRUD | 9 | grant anon/auth/user, verify reads, upsert, full revoke cycle |
| Access control | 4 | unauth rejection, 404 missing revoke, input validation |
| Multi-user | 3 | regular user denied ACL management |
| Group grants | 2 | grant/revoke by group name |
| ACL isolation | 1 | grants don't leak between categories |
Run: `npm run test:acl:categories`
---
## 🔲 Next Steps — CategoryManager UI Integration
### Goal
Add a **Permissions** section to [CategoryManager.tsx](../src/components/widgets/CategoryManager.tsx)'s right panel, using the existing [AclEditor](../src/components/admin/AclEditor.tsx) component.
### Where it goes
The right panel in CategoryManager has 3 states:
```
editingCategory? → Edit form (name, slug, visibility, variables, types, translate)
selectedCategoryId? → Selected view (name, description, page link actions) ← ADD HERE
else → Empty state ("Select a category...")
```
Add the `AclEditor` inside the **selected view** (state 2), below the "Current Page Link" section (around line 548):
### Implementation
```tsx
// 1. Import
import { AclEditor } from '@/components/admin/AclEditor';
// 2. Add to the right panel, after the page-link section (~line 548)
// Inside the `selectedCategoryId ? (...)` branch:
<div className="space-y-2">
<Label className="text-xs uppercase text-muted-foreground">
<T>Permissions</T>
</Label>
<AclEditor
resourceType="category"
mount={selectedCategoryId}
path="/"
/>
</div>
```
The `AclEditor` already handles:
- Anonymous toggle with permission picker
- Authenticated toggle with permission picker
- Per-user grants via `UserPicker`
- Active permissions table with revoke buttons
### Props mapping
| AclEditor prop | StorageManager (VFS) | CategoryManager |
|---------------|---------------------|-----------------|
| `resourceType` | `'vfs'` (default) | `'category'` |
| `mount` | mount name (`'home'`) | category UUID |
| `path` | folder path (`'/shared'`) | always `'/'` |
### Files to change
| File | Change |
|------|--------|
| [CategoryManager.tsx](../src/components/widgets/CategoryManager.tsx) | Import `AclEditor`, add it to selected-category view |
No other files need modification — the backend, API, and ACL editor component are all ready.
---
## Architecture Reference
```
┌── Supabase ─────────────────────────┐
│ resource_acl table (RLS-enabled) │
│ ┌─ resource_type: 'category' │
│ ├─ resource_id: category UUID │
│ ├─ user_id / group_name │
│ ├─ permissions[] │
│ └─ path: '/' │
└─────────────────────────────────────┘
┌── Server ───────────────────────────┐
│ db-acl.ts IAclBackend │
│ ├ registerAclBackend('vfs', ...) │
│ └ registerAclBackend('category',…) │ ✅
│ db-acl-db.ts DbAclBackend │ ✅ virtual ID fix
│ db-categories.ts CRUD + cache │
└─────────────────────────────────────┘
┌── Client ───────────────────────────┐
│ client-acl.ts fetch/grant/revoke│ reuse as-is
│ AclEditor.tsx UI │ reuse as-is
│ CategoryManager + ACL section │ 🔲 next step
└─────────────────────────────────────┘
```
## Permission Model
| Permission | Meaning |
|------------|---------|
| `read` | View category and its items |
| `list` | Browse category in tree navigation |
| `write` | Add/remove items from category |
| `manage` | Edit category metadata |
| `admin` | Full control including ACL |
### Default Behavior
Categories default to their `visibility` field (`public`/`unlisted`/`private`). ACL entries override visibility for fine-grained control.
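One way the override could work for `read` access is sketched below, assuming `public` is open to everyone and explicit grants extend access otherwise; the actual server logic may differ.

```typescript
// Hedged sketch of visibility + ACL resolution for read access (assumed logic).
type Visibility = "public" | "unlisted" | "private";

interface Grant { userId: string; permissions: string[] }

function canRead(visibility: Visibility, grants: Grant[], userId: string | null): boolean {
  if (visibility === "public") return true;               // baseline: everyone
  // A logged-in user matches their own UUID and the "authenticated" sentinel.
  const ids = userId ? [userId, "authenticated"] : ["anonymous"];
  return grants.some(g => ids.includes(g.userId) && g.permissions.includes("read"));
}

canRead("private", [{ userId: "authenticated", permissions: ["read"] }], "some-uuid"); // true
canRead("private", [], null);                                                          // false
```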
### Phase 2 — Future
| Task | Description |
|------|-------------|
| ACL-filtered category reads | Filter `fetchCategoriesServer` by caller's `list` permission |
| ACL-filtered tree view | Hide restricted categories in `CategoryTreeView.tsx` |
| User-owned category ACL mgmt | Let users manage permissions on categories they own |
## File Reference
| File | Role | Status |
|------|------|--------|
| [resource_acl.sql](../supabase/migrations/20260219215000_create_resource_acl.sql) | DB schema | ✅ No changes needed |
| [db-acl.ts](../server/src/products/serving/db/db-acl.ts) | ACL orchestrator | ✅ No changes needed |
| [db-acl-db.ts](../server/src/products/serving/db/db-acl-db.ts) | Supabase backend | ✅ Virtual ID fix |
| [index.ts](../server/src/products/serving/index.ts) | Backend registration | ✅ Category registered |
| [category-acl.e2e.test.ts](../server/src/products/serving/__tests__/category-acl.e2e.test.ts) | E2E tests | ✅ 19/19 pass |
| [CategoryManager.tsx](../src/components/widgets/CategoryManager.tsx) | Category UI | 🔲 Add AclEditor |
| [client-acl.ts](../src/modules/user/client-acl.ts) | Client ACL API | ✅ No changes needed |
| [AclEditor.tsx](../src/components/admin/AclEditor.tsx) | ACL UI component | ✅ No changes needed |

# Chat Module — Architecture & Developer Documentation
> **Location:** [`src/modules/ai/`](../src/modules/ai/)
> **Page:** [`src/pages/PlaygroundChat.tsx`](../src/pages/PlaygroundChat.tsx)
---
## Table of Contents
- [Overview](#overview)
- [Architecture](#architecture)
- [Component Tree](#component-tree)
- [ChatPanel — Reusable Component](#chatpanel--reusable-component)
- [Message Flow](#message-flow)
- [Tool-Calling Flow](#tool-calling-flow)
- [Session Management](#session-management)
- [Features](#features)
- [File Browser Integration](#file-browser-integration)
- [External Context & Tools Injection](#external-context--tools-injection)
- [Tool System](#tool-system)
- [Storage](#storage)
- [File Reference](#file-reference)
---
## Overview
The chat module is a client-side AI playground built on the **OpenAI SDK** (used for both OpenAI and OpenRouter providers). It supports multi-turn conversations, image attachments, file context attachments, streaming responses, tool calling (search, page creation, image generation, file system), session persistence, and full export capabilities.
The core is a **reusable `ChatPanel` component** ([`ChatPanel.tsx`](../src/modules/ai/ChatPanel.tsx)) with preset modes (`simple`, `standard`, `developer`), embeddable anywhere. All state and logic lives in [`useChatEngine`](../src/modules/ai/useChatEngine.ts). The playground page ([`PlaygroundChat.tsx`](../src/pages/PlaygroundChat.tsx)) is a thin wrapper using the `developer` preset.
---
## Architecture
```mermaid
graph TB
subgraph ChatPanelComp["ChatPanel.tsx (reusable)"]
Header["ChatHeader"]
RPG["ResizablePanelGroup"]
subgraph Sidebar["ChatSidebar (ResizablePanel)"]
Sessions["Sessions"]
Provider["Provider & Model"]
SysPrompt["System Prompt"]
Tools["Tools Toggles"]
FileBrowser["Files (FileBrowserPanel)"]
Stats["Stats"]
Payload["Prompt Payload (CompactTreeView)"]
Logs["Chat Logs (ChatLogBrowser)"]
end
subgraph Main["ChatMessages (ResizablePanel)"]
Messages["MessageBubble × N"]
Composer["ChatComposer + FileContext chips"]
end
end
subgraph Engine["useChatEngine Hook"]
State["State Management"]
API["API Client (OpenAI SDK)"]
ToolPreset["Tool Presets"]
ExtTools["extraToolsRef (external)"]
CtxProvider["contextProviderRef (external)"]
FileCtx["FileContext State"]
SessionMgr["Session Storage"]
Export["Export Handlers"]
end
ChatPanelComp --> Engine
API --> OpenAI["OpenAI API"]
API --> OpenRouter["OpenRouter API"]
ToolPreset --> SearchTools["searchTools.ts"]
ToolPreset --> ImageTools["imageTools.ts"]
ToolPreset --> VfsTools["vfsTools.ts"]
ToolPreset --> PageTools["pageTools.ts"]
SessionMgr --> LocalStorage["localStorage"]
```
---
## Component Tree
| Component | File | Role |
|-----------|------|------|
| **PlaygroundChat** | [`PlaygroundChat.tsx`](../src/pages/PlaygroundChat.tsx) | Thin page wrapper: `<ChatPanel preset="developer" />` |
| **ChatPanel** | [`ChatPanel.tsx`](../src/modules/ai/ChatPanel.tsx) | Reusable layout: header + sidebar + messages, configurable via presets and props |
| ↳ **ChatHeader** | [`components/ChatHeader.tsx`](../src/modules/ai/components/ChatHeader.tsx) | Top bar: provider badge, New/JSON/MD/Clear/Settings buttons |
| ↳ **ChatSidebar** | [`components/ChatSidebar.tsx`](../src/modules/ai/components/ChatSidebar.tsx) | Collapsible settings: sessions, provider, system prompt, tools, **file browser**, stats, payload inspector, logs |
| ↳ **FileBrowserPanel** | [`FileBrowserPanel.tsx`](../src/apps/filebrowser/FileBrowserPanel.tsx) | VFS file browser (home mount, list view) with file selection → attach as context |
| ↳ **ChatMessages** | (in ChatPanel.tsx) | Messages area + composer wrapper |
| ↳ **MessageBubble** | [`components/MessageBubble.tsx`](../src/modules/ai/components/MessageBubble.tsx) | Single message: avatar, copy-to-clipboard, markdown rendering, streaming indicator |
| ↳ **ChatComposer** | [`components/ChatComposer.tsx`](../src/modules/ai/components/ChatComposer.tsx) | Textarea input, attachments, drag-drop, image picker, **file context chips**, prompt history, send/cancel |
| ↳ **CompactTreeView** | [`ChatLogBrowser.tsx`](../src/components/ChatLogBrowser.tsx) | Keyboard-navigable JSON tree browser (shared by Payload and Logs) |
| ↳ **ChatLogBrowser** | [`ChatLogBrowser.tsx`](../src/components/ChatLogBrowser.tsx) | Log viewer with level filtering and drill-in on data objects |
---
## ChatPanel — Reusable Component
[`ChatPanel.tsx`](../src/modules/ai/ChatPanel.tsx) is the primary entry point for embedding chat anywhere.
### Presets
| Preset | Header | Sidebar | Sidebar open by default |
|-----------|--------|---------|-------------------------|
| `simple` | ❌ | ❌ | — |
| `standard` | ✅ | ✅ | closed |
| `developer` | ✅ | ✅ | open |
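The table above can be read as a preset-to-flags mapping; a sketch of that resolution is below (the helper is hypothetical, the real logic lives in `ChatPanel.tsx`).

```typescript
// Sketch of resolving a preset to concrete layout flags (assumed helper).
type ChatPreset = "simple" | "standard" | "developer";

interface LayoutFlags { showHeader: boolean; showSidebar: boolean; sidebarOpen: boolean }

function resolvePreset(preset: ChatPreset): LayoutFlags {
  switch (preset) {
    case "simple":    return { showHeader: false, showSidebar: false, sidebarOpen: false };
    case "standard":  return { showHeader: true,  showSidebar: true,  sidebarOpen: false };
    case "developer": return { showHeader: true,  showSidebar: true,  sidebarOpen: true  };
  }
}
```

Individual props (`showHeader`, `showSidebar`, `sidebarOpen`) then override whatever the preset resolved.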
### Props
| Prop | Type | Description |
|------|------|-------------|
| `preset` | `'simple' \| 'standard' \| 'developer'` | Layout preset (default: `'developer'`) |
| `showHeader` | `boolean?` | Override header visibility |
| `showSidebar` | `boolean?` | Override sidebar availability |
| `sidebarOpen` | `boolean?` | Override sidebar initial state |
| `className` | `string?` | CSS class on outer container |
| `layoutId` | `string?` | Persistence ID for panel sizes (default: `'chat-layout'`) |
| `getContext` | `() => string \| null` | Dynamic context injected into system prompt per-send |
| `extraTools` | `() => any[]` | Extra tools added to tool-calling payload per-send |
### Usage
```tsx
import ChatPanel from '@/modules/ai/ChatPanel';
// Full developer experience (default)
<ChatPanel preset="developer" />
// Embeddable minimal chat
<ChatPanel preset="simple" />
// With external context and tools
<ChatPanel
preset="standard"
getContext={() => `Active page: ${slug}\nSelection: ${ids.join(', ')}`}
extraTools={() => [myCustomTool]}
/>
```
### Exports
| Export | Description |
|--------|-------------|
| `ChatPanel` (default) | Full layout with header + sidebar + messages |
| `ChatMessages` | Just the messages + composer (for custom layouts) |
| `ChatPreset` | Type: `'simple' \| 'standard' \| 'developer'` |
| `ChatPanelProps` | Props interface |
---
## Message Flow
```mermaid
sequenceDiagram
participant U as User
participant C as ChatComposer
participant E as useChatEngine
participant A as OpenAI API
U->>C: Types message + optional images
U->>C: Press Enter
C->>E: sendMessage()
E->>E: Create userMsg + empty assistantMsg
E->>E: setMessages([...prev, user, assistant])
E->>E: Build apiMessages[] from systemPrompt + history
E->>A: chat.completions.create (stream: true)
loop Streaming
A-->>E: delta chunks
E->>E: Append to assistantMsg.content
E->>E: setMessages() (triggers re-render)
end
A-->>E: Stream complete
E->>E: Mark assistantMsg.isStreaming = false
E->>E: Auto-save session to localStorage
```
### Cancellation
The user can cancel an in-progress response via the **Stop** button in the composer or the streaming indicator in the message bubble. This triggers `handleCancel()`, which calls `abortRef.current?.abort()` to abort the fetch and marks the current assistant message as complete.
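The abort pattern described above can be sketched with a bare `AbortController` ref; the `abortRef` name follows the docs, but the wiring here is an assumption.

```typescript
// Minimal sketch of per-request cancellation via AbortController (assumed wiring).
const abortRef: { current: AbortController | null } = { current: null };

function startRequest(): AbortSignal {
  abortRef.current = new AbortController();
  return abortRef.current.signal; // pass as { signal } to the streaming fetch
}

function handleCancel(): void {
  abortRef.current?.abort(); // aborts the in-flight stream
}

const signal = startRequest();
handleCancel();
// signal.aborted === true
```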
---
## Tool-Calling Flow
When **tools are enabled**, the engine uses `runTools()` from the OpenAI SDK instead of `create()`:
```mermaid
sequenceDiagram
participant E as useChatEngine
participant A as OpenAI API
participant T as Tool Functions
E->>A: runTools(apiMessages, tools[])
loop Tool Calls
A-->>E: tool_call request (e.g. search_content)
E->>T: Execute tool function
T-->>E: JSON result
E->>E: Append tool message to context
E->>E: addChatLog() for verbose logging
E->>A: Continue with tool result
end
A-->>E: Final assistant response (streamed)
E->>E: Merge toolContext into assistant message
E->>E: Stream content to UI
```
### Tool Context Persistence
Tool call results are summarized and attached to the assistant message as `toolContext`. This context is included in subsequent API calls, preventing the AI from re-searching the same data in follow-up questions.
---
## Session Management
```mermaid
sequenceDiagram
participant E as useChatEngine
participant S as chatSessions.ts
participant LS as localStorage
Note over E: On every message change
E->>S: saveSession({ id, title, messages })
S->>LS: setItem("chat-session-{id}", full data)
S->>LS: setItem("chat-sessions-index", metadata[])
S->>S: Trim to MAX_SESSIONS (50)
Note over E: Load session
E->>S: loadSession(id)
S->>LS: getItem("chat-session-{id}")
S-->>E: ChatSession (sanitized)
E->>E: setMessages(clean), setSessionId(id)
```
---
## Features
### Multi-Provider Support
- **OpenAI** — via proxy or direct API key
- **OpenRouter** — custom `baseURL` + API key
- Provider/model selection persisted in localStorage
- API keys stored in Supabase user secrets
### Image Attachments
- **Drag & drop** files onto the composer
- **Paste** images from clipboard
- **File picker** (file input dialog)
- **Gallery picker** — browse platform images via `ImagePickerDialog`
- Remote images proxied via server render API for resizing (`getResizedImageUrl`)
### Streaming Responses
- Uses `stream: true` with the OpenAI SDK
- Real-time content rendering via `isStreaming` flag on assistant messages
- Auto-scroll to bottom during streaming
### Prompt History
- All sent prompts saved to localStorage via `usePromptHistory`
- Navigate with **Ctrl+↑ / Ctrl+↓** in the composer
- History persisted under key `promptHistoryChat`
### Export
| Format | Action | Method |
|--------|--------|--------|
| **JSON** | Download `.json` file + copy to clipboard | [`exportChatAsJson`](../src/modules/ai/chatExport.ts) |
| **Markdown** | Copy to clipboard | [`exportChatAsMarkdown`](../src/modules/ai/chatExport.ts) |
| **Payload JSON** | Copy button in sidebar | `CompactTreeView` headerContent |
| **Logs JSON** | Copy button in sidebar | `ChatSidebar` headerContent |
### Sidebar Inspector
- **Prompt Payload** — live `useMemo` of the API messages array, browseable via `CompactTreeView` with keyboard navigation (↑↓←→, search, breadcrumbs)
- **Chat Logs** — verbose timestamped log of all engine events (tool calls, results, errors)
- Both sections support **copy-to-clipboard** via header buttons
### Desktop/Mobile Layout
- **Desktop** — `ResizablePanelGroup` (sidebar 25% default, 15–45% range; `autoSaveId="chat-layout"` for persistence)
- **Mobile** — sidebar as overlay (85vw, max 360px) with backdrop; full-width chat panel
---
## File Browser Integration
The sidebar includes a **Files** collapsible section embedding a [`FileBrowserPanel`](../src/apps/filebrowser/FileBrowserPanel.tsx) configured for the user's `home` VFS mount.
### File Context Workflow
1. **Browse** — navigate your home drive in the sidebar file browser
2. **Preview** — double-click / Enter / Space opens the lightbox (image, video, or text)
3. **Attach** — click a file, then click the **Attach** button to add it as context
4. **View** — attached files appear as **teal chips** above the composer and listed in the Files section footer
5. **Inject** — file path + content is injected into the system prompt under `--- Attached Files (editable via fs_write) ---`
6. **Write back** — the LLM can modify attached files via the existing `fs_write` tool
7. **Remove** — click the × on any chip or in the sidebar list to detach
### FileContext Type
```typescript
interface FileContext {
path: string; // VFS path (e.g. "notes/readme.md")
mount: string; // VFS mount (e.g. "home")
name: string; // filename only
content: string; // file text content
}
```
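The injection step (5) can be sketched as a builder that turns attached `FileContext` entries into the system-prompt section. The delimiter string comes from the workflow above; the builder and its per-file format are assumptions.

```typescript
// Sketch of building the attached-files system-prompt section (assumed format).
interface FileContext { path: string; mount: string; name: string; content: string }

function buildFileSection(files: FileContext[]): string {
  if (files.length === 0) return "";
  const body = files
    .map(f => `### ${f.mount}:${f.path}\n${f.content}`)
    .join("\n\n");
  return `--- Attached Files (editable via fs_write) ---\n${body}`;
}

const section = buildFileSection([
  { path: "notes/readme.md", mount: "home", name: "readme.md", content: "# Hi" },
]);
```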
---
## External Context & Tools Injection
Consumers can inject dynamic context and custom tools into the chat engine via `ChatPanel` props, enabling domain-specific integrations without modifying the core chat module.
### `getContext` — Dynamic System Prompt Injection
Called synchronously in `sendMessage()` just before building the API payload. The returned string is appended to the system prompt for that specific request.
```tsx
<ChatPanel
getContext={() => {
const sel = getSelectedItems();
return sel.length ? `Selected items:\n${sel.map(s => `- ${s.name}`).join('\n')}` : null;
}}
/>
```
### `extraTools` — Custom Tool Injection
Called when assembling the tools array for each send. Return OpenAI-compatible `RunnableToolFunctionWithParse` definitions.
```tsx
<ChatPanel
extraTools={() => [
{
type: 'function',
function: {
name: 'update_product',
parse: JSON.parse,
description: 'Update a product field',
parameters: { type: 'object', properties: { id: { type: 'string' } } },
function: async (args) => { /* ... */ },
},
},
]}
/>
```
### Implementation
Both use **ref-based injection** in `useChatEngine`:
- `contextProviderRef``React.MutableRefObject<(() => string | null) | null>`
- `extraToolsRef``React.MutableRefObject<(() => any[]) | null>`
`ChatPanel` wires its props into these refs via `useEffect`, ensuring the engine always has the latest provider functions without prop-drilling into the hook.
## Tool System
All tools follow the OpenAI SDK `RunnableToolFunctionWithParse` interface (Zod parse + async function + JSON schema).
### Search Tools — [`searchTools.ts`](../src/modules/ai/searchTools.ts)
| Tool | Description |
|------|-------------|
| `search_content` | Full-text search across pages, posts, pictures (type filter, limit) |
| `find_pages` | Search pages only |
| `find_pictures` | Search pictures only |
| `get_page_content` | Fetch full page content by user_id + slug |
| `list_categories` | List all category trees |
| `find_by_category` | Find items by category slug |
Bundled via `createSearchToolPreset()`.
### Image Tools — [`imageTools.ts`](../src/modules/ai/imageTools.ts)
| Tool | Description |
|------|-------------|
| `generate_image` | Generate image from prompt → upload to Supabase `temp-images` bucket → return markdown embed |
### VFS Tools — [`vfsTools.ts`](../src/modules/ai/vfsTools.ts)
| Tool | Description |
|------|-------------|
| `vfs_ls` | List directory contents (optional glob filter) |
| `vfs_read` | Read file content |
| `vfs_write` | Write/create file |
| `vfs_mkdir` | Create directory |
| `vfs_delete` | Delete file or directory |
Operates on the user's home drive via authenticated API calls.
### Page Tools — [`lib/pageTools.ts`](../src/lib/pageTools.ts)
| Tool | Description |
|------|-------------|
| `create_page` | Create a new page on the user's account |
### Tool Toggle Controls
Each tool category has an independent toggle in the sidebar:
| Toggle | localStorage Key | Default |
|--------|-----------------|---------|
| Search Tools | `chat-settings-tools` | `true` |
| Page Tools | `chat-settings-page-tools` | `true` |
| Image Tools | `chat-settings-image-tools` | `false` |
| File Tools | `chat-settings-vfs-tools` | `false` |
---
## Storage
All persistence is **client-side via `localStorage`**. No server database is used for chat data.
### localStorage Keys
| Key | Type | Description |
|-----|------|-------------|
| `chat-sessions-index` | `Omit<ChatSession, 'messages'>[]` | Session metadata index (max 50) |
| `chat-session-{uuid}` | `ChatSession` | Full session data (messages + metadata) |
| `chat-settings-provider` | `string` | Selected AI provider |
| `chat-settings-model` | `string` | Selected model |
| `chat-settings-system-prompt` | `string` | System prompt text |
| `chat-settings-tools` | `boolean` | Search tools toggle |
| `chat-settings-page-tools` | `boolean` | Page tools toggle |
| `chat-settings-image-tools` | `boolean` | Image tools toggle |
| `chat-settings-vfs-tools` | `boolean` | VFS tools toggle |
| `chat-settings-show` | `boolean` | Sidebar visibility |
| `chat-settings-sidebar-width` | `number` | (legacy — now managed by ResizablePanelGroup) |
| `chat-layout` | (react-resizable-panels) | Panel sizes (auto-managed) |
| `promptHistoryChat` | `string[]` | Prompt history ring buffer |
| `chat-section-*` | `boolean` | CollapsibleSection open/closed states |
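A small typed getter makes the boolean keys above safer to read; the helper name is hypothetical and the app may read these keys inline instead.

```typescript
// Sketch of a typed localStorage getter for the boolean settings keys (assumed helper).
function readBoolSetting(
  storage: { getItem(key: string): string | null },
  key: string,
  fallback: boolean
): boolean {
  const raw = storage.getItem(key);
  if (raw === null) return fallback;
  try { return JSON.parse(raw) === true; } catch { return fallback; }
}

// Usage with a stubbed storage:
const stub = new Map<string, string>([["chat-settings-tools", "true"]]);
const storage = { getItem: (k: string) => stub.get(k) ?? null };
readBoolSetting(storage, "chat-settings-tools", false);       // true
readBoolSetting(storage, "chat-settings-image-tools", false); // false (default)
```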
### Session Lifecycle
1. **Auto-save** — sessions save on every message change via `useEffect`
2. **Title generation** — first 60 chars of the first user message
3. **Max sessions** — oldest sessions pruned when exceeding 50
4. **Sanitization** — streaming flags stripped, empty orphan messages filtered on save and load
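The lifecycle rules above can be sketched as two small helpers. The constants (60-char titles, 50-session cap) come from the docs; the function names and index shape are assumptions.

```typescript
// Sketch of title derivation and session pruning (assumed helpers, constants per docs).
const MAX_SESSIONS = 50;
const TITLE_LEN = 60;

function deriveTitle(firstUserMessage: string): string {
  return firstUserMessage.slice(0, TITLE_LEN);
}

function pruneIndex<T extends { updatedAt: number }>(index: T[]): T[] {
  // Keep the newest MAX_SESSIONS entries, dropping the oldest.
  return [...index].sort((a, b) => b.updatedAt - a.updatedAt).slice(0, MAX_SESSIONS);
}

const index = Array.from({ length: 60 }, (_, i) => ({ updatedAt: i }));
const pruned = pruneIndex(index); // 50 newest entries, newest first
```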
---
## File Reference
| File | Purpose |
|------|---------|
| [`ChatPanel.tsx`](../src/modules/ai/ChatPanel.tsx) | **Reusable component**: presets, layout, external context/tools injection |
| [`useChatEngine.ts`](../src/modules/ai/useChatEngine.ts) | Central hook: all state, API calls, streaming, tool orchestration, file contexts |
| [`types.ts`](../src/modules/ai/types.ts) | `ChatMessage`, `ImageAttachment`, `FileContext`, helpers |
| [`chatSessions.ts`](../src/modules/ai/chatSessions.ts) | Session persistence (localStorage) |
| [`chatExport.ts`](../src/modules/ai/chatExport.ts) | JSON/Markdown export (download + clipboard) |
| [`searchTools.ts`](../src/modules/ai/searchTools.ts) | 6 search/content tools + `createSearchToolPreset` |
| [`imageTools.ts`](../src/modules/ai/imageTools.ts) | `generate_image` tool |
| [`vfsTools.ts`](../src/modules/ai/vfsTools.ts) | 6 VFS tools (ls/read/write/write_many/mkdir/delete) |
| [`ChatHeader.tsx`](../src/modules/ai/components/ChatHeader.tsx) | Top bar with provider badge and action buttons |
| [`ChatSidebar.tsx`](../src/modules/ai/components/ChatSidebar.tsx) | Settings panel: sessions, provider, prompt, tools, **file browser**, stats, payload, logs |
| [`ChatComposer.tsx`](../src/modules/ai/components/ChatComposer.tsx) | Input area: textarea, attachments, **file context chips**, drag-drop, image picker |
| [`MessageBubble.tsx`](../src/modules/ai/components/MessageBubble.tsx) | Message rendering: avatar, copy, markdown, streaming |
| [`PlaygroundChat.tsx`](../src/pages/PlaygroundChat.tsx) | Page wrapper: `<ChatPanel preset="developer" />` |
| [`ChatLogBrowser.tsx`](../src/components/ChatLogBrowser.tsx) | Log viewer + CompactTreeView (keyboard-nav JSON browser) |

# Chat Export
> openai/gpt-5 — 2/26/2026, 8:11:17 PM
## System Prompt
You are a helpful assistant. Be concise and clear. Always return Markdown raw with additional escapes ticks except for code
---
### 🧑 User `08:10 PM`
what injectors you have - as table, with images
---
### 🔧 list_categories `08:10 PM`
Calling list_categories()
---
### 🔧 find_by_category `08:10 PM`
Calling find_by_category(category_slug="injectors", limit=50)
---
### 🤖 Assistant `08:10 PM`
Here are the injectors we have:
| Image | Product | Type | Highlights | Price | Link |
|---|---|---|---|---:|---|
| ![Katbot - Injection Machine](https://ytoadlpbdguriiccjnip.supabase.co/storage/v1/object/public/pictures/3bb4cfbf-318b-44d3-a9d3-35680e738421/1771352688946-9buns9.jpg) | Katbot Injection Machine | Injector | Fast, repeatable, comfortable, safe, precise injection | 1800 | http://192.168.1.14:3333/user/3bb4cfbf-318b-44d3-a9d3-35680e738421/pages/katbot-injection-machine |
| ![Elena - Injection Machine](https://ytoadlpbdguriiccjnip.supabase.co/storage/v1/object/public/pictures/cache/4cf80b755169ce7e5faef7038d7ef79363e9ea3ecbf71f85e3d3ee50443fb4a1.jpeg) | Elena Injection Machine | Injector | Clean, safe, predictable, precise; quick mould clamp; motorized upgrade | 5555 | http://192.168.1.14:3333/user/3bb4cfbf-318b-44d3-a9d3-35680e738421/pages/elena-injection-machine |
| ![Polymech Controller](https://ytoadlpbdguriiccjnip.supabase.co/storage/v1/object/public/pictures/3bb4cfbf-318b-44d3-a9d3-35680e738421/1771514429070-ngwu5t.jpg) | Polymech Controller | Controller (accessory) | Modern HMI; ModbusTCP/Serial/WebSocket/REST; safety features; VFD/PID support | 10 | http://192.168.1.14:3333/user/3bb4cfbf-318b-44d3-a9d3-35680e738421/pages/polymech-controller |
Want a side-by-side spec comparison of the two injector machines?
---

View File

@ -0,0 +1,216 @@
# Contacts
User-managed address book — vCard-compatible contacts, groups, import/export, and flexible `meta` jsonb.
## Architecture Overview
```
┌──────────────────────────────────────┐
│               Frontend               │
│  ContactsManager.tsx                 │
│  (MUI DataGrid, batch bar, dialogs)  │
│                                      │
│  client-contacts.ts                  │
│  (fetch wrappers, bearer token)      │
└──────────────┬───────────────────────┘
               │ /api/contacts/*
┌──────────────────────────────────────┐
│        Server ContactsProduct        │
│  products/contacts/index.ts          │
│  products/contacts/routes.ts         │
└──────────────┬───────────────────────┘
               │ Supabase
┌──────────────────────────────────────┐
│                Tables                │
│  contacts                            │
│  contact_groups                      │
│  contact_group_members               │
└──────────────────────────────────────┘
```
## Database
### `contacts`
| Column | Type | Notes |
|--------|------|-------|
| `id` | uuid | PK, auto-generated |
| `owner_id` | uuid | FK → `auth.users`, not null |
| `name` | text | Full display name |
| `first_name` | text | — |
| `last_name` | text | — |
| `emails` | jsonb | Array of `{ email, label?, primary? }` objects |
| `phone` | text | Primary phone |
| `organization` | text | Company / org name |
| `title` | text | Job title |
| `address` | jsonb | Array of `{ street, city, state, postal_code, country, label? }` |
| `source` | text | Origin of contact (`cscart`, `import`, `manual`, …) |
| `language` | text | Preferred language tag (`en`, `de`, …) |
| `status` | text | `active` / `unsubscribed` / `bounced` / `blocked` |
| `notes` | text | Free-form notes |
| `tags` | text[] | Searchable tags |
| `log` | jsonb | Audit / event log array `[{ at, event, data }]` |
| `meta` | jsonb | Arbitrary extra fields (vCard extensions, etc.) |
| `created_at` | timestamptz | — |
| `updated_at` | timestamptz | Auto-updated via trigger |
**Indexes:** `owner_id`, `status`, `source`, `language`, `tags` (GIN), `emails` (GIN)
### `contact_groups`
| Column | Type | Notes |
|--------|------|-------|
| `id` | uuid | PK |
| `owner_id` | uuid | FK → `auth.users` |
| `name` | text | not null |
| `description` | text | — |
| `meta` | jsonb | e.g. color, icon |
| `created_at` | timestamptz | — |
| `updated_at` | timestamptz | — |
### `contact_group_members`
| Column | Type | Notes |
|--------|------|-------|
| `group_id` | uuid | FK → `contact_groups` |
| `contact_id` | uuid | FK → `contacts` |
| `added_at` | timestamptz | — |
| PK | composite | (`group_id`, `contact_id`) |
### RLS
- **Owners**: full CRUD on their own rows (`owner_id = auth.uid()`)
- **Admins** (`user_roles.role = 'admin'`): full access to all rows
## Server API Endpoints
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `GET` | `/api/contacts` | Auth | List contacts. Query: `?group=<id>&q=<search>&status=<status>&limit=&offset=` |
| `POST` | `/api/contacts` | Auth | Create contact |
| `GET` | `/api/contacts/:id` | Auth | Get single contact |
| `PATCH` | `/api/contacts/:id` | Auth | Update contact (partial) |
| `DELETE` | `/api/contacts/:id` | Auth | Delete contact |
| `POST` | `/api/contacts/import` | Auth | Bulk import JSON array or vCard text (`?format=json\|vcard`) |
| `GET` | `/api/contacts/export` | Auth | Export all contacts. Query: `?format=json\|vcard&group=<id>` |
| `GET` | `/api/contact-groups` | Auth | List groups |
| `POST` | `/api/contact-groups` | Auth | Create group |
| `PATCH` | `/api/contact-groups/:id` | Auth | Update group |
| `DELETE` | `/api/contact-groups/:id` | Auth | Delete group |
| `GET` | `/api/contact-groups/members` | Auth | List all group memberships for the user's contacts `→ { contact_id, group_id }[]` |
| `POST` | `/api/contact-groups/:id/members` | Auth | Add contacts `{ contact_ids: string[] }` `→ { added: number }` |
| `DELETE` | `/api/contact-groups/:id/members/:contactId` | Auth | Remove contact from group |
> **Route priority:** static sub-paths (`/import`, `/export`, `/members`) are registered before parameterised `:id` routes to avoid conflicts.
## Import / Export Format
### JSON (default)
```json
[
{
"email": "jane@example.com",
"name": "Jane Doe",
"first_name": "Jane",
"last_name": "Doe",
"phone": "+1 555 0100",
"organization": "Acme",
"title": "Engineer",
"address": { "city": "Berlin", "country": "DE" },
"tags": ["customer", "newsletter"],
"meta": { "source": "cscart" }
}
]
```
### vCard (format=vcard)
Standard vCard 3.0 — one `BEGIN:VCARD … END:VCARD` block per contact.
Fields mapped: `FN`, `N`, `EMAIL`, `TEL`, `ORG`, `TITLE`, `ADR`, `NOTE`, `CATEGORIES`.
Extended fields stored in `meta` as `X-PM-*` (`X-PM-LANGUAGE`, `X-PM-SOURCE`, `X-PM-STATUS`).
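For illustration only (this is not the server's actual exporter), a contact-to-vCard mapping along these lines could look like the following sketch; the `MinimalContact` shape and `toVCard` helper are hypothetical:

```typescript
// Hypothetical sketch of the vCard 3.0 mapping described above.
interface MinimalContact {
  name?: string;
  first_name?: string;
  last_name?: string;
  emails?: { email: string }[];
  phone?: string;
  organization?: string;
  meta?: Record<string, string>;
}

function toVCard(c: MinimalContact): string {
  const lines = ['BEGIN:VCARD', 'VERSION:3.0'];
  if (c.name) lines.push(`FN:${c.name}`);
  lines.push(`N:${c.last_name ?? ''};${c.first_name ?? ''};;;`);
  for (const e of c.emails ?? []) lines.push(`EMAIL:${e.email}`);
  if (c.phone) lines.push(`TEL:${c.phone}`);
  if (c.organization) lines.push(`ORG:${c.organization}`);
  // Extended fields travel as X-PM-* properties
  if (c.meta?.language) lines.push(`X-PM-LANGUAGE:${c.meta.language}`);
  lines.push('END:VCARD');
  return lines.join('\r\n'); // vCard requires CRLF line endings
}

const card = toVCard({
  name: 'Jane Doe',
  first_name: 'Jane',
  last_name: 'Doe',
  emails: [{ email: 'jane@example.com' }],
  organization: 'Acme',
});
```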
## Frontend Client
`src/modules/contacts/client-contacts.ts` — all functions inject the Supabase bearer token automatically via `authHeaders()`. Requests are routed through a shared `apiFetch` helper that resolves `VITE_SERVER_IMAGE_API_URL`.
```ts
// Contacts CRUD
fetchContacts(options?) → Contact[] // options: { group?, q?, status?, limit?, offset? }
getContact(id) → Contact
createContact(data) → Contact
updateContact(id, data) → Contact
deleteContact(id) → void
// Import / Export
importContacts(body, format?) → { imported: number; skipped: number }
exportContacts(options?) → string | Contact[] // options: { format?, group? }
// Groups CRUD
fetchContactGroups() → ContactGroup[]
createContactGroup(data) → ContactGroup
updateContactGroup(id, data) → ContactGroup
deleteContactGroup(id) → void
fetchGroupMembers() → { contact_id: string; group_id: string }[]
addGroupMembers(groupId, contactIds) → { added: number }
removeGroupMember(groupId, contactId) → void
```
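For illustration, the list options map onto the `/api/contacts` query string roughly as this hypothetical helper shows (the real request building lives inside `client-contacts.ts` and is an assumption here):

```typescript
// Hypothetical sketch: translate fetchContacts-style options into a URL.
interface ListOptions {
  group?: string;
  q?: string;
  status?: string;
  limit?: number;
  offset?: number;
}

function contactsQuery(options: ListOptions = {}): string {
  const params = new URLSearchParams();
  for (const [key, value] of Object.entries(options)) {
    if (value !== undefined) params.set(key, String(value));
  }
  const qs = params.toString();
  return qs ? `/api/contacts?${qs}` : '/api/contacts';
}
```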
### Key Types
```ts
interface ContactEmail { email: string; label?: string; primary?: boolean }
interface ContactAddress { street?; city?; state?; postal_code?; country?; label? }
interface Contact { id; owner_id; name?; first_name?; last_name?; emails: ContactEmail[];
phone?; address: ContactAddress[]; source?; language?;
status?: 'active'|'unsubscribed'|'bounced'|'blocked';
organization?; title?; notes?; tags?; log?; meta?;
created_at?; updated_at? }
interface ContactGroup { id; owner_id; name; description?; meta?; created_at?; updated_at? }
```
## Frontend UI — `ContactsManager`
Full-featured management interface built with **MUI DataGrid** inside a shadcn/ui shell.
### Features
| Feature | Detail |
|---------|--------|
| **DataGrid** | Sortable, filterable columns (name, email, status, groups, tags, actions). Checkbox selection. |
| **URL state sync** | Filter, sort, column visibility and pagination models are persisted in URL search params via `gridUtils`. |
| **Toolbar filters** | Search (`q`), group dropdown, status dropdown — all reflected in URL and sent to the server. |
| **Contact dialog** | Create / edit form with email chips, tag chips, group toggles, and status select. |
| **Batch bar** | When rows are selected: set group, remove from all groups, set status, or delete. Uses `addGroupMembers`, `removeGroupMember`, `updateContact`, `deleteContact`. |
| **Import** | File picker accepts `.json` / `.vcf`, auto-detects format. |
| **Export** | Dropdown for JSON or vCard, respects active group filter. Downloads as file. |
| **Group management** | Dialog to create / delete groups. Inline in the toolbar. |
### URL Parameters
| Param | Source |
|-------|--------|
| `q` | Search input |
| `group` | Group filter dropdown |
| `status` | Status filter dropdown |
| `filter_*` | DataGrid column filters (via `gridUtils`) |
| `sort` | DataGrid sort model |
| `hidden` | DataGrid column visibility |
| `page` / `pageSize` | DataGrid pagination (defaults: 0 / 50) |
## Environment Variables
Inherits same Supabase env as the rest of the server — no additional variables required.
## Source Files
| File | Description |
|------|-------------|
| [contacts.md](contacts.md) | This document |
| [migration](../supabase/migrations/20260306120000_create_contacts.sql) | DB schema, RLS, indexes |
| [routes.ts](../server/src/products/contacts/routes.ts) | Zod-OpenAPI route definitions |
| [index.ts](../server/src/products/contacts/index.ts) | ContactsProduct handlers |
| [client-contacts.ts](../src/modules/contacts/client-contacts.ts) | Frontend fetch wrappers |
| [ContactsManager.tsx](../src/components/ContactsManager.tsx) | Main UI component (DataGrid, batch ops, dialogs) |

View File

@ -1,62 +0,0 @@
# Creating Brand/Context Aware Articles
The AI Page Generator allows you to create highly consistent, on-brand content by combining **visual context** (Reference Images) with **structural context** (Context Templates). This workflow ensures that generated articles not only look like your brand but also follow your specific formatting and content standards.
## The Workflow
### 1. Visual Context (Reference Images)
Use **Reference Images** to establish the visual identity of your article.
* **What to use**: Upload brand assets, logo variations, previous diagram styles, or product photos.
* **How it works**: The AI "sees" these images and uses them to:
* **Style Match**: Generate new images that match the color palette and artistic style of your references.
* **Contextual Description**: Accurately describe visual details in the text (e.g., "As shown in the diagram...").
* **Brand Alignment**: Ensure generated visuals align with your brand's aesthetic.
### 2. Structural Context (Context Templates)
Use a **Context Template** in your prompt to define the exact structure and tone of the article. This serves as a "skeleton" for the AI to fill in.
* **What is a Context Template?**: A markdown structure that defines headers, required sections, image placement, and key points, without the final text.
* **How to use**: Paste a structured template into the prompt area.
#### Example Context Template
```markdown
# [Article Title]
## Overview
[Brief summary of the topic]
## Core Concepts
* Concept A: [Description]
* Concept B: [Description]
## Visual Breakdown
[Instruction: Generate an exploded view diagram here similar to the reference image]
* **Part 1**: Details...
* **Part 2**: Details...
## Technical Specifications
| Spec | Value |
|------|-------|
| [Key Spec] | [Value] |
## Conclusion
[Summarize benefits]
```
## Step-by-Step Guide
1. **Open Page Generator**: Click "Create Page" -> "Generate with AI".
2. **Add Reference Images**: Click the "Add" button and select your brand assets or style references.
3. **Input Context Template**: Paste your structured markdown template into the prompt box.
4. **Refine Prompt**: Add specific instructions above or below the template (e.g., "Fill out this template for a new shelving unit product using the attached technical drawings as reference").
5. **Generate**: The AI will combine your **Visual Context** (images) and **Structural Context** (template) to produce a production-ready article that feels authentic to your brand.
## Best Practices
* **Consistency**: Keep a library of standard templates for different content types (e.g., "Product Launch", "Technical Guide", "Case Study").
* **Quality References**: High-resolution, clear reference images yield better results.
* **Explicit Instructions**: Tell the AI *how* to use the references (e.g., "Use the color scheme from Image 1 for all generated diagrams").

View File

@ -0,0 +1,35 @@
[StreamInvalidator] post:51c11e3b-f063-4889-9bf7-cc53dfb22728:create
publishHandlers.ts:253 Connecting to 'data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAABAAAAAQACAIAAADwf7zUAAAYwGNhQlgAABjAanVtYgAAAB5qdW1kYzJwYQARABCAAACqADibcQNjMnBhAAAAGJpqdW1iAAAAR2p1bWRjMm1hABEAEIAAAKoAOJtxA3VybjpjMnBhOjIzMTBhYTYyLTUwZWYtOGYwMS04ZmQyLTQzY2M2MzYzZTNhMgAAABMDanVtYgAAAChqdW1kYzJjcwARABCAAACqADibcQNjMnBhLnNpZ25hdHVyZQAAABLTY2JvctKEWQYrogEmGCGCWQM/MIIDOzCCAsCgAwIBAgIUAJ6vFWKBqUkCFltI/1ipbSSYHs4wCgYIKoZIzj0EAwMwUTELMAkGA1UEBhMCVVMxEzARBgNVBAoMCkdvb2dsZSBMTEMxLTArBgNVBAMMJEdvb2dsZSBDMlBBIE1lZGlhIFNlcnZpY2VzIDFQIElDQSBHMzAeFw0yNjAyM...je5wH8nu0Yi0QwnmWIa3LOpWATXOx+jGdNe87Krvks/TO3vfAzg2YRRgpTHdlcHldFoNPRFkXalOM0Pu96rsvOdHc/xcoHzhnk1hD3hgkM/g5+7MhD18P1jJ7l0CiJ7oGBufzL2ESNZ1T/OzcxlQ0XIDxZIZ9xMsZ+lz2hxW9+eA7D6F2OKfJD2fvdZjGIuBHybCN7PD9j5kdZD1DRYRMDII3VudeLNt/vbfX3/CmGSYF5tl4bXY//uaQBjWH2/jvg7rz1VjKsDacWv7/ZOxREJbbn7qXP19Xf1iv1ivYQ2teLTSh9+K+762btI3Owc8c/826vRZf2WQSL2L0X2KhsGIlwDQrpus33b8VEAltvQLXJBsQM1P1ZNgIJUSZFzgKZI3TJmld92m0/Xl00cflsxmU4QsbyA83mA+C7ON59/fDp7X0C+N+g/TeK/+c///nPP+vv//3P79f//M///PvN//7v//77zf8BwMrEn3UCvAUAAAAASUVORK5CYII=' violates the following Content Security Policy directive: "connect-src 'self' https://*.supabase.co wss://*.supabase.co https://assets.polymech.info https://*.polymech.info https://service.polymech.info wss://service.polymech.info https://cdn.jsdelivr.net https://api.openai.com https://www.googletagmanager.com https://*.google-analytics.com https://www.google-analytics.com https://*.googleapis.com https://*.gstatic.com". The action has been blocked.
B @ publishHandlers.ts:253
await in B
Aa @ ImageWizard.tsx:1175
(anonymous) @ index.mjs:4
Oe @ react-dom.production.min.js:54
Ue @ react-dom.production.min.js:54
(anonymous) @ react-dom.production.min.js:55
zr @ react-dom.production.min.js:105
Fr @ react-dom.production.min.js:106
(anonymous) @ react-dom.production.min.js:117
dc @ react-dom.production.min.js:273
Pe @ react-dom.production.min.js:52
$r @ react-dom.production.min.js:109
Kt @ react-dom.production.min.js:74
Gt @ react-dom.production.min.js:73
publishHandlers.ts:253 Fetch API cannot load data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAABAAAAAQACAIAAADwf7zUAAAYwGNhQlgAABjAanVtYgAAAB5qdW1kYzJwYQARABCAAACqADibcQNjMnBhAAAAGJpqdW1iAAAAR2p1bWRjMm1hABEAEIAAAKoAOJtxA3VybjpjMnBhOjIzMTBhYTYyLTUwZWYtOGYwMS04ZmQyLTQzY2M2MzYzZTNhMgAAABMDanVtYgAAAChqdW1kYzJjcwARABCAAACqADibcQNjMnBhLnNpZ25hdHVyZQAAABLTY2JvctKEWQYrogEmGCGCWQM/MIIDOzCCAsCgAwIBAgIUAJ6vFWKBqUkCFltI/1ipbSSYHs4wCgYIKoZIzj0EAwMwUTELMAkGA1UEBhMCVVMxEzARBgNVBAoMCkdvb2dsZSBMTEMxLTArBgNVBAMMJEdvb2dsZSBDMlBBIE1lZGlhIFNlcnZpY2VzIDFQIElDQSBHMzAeFw0yNjAyM...je5wH8nu0Yi0QwnmWIa3LOpWATXOx+jGdNe87Krvks/TO3vfAzg2YRRgpTHdlcHldFoNPRFkXalOM0Pu96rsvOdHc/xcoHzhnk1hD3hgkM/g5+7MhD18P1jJ7l0CiJ7oGBufzL2ESNZ1T/OzcxlQ0XIDxZIZ9xMsZ+lz2hxW9+eA7D6F2OKfJD2fvdZjGIuBHybCN7PD9j5kdZD1DRYRMDII3VudeLNt/vbfX3/CmGSYF5tl4bXY//uaQBjWH2/jvg7rz1VjKsDacWv7/ZOxREJbbn7qXP19Xf1iv1ivYQ2teLTSh9+K+762btI3Owc8c/826vRZf2WQSL2L0X2KhsGIlwDQrpus33b8VEAltvQLXJBsQM1P1ZNgIJUSZFzgKZI3TJmld92m0/Xl00cflsxmU4QsbyA83mA+C7ON59/fDp7X0C+N+g/TeK/+c///nPP+vv//3P79f//M///PvN//7v//77zf8BwMrEn3UCvAUAAAAASUVORK5CYII=. Refused to connect because it violates the document's Content Security Policy.
B @ publishHandlers.ts:253
await in B
Aa @ ImageWizard.tsx:1175
(anonymous) @ index.mjs:4
Oe @ react-dom.production.min.js:54
Ue @ react-dom.production.min.js:54
(anonymous) @ react-dom.production.min.js:55
zr @ react-dom.production.min.js:105
Fr @ react-dom.production.min.js:106
(anonymous) @ react-dom.production.min.js:117
dc @ react-dom.production.min.js:273
Pe @ react-dom.production.min.js:52
$r @ react-dom.production.min.js:109
Kt @ react-dom.production.min.js:74
Gt @ react-dom.production.min.js:73
publishHandlers.ts:276 Error quick publishing image: TypeError: Failed to fetch. Refused to connect because it violates the document's Content Security Policy.
at B (publishHandlers.ts:253:30)

View File

@ -1,48 +0,0 @@
# Database & Architecture Todos
## Server-Side & Schema Tasks
### Schema Changes (Postgres/Supabase)
- [ ] **Split `profiles` Table**:
- [ ] Create `user_secrets` table (Columns: `user_id` (PK, FK), `openai_api_key`, `bria_api_key`, `replicate_api_key`, `settings`, `google_api_key`).
- [ ] Migrate data from `profiles` to `user_secrets` (Ref: [`src/integrations/supabase/types.ts`](../src/integrations/supabase/types.ts)).
- [ ] Drop secret columns from `profiles`.
- [ ] Rename `profiles` to `profiles_public` (optional, or just restrict access).
- [ ] **Create `page_collaborators` Table**:
- [ ] Columns: `page_id` (FK), `user_id` (FK), `role` (enum: 'viewer', 'editor', 'owner'), `created_at`.
- [ ] Add unique constraint on `(page_id, user_id)`.
- [ ] **RLS Policies Update**:
- [ ] `user_secrets`: Enable RLS. Policy: `auth.uid() = user_id`.
- [ ] `profiles`: Policy: Public read. Update strictly limited to owner.
- [ ] `pages`: Policy:
- Read: `is_public` OR `auth.uid() = owner` OR `auth.uid() IN (select user_id from page_collaborators)`.
- Update: `auth.uid() = owner` OR `auth.uid() IN (select user_id from page_collaborators where role IN ('editor', 'owner'))`.
### Server Logic (Node/Hono)
- [ ] **Implement `ServingProduct` Endpoints** (Ref: [`server/src/products/serving/index.ts`](../server/src/products/serving/index.ts)):
- [ ] `GET /api/feed`: Returns hydrated feed (Posts + Authors + Cover Images).
- [ ] `GET /api/profile/:id`: Returns public profile + recent posts.
- [ ] `GET /api/me/secrets`: (Secure) Returns user secrets for settings page.
- [ ] **Server-Side Injection**:
- [ ] Update `handleServeApp` in [`ServingProduct`](../server/src/products/serving/index.ts) to pre-fetch User & Feed.
- [ ] Inject into `index.html` as `window.__INITIAL_STATE__`.
---
## Client-Side Tasks
### `src/lib/db.ts` Refactor
- [ ] **Deprecate Direct Selects**: Identify all `supabase.from('posts').select(...)` calls in [`src/lib/db.ts`](../src/lib/db.ts).
- [ ] **Implement Proxy Clients**:
- [ ] Create `fetchFeedFromProxy()` calling `/api/feed` in [`src/lib/db.ts`](../src/lib/db.ts).
- [ ] Create `fetchProfileFromProxy(id)` calling `/api/profile/:id` in [`src/lib/db.ts`](../src/lib/db.ts).
- [ ] **Hydration Logic**:
- [ ] Check `window.__INITIAL_STATE__` on app boot to populate React Query cache before fetching.
### Component Updates
- [ ] **Post Page**:
- [ ] Use `fetchPostFromProxy` (or standard `db.fetchPostById` redirected to proxy) in [`src/pages/Post.tsx`](../src/pages/Post.tsx).
- [ ] Handle 404s gracefully (See Security.md for details).
- [ ] **PageManager**:
- [ ] Update [`src/components/PageManager.tsx`](../src/components/PageManager.tsx) to fetch "My Pages" AND "Shared Pages".

View File

@ -1,169 +0,0 @@
# Short Term DB Caching Proposal
## Objective
Reduce database load and improve response times for high-traffic, read-heavy routes by implementing a short-term caching layer using a **Generically Safe Decorator Pattern**.
## Proposed Solution
Implement a **Generic CachedHandler Utility** (`server/src/commons/decorators.ts`) that:
1. **Auto-Generates Keys**: Defaults to URL + Query.
2. **Auth Protection**: Skips caching for Authenticated requests by default.
3. **Size Protection**: Skips caching for responses larger than a threshold (e.g. 1MB).
4. **Memory Protection**: Enforces LRU/Limits in `MemoryCache`.
### 1. Functional Decorator
```typescript
import { Context } from 'hono';
import { getCache } from '../commons/cache/index.js';

type KeyGenerator = (c: Context) => string;

const defaultKeyInfo = (c: Context) => {
  const url = new URL(c.req.url);
  // Deterministic sort: key=a&key=b and key=b&key=a yield the same key
  url.searchParams.sort();
  return `auto-cache:${c.req.method}:${url.pathname}${url.search}`;
};

export const CachedHandler = (
  handler: (c: Context) => Promise<Response>,
  options: {
    ttl: number,
    keyGenerator?: KeyGenerator,
    skipAuth?: boolean,    // Default: true
    maxSizeBytes?: number  // Default: 1MB
  }
) => async (c: Context) => {
  // Defaults
  const ttl = options.ttl;
  const skipAuth = options.skipAuth !== false;
  const maxSizeBytes = options.maxSizeBytes || 1024 * 1024; // 1MB
  const keyGen = options.keyGenerator || defaultKeyInfo;

  // 1. Auth bypass
  if (skipAuth && c.req.header('Authorization')) {
    return handler(c);
  }

  const cache = getCache();
  const key = keyGen(c);
  const bypass = c.req.query('cache') === 'false';

  // 2. Hit
  if (!bypass) {
    const cached = await cache.get(key);
    if (cached) {
      c.header('X-Cache', 'HIT');
      if (cached.contentType) c.header('Content-Type', cached.contentType);
      return c.body(cached.data);
    }
  }

  // 3. Miss
  const response = await handler(c);

  // 4. Save
  if (response instanceof Response && response.ok) {
    const cloned = response.clone();
    try {
      const contentType = response.headers.get('Content-Type') || 'application/json';
      let data: any;

      // Check Content-Length if available
      const contentLength = cloned.headers.get('Content-Length');
      if (contentLength && parseInt(contentLength, 10) > maxSizeBytes) {
        return response; // Too big, skip cache
      }

      if (contentType.includes('application/json')) {
        const jsonObj = await cloned.json();
        data = JSON.stringify(jsonObj);
      } else {
        data = await cloned.text();
      }

      // Double-check actual size after reading
      if (data.length > maxSizeBytes) {
        return response; // Too big, skip cache
      }

      await cache.set(key, { data, contentType }, ttl);
      c.header('X-Cache', bypass ? 'BYPASS' : 'MISS');
    } catch (e) {
      console.error('Cache interception failed', e);
    }
  }
  return response;
};
```
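The deterministic-key idea can be checked in isolation. This standalone sketch mirrors `defaultKeyInfo` without the Hono context:

```typescript
// Sorting query params makes ?a=1&b=2 and ?b=2&a=1 hash to the same key.
function cacheKey(method: string, rawUrl: string): string {
  const url = new URL(rawUrl);
  url.searchParams.sort();
  return `auto-cache:${method}:${url.pathname}${url.search}`;
}

const k1 = cacheKey('GET', 'http://localhost/api/data?b=2&a=1');
const k2 = cacheKey('GET', 'http://localhost/api/data?a=1&b=2');
// k1 === k2 → 'auto-cache:GET:/api/data?a=1&b=2'
```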
### 2. Usage Implementation
In `server/src/products/serving/index.ts`:
```typescript
// 5 minute cache, auto-key, skip if auth, max 500kb
this.routes.push({
  definition: getApiUserPageRoute,
  handler: CachedHandler(handleGetApiUserPage, { ttl: 300, maxSizeBytes: 500 * 1024 })
});
```
### 3. MemoryCache Protection (Limit)
Update `server/src/commons/cache/MemoryCache.ts`:
```typescript
// Add limit
const MAX_KEYS = 1000;

async set(key: string, value: any, ttlSeconds: number): Promise<void> {
  this.prune();
  if (this.cache.size >= MAX_KEYS) {
    // Evict the oldest entry (insertion order)
    const first = this.cache.keys().next().value;
    this.cache.delete(first);
  }
  // ... set logic
}
```
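The eviction rule relies on `Map` preserving insertion order. A minimal standalone sketch (note this is FIFO-style eviction, not true LRU, since reads do not refresh an entry's position):

```typescript
// Map keeps insertion order, so deleting the first key evicts the oldest entry.
const MAX_KEYS = 3;
const cache = new Map<string, string>();

function setWithLimit(key: string, value: string): void {
  if (!cache.has(key) && cache.size >= MAX_KEYS) {
    const oldest = cache.keys().next().value as string;
    cache.delete(oldest); // FIFO eviction
  }
  cache.set(key, value);
}

['a', 'b', 'c', 'd'].forEach((k) => setWithLimit(k, k.toUpperCase()));
// cache now holds b, c, d — 'a' was evicted
```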
### 4. Summary of Protections
| Protection | Mechanism | Benefit |
| :--- | :--- | :--- |
| **Data Leak** | `skipAuth: true` | Prevents private data being cached/served to public. |
| **Stale Data** | `ttl` | Ensures updates propagate eventually. |
| **OOM (Large Item)** | `maxSizeBytes` | Prevents caching huge responses (e.g. giant JSONs). |
| **OOM (Many Items)** | `MAX_KEYS` | Prevents unlimited growth of the cache map. |
| **Performance** | `X-Cache` | Visibility into hit rates. |
### 5. Sequence Diagram (Final)
```mermaid
sequenceDiagram
participant Client
participant Dec as CachedHandler
participant Cache as MemoryCache
participant H as Handler
Client->>Dec: GET /api/data
Dec->>Dec: Check Auth Header?
opt Authenticated
Dec->>H: Invoke Handler Directly
H-->>Client: Returns Private Data
end
Dec->>Cache: get(key)
alt Hit
Cache-->>Client: Returns Data (HIT)
else Miss
Dec->>H: Invoke Handler
H-->>Dec: Returns Response
Dec->>Dec: Check Size < 1MB?
alt Small Enough
Dec->>Cache: set(key, data)
Dec-->>Client: Returns (MISS)
else Too Big
Dec-->>Client: Returns (MISS - No Cache)
end
end
```

View File

@ -1,9 +0,0 @@
# [DEPRECATED] Database Consolidation Plan
> **Note**: This document has been split into more specific task lists. Please refer to:
> - [Database Todos & Schema](./database-todos.md)
> - [Security & Auth Plans](./security.md)
> - [Caching Strategy](./caching.md)
This file remains for historical context but may be out of date.

258
packages/ui/docs/dnd.md Normal file
View File

@ -0,0 +1,258 @@
# Drag-and-Drop Integration with dnd-kit
Proposal for adding drag-and-drop widget management to the page editor using `@dnd-kit/core` (6.3.1), `@dnd-kit/sortable` (10.0.0), and `@dnd-kit/utilities` (3.2.2) — all already installed.
---
## Current State
| Concern | Implementation | File |
|---------|---------------|------|
| Widget insertion | Click-to-add via `onToggleWidget` | `PageRibbonBar.tsx` |
| Widget reorder | `MoveWidgetCommand` (arrow D-pad) | `commands.ts`, `LayoutManager.ts` |
| Native file/URL drop | Window-level `drop` listener | `GlobalDragDrop.tsx` |
| Native drag state | `DragDropContext` (custom React context) | `contexts/DragDropContext.tsx` |
**Key detail:** The custom `DragDropContext` already has a `polymech/internal` discriminator — it skips drags with `types.includes('polymech/internal')`, so dnd-kit drags (which use `setData` with custom types) will **not** trigger the global file-drop overlay. No conflict.
---
## Proposed Architecture
```
┌─────────────────────────────────────────────────────────┐
│ DndContext (from @dnd-kit/core)                         │
│ └─ mounted in UserPageEdit.tsx (wraps canvas + ribbon)  │
│    ├─ DragOverlay: ghost preview of dragged widget      │
│    ├─ Draggable sources:                                │
│    │   ├─ RibbonItemSmall (drag from ribbon)            │
│    │   └─ WidgetItem / FlexWidgetItem (drag to reorder) │
│    └─ Droppable targets:                                │
│        ├─ Container cells (LayoutContainer grid slots)  │
│        └─ Flex cells (row:column in FlexContainer)      │
└─────────────────────────────────────────────────────────┘
```
### Data Model
All draggables attach structured data via `useDraggable({ data })`:
```ts
// Dragging a new widget from the ribbon
{ type: 'new-widget', widgetId: 'photo-card' }
// Dragging an existing widget within/across containers
{ type: 'existing-widget', widgetInstanceId: 'widget-123', sourceContainerId: 'container-456' }
```
All droppables identify themselves via `useDroppable({ data })`:
```ts
// Regular container cell
{ type: 'container-cell', containerId: 'container-456', index: 3 }
// Flex container cell
{ type: 'flex-cell', containerId: 'container-789', rowId: 'row-abc', column: 1 }
```
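Modelling these payloads as discriminated unions keeps the drop handler exhaustive and type-safe. A minimal sketch (the `describeDrop` helper is illustrative only, not part of the plan):

```typescript
// Discriminated unions for the drag/drop payloads above.
type DragData =
  | { type: 'new-widget'; widgetId: string }
  | { type: 'existing-widget'; widgetInstanceId: string; sourceContainerId: string };

type DropData =
  | { type: 'container-cell'; containerId: string; index: number }
  | { type: 'flex-cell'; containerId: string; rowId: string; column: number };

// TypeScript narrows src inside each branch, so invalid field access
// (e.g. src.widgetId on an existing-widget) is a compile error.
function describeDrop(src: DragData, dst: DropData): string {
  if (src.type === 'new-widget') {
    return `insert ${src.widgetId} into ${dst.containerId}`;
  }
  return `move ${src.widgetInstanceId} from ${src.sourceContainerId} to ${dst.containerId}`;
}
```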
---
## Implementation Plan
### Phase 1: DndContext Setup
#### `UserPageEdit.tsx` — Wrap editor in `DndContext`
```tsx
import { DndContext, DragOverlay, closestCenter, pointerWithin } from '@dnd-kit/core';

// Inside the component:
const [activeDrag, setActiveDrag] = useState<DragData | null>(null);

<DndContext
  collisionDetection={pointerWithin}
  onDragStart={({ active }) => setActiveDrag(active.data.current)}
  onDragEnd={handleDragEnd}
  onDragCancel={() => setActiveDrag(null)}
>
  <PageRibbonBar ... />
  <GenericCanvas ... />
  <DragOverlay>
    {activeDrag && <DragPreview data={activeDrag} />}
  </DragOverlay>
</DndContext>
```
`handleDragEnd` logic:
```ts
function handleDragEnd({ active, over }) {
  setActiveDrag(null);
  if (!over) return;

  const src = active.data.current;
  const dst = over.data.current;

  if (src.type === 'new-widget') {
    // Insert new widget at drop target
    if (dst.type === 'container-cell') {
      addWidgetToPage(pageId, dst.containerId, src.widgetId, dst.index);
    } else if (dst.type === 'flex-cell') {
      addWidgetToPage(pageId, dst.containerId, src.widgetId, dst.column, dst.rowId);
    }
  } else if (src.type === 'existing-widget') {
    // Move widget via command (for undo/redo)
    executeCommand(new MoveWidgetToTargetCommand(pageId, src.widgetInstanceId, dst));
  }
}
```
---
### Phase 2: Draggable Ribbon Items
#### `PageRibbonBar.tsx` — Make widget buttons draggable
Wrap `RibbonItemSmall` with `useDraggable`. Click still works, because dnd-kit only initiates a drag after a movement threshold set via `activationConstraint`.
```tsx
const DraggableRibbonItem = ({ widgetId, ...props }) => {
  const { attributes, listeners, setNodeRef, isDragging } = useDraggable({
    id: `ribbon-${widgetId}`,
    data: { type: 'new-widget', widgetId },
  });
  return (
    <div ref={setNodeRef} {...listeners} {...attributes}
         style={{ opacity: isDragging ? 0.5 : 1 }}>
      <RibbonItemSmall {...props} />
    </div>
  );
};
```
> [!TIP]
> Use `activationConstraint: { distance: 8 }` on the `PointerSensor` to prevent accidental drags from clicks.
---
### Phase 3: Droppable Container Cells
#### `LayoutContainer.tsx` — Grid cells as drop targets
Each column slot becomes a `useDroppable` target:
```tsx
const DroppableCell = ({ containerId, index, children }) => {
  const { setNodeRef, isOver } = useDroppable({
    id: `cell-${containerId}-${index}`,
    data: { type: 'container-cell', containerId, index },
  });
  return (
    <div ref={setNodeRef} className={cn(isOver && 'ring-2 ring-blue-400 bg-blue-50/20')}>
      {children}
    </div>
  );
};
```
#### `FlexibleContainerRenderer.tsx` — Flex cells as drop targets
Each `(rowId, column)` pair becomes a drop target:
```tsx
const DroppableFlexCell = ({ containerId, rowId, column, children }) => {
  const { setNodeRef, isOver } = useDroppable({
    id: `flex-${containerId}-${rowId}-${column}`,
    data: { type: 'flex-cell', containerId, rowId, column },
  });
  return (
    <div ref={setNodeRef} className={cn(isOver && 'ring-2 ring-purple-400 bg-purple-50/20')}>
      {children}
    </div>
  );
};
```
---
### Phase 4: Draggable Existing Widgets (Reordering)
#### `LayoutContainer.tsx` and `FlexibleContainerRenderer.tsx`
Use `useSortable` from `@dnd-kit/sortable` for intra-container reordering, with `useDraggable` for cross-container moves:
```tsx
const { attributes, listeners, setNodeRef, transform, transition, isDragging } = useSortable({
  id: widget.id,
  data: {
    type: 'existing-widget',
    widgetInstanceId: widget.id,
    sourceContainerId: container.id,
  },
});
```
---
### Phase 5: New Command for Targeted Moves
#### `commands.ts` — `MoveWidgetToTargetCommand`
A new command that moves a widget to a specific target location (container + position), supporting undo by capturing the original location:
```ts
export class MoveWidgetToTargetCommand implements Command {
// Captures source (containerId, index/rowId/column) and target
// execute(): removes from source, inserts at target
// undo(): removes from target, re-inserts at source
}
```
---
## Coexistence with GlobalDragDrop
The custom `DragDropContext` checks `e.dataTransfer?.types.includes('polymech/internal')` and **skips** internal drags. dnd-kit uses pointer events, not native HTML5 drag events, so:
1. **No event collision** — dnd-kit pointer captures don't fire `dragenter`/`dragleave` on `window`
2. **Overlay suppression** — `GlobalDragDrop` only renders when `isDragging && !isLocalZoneActive`. During dnd-kit drags, `isDragging` stays `false` in the custom context
3. **Native file drops still work** — External file/URL drops bypass dnd-kit (which only tracks its registered draggables)
> [!IMPORTANT]
> No changes needed to `GlobalDragDrop.tsx` or `DragDropContext.tsx`. The two systems operate on completely separate event channels.
---
## Files to Modify
| File | Change |
|------|--------|
| `UserPageEdit.tsx` | Wrap in `DndContext`, add `DragOverlay`, `handleDragEnd` |
| `PageRibbonBar.tsx` | Wrap widget buttons with `useDraggable` |
| `LayoutContainer.tsx` | Add `useDroppable` to grid cells, `useSortable` to widgets |
| `FlexibleContainerRenderer.tsx` | Add `useDroppable` to flex cells, `useSortable` to widgets |
| `commands.ts` | Add `MoveWidgetToTargetCommand` |
| `LayoutManager.ts` | Add `moveWidgetToTarget()` static method |
### New Files
| File | Purpose |
|------|---------|
| `DragPreview.tsx` | `DragOverlay` content — shows widget name + icon while dragging |
---
## Phasing
| Phase | Scope | Effort |
|-------|-------|--------|
| 1 | DndContext setup + handler skeleton | Small |
| 2 | Draggable ribbon items (drag-to-add) | Small |
| 3 | Droppable container/flex cells | Medium |
| 4 | Draggable existing widgets (reorder + cross-container) | Medium |
| 5 | New command for cross-container targeted moves | Small |
Phases 1–3 deliver the most impactful feature (drag from ribbon to canvas). Phases 4–5 add reordering, which is lower priority since the D-pad already covers this.

packages/ui/docs/emails.md
# Email System
Newsletter and marketing email pipeline — from page rendering to delivery and tracking.
## Architecture Overview
```
┌──────────────┐   fetch HTML       ┌──────────────────┐
│ CLI          │ ──────────────────>│ Server           │
│ email-send   │   /email-preview   │ (pages-email.ts) │
│              │<───────────────────│                  │
│              │   rendered HTML    └──────────────────┘
│ sends via    │
│ @polymech/   │── POST /track ────>┌───────────────────┐
│ mail         │<── {id, token} ────│ Server API        │
│              │── PATCH /track/:id │ (EmailProduct)    │
│              │                    │ → marketing_emails│
└──────────────┘                    └───────────────────┘
```
**Flow:**
1. CLI authenticates with Supabase (get bearer token)
2. Optionally filters unsubscribed (`--check-unsubs`) and already-sent (`--check-sent`) recipients
3. Fetches rendered email HTML from server (`/user/:id/pages/:slug/email-preview`)
4. Applies `--var-*` template variables (`keep=true` to preserve `${unsubscribe}`)
5. Injects tracking parameters into all links/images (via cheerio)
6. For each recipient:
- `POST /api/email/track` → creates pending row, returns `{id, unsubscribe_token}`
- Substitutes `${unsubscribe}` with per-recipient unsubscribe URL
- Sends via `@polymech/mail` (nodemailer, `newsletter` transport)
- `PATCH /api/email/track/:id` → updates status to `sent`/`failed`
## CLI Usage
```bash
pm-cli-cms email-send --page-slug <slug> [options]
```
### Required
| Arg | Description |
|-----|-------------|
| `--page-slug` | Page slug to render as email |
### Optional
| Arg | Default | Description |
|-----|---------|-------------|
| `--user` | `cgo` | User identifier (username or UUID) |
| `--subject` | `Newsletter Polymech - DD:HH:mm` | Email subject line |
| `--recipient` | `cgoflyn@gmail.com` | Single email or path to `.json` contacts file |
| `--targethost` | `https://service.polymech.info` | Server base URL |
| `--lang` | — | Language tag for translated content |
| `--dry` | `false` | Log actions without sending |
| `--tracking` | `mail-DD-HH-mm` | Tracking param appended to all href/src URLs |
| `--campaign` | — | Campaign identifier string |
| `--check-unsubs` | `false` | Query DB and skip unsubscribed recipients |
| `--check-sent` | `false` | Skip recipients already sent to (matches email + campaign + subject) |
| `--var-<key>` | — | Template variable: `--var-name=hobbit``${name}` becomes `hobbit` |
### Examples
```bash
# Basic send
pm-cli-cms email-send --page-slug newsletter-march
# Send to contacts file with campaign tracking
pm-cli-cms email-send --page-slug newsletter-march \
--recipient ./ref/cscart-contacts.json \
--campaign spring-2026 \
--var-emailName "Dear Customer"
# Dry run with language override
pm-cli-cms email-send --page-slug newsletter-march \
--dry --lang de
# Full safety checks (skip unsubs + already sent)
pm-cli-cms email-send --page-slug newsletter-march \
--recipient ./ref/cscart-contacts.json \
--campaign spring-2026 \
--check-unsubs --check-sent
# Send to a single recipient
pm-cli-cms email-send --page-slug email-2026 \
  --targethost http://localhost:3333/ \
  --var-emailName "Hobbit Ex" --subject test2 \
  --campaign=test2222 --lang=de
```
## Contacts File Format
JSON array with `email` and optional `name` fields (e.g. CS-Cart export):
```json
[
{ "email": "user@example.com", "name": "John", "Language": "en" },
{ "email": "other@example.com", "name": "Jane", "Language": "de" }
]
```
Object format (legacy):
```json
{
"contact1": { "email": "user@example.com" },
"contact2": { "email": "other@example.com" }
}
```
Only entries with a non-empty `email` field are used. See [`cli-ts/ref/cscart-contacts.json`](../cli-ts/ref/cscart-contacts.json) for a real example.
## Template Variables
The server renders email HTML with `substitute()` from `@polymech/commons/variables`. Unresolved `${key}` patterns are preserved (`keep=true`) so the CLI can resolve them client-side.
Pass variables via `--var-<key>=<value>`:
```bash
--var-emailName "John Doe" --var-company "Acme Inc"
```
This replaces `${emailName}` and `${company}` in the rendered HTML.
**Reserved variable:** `${unsubscribe}` — auto-substituted per recipient with the unsubscribe URL. Do not pass this via `--var-*`.
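The `keep` semantics described above can be sketched as follows (an illustrative re-implementation; the real function is `substitute()` in `@polymech/commons/variables`):

```typescript
// Replaces ${key} occurrences from vars; with keep=true, unresolved keys are
// preserved verbatim so a later pass (the CLI) can resolve them per recipient.
export function substitute(
  template: string,
  vars: Record<string, string>,
  keep = false,
): string {
  return template.replace(/\$\{(\w+)\}/g, (match, key: string) =>
    key in vars ? vars[key] : keep ? match : '',
  );
}
```

For example, `substitute('Hi ${emailName}, ${unsubscribe}', { emailName: 'John Doe' }, true)` resolves `${emailName}` while leaving `${unsubscribe}` intact for the per-recipient pass.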
## Tracking
Every `<a href>` and `<img src>` in the email gets `?tracking=<value>` appended (via cheerio DOM manipulation). Defaults to `mail-DD-HH-mm` timestamp.
The tracking ID is also stored in `marketing_emails.tracking_id`.
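The CLI applies this with cheerio across the whole document; the per-URL logic amounts to roughly the following (a sketch using the WHATWG URL API, with the helper name being illustrative):

```typescript
// Appends ?tracking=<value> to an absolute URL, preserving existing params.
export function appendTracking(href: string, tracking: string): string {
  try {
    const url = new URL(href);
    url.searchParams.set('tracking', tracking);
    return url.toString();
  } catch {
    // Relative or malformed URLs are left untouched in this sketch.
    return href;
  }
}
```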
## Server API Endpoints
| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `GET` | `/api/email/unsubscribe/:token` | None | Marks recipient as unsubscribed |
| `POST` | `/api/email/track` | Admin | Creates pending tracking row, returns `{id, unsubscribe_token}` |
| `PATCH` | `/api/email/track/:id` | Admin | Updates row status (`sent`/`failed`/`bounced`) |
| `GET` | `/api/render/email/:id` | Auth | Renders email HTML for a post |
| `POST` | `/api/send/email/:id` | Auth | Generates and sends email for a post |
## Database: `marketing_emails`
| Column | Type | Notes |
|--------|------|-------|
| `id` | uuid | PK, auto-generated |
| `name` | text | Contact name |
| `email` | text | **not null** |
| `status` | text | `pending` / `sent` / `failed` / `bounced` |
| `sent_at` | timestamptz | When successfully sent |
| `page_slug` | text | **not null** |
| `subject` | text | Email subject |
| `tracking_id` | text | Tracking tag |
| `campaign` | text | Campaign identifier |
| `lang` | text | Language used |
| `error_message` | text | Failure reason |
| `retry_count` | int | Send attempts (default 0) |
| `last_retry_at` | timestamptz | Last retry timestamp |
| `sender_id` | uuid | FK → `auth.users` |
| `from_address` | text | Sender email |
| `unsubscribed` | boolean | Default `false` |
| `unsubscribed_at` | timestamptz | When unsubscribed |
| `unsubscribe_token` | uuid | Auto-generated, used in unsubscribe links |
| `meta` | jsonb | Flexible metadata (vars, targethost) |
**Indexes:** `email`, `status`, `page_slug`, `tracking_id`, `campaign`, `unsubscribe_token`
**Migration:** [`supabase/migrations/20260302163400_create_marketing_emails.sql`](../supabase/migrations/20260302163400_create_marketing_emails.sql)
## Unsubscribe
### Flow
1. CLI inserts a `pending` row via `POST /api/email/track` → gets `unsubscribe_token`
2. CLI substitutes `${unsubscribe}` in HTML with `targethost/api/email/unsubscribe/<token>`
3. Email is sent with the per-recipient unsubscribe URL
4. Recipient clicks → `GET /api/email/unsubscribe/:token` → sets `unsubscribed=true`, shows confirmation
### Template
The unsubscribe link lives in [`public/widgets/email-clean/social_links.html`](../public/widgets/email-clean/social_links.html):
```html
<a href="${unsubscribe}">Unsubscribe</a>
```
### RLS
- **Admins**: full access to all rows
- **Authenticated users**: can view rows matching their email
- **Anonymous**: can update `unsubscribed=true` only (via token)
## Environment Variables
| Variable | Required | Description |
|----------|----------|-------------|
| `SUPABASE_URL` | ✓ | Supabase project URL |
| `SUPABASE_ANON_KEY` | ✓ | Supabase anon/publishable key |
| `ADMIN_EMAIL` | ✓ | Admin email for auth |
| `ADMIN_PASSWORD` | ✓ | Admin password for auth |
| `TEST_EMAIL_FROM` | — | Sender address (default: `newsletter@osr-plastic.org`) |
## Source Files
| File | Description |
|------|-------------|
| [email-send.ts](../cli-ts/src/lib/email-send.ts) | Core library: auth, fetch, substitute, track, send |
| [send.ts](../cli-ts/src/commands/email/send.ts) | CLI wrapper (yargs args, var extraction) |
| [index.ts](../server/src/products/email/index.ts) | Server EmailProduct (render, send, track, unsubscribe) |
| [routes.ts](../server/src/products/email/routes.ts) | Route definitions (zod-openapi) |
| [pages-email.ts](../server/src/products/serving/pages/pages-email.ts) | Page → email HTML renderer |
| [social_links.html](../public/widgets/email-clean/social_links.html) | Email footer template (social icons + unsubscribe) |
| [nodemailer/index.ts](../../polymech-mono/packages/mail/src/lib/nodemailer/index.ts) | `@polymech/mail` transport wrapper |

# Feature: FileBrowser (VFS)
## Product Description (Copy/Paste Ready)
> **Polymech Virtual File System (VFS)**
> Experience native-grade file management directly within your browser. The Polymech VFS FileBrowser is a blazing-fast, dual-pane explorer designed for modern CMS workflows. Drag, drop, and navigate through your assets with intuitive keyboard shortcuts and zero-latency loading. Unlike basic media libraries, it features a built-in rendering engine capable of instantly previewing everything from standard images and PDFs to raw markdown, spreadsheets, and complex engineering 3D models (STL/STEP/OBJ)—all without requiring intermediate downloads.
## Comparison vs. Other Web Filebrowsers
* **Traditional CMS Media Libraries (WordPress, Drupal):** Standard platforms heavily favor flat grids or tagged media blobs, restricting hierarchical folder organization. The Polymech VFS acts like a true operating system file manager, enforcing strict tree hierarchies and directory paths, while still allowing grid-based thumbnail views.
* **Headless Asset Managers (Sanity, Strapi):** Often architected purely as metadata stores linking out to CDN URLs. The Polymech VFS operates on physical nested directories, giving developers full UNIX-like path predictability.
* **Out-of-the-Box Web Explorers (elFinder, FilePond):** Most third-party integrations require aging backend dependencies (PHP) or use heavy, unoptimized DOM techniques. The Polymech FileBrowser is built natively in modern React. It utilizes zero-repaint layout effects to lock keyboard focus seamlessly and ensures instant UI reaction speeds.
* **Specialized Previews (3D & Code):** Almost no native web filebrowsers offer immediate in-browser rendering for 3D engineering formats (`.stl`, `.stp`, `.obj`) or syntax-highlighted code files (`.tsx`, `.py`, etc.) without downloading them first. The Polymech VFS features a dynamic `FileViewerRegistry` that handles these specialized mesh and plaintext previews instantly.
---
The FileBrowser is a fully-featured, dual-pane virtual file system (VFS) client built into the Polymech platform. It provides native-like file exploration capabilities directly in the browser, featuring robust keyboard navigation, drag-and-resize panels, type-safe file discovery, and extensive previewing capabilities for a wide variety of file formats, including 3D models.
## Core Architecture
The FileBrowser operates on a Virtual File System structure, connecting to a backend VFS API that mounts specific folders or resources. It extensively leverages standard modern React patterns (Hooks, Contexts, Layout Effects) to build a zero-latency feel for the user.
### Main Component
- **[`./src/apps/filebrowser/FileBrowserPanel.tsx`](../src/apps/filebrowser/FileBrowserPanel.tsx)**
The entry point component. It orchestrates layout via `ResizablePanelGroup` and houses the toolbar, dialog popups, and the dual-pane system (the explorer on the left, the preview pane on the right). Core logic is delegated to modular hooks listed below.
### State Hooks
The business logic of the browser is split into modular hooks located in `./src/modules/storage/hooks/`:
#### Foundation Hooks (Data & Primitives)
- **[`useVfsAdapter.ts`](../src/modules/storage/hooks/useVfsAdapter.ts)**
Handles data fetching from the backend API. Abstracts away directory fetching, cache management, fallback README indexing, and recursive size queries (`includeSize`).
- **[`useSelection.ts`](../src/modules/storage/hooks/useSelection.ts)**
Governs state for selected files, handles batch selection arrays, and calculates item counts.
- **[`useKeyboardNavigation.ts`](../src/modules/storage/hooks/useKeyboardNavigation.ts)**
Intercepts standard keyboard inputs within the module to provide accessible navigation. Converts array/grid positional awareness to focus updates natively.
- **[`useFilePreview.ts`](../src/modules/storage/hooks/useFilePreview.ts)**
Manages states and refs for opening files in overlays/lightboxes.
#### Orchestration Hooks (Compose the primitives)
- **[`useDefaultSelectionHandler.ts`](../src/modules/storage/hooks/useDefaultSelectionHandler.ts)**
Auto-select engine that runs on directory load: selects `readme.md` when present, restores the parent directory item on "go up" (return-target pattern), and auto-opens `initialFile` from URL params.
- **[`useDefaultKeyboardHandler.ts`](../src/modules/storage/hooks/useDefaultKeyboardHandler.ts)**
Global keyboard shortcut listener (`Ctrl+F` / `F3` for search), typeahead search buffer management (with auto-clear on directory change), focus management on view mode switches, and wires the lower-level `useKeyboardNavigation`.
- **[`useDefaultActions.ts`](../src/modules/storage/hooks/useDefaultActions.ts)**
Centralizes all action callbacks: download (single/multi/dir-zip), lightbox navigation (prev/next), close-and-restore-focus patterns, double-click handling, and a unified relative link resolver (`resolveRelativeVfsLink`) that de-duplicates the previously separate readme-pane and file-viewer-pane link handlers.
---
## File Views
The left pane of the file browser provides three rendering modes for the fetched directory contents.
- **[`./src/apps/filebrowser/FileTree.tsx`](../src/apps/filebrowser/FileTree.tsx)**
The Tree view mode provides an expanding folder hierarchy approach.
*Note: Recursion and layout-level rendering effects explicitly override grid-snapping hooks to avoid redundant focus issues.*
- **[`./src/modules/storage/FileListView.tsx`](../src/modules/storage/FileListView.tsx)**
A detailed, vertical list rendering standard OS-like columns (Icon, Name, Size, Date).
- **[`./src/modules/storage/FileGridView.tsx`](../src/modules/storage/FileGridView.tsx)**
A resizable thumbnail rendering mode tailored for media directories.
- **[`./src/modules/storage/ThumbPreview.tsx`](../src/modules/storage/ThumbPreview.tsx)**
Handles rendering fast thumbnail previews inline for media and models, mapping correctly scaled image requests to the backend.
---
## Preview Architecture (FileViewerRegistry)
One of the stand-out features of the FileBrowser is the ability to preview files directly within the UI or via an isolated popup overlay, without downloading them individually.
This routing is handled by:
- **[`./src/modules/storage/FileViewerRegistry.tsx`](../src/modules/storage/FileViewerRegistry.tsx)**
Depending on the `mime` category or file extension, the requested node is dynamically routed to the appropriate specialized viewer component under `./src/modules/storage/views/`:
1. **`ImageLightbox`** ([`./src/components/ImageLightbox.tsx`](../src/components/ImageLightbox.tsx))
Native integration for standard images via `ResponsiveImage`.
2. **`PdfLightbox.tsx`**
Built over `react-pdf`, handles rendering standard `.pdf` outputs.
3. **`SpreadsheetLightbox.tsx`**
Preview tables like `.csv` and `.xlsx`.
4. **`LightboxIframe.tsx`**
Renders pure `.html`/`.htm` safely inside an iframe.
5. **`LightboxText.tsx`**
Code and text preview utilizing `Prism.js` for on-the-fly markdown and syntax highlighting.
6. **`ThreeDViewer.tsx`**
A robust 3D model engine implementation previewing `.stl`, `.obj`, `.step` and other engineering-centric mesh types up to specific memory thresholds.
If a folder is selected instead of a file, the right-pane preview defaults to:
- A rendered `README.md` if one sits natively inside the directory.
- An internal mounted `FileBrowserPanel` in thumbnail mode strictly locked as a directory sub-preview (`autoFocus=false`).
---
## Keyboard Navigation & Focus Flow
The keyboard and tab-index systems were optimized extensively to act natively:
- Pressing `Arrow Keys` calculates grid-level math to jump up/down/left/right seamlessly.
- Pressing `Enter` commits to entering directories.
- Pressing `Backspace` triggers a "Return-to-Sender" effect (jumping out of folders automatically shifts focus onto the parent directory node precisely).
- `Ctrl + F` / `F3` triggers native file search scoped to the currently tracked tree context.
- Typeahead search buffer auto-clears on directory navigation, preventing stale matches from locking navigation.
- Unmounting/mounting directories handles DOM manipulation strictly in `useLayoutEffect`, enforcing zero-repaint focus locking on tree and list container nodes.
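The arrow-key and typeahead behaviors above reduce to small pure helpers. A sketch (names are illustrative, not the actual internals of `useKeyboardNavigation` or the typeahead buffer):

```typescript
// Grid-aware arrow navigation: maps a key press to a new focus index,
// clamping at the edges instead of wrapping around.
export function moveFocus(
  index: number,
  columns: number,
  total: number,
  dir: 'up' | 'down' | 'left' | 'right',
): number {
  const delta = { up: -columns, down: columns, left: -1, right: 1 }[dir];
  const next = index + delta;
  return next >= 0 && next < total ? next : index;
}

// Typeahead: index of the first entry whose name starts with the buffer
// (case-insensitive); -1 when nothing matches.
export function typeaheadMatch(names: string[], buffer: string): number {
  const needle = buffer.toLowerCase();
  return names.findIndex((n) => n.toLowerCase().startsWith(needle));
}
```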
---
## Backend VFS Service
The backend drives the file structure by translating standardized REST commands to the file-system operations safely.
- **[`./server/src/products/storage/api/vfs-routes.ts`](../server/src/products/storage/api/vfs-routes.ts)**
Exposes the OpenAPI specifications and strict route mapping inputs for the File System.
- **[`./server/src/products/storage/api/vfs.ts`](../server/src/products/storage/api/vfs.ts)**
Contains the `MountManager`, global route validations, rate-limited file-tree recursive walks (Search API), and ACL-governed path sandboxing logic.
---
## Future Improvements (TODOs)
- **I18n (Internationalization):** Abstract hardcoded strings into the globalization architecture via **[`../src/i18n.tsx`](../src/i18n.tsx)** to support multilingual FileBrowser deployments.
- **Keyboard Navigation:** Expand keyboard shortcuts for batch selection manipulation and add comprehensive focus trapping within isolated preview lightboxes.
- **Security (User Content):** Implement stricter Content Security Policies (CSP) within `LightboxIframe` and sanitization checks for user-uploaded HTML/SVG content to prevent stored XSS attacks.
- **Context API Abstraction:** Formalize a generic `ViewerControlContext` to clean up the prop-drilling required for passing preview state (like `LightboxNode`) down deeper into specialized viewers.

# Instagram-like Feed Implementation Plan
## Objective
Create a responsive, immersive feed experience that adapts to device size:
- **Desktop/Large Screens**: Retain the current `PhotoGrid` (grid view).
- **Mobile**: Implement a new `Feed` view (vertical list) similar to Instagram.
- **Carousel**: Support horizontal swiping (left/right) through multiple pictures within a single post.
- **Performance**: Implement "load ahead" strategy (buffer ~5 posts) to ensure smooth scrolling without loading the entire database.
## Architecture & Components
### 1. Data Layer Enhancements
Current `PhotoGrid` logic fetches posts and selects a single "cover" image.
We need to modify the data transformation to pass *all* visible pictures to the UI components.
- **Query**: Keep fetching `posts` with `pictures`.
- **Transformation**: Instead of flattening to a single `MediaItem`, we need a structure that preserves the list of pictures for each post.
```typescript
interface FeedPost {
id: string; // Post ID
user_id: string; // Author
pictures: MediaItemType[]; // Array of pictures in the post
// ... other post metadata (title, description, etc.)
}
```
### 2. New `Feed` Component (Mobile)
A new component `src/components/Feed.tsx` will be created for the mobile view.
- **Layout**: Vertical list of full-width cards.
- **Virtualization**: Use `react-window` or simpler intersection observer-based rendering to only render posts in (and slightly outside) the viewport.
- **Preloading**: Ensure the next 5 image/video assets are preloaded.
### 3. Updated `MediaCard` / New `FeedCard`
`MediaCard` currently handles a single media item. We have two options:
1. **Refactor `MediaCard`**: Add support for an array of media and internal carousel logic.
2. **Create `FeedCard`**: A specialized card for the Feed view that wraps `MediaCard` or implements its own carousel.
* *Decision*: Use `FeedCard` (or `PostCard`) to encapsulate the carousel logic (Embla Carousel or similar) and use `MediaCard` for individual slides if needed, or implement a lighter slide view.
* **Carousel**: Must support touch gestures for left/right swiping.
### 4. `PhotoGrid` Updates
- **Logic Separation**: Extract the data fetching hook (e.g., `useFeedMedia`) so both `PhotoGrid` and `Feed` can share the same data source and state (likes, etc.).
- **Responsive Switch**: In `Index.tsx`, conditionally render `PhotoGrid` (desktop) or `Feed` (mobile). Alternatively, render both and hide one via CSS (better for SSR/hydration matching, but heavier on the DOM); on balance, a dedicated `isMobile` hook is preferable.
## Implementation Steps
### Phase 1: Data & Hooks
1. Create `useFeedQuery` hook to fetch posts + pictures.
2. Implement pagination (infinite scroll) logic (load 10, load next 10 when bottom reached).
3. Preloading utility: Function to preload images `n` indexes ahead of the current viewport item.
### Phase 2: Carousel Component
1. Implement a Swipe/Carousel component (using `embla-carousel-react` or purely custom CSS scroll-snap).
2. Ensure it handles image aspect ratios gracefully (Instagram usually restricts to 4:5 or square, but we might support flexible ratios).
### Phase 3: `MobileFeed` Component
1. Create the vertical list layout.
2. Implement the "Load 5 ahead" logic (prefetching images for the next 5 cards).
3. Integrate the Carousel for multi-image posts.
### Phase 4: Integration
1. Update `Index.tsx` to switch between `PhotoGrid` and `MobileFeed`.
2. Ensure shared state (Likes, Comments) works in both views.
## Technical Details
### "Load 5 Ahead" Strategy
- **Intersection Observer**: Watch the last rendered element to trigger fetching the next page.
- **Image Preloading**: Watch the *currently visible* post index. Automatically create `Link rel="preload"` or `new Image()` for the `cover` images of the next 5 posts.
- **Carousel Preloading**: If a user stops on a post, prioritize loading the *next* slide of that specific post.
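The look-ahead window for the preloading step can be computed as a pure function (a sketch; the hook name is illustrative):

```typescript
// Returns the post indexes whose cover images should be preloaded, given the
// currently visible index and a look-ahead window of 5. The component would
// then create `new Image()` (or a <link rel="preload">) per cover URL.
export function preloadWindow(
  current: number,
  total: number,
  ahead = 5,
): number[] {
  const out: number[] = [];
  for (let i = current + 1; i <= current + ahead && i < total; i++) {
    out.push(i);
  }
  return out;
}
```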
### Swiping Interaction
- **Carousel (Inner)**: Swiping horizontally moves between pictures of the *same* post.
- **Feed (Outer)**: Scrolling vertically moves between *different* posts.
## Proposed File Structure
- `src/components/feed/Feed.tsx`
- `src/components/feed/FeedCard.tsx` (Handles the carousel)
- `src/components/feed/FeedCarousel.tsx` (The actual swiper)
- `src/hooks/useFeed.ts` (Data fetching logic)

packages/ui/docs/gdrp.md
# GDPR (General Data Protection Regulation) & ePrivacy Compliance Report
## Overview
This report evaluates the current tracking, analytics, and marketing campaign features of the platform against the requirements of the GDPR and ePrivacy Directive.
With the recent additions of detailed analytics, session tracking, and individual email tracking, the platform collects, processes, and stores Personal Identifiable Information (PII).
## 1. Analytics & Web Tracking (Cookies & PII)
**Current State:**
- The `analyticsMiddleware` generates and assigns a `SESSION_COOKIE` (`pm_sid`) to track website visitors across sessions.
- IP addresses are **anonymized** before writing to `analytics.jsonl` — the last octet is zeroed for IPv4 (e.g. `203.0.113.47``203.0.113.0`) and the last 5 groups are zeroed for IPv6. Full IPs are still used transiently for geo-resolution (BigDataCloud API) and cached in `cache/geoip.json` for operational purposes.
- `userEmail` and `userName` are **not logged**. Only `userId` is persisted for correlation.
- User Agents, Referers, Language, and inferred Geographic locations are logged.
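The anonymization rule above can be sketched as follows (handles dotted IPv4 and fully-expanded IPv6 only; how the real middleware treats compressed `::` forms is not covered here):

```typescript
// Zeroes the last IPv4 octet, or the last 5 groups of an expanded IPv6 address.
export function anonymizeIp(ip: string): string {
  if (ip.includes('.')) {
    // IPv4: 203.0.113.47 -> 203.0.113.0
    const parts = ip.split('.');
    parts[3] = '0';
    return parts.join('.');
  }
  // IPv6 (fully expanded, 8 groups): zero groups 4..8.
  const groups = ip.split(':');
  for (let i = 3; i < groups.length; i++) groups[i] = '0';
  return groups.join(':');
}
```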
**Remaining Deficiency:**
- **Cookie Consent:** The ePrivacy Directive requires explicit, opt-in consent *before* setting any non-essential cookies (including analytics and session tracking IDs). Currently, the `pm_sid` cookie is placed automatically on all routes not in the ignore list.
**Recommendation:**
- **Consent Banner:** Implement a Cookie Consent UI. Only set the `SESSION_COOKIE` and execute the tracking portion of the middleware if the user has explicitly accepted.
### Google Analytics (GA4)
**Status: ✅ Removed**
- The GA4 script (`G-QZ02JX0J0L`) has been **removed** from `index.html`. No third-party analytics cookies are set, no data is transmitted to Google.
### Security Middleware (unaffected by anonymization)
The `rateLimiter` and `autoBan` middleware continue to use **full, un-anonymized IPs** in-memory at request time. These IPs are never written to the JSONL analytics log — they exist only in transient memory structures (`Map`) and the operational `config/ban.json` blocklist. This is permissible under GDPR's "legitimate interest" basis (Article 6(1)(f)) for security and abuse prevention.
## 2. Marketing Campaigns & Email Tracking
**Status: ✅ Per-contact tracking disabled**
- The 1x1 tracking pixel (`pixel.gif`) injection is **commented out** — no individual open tracking.
- Per-contact `c_id` URL parameters are **commented out** — no individual click tracking.
- The `pixel.gif` route is **disabled** in the `CampaignsProduct` route registration.
- Campaign-level `tracking` param is preserved on links (non-PII, used for aggregate campaign attribution only).
- Unsubscribe links continue to function via `unsubscribe_token`.
## 3. Data Retention and Erasure (Right to be Forgotten)
**Current State:**
- `analytics.jsonl` continuously appends telemetry data without a predefined rotation or deletion policy.
- Campaign interactions and `marketing_emails` records preserve history.
**Deficiencies:**
- **Right to Erasure (Article 17):** If a user requested their data be deleted, the system would need to scan and purge their `userId` from the raw `analytics.jsonl` file.
- **Storage Limitation (Article 5):** PII should only be kept as long as necessary.
**Recommendations:**
- **Structured Log Rotation:** Implement a background job that aggregates or rotates the `analytics.jsonl` file (e.g., deleting logs older than 90 days).
- **Deletion Workflows:** When a user invokes their right to be forgotten (or deletes their profile), ensure that cascading deletes cover the `contacts`, `contact_group_members`, and `marketing_emails` tables.
## Summary
**Completed mitigations:**
- ✅ **IP anonymization** — last octet zeroed in persisted logs; geo data preserved via pre-anonymization lookup.
- ✅ **PII minimization**`userEmail` and `userName` removed from JSONL logs; only `userId` retained.
- ✅ **Google Analytics removed**`gtag.js` stripped from `index.html`. No third-party cookies.
- ✅ **Per-contact email tracking disabled**`c_id` and `pixel.gif` commented out in `campaigns/index.ts`.
**Remaining housekeeping (non-blocking):**
1. Automatically **rotate/expire** old unstructured analytics logs (recommend 90-day TTL).
2. Implement **deletion workflows** to purge `userId` from JSONL on erasure requests.

packages/ui/docs/gmail.md
# Gmail / IMAP Integration
Developer reference for connecting user mailboxes via IMAP to harvest contacts.
---
## Overview
Users can connect any IMAP mailbox (Gmail, Outlook, etc.) from the **Profile → Integrations** tab. Credentials are stored within `user_secrets.settings.mailboxes` in Supabase, protected by the same row-level security as API keys.
**Phase 1 (current):** Store credentials, test connection.
**Phase 2 (future):** Harvest sender/recipient contacts from mailbox, import into contact groups.
**Phase 3 (future):** OAuth2 flow for Gmail (no App Password needed).
---
## Gmail App Password Setup (Required for Phase 1)
Standard Gmail passwords won't work over IMAP if 2FA is enabled. Users must generate an **App Password**:
1. Go to [myaccount.google.com/security](https://myaccount.google.com/security)
2. Enable 2-Step Verification if not already on
3. Go to **App passwords** → Select app: *Mail*, device: *Other (custom name)*
4. Copy the generated 16-character password
IMAP settings for Gmail:
- **Host:** `imap.gmail.com`
- **Port:** `993`
- **TLS:** true (IMAPS)
- **Auth:** plain (user + App Password)
---
## Credential Storage Schema
Mailboxes are stored in `user_secrets.settings.mailboxes` as a JSON array:
```ts
interface MailboxCredential {
id: string; // uuid, generated on save
label: string; // user-facing name e.g. "Work Gmail"
host: string; // imap.gmail.com
port: number; // 993
tls: boolean; // always true for Gmail
user: string; // email@gmail.com
password: string; // App Password (stored as-is, protected by Supabase RLS)
status?: 'ok' | 'error' | 'pending';
lastTestedAt?: string; // ISO datetime
lastError?: string;
}
```
No plaintext encryption beyond Supabase's column-level storage. The password is **never returned** from the API — only `has_password: boolean` and `user` are exposed.
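The masking step might look like this (a sketch mirroring the GET response shape; `maskMailbox` is an illustrative name, not necessarily the server's):

```typescript
interface MailboxCredential {
  id: string;
  label: string;
  host: string;
  port: number;
  tls: boolean;
  user: string;
  password: string;
  status?: 'ok' | 'error' | 'pending';
  lastTestedAt?: string;
}

// Strip the password before anything leaves the server; expose only a flag.
export function maskMailbox(cred: MailboxCredential) {
  const { password, ...rest } = cred;
  return { ...rest, has_password: password.length > 0 };
}
```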
---
## API Routes
All routes require `Authorization: Bearer <jwt>` header.
### `GET /api/contacts/mailboxes`
List connected mailboxes for the current user. Password is masked.
**Response:**
```json
[
{
"id": "uuid",
"label": "Work Gmail",
"host": "imap.gmail.com",
"port": 993,
"tls": true,
"user": "user@gmail.com",
"has_password": true,
"status": "ok",
"lastTestedAt": "2026-03-06T12:00:00Z"
}
]
```
### `POST /api/contacts/mailboxes`
Save or update a mailbox credential.
**Body:**
```json
{
"id": "optional-uuid-for-update",
"label": "Work Gmail",
"host": "imap.gmail.com",
"port": 993,
"tls": true,
"user": "user@gmail.com",
"password": "abcd efgh ijkl mnop"
}
```
**Response:** Masked mailbox object (same as GET item).
### `DELETE /api/contacts/mailboxes/:id`
Remove a mailbox by ID.
**Response:** `{ "ok": true }`
### `POST /api/contacts/mailboxes/:id/test`
Test an IMAP connection using saved credentials. Does not modify stored data but updates `status` and `lastTestedAt`.
**Response:**
```json
{ "ok": true }
// or
{ "ok": false, "error": "Invalid credentials" }
```
---
## Server Implementation
Located in `server/src/products/contacts/`:
- **`imap-handler.ts`** — business logic using `imapflow`
- **`imap-routes.ts`** — route definitions (Hono/Zod OpenAPI)
- **`index.ts`** — routes registered in `ContactsProduct.initializeRoutes()`
### imapflow connection pattern
```ts
import { ImapFlow } from 'imapflow';
const client = new ImapFlow({
host: creds.host,
port: creds.port,
secure: creds.tls,
auth: { user: creds.user, pass: creds.password },
logger: false,
});
await client.connect();
await client.logout();
```
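The `/test` endpoint presumably wraps this pattern in error handling to produce its `{ ok }` / `{ ok, error }` response. A sketch with the connect step injected so the shape is testable without a live IMAP server:

```typescript
// Maps a connect attempt to the response shape of POST /mailboxes/:id/test.
// `connect` would be e.g. () => client.connect().then(() => client.logout()).
export async function testConnection(
  connect: () => Promise<void>,
): Promise<{ ok: true } | { ok: false; error: string }> {
  try {
    await connect();
    return { ok: true };
  } catch (err) {
    return {
      ok: false,
      error: err instanceof Error ? err.message : String(err),
    };
  }
}
```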
---
## Environment Variables
No additional env vars required for Phase 1 (credentials are per-user in Supabase).
**Future OAuth2 vars (Phase 3):**
```env
# Google OAuth2 for Gmail IMAP access
GOOGLE_CLIENT_ID=your-client-id.apps.googleusercontent.com
GOOGLE_CLIENT_SECRET=your-client-secret
GOOGLE_REDIRECT_URI=https://your-domain.com/api/auth/google/callback
```
---
## Security Notes
- Passwords are stored in `user_secrets` which has Supabase Row Level Security allowing only the owner to read their own row
- Server API never returns the password field — only `has_password: true`
- Test connection is server-side only; credentials are never sent to the browser after initial save
- In future Phase 3, App Password flow is replaced by OAuth2 refresh tokens (no password stored at all)
---
## Frontend
Component: `src/components/GmailIntegrations.tsx`
Client module: `src/modules/contacts/client-mailboxes.ts`
Profile tab: `Profile.tsx``/profile/integrations`
---
## Roadmap
| Phase | Description | Status |
|-------|-------------|--------|
| 1 | Store IMAP credentials, test connection | ✅ Current |
| 2 | Harvest contacts from sent/received emails | 🔜 Planned |
| 3 | OAuth2 for Gmail (no App Password) | 🔜 Planned |
| 4 | Scheduled background sync, dedup contacts | 🔜 Planned |

# GridSearch V2: Uniform Geometric Simulation to Backend Execution
## The Problem (V1)
Our V1 backend architecture iterated directly over political GADM boundary nodes (e.g., cities, municipalities) and searched their geographic centers or raw bounding boxes. This methodology had three severe drawbacks:
1. **Gaps**: Political polygons are irregular. A fixed search radius originating from a polygon's centroid inevitably missed the edges and corners of oddly-shaped areas.
2. **Overlaps**: Densely packed suburban municipalities resulted in centroids sitting dangerously close to one another. The search radii overlapped, causing redundant API calls returning the exact same prospects.
3. **Empty Wastelands**: A single large municipality might be 80% uninhabited mountain ranges or deserts. Searching its center burned API credits on regions with zero B2B locations.
## The V2 Solution & Architecture
In V2, we relegated GADM nodes to being **clipping masks** rather than search targets. The actual API "hops" happen across a mathematically uniform geometric grid that perfectly tiles the target terrain.
This system guarantees **100% geographic coverage with 0% redundancy** and relies on a dual-stack architecture:
### 1. The Client-Side UI & Simulator (Completed)
We constructed the `GridSearchPlayground`, `GadmPicker`, and `GridSearchSimulator` to visually tune and preview search parameters in the browser:
* The user selects specific hierarchical GADM geographies.
* The simulator overlays a Turf.js generated grid (Hex or GADM-native).
* Configurable optimization parameters (Max Elevation, Min Population Density) dynamically cull the grid in real time, preventing wasted API hops in uninhabited or extreme terrain.
* The simulator visualizes the path trajectory ("snake", "zigzag", "spiral-out") representing the exact sequence of planned API calls.
### 2. The Backend Execution Engine (Next Steps)
The grid generation and culling logic perfected visually in the frontend must now be ported to the Node.js API pipeline to execute real GridSearches.
**Porting Requirements:**
* **Payload Ingestion:** The server must accept the optimized parameters selected by the user (target regions/polygons, grid mode, cell size, path trajectory, filters).
* **Grid Computation (Server-Side Turf.js):** The backend will replicate the Turf.js bounding box, grid generation, intersection, and sorting logic to reconstruct the exact `validCells` array the UI simulator previewed.
* **Topographical Filtering:** Recreate the logic that drops cells failing the structural constraints (e.g., average elevation > threshold, population density < threshold).
* **Sequential API Execution:** Once the valid grid is ordered matching the trajectory, the backend will iterate over the cells using a queue (or sequential loop), rate-limiting the execution of the actual Provider API calls to scrape the specified coordinates.
* **Progress Tracking & Persistence:** Emit progress updates (e.g., via WebSockets or job tracking) marking cells as 'processed', saving scraped data back to the database, and ensuring the job can resume cleanly if interrupted.
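The trajectory ordering the backend must replicate can be sketched without Turf.js. This illustrative "snake" pass assumes cells already carry their centroid coordinates; the `Cell` shape and latitude banding are simplifications, not the simulator's actual code:

```typescript
// A grid cell reduced to its centroid for path ordering (assumed shape).
interface Cell { lat: number; lon: number }

// "Snake" trajectory: bucket cells into horizontal bands by latitude,
// walk bands top-to-bottom, alternating left/right within each band.
function snakeOrder(cells: Cell[], rowHeight: number): Cell[] {
  const rows = new Map<number, Cell[]>();
  for (const c of cells) {
    const band = Math.round(c.lat / rowHeight);
    if (!rows.has(band)) rows.set(band, []);
    rows.get(band)!.push(c);
  }
  const bands = [...rows.keys()].sort((a, b) => b - a); // north → south
  const ordered: Cell[] = [];
  bands.forEach((band, i) => {
    const row = rows.get(band)!.sort((a, b) => a.lon - b.lon); // west → east
    if (i % 2 === 1) row.reverse(); // alternate direction per band
    ordered.push(...row);
  });
  return ordered;
}
```

The same skeleton extends to "zigzag" and "spiral-out" by swapping the per-band ordering function.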
## Execution Flow Porting Plan
1. **API Design**: Define the structured endpoint payload `POST /api/locations/gridsearch/generate` capable of receiving the complex region + filter configuration.
2. **Modular Turf Utils**: Abstract the Turf.js grid logic (`turf.hexGrid`, intersections, centroid path sorting) into shared utility functions accessible by the backend worker.
3. **Workflow Integration**: Wire the resulting mathematically optimal coordinate arrays into the pre-existing grid search pipeline, effectively bridging the sophisticated V2 UI targeting with the core V1 scraping engine.

# Groups & ACL — Proposal
## Problem
The ACL system currently shoehorns `anonymous` and `authenticated` in as **sentinel strings** mapped to `group_name` in [resource_acl](../supabase/migrations/20260219215000_create_resource_acl.sql). This works but:
- No concept of custom groups (`vendors`, `editors`, etc.)
- Virtual user ID mapping in [db-acl-db.ts](../server/src/products/serving/db/db-acl-db.ts) is fragile
- No UI for managing groups or discovering group membership
- Can't answer "which groups does user X belong to?" or "who is in group Y?"
## @polymech/acl Package
The [acl package](../../polymech-mono/packages/acl/README.md) already has a proper RBAC system:
| Concept | Implementation |
|---------|---------------|
| Core | [Acl.ts](../../polymech-mono/packages/acl/src/Acl.ts) — Zend-inspired RBAC: `allow(roles, resources, perms)`, `isAllowed(user, resource, perms)` |
| Backend interface | [IBackend](../../polymech-mono/packages/acl/src/interfaces.ts) — bucket-based key-value storage |
| Backends | [MemoryBackend](../../polymech-mono/packages/acl/src/data/MemoryBackend.ts), [FileBackend](../../polymech-mono/packages/acl/src/data/FileBackend.ts) |
| VFS layer | [vfs-acl.ts](../../polymech-mono/packages/acl/src/vfs/vfs-acl.ts), [AclVfsClient](../../polymech-mono/packages/acl/src/vfs/AclVfsClient.ts) |
The `Acl` class already supports:
- `addUserRoles(userId, roles)` / `removeUserRoles(userId, roles)`
- `userRoles(userId)` → returns all roles for a user
- `roleUsers(role)` → returns all users in a role
- `addRoleParents(role, parents)` → role hierarchy
- `isAllowed(userId, resource, permissions)` → permission check
**Groups = Roles in the ACL model.** The `Acl` class uses "roles" — groups are just roles with membership. The missing piece is a **Supabase-backed backend** that persists role/group data in the DB instead of files/memory.
## Current Architecture
```
pm-pics (app) @polymech/acl (package)
───────────────── ──────────────────────
resource_acl table Acl class
├── user_id UUID FK ├── addUserRoles()
├── group_name TEXT (hack) ├── isAllowed()
│ └── IBackend
db-acl-db.ts (DbAclBackend) ├── MemoryBackend
├── sentinel mapping hack └── FileBackend
├── rowToEntry / entryToRow
└── direct Supabase queries VFS extensions
├── AclVfsClient
db-acl.ts (orchestrator) ├── vfs-acl.ts
├── IAclBackend (app-level) └── uses FileBackend
├── registerAclBackend()
└── fetchAclSettings / grant / revoke
```
The problem: `pm-pics` has its own `IAclBackend` + `DbAclBackend` that **duplicate** what `@polymech/acl` does, but with Supabase instead of files. The sentinel hack exists because `DbAclBackend` bypasses `Acl` entirely.
## Proposed Architecture
### New backend in @polymech/acl
```
@polymech/acl
├── src/
│ ├── data/
│ │ ├── MemoryBackend.ts (existing)
│ │ ├── FileBackend.ts (existing)
│ │ └── SupabaseBackend.ts [NEW] — IBackend using Supabase tables
│ ├── groups/
│ │ ├── GroupManager.ts [NEW] — CRUD for groups + membership
│ │ └── interfaces.ts [NEW] — IGroupStore, Group, GroupMember
│ ├── Acl.ts (existing, no changes)
│ └── interfaces.ts (existing, no changes)
```
The key insight: **the `Acl` class doesn't need to change**. It already handles roles, users, and permissions generically. We just need:
1. **`SupabaseBackend`** — implements `IBackend` using Supabase tables instead of in-memory maps
2. **`GroupManager`** — thin wrapper for creating/listing groups and managing membership (writes to `groups` + `group_members` tables)
### Database Tables
```sql
-- Groups definition
create table public.groups (
id uuid not null default gen_random_uuid() primary key,
slug text not null unique, -- 'editors', 'vendors'
name text not null, -- 'Editors', 'Vendors'
description text,
builtin boolean not null default false, -- true for anonymous/authenticated
created_by uuid references auth.users(id),
created_at timestamptz default now()
);
-- Seed built-in groups
insert into public.groups (slug, name, builtin) values
('anonymous', 'Anonymous', true),
('authenticated', 'Authenticated Users', true);
-- Group membership (not needed for built-in groups)
create table public.group_members (
id uuid not null default gen_random_uuid() primary key,
group_id uuid not null references public.groups(id) on delete cascade,
user_id uuid not null references auth.users(id) on delete cascade,
role text default 'member',
added_at timestamptz default now(),
unique(group_id, user_id)
);
```
### How It Connects
```
resource_acl.group_name ──references──> groups.slug
groups.builtin = true → membership is implicit (auth state)
groups.builtin = false → membership via group_members
```
`resource_acl.group_name` stays as-is — becomes a soft FK to `groups.slug`.
### Resolution Flow
```
1. Load ACL entries for resource R
2. For each entry:
├── entry.user_id === caller → direct user grant
├── entry.group_name = 'anonymous' → builtin, match everyone
├── entry.group_name = 'authenticated' && caller.authenticated → match
└── entry.group_name = 'vendors' → check group_members → match?
3. Any match with required permission → allow
```
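A hedged sketch of the resolution flow above. The `AclEntry` shape follows the evolved type proposed later in this document, and the `memberOf` callback stands in for a `group_members` lookup:

```typescript
// Assumed shapes based on this proposal (not the current pm-pics types).
interface AclEntry { userId?: string; groupSlug?: string; permissions: string[] }
interface Caller { userId?: string; authenticated: boolean }

function isAllowed(
  entries: AclEntry[],
  caller: Caller,
  required: string,
  memberOf: (slug: string) => boolean, // stands in for a group_members query
): boolean {
  return entries.some((e) => {
    if (!e.permissions.includes(required)) return false;
    if (e.userId) return e.userId === caller.userId;      // direct user grant
    if (e.groupSlug === 'anonymous') return true;         // builtin: everyone
    if (e.groupSlug === 'authenticated') return caller.authenticated;
    return e.groupSlug ? memberOf(e.groupSlug) : false;   // custom group
  });
}
```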
### Relationship to user_roles
> **Keep `user_roles` separate.** It's for system-level admin. Groups are for content/resource access control.
## Files to Change
### Phase 1 — @polymech/acl package
| File | Change |
|------|--------|
| [NEW] `packages/acl/src/groups/interfaces.ts` | `IGroupStore`, `Group`, `GroupMember` types |
| [NEW] `packages/acl/src/groups/GroupManager.ts` | CRUD: `fetchGroups`, `createGroup`, `addMember`, `removeMember`, `getGroupsForUser` |
| [MODIFY] [index.ts](../../polymech-mono/packages/acl/src/index.ts) | Export `GroupManager` + types |
### Phase 2 — pm-pics integration
| File | Change |
|------|--------|
| [NEW] `supabase/migrations/xxx_create_groups.sql` | `groups` + `group_members` tables |
| [MODIFY] [db-acl-db.ts](../server/src/products/serving/db/db-acl-db.ts) | Remove sentinel hack, use `GroupManager` for membership checks |
| [MODIFY] [db-categories.ts](../server/src/products/serving/db/db-categories.ts) | `isCategoryVisible()` uses group membership lookup |
| [MODIFY] [db-acl.ts](../server/src/products/serving/db/db-acl.ts) | Group-aware permission resolution |
| [MODIFY] [index.ts](../server/src/products/serving/index.ts) | Register group API routes |
| [NEW] `server/src/products/serving/db/db-groups.ts` | Handlers for groups API |
| [NEW] `src/modules/groups/client-groups.ts` | Client API |
### Phase 3 — UI
| File | Change |
|------|--------|
| [MODIFY] [AclEditor.tsx](../src/components/admin/AclEditor.tsx) | Dynamic group picker from `groups` table |
| [NEW] `src/components/admin/GroupManager.tsx` | Admin UI: create groups, manage members |
| [MODIFY] [CategoryManager.tsx](../src/components/widgets/CategoryManager.tsx) | Group picker in permissions |
## API Design
```
GET /api/groups → list all groups
POST /api/groups → create group (admin)
PATCH /api/groups/:id → update group (admin)
DELETE /api/groups/:id → delete group (admin, not builtin)
GET /api/groups/:id/members → list members
POST /api/groups/:id/members → add member
DELETE /api/groups/:id/members/:userId → remove member
GET /api/users/:id/groups → groups for a user (self or admin)
```
## AclEntry Type Evolution
```diff
interface AclEntry {
- userId?: string; // real UUID or 'anonymous'/'authenticated' (hack)
- group?: string; // unused for virtual IDs
+ userId?: string; // real UUID only
+ groupSlug?: string; // 'anonymous', 'authenticated', 'vendors', etc.
path?: string;
permissions: string[];
}
```
## Priority
| Phase | Effort | Enables |
|-------|--------|---------|
| 1. @polymech/acl groups | ~2h | Reusable group system for any app |
| 2. pm-pics integration | ~2h | Category/resource visibility by group |
| 3. UI | ~3h | Admin can manage groups visually |

---
## Email Rendering Compatibility
### How Email Rendering Currently Works
`pages-email.ts` renders page layouts to email HTML server-side. The pipeline:
1. **Load HTML templates**`image_xl.html`, `image_col_2.html`, `image_col_3.html`, `section_text.html`, etc.
2. **Walk containers**`renderContainer()` iterates widgets sorted by order.
3. **Group by columns** — If `container.columns === 2` or `3`, widgets are chunked and rendered via the matching column template. Everything else → single-column (`image_xl.html`).
4. **Inline CSS**`juice` inlines all CSS for email client compatibility.
### Current Column Templates
The 2-column template (`image_col_2.html`):
```html
<table role="presentation" width="100%">
<tr>
<td>
<!--[if mso]><table width="100%"><tr><td width="50%" valign="top"><![endif]-->
<div class="image-col-item image-col-item-2">
<!-- widget 0 content -->
</div>
<!--[if mso]></td><td width="50%" valign="top"><![endif]-->
<div class="image-col-item image-col-item-2">
<!-- widget 1 content -->
</div>
<!--[if mso]></td></tr></table><![endif]-->
</td>
</tr>
</table>
```
**Pattern:** Outer `<table>` wrapper → `<div>` elements with `inline-block` styling (via CSS) for modern clients + MSO conditional `<table>` for Outlook. Fixed `width="50%"` / `width="33%"`**no arbitrary widths**.
### What Email Clients Support
| Feature | Gmail | Outlook (MSO) | Apple Mail | Yahoo |
|---------|-------|---------------|------------|-------|
| `<table>` layout | ✅ | ✅ | ✅ | ✅ |
| `display: inline-block` | ✅ | ❌ (needs MSO table) | ✅ | ✅ |
| `display: grid` | ❌ | ❌ | ❌ | ❌ |
| `display: flex` | ❌ | ❌ | Partial | ❌ |
| Arbitrary `width` on `<td>` | ✅ (%) | ✅ (%) | ✅ (%) | ✅ (%) |
| `fr` units | ❌ | ❌ | ❌ | ❌ |
| `px` widths on `<td>` | ✅ | ✅ | ✅ | ✅ |
| `max-width` media queries | ✅ | ❌ | ✅ | ✅ |
| `align-items`, `align-self` | ❌ | ❌ | ❌ | ❌ |
> [!CAUTION]
> **CSS Grid and Flexbox are NOT usable in email.** The row-layout's frontend CSS Grid rendering has zero overlap with email rendering. Email must use `<table>` layout exclusively.
### Strategy: Dynamic Table Generation for Row-Layouts
Instead of pre-built templates, row-layouts need **dynamically generated table HTML** on the server. The `fr``%` conversion is straightforward:
```typescript
// Convert ColumnDef[] to percentage widths for email tables
function columnsToPercentages(columns: ColumnDef[]): number[] {
const totalFr = columns
.filter(c => c.unit === 'fr')
.reduce((sum, c) => sum + c.width, 0);
// Fixed columns (px) get a fixed width, remaining space split by fr
// For email simplicity: convert everything to percentages
const total = columns.reduce((sum, c) => {
if (c.unit === '%') return sum + c.width;
if (c.unit === 'fr') return sum + (c.width / totalFr) * 100;
// px: approximate against 600px email width
return sum + (c.width / 600) * 100;
}, 0);
return columns.map(c => {
if (c.unit === '%') return c.width;
if (c.unit === 'fr') return (c.width / totalFr) * 100;
return (c.width / 600) * 100;
}).map(p => Math.round(p * 100) / 100); // round to 2dp
}
```
### Generating Row HTML
Each row becomes a `<table>` with `<td>` elements at computed percentages:
```typescript
function renderRowToEmailHtml(row: RowDef, cellContents: string[]): string {
const widths = columnsToPercentages(row.columns);
  // Outlook sees only the MSO conditional <table>/<td> markup emitted
  // inline in the loop below; other clients see the inline-block <div>s.
let html = `<table role="presentation" width="100%" cellpadding="0" cellspacing="0" border="0"><tr><td>`;
html += `<!--[if mso]><table role="presentation" width="100%"><tr><![endif]-->`;
widths.forEach((w, i) => {
if (i > 0) {
html += `<!--[if mso]></td><![endif]-->`;
}
html += `<!--[if mso]><td width="${w}%" valign="top"><![endif]-->`;
html += `<div style="display:inline-block; vertical-align:top; width:100%; max-width:${w}%;">`;
html += cellContents[i] || '';
html += `</div>`;
});
html += `<!--[if mso]></td></tr></table><![endif]-->`;
html += `</td></tr></table>`;
return html;
}
```
### What Changes in `pages-email.ts`
The existing `renderContainer()` handles `type: 'container'`. A new `renderRowLayout()` function handles `type: 'row-layout'`:
```typescript
async function renderRowLayout(container: RowLayoutContainer) {
let html = '';
for (const row of container.rows) {
// Get widgets for each cell
const cellContents: string[] = [];
for (let colIdx = 0; colIdx < row.columns.length; colIdx++) {
const cellWidgets = container.widgets
.filter(w => w.rowId === row.id && w.column === colIdx)
.sort((a, b) => (a.order ?? 0) - (b.order ?? 0));
let cellHtml = '';
for (const widget of cellWidgets) {
cellHtml += await renderWidget(widget);
}
cellContents.push(cellHtml);
}
html += renderRowToEmailHtml(row, cellContents);
}
return html;
}
```
Then in the main rendering loop:
```typescript
for (const container of sortedRootContainers) {
if (container.type === 'row-layout') {
contentHtml += await renderRowLayout(container);
} else {
contentHtml += await renderContainer(container); // existing path
}
}
```
### Sizing Mode Mapping in Email
| Frontend sizing | Email equivalent | Notes |
|----------------|-----------------|-------|
| **Constrained** (`items-stretch`) | `valign="top"` + equal row height | Email doesn't support equal-height columns natively. Closest: `valign="top"` (cells align to top, height varies). True equal-height requires background tricks or is simply accepted as a limitation. |
| **Unconstrained** (`items-start`) | `valign="top"` | Natural in email — cells are always content-height. |
| **Per-cell alignment** | `valign="top"` / `"middle"` / `"bottom"` | Maps directly to `<td valign>`. Works everywhere including MSO. |
| **`fr` widths** | Converted to `%` | `[1fr, 2fr, 1fr]``[25%, 50%, 25%]` |
| **`px` widths** | Converted to `%` against 600px | `250px``41.67%`. Or use fixed `width="250"` on `<td>` and let remaining cells flex. |
| **Mobile collapse** | `@media (max-width: 480px)` | Supported by Gmail, Apple Mail, Yahoo. NOT Outlook — Outlook always shows desktop layout. |
### Limitations & Tradeoffs
| Issue | Severity | Mitigation |
|-------|----------|-----------|
| **No equal-height columns** in email | Low | Most content renders fine with `valign="top"`. Background-color tricks exist but add complexity. |
| **Outlook ignores media queries** | Medium | Outlook always renders at desktop width. Acceptable — newsletter emails typically have a 600px fixed width anyway. |
| **Mixed `px`/`fr` approximation** | Low | Converting to `%` is imperfect but good enough for email. The 600px assumption is standard for email templates. |
| **Widget rendering fallback** | Low | Any widget not supported by `renderWidget()` already falls through silently. Row-layout just needs the column wrapper logic. |
| **New templates NOT needed** | None (benefit) | Unlike the current approach with pre-built `image_col_2.html` / `image_col_3.html`, the dynamic table generator handles any column count. The pre-built templates can remain for the old `LayoutContainer` path. |
### Summary
**Yes, row-layouts can be rendered to email.** The approach is:
1. Convert `ColumnDef[]` widths to `%` values
2. Generate `<table><tr><td width="X%">` dynamically per row
3. MSO conditional comments for Outlook compatibility
4. `valign` for vertical alignment
5. Reuse existing `renderWidget()` for cell content
6. ~80 lines of new code in `pages-email.ts`
---
## Impact Analysis
### 🟢 No-impact (if we keep old `LayoutContainer` as-is)

# Flexible Container — Implementation Todos
## Phase 1: Data Model & Types
- [x] Add `ColumnDef`, `RowDef`, `FlexibleContainer` types to `LayoutManager.ts`
- [x] Add Zod schemas to `schemas.ts` (`ColumnDefSchema`, `RowDefSchema`, `FlexibleContainerSchema`, `AnyContainerSchema`)
- [x] Update `PageLayout.containers` to `AnyContainer[]` union type
- [x] Add `rowId`, `column` fields to `WidgetInstance` for cell placement
- [x] Add type guards (`isLayoutContainer`, `isFlexibleContainer`, `getLayoutContainers`)
- [x] Add `findLayoutContainer()`, `findFlexContainer()` narrowing helpers
- [x] Add `generateRowId()`, `addFlexibleContainer()` methods
- [x] Fix all internal closures in `LayoutManager.ts` to handle `AnyContainer[]`
## Phase 2: Renderer Component
- [x] Create `FlexibleContainerRenderer.tsx` — CSS Grid row-based renderer
- [x] Integrate type-based dispatch in `GenericCanvas.tsx`
- [x] Fix all downstream type errors (`UserPageEdit.tsx`, `GenericCanvas.tsx`)
## Phase 3: UI Integration & Add Button
- [x] Quick-add buttons in `GenericCanvas.tsx` empty state
- [x] `AddFlexContainerCommand` in `commands.ts` (with undo)
- [x] `addFlexPageContainer` method in `LayoutContext.tsx`
- [x] `handleAddFlexContainer` handler in `UserPageEdit.tsx`
- [x] CommandPicker palette entry (Ctrl+K → "Add Flexible Container")
- [x] Ribbon bar "Flex Container" button (purple icon, `Columns`)
- [x] Update `commands.ts` helpers for `AnyContainer[]` support
## Phase 4: Row/Column Commands
- [x] `FlexAddRowCommand` — adds row with 2 equal columns
- [x] `FlexRemoveRowCommand` — removes row (with state capture for undo)
- [x] `FlexAddColumnCommand` — adds 1fr column to row
- [x] `FlexRemoveColumnCommand` — removes column by index (with state capture)
- [x] Wire commands to `FlexibleContainerRenderer` UI buttons via `executeCommand`
## Phase 5: Edit-Mode UI Polish
- [ ] Column drag handles for width adjustment
- [ ] Column width presets dropdown
- [ ] Row reordering (move up/down)
## Phase 6: Email Rendering
- [ ] Add `renderFlexibleContainer()` to `pages-email.ts`
- [ ] `columnsToPercentages()` utility
- [ ] Dynamic `<table>` generation with MSO conditionals
## Phase 7: Iteration & Polish
- [ ] Settings panel for row sizing mode (constrained/unconstrained)
- [ ] Per-cell alignment overrides
- [ ] Mobile responsive collapse
- [ ] PDF rendering support

# Search Algorithm Strategies
## 1. Zoom Level Calibration
**Objective**: Determine the optimal zoom level to maximize the number of unique locations found for a given keyword and location.
### Strategy
1. **Input**:
- Keyword (e.g., "carpenters")
- Location (e.g., "Barcelona, Spain")
- Zoom Range (e.g., 12-18)
2. **Process**:
- Iterate through the defined range of zoom levels.
- For each zoom level:
- Perform a Google Maps search using the `googleMaps` function.
- Store the results in a JSON file using the `--dst` option (e.g., `./tmp/search/test-zoomlevel-<zoom>.json`).
- Count the total number of valid results returned.
- Maintain a record of (Zoom Level -> Result Count).
3. **Output**:
- The zoom level that yielded the maximum number of results.
4. **Notes**:
- This process assumes that for a fixed point, the "best" zoom captures the most relevant density without being too broad (losing small entities) or too narrow (missing context).
- Overlaps/Duplicates should be handled by the underlying search function or post-processing if multi-point scanning is used later.
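The calibration loop described above can be sketched as follows. The `search` parameter stands in for the real `googleMaps` function and is injected so the sweep logic stays testable; its signature is an assumption for illustration:

```typescript
// Sweep a zoom range and keep the level that yields the most results.
// One API call per zoom level; result persistence (--dst) is omitted.
async function calibrateZoom(
  keyword: string,
  location: string,
  zooms: number[],
  search: (kw: string, loc: string, zoom: number) => Promise<unknown[]>,
): Promise<{ zoom: number; count: number }> {
  let best = { zoom: zooms[0], count: -1 };
  for (const zoom of zooms) {
    const results = await search(keyword, location, zoom);
    if (results.length > best.count) best = { zoom, count: results.length };
  }
  return best;
}
```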
## 2. Area Scanning (Grid Search)
**Objective**: Scan a larger, defined area (e.g., "Madrid" or "Spain") using the optimal zoom level to ensure comprehensive coverage.
### Strategy (Planning)
1. **Input**:
- Target Area Boundaries (Polygon/Box).
- Calibrated Zoom Level (from Step 1).
- List of Provinces/Towns (if segmenting by admin regions).
2. **Grid Generation**:
- Determine the lat/long delta that corresponds to the calibrated zoom level's viewport size.
- Create a grid of search coordinates covering the Target Area.
3. **Execution & State Management**:
- This is a long-running task.
- **State Store**: Maintain a persistent state (JSON/DB) tracking:
- Queue of pending coordinates.
- Completed coordinates.
- Failed coordinates.
- Process the queue sequentially or in parallel batches.
4. **Aggregation**:
- Combine all result files.
- Perform global deduplication (by `place_id` or `title` + `address`).
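The global deduplication step might look like this; the field names `place_id`, `title`, and `address` come from the doc, and the fallback composite key is an assumption:

```typescript
// Minimal result shape for dedup purposes (assumed fields).
interface Place { place_id?: string; title?: string; address?: string }

// Keep the first occurrence per key: prefer place_id, fall back to
// title + address when place_id is missing.
function dedupe(results: Place[]): Place[] {
  const seen = new Set<string>();
  return results.filter((r) => {
    const key = r.place_id ?? `${r.title ?? ''}|${r.address ?? ''}`;
    if (seen.has(key)) return false;
    seen.add(key);
    return true;
  });
}
```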

# Enricher System Design
## Overview
We are separating the "enrichment" logic (scraping, email finding, etc.) from the core search library (`@polymech/search`) to create a modular, extensible system within the server. This system will support both on-demand discovery (fast initial results + streaming enrichment) and batch processing.
## Goals
1. **Decouple:** Move enrichment logic out of `googlemaps.ts`.
2. **Performance:** Allow fast initial search results (meta=false) with lazy loading for enrichment.
3. **Extensibility:** Registry-based system to easily swap or add enrichers (e.g., 'local', 'outsource').
4. **Streaming:** Centralized streaming hub to emit enrichment updates to the client.
## Architecture
### 1. The Enricher Interface
Each enricher must implement a standard interface.
```typescript
export interface EnrichmentContext {
userId: string;
// ... potentially other context
}
export interface IEnricher {
name: string;
type: 'meta' | 'email' | 'phones' | string;
/**
* Enrich a single location.
* @param location The partial competitor data available
* @param context Execution context
*/
enrich(location: CompetitorFull, context: EnrichmentContext): Promise<Partial<CompetitorFull>>;
}
```
### 2. Registry
A simple registry to manage available enrichers.
```typescript
export class EnricherRegistry {
private static enrichers: Map<string, IEnricher> = new Map();
static register(name: string, enricher: IEnricher) {
this.enrichers.set(name, enricher);
}
static get(name: string): IEnricher | undefined {
return this.enrichers.get(name);
}
static getAll(): IEnricher[] {
return Array.from(this.enrichers.values());
}
}
```
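Putting the interface and registry together, here is a self-contained sketch of registering and invoking an enricher. The types are trimmed stand-ins for `IEnricher`/`CompetitorFull`, and the stub does no real scraping:

```typescript
// Trimmed stand-in types for illustration only.
interface EnrichmentContext { userId: string }
interface IEnricher {
  name: string;
  type: string;
  enrich(location: Record<string, unknown>, ctx: EnrichmentContext): Promise<Record<string, unknown>>;
}

const registry = new Map<string, IEnricher>();

// A no-op "meta" enricher showing the contract: return only the diff.
const metaEnricher: IEnricher = {
  name: 'local-meta',
  type: 'meta',
  async enrich(location) {
    // A real enricher would scrape location.website with Puppeteer/Axios.
    return { meta: { scraped: true, url: location.website ?? null } };
  },
};
registry.set(metaEnricher.name, metaEnricher);

// Run the requested enrichers for one location and merge their diffs —
// the per-place_id loop of the streaming endpoint would call this.
async function runEnrichers(
  location: Record<string, unknown>,
  names: string[],
  ctx: EnrichmentContext,
): Promise<Record<string, unknown>> {
  let merged = { ...location };
  for (const name of names) {
    const e = registry.get(name);
    if (e) merged = { ...merged, ...(await e.enrich(merged, ctx)) };
  }
  return merged;
}
```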
### 3. Implementation: 'Local' Meta Enricher
We will port the scraping logic from `search/src/lib/html.ts` to `server/src/products/locations/enrichers/local-meta.ts`.
* **Logic:** Puppeteer/Axios based scraping.
* **Target:** Updates `raw_data.meta`, and extracts social links/emails to `CompetitorSchemaFull` fields.
* **Adjustments:** Ensure strictly server-side dependencies are used and handle errors gracefully without crashing the stream.
### 4. Streaming Hub
A new endpoint `/api/competitors/enrich/stream` (or integrated into existing stream logic) that allows the client to request enrichment for specific items.
**Request:**
```json
{
"place_ids": ["..."],
"enrichers": ["meta"]
}
```
**Flow:**
1. Verify usage/credits.
2. For each `place_id`:
* Load current data.
* Run requested enrichers (concurrently or sequentially).
* Emit `enrichment-update` SSE event with the diff/new data.
* Persist updates to DB.
## Data Schema Extensions
We will extend `CompetitorSchemaFull` (via `raw_data` or explicit fields) to hold the enrichment results.
* `meta`: Object containing scraping results (title, description, og-tags).
* `social`: Standardized social profile links.
* `emails`: Discovered emails.
## Phasing
### Phase 1: Meta Enricher & Registry
* Create `EnricherRegistry`.
* Port `html.ts` to `server/src/products/locations/enrichers/meta.ts`.
* Setup the streaming endpoint for "meta" enrichment.
### Phase 2: Email Enricher
* Implement 'email' enricher (likely using existing logic or new providers).
### Phase 3: Client Integration
* Update client to fetch search results *without* meta first.
* Trigger enrichment stream for visible/requested items.

# GADM Integration Documentation
## Overview
We use the [GADM (Database of Global Administrative Areas)](https://gadm.org/) as our source of truth for administrative boundaries (GeoJSON). This allows us to perform "Regional Scanning" by defining precise polygons for irregular areas like cities, provinces, and states.
## Data Structure
GADM organizes areas hierarchically.
- **Level 0**: Country (e.g., `ESP` for Spain)
- **Level 1**: Primary subdivision (e.g., "Catalunya" - Region)
- **Level 2**: Secondary subdivision (e.g., "Barcelona" - Province)
- **Level 3+**: Tertiary (e.g., Municipalities)
Every area has a unique **GID** (GADM ID):
- `ESP` (Spain)
- `ESP.5_1` (Catalunya)
- `ESP.5.1_1` (Barcelona)
> **Note**: GADM codes are *not* standard ISO codes. Always rely on **Name Search** to find the correct GID.
---
## API Endpoints
We expose a set of public endpoints to interface with the local `pygadm` wrapper.
### 1. Search Regions
Search for a region by name to find its metadata (GID, Name, Type) or full geometry.
`GET /api/regions/search`
| Parameter | Type | Required | Description |
| :--- | :--- | :--- | :--- |
| `query` | string | Yes | Name to search for (e.g., "France", "Paris") |
| `content_level` | int | No | Filter by admin level (e.g., `1` for regions) |
| `geojson` | boolean | No | If `true`, returns full `FeatureCollection` with geometry. |
**Example:**
`/api/regions/search?query=Catalunya&content_level=1`
### 2. Get Boundary
Retrieve the precise GeoJSON boundary for a specific known GID.
`GET /api/regions/boundary/{id}`
| Parameter | Type | Required | Description |
| :--- | :--- | :--- | :--- |
| `id` | string | Yes | GADM ID (e.g., `FRA.1_1`) |
**Response:**
Returns a GeoJSON `FeatureCollection` containing the polygon(s) for that region.
### 3. Get Sub-Region Names
List all child regions for a given parent code. Useful for cascading dropdowns.
`GET /api/regions/names`
| Parameter | Type | Required | Description |
| :--- | :--- | :--- | :--- |
| `admin` | string | Yes | Parent Admin Code (e.g., `FRA` or `FRA.1_1`) |
| `content_level` | int | Yes | The target level to retrieve (e.g., `2`) |
---
## Mapping Strategy: SerpAPI to GADM
External data sources like Google Maps (via SerpAPI) often use different standards (ISO-3166-2) than GADM. **Do not try to map by Code.**
**Recommended Workflow:**
1. **Extract Name**: Get the administrative name from the external result (e.g., `geo.principalSubdivision` -> "Catalunya").
2. **Search GADM**: Search for this name using the endpoint.
- `GET /api/regions/search?query=Catalunya`
3. **Filter Results**:
- Match `GID_0` to the known Country Code (e.g., `ESP`) to resolve ambiguity (e.g., "Valencia" exists in Spain and Venezuela).
4. **Get Boundary**: Use the resulting `GID` (e.g., `ESP.5_1`) to fetch the polygon.
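Step 3 of the workflow can be sketched as a pure filter. The `RegionHit` shape mirrors typical GADM fields but is an assumption about the endpoint's response:

```typescript
// Assumed shape of one /api/regions/search result row.
interface RegionHit { GID: string; NAME: string; GID_0: string }

// Disambiguate same-name hits by matching GID_0 to the known country:
// "Valencia" exists in both ESP and VEN — the country code decides.
function resolveRegion(hits: RegionHit[], countryCode: string): RegionHit | undefined {
  return hits.find((h) => h.GID_0 === countryCode);
}
```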

# Grid Search — Implementation Plan
## Core Insight
The grid search is a **GADM tree walk** with a **pluggable iterator**.
Two distinct phases:
1. **Enumerate** — walk the GADM tree to the target level → return area names + centers (free, cached)
2. **Search** — plug in an iterator function (Google Maps, email, …) that runs per area (costs credits)
The user sees the area list first. Only when they confirm, the iterator runs. Results are cached per area — re-walking skips already-searched areas.
---
## Data Flow
```
Phase 1: Enumerate (no cost) Phase 2: Search (iterator)
──────────────────────────── ─────────────────────────
region: "Spain, Catalonia" iterator: googleMapsSearch
level: "cities" types: ["machine shop"]
┌─ searchRegions("Catalonia") for each area in areas:
│ → GID = "ESP.6_1" (L1) searchGoogleMap(
│ type,
├─ getRegionNames(admin=GID, contentLevel=3) @center.lat,lon,14z
│ → ["Barcelona", "Terrassa", "Sabadell", …] )
│ → results[]
├─ getBoundaryFromGpkg(GID, 3)
│ → GeoJSON per city deduplicate by place_id
│ cache per area GID+type
└─ centroid(bbox) → { lat, lon }
→ GridArea[]
Return to user: "Found 42 cities in Catalonia.
Run location search?" → user confirms → Phase 2
```
---
## Level Mapping
| User Level | GADM | Typical Meaning |
|-------------|------|-----------------------------|
| `provinces` | 1 | State / Province / Region |
| `districts` | 2 | County / District / Kreis |
| `cities` | 3 | Municipality / City |
| `towns` | 4 | Town / Commune |
| `villages` | 5 | Village / Sub-commune |
> Not all countries have all levels. The function caps at the country's max depth.
---
## API Design
### Types
```typescript
// server/src/products/locations/gridsearch-googlemaps.ts
/** A resolved area from the GADM tree walk */
export interface GridArea {
name: string;
gid: string;
level: number;
center: { lat: number; lon: number };
}
/** Phase 1 result — just the enumerated areas */
export interface GridEnumerateResult {
region: { name: string; gid: string; level: number };
areas: GridArea[];
maxLevelAvailable: number;
}
/** Phase 2 result — per-area search output */
export interface GridSearchAreaResult {
area: GridArea;
results: any[];
cached: boolean;
error?: string;
}
/** Phase 2 full result */
export interface GridSearchResult {
region: { name: string; gid: string; level: number };
areaCount: number;
totalResults: number;
results: any[]; // deduplicated
areas: GridSearchAreaResult[];
durationMs: number;
}
```
### Functions
```typescript
/**
* Phase 1: Walk GADM tree, enumerate areas at target level.
* No SerpAPI calls, no cost. Results are cached.
*/
export async function gridEnumerate(opts: {
region: string; // "Spain, Catalonia"
level: GridLevel; // 'cities' | 'towns' | number
}): Promise<GridEnumerateResult>
/**
* Phase 2: Run Google Maps search on each area.
* This is the iterator — pluggable per search type.
*/
export async function gridSearchGoogleMaps(opts: {
areas: GridArea[];
types: string[];
apiKey: string;
bigdata?: { key: string };
limitPerArea?: number; // default: 20
zoom?: number; // default: 14
concurrency?: number; // default: 2
}): Promise<GridSearchResult>
```
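The `concurrency` option can be implemented with a small promise pool; a sketch (not the actual implementation), where results keep the input order:

```typescript
// Minimal promise pool: at most `limit` tasks in flight at once.
async function runPool<T>(tasks: Array<() => Promise<T>>, limit: number): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  let next = 0;
  const worker = async () => {
    while (next < tasks.length) {
      const i = next++; // claim the next task index (synchronous, so race-free)
      results[i] = await tasks[i]();
    }
  };
  await Promise.all(Array.from({ length: Math.min(limit, tasks.length) }, worker));
  return results;
}
```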
### Separation of Concerns
```
gridEnumerate() → pure GADM, no cost, cacheable
gridSearchGoogleMaps() → takes areas[], fires SerpAPI, costs credits
```
Later iterators can follow the same pattern:
- `gridSearchEmails(areas, opts)` — find emails per area
- `gridSearchEnrich(areas, opts)` — run enrichers per area
---
## Step-by-Step Logic
### `gridEnumerate()`
1. Split `region` on comma: `["Spain", "Catalonia"]`
2. Last part = `name`, first = `country` hint
3. `searchRegions({ query: name, country })` → get GID + level
4. Map level label → number (`cities=3`)
5. `getRegionNames({ admin: gid, contentLevel })` → rows with NAME/GID
6. `getBoundaryFromGpkg(gid, contentLevel)` → GeoJSON features
7. Compute bbox centroid per feature → `GridArea[]`
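Steps 1 and 2 as a pure helper (a sketch; the split convention is inferred from the "Spain, Catalonia" example):

```typescript
// Split "Country, Region" into a country hint and a region name.
// A bare "Catalonia" (no comma) uses the name as both.
function parseRegion(region: string): { country: string; name: string } {
  const parts = region.split(',').map(s => s.trim()).filter(Boolean);
  const name = parts[parts.length - 1];
  const country = parts.length > 1 ? parts[0] : name;
  return { country, name };
}
```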
### `gridSearchGoogleMaps()`
1. For each area, build `searchCoord: @lat,lon,{zoom}z`
2. For each type in `types[]`:
- `searchGoogleMap(type, apiKey, opts)` → results
3. Merge results per area
4. Deduplicate globally by `place_id`
5. Return `GridSearchResult`
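Step 4's global deduplication might look like this (the result shape beyond `place_id` is assumed):

```typescript
// Keep the first occurrence of each place_id across all areas and types.
interface PlaceResult { place_id: string; [key: string]: unknown }

function dedupeByPlaceId(results: PlaceResult[]): PlaceResult[] {
  const seen = new Map<string, PlaceResult>();
  for (const r of results) {
    if (!seen.has(r.place_id)) seen.set(r.place_id, r);
  }
  return [...seen.values()];
}
```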
---
## Center Computation (No Turf.js)
```typescript
function bboxCentroid(feature: any): { lat: number; lon: number } {
let minLat = Infinity, maxLat = -Infinity;
let minLon = Infinity, maxLon = -Infinity;
const walk = (coords: any) => {
if (typeof coords[0] === 'number') {
const [lon, lat] = coords;
if (lat < minLat) minLat = lat;
if (lat > maxLat) maxLat = lat;
if (lon < minLon) minLon = lon;
if (lon > maxLon) maxLon = lon;
return;
}
for (const c of coords) walk(c);
};
walk(feature.geometry.coordinates);
return { lat: (minLat + maxLat) / 2, lon: (minLon + maxLon) / 2 };
}
```
---
## File Structure
```
server/src/products/locations/
├── gridsearch-googlemaps.ts # [NEW] gridEnumerate + gridSearchGoogleMaps
├── __tests__/
│ └── gridsearch-googlemaps.e2e.test.ts # [NEW] E2E tests
```
---
## Test Plan
```typescript
describe('Grid Search', () => {
describe('Phase 1 — Enumerate', () => {
it('enumerates Catalonia cities', async () => {
const result = await gridEnumerate({
region: 'Spain, Catalonia',
level: 'cities',
});
expect(result.areas.length).toBeGreaterThan(0);
expect(result.areas[0].center.lat).toBeTypeOf('number');
// No SerpAPI calls, no cost
});
it('enumerates Sachsen districts', async () => {
const result = await gridEnumerate({
region: 'Germany, Sachsen',
level: 'districts',
});
expect(result.areas.length).toBe(13);
});
});
describe('Phase 2 — Google Maps Search', () => {
it('searches machine shops in 2 Catalonia cities', async () => {
const enumResult = await gridEnumerate({
region: 'Spain, Catalonia',
level: 'cities',
});
// Only search first 2 areas to keep test cheap
const result = await gridSearchGoogleMaps({
areas: enumResult.areas.slice(0, 2),
types: ['machine shop'],
apiKey: config.serpapi.key,
bigdata: config.bigdata,
limitPerArea: 5,
concurrency: 1,
});
expect(result.totalResults).toBeGreaterThan(0);
expect(result.areas[0].area.name).toBeDefined();
}, 120_000);
});
});
```
### NPM Script
```json
"test:products:locations:gridsearch:googlemaps": "vitest run src/products/locations/__tests__/gridsearch-googlemaps.e2e.test.ts"
```
---
## Caching Strategy
| What | Cache Key | Storage |
|------|-----------|---------|
| Tree enumerate | `grid_enum_{gid}_{level}` | GADM file cache |
| Google Maps search | `grid_search_{gid}_{type}_{zoom}` | Supabase `place_searches` |
| Area boundaries | `boundary_{gid}` | GADM file cache (already cached) |
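Key construction following the table above (helper names and the space-to-dash normalization are assumptions):

```typescript
// Cache keys per the table; spaces in search types normalized to dashes.
const enumKey = (gid: string, level: number) => `grid_enum_${gid}_${level}`;
const searchKey = (gid: string, type: string, zoom: number) =>
  `grid_search_${gid}_${type.replace(/\s+/g, '-')}_${zoom}`;
const boundaryKey = (gid: string) => `boundary_${gid}`;
```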
---
## Future Iterators (Not Phase 1)
- `gridSearchEmails(areas)` — find emails for businesses found in each area
- `gridSearchEnrich(areas)` — run meta/social enrichers per area
- PgBoss campaign integration — one child job per area
- SSE streaming — live progress as each area completes
- Cost estimation pre-flight — `areaCount × costPerSearch`

# Grid Search / Regional Scanning Documentation
## Overview
The Grid Search (or Regional Scanning) feature automates the discovery of leads across large, irregular geographic areas (e.g., entire cities, provinces, or countries). Instead of manual point searches, users select a defined administrative region, and the system intelligently decomposes it into a grid of optimal search points.
This functionality relies on a microservice architecture where **GADM** (Global Administrative Areas) data provides high-fidelity GeoJSON boundaries for exclusion/inclusion logic.
---
## Conceptual Architecture
### 1. Region Selection (Client)
The user selects a target region (e.g., "Île-de-France, France"). The client fetches the corresponding boundary polygon from the GADM microservice (Admin Level 1/2).
### 2. Grid Decomposition (Server/Client)
The system calculates a "Search Grid" overlaying the target polygon.
- **Viewport Normalization**: A single API search at Zoom Level 15 covers roughly a 2-5km radius.
- **Bounding Box**: A rectangular grid is generated covering the polygon's extents.
- **Point-in-Polygon Filtering**: Grid centers falling *outside* the actual administrative boundary (e.g., ocean, neighboring states) are discarded using spatial analysis libraries (e.g., `Turf.js`).
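A dependency-free sketch of the decomposition (in practice Turf's `pointGrid` + `pointsWithinPolygon` would replace this; the sketch handles a single outer ring only, with holes and MultiPolygons omitted):

```typescript
type LonLat = [number, number];

// Ray-casting point-in-polygon test against one ring of [lon, lat] pairs.
function pointInRing(pt: LonLat, ring: LonLat[]): boolean {
  const [x, y] = pt;
  let inside = false;
  for (let i = 0, j = ring.length - 1; i < ring.length; j = i++) {
    const [xi, yi] = ring[i];
    const [xj, yj] = ring[j];
    const intersects = (yi > y) !== (yj > y) &&
      x < ((xj - xi) * (y - yi)) / (yj - yi) + xi;
    if (intersects) inside = !inside;
  }
  return inside;
}

// Lay a grid over the ring's bounding box, keep only interior points.
function gridPoints(ring: LonLat[], step: number): LonLat[] {
  const lons = ring.map(p => p[0]), lats = ring.map(p => p[1]);
  const minLon = Math.min(...lons), maxLon = Math.max(...lons);
  const minLat = Math.min(...lats), maxLat = Math.max(...lats);
  const out: LonLat[] = [];
  for (let lon = minLon; lon <= maxLon; lon += step)
    for (let lat = minLat; lat <= maxLat; lat += step)
      if (pointInRing([lon, lat], ring)) out.push([lon, lat]);
  return out;
}
```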
### 3. Campaign Orchestration (Server)
The resulting set of valid coordinates (e.g., 450 points) is submitted as a **"Scan Campaign"**.
- **Batching**: The server does NOT run 450 searches instantly. It uses `PgBoss` to queue them as individual jobs.
- **Concurrency**: Jobs are processed with strict rate-limiting to respect SerpAPI quotas.
- **Deduplication**: Results from overlapping grid circles are merged by `place_id`.
---
## Workflow Implementation
### Step 1: User Selects Region
The user interacts with the new "Region Search" UI:
1. **Search**: "California"
2. **Dropdown**: Selects "California, USA (State/Province)"
3. **Preview**: Map validates the polygon overlay.
### Step 2: Grid Generation Status
Pre-flight check displayed to user:
- **Total Area**: 423,970 km²
- **Grid Density**: High (Zoom 15)
- **Estimated Points**: ~8,500 scans (Warn: Expensive!)
- **Cost**: 8,500 Credits
- **Action**: "Confirm & Start Campaign"
### Step 3: Campaign Execution
Server receives payload:
```json
{
"regionId": "USA.5_1",
"query": "Plumbers",
"gridConfig": { "zoom": 15, "overlap": 0.2 }
}
```
Server decomposes to jobs `[Job_1, Job_2, ... Job_8500]`.
### Step 4: Live Updates
The existing SSE stream (`stream-sse`) adapts to listen for Campaign Events, updating a global progress bar:
- "Scanned 120/8500 sectors..."
- "Found 45 new leads..."
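The campaign event payloads on that stream might look like the following (shapes are illustrative, not a spec):

```jsonc
// Hypothetical campaign events (one per SSE message)
{ "type": "campaign:progress", "campaignId": "...", "scanned": 120, "total": 8500 }
{ "type": "campaign:leads",    "campaignId": "...", "newLeads": 45 }
```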
---
## Implementation TODO List
### Server-Side (`test/server`)
- [x] **GADM Integration Endpoint**:
- [x] Create route `GET /api/regions/search?q={name}` to proxy requests to the GADM microservice or query local PostGIS.
- [x] Create route `GET /api/regions/boundary/{gadm_id}` to retrieve full GeoJSON.
- [x] Create route `GET /api/regions/names?admin={code}` to fetch sub-region names.
- [ ] **Grid Logic**:
- Install `@turf/turf` for geospatial operations.
- Implement `generateGrid(boundaryFeature, zoomLevel)` function:
- Calculate `bbox`.
- Generate point grid.
- Filter `pointsWithinPolygon`.
- [ ] **Campaign Manager**:
- Create `CampaignsProduct` or extend `LocationsProduct`.
- New Job Type: `REGION_SCAN_PARENT` (decomposes into child jobs).
- New Job Type: `REGION_SCAN_CHILD` (actual search).
- [ ] **Job Queue Optimization**:
- Ensure `PgBoss` allows huge batch insertions (thousands of jobs).
- Implement "Campaign Cancellation" (kill switch for all child jobs).
### Client-Side (`test/client`)
- [ ] **Region Picker UI**:
- New Autocomplete component fetching from `/api/regions/search`.
- [ ] **Map Visualization**:
- Render the GeoJSON `Polygon` on MapLibre.
- Render the calculated `Point` grid pre-flight (allow user to manually deselect points?).
- [ ] **Campaign Dashboard**:
- New View: "Active Scans".
- Progress bars per campaign.
- "Pause/Resume" controls.
- [ ] **Result Merging**:
- Ensure the client DataGrid can handle streaming results effectively from potentially thousands of searches (Virtualization required).
---
## Existing Endpoint Reference
*(Ref. `src/products/locations/index.ts`)*
The current `LocationsProduct` is well-positioned to be the parent of this logic.
- **`handleStreamGet`**: Can be adapted to accept a `campaignId` instead of a single `location`.
- **`handleStreamEmail`**: Shows the pattern for batch processing (accepting arrays of IDs). We can replicate this "Scatter-Gather" pattern for the Region Scan.
### Proposed GeoJSON Microservice Interface
We assume the existence of an internal service (or creating a dedicated module) exposing:
- `GET /gadm/v1/search?text=...` -> Returns lightweight metadata (ID, Name, Level).
- `GET /gadm/v1/feature/{id}` -> Returns heavy GeoJSON Geometry.
---
## 4. Potential Data Enrichments
To increase the value of harvested locations, the following layers can be overlaid or merged with the search results:
### Demographics & Population
- **WorldPop**: High-resolution raster data for estimating the catchment population of a specific business location.
- **Census Data**: (US Census / Eurostat) Admin-level statistics on income, age, and household size to score "Market Viability".
### Firmographics & Business Intel
- **OpenCorporates**: Verify legal entity status and official registration dates.
- **LinkedIn Organization API**: Enrich with employee count, industry tags, and recent growth signals.
- **Clearbit / Apollo.io**: Deep profile matching to find technographics (what software they use) and key decision-maker contacts.
### Environmental & Infrastructure
- **OpenStreetMap (OSM)**: Calculate "Footfall Potential" by analyzing proximity to transit hubs, parking, and density of other retail POIs.
- **WalkScore / TransitScore**: Rate the accessibility of consumer-facing businesses.
### Industry Specifics
- **TripAdvisor / Yelp**: Cross-reference hospitality ratings to find discrepancies or opportunities (e.g., highly rated on Google, poorly rated on Yelp).
- **Plastics Industry Databases**: (Specific to Polymech) Cross-referencing registered recyclers lists provided by regional environmental agencies.
See also: <https://pygadm.readthedocs.io/en/latest/usage.html>

# PoolyPress MCP Server
> Model Context Protocol (MCP) server that lets any LLM search, browse, and read content on a PoolyPress instance.
---
## Architecture
```
┌──────────────────────────────┐
│ MCP Client (Claude, etc.) │
└──────────┬───────────────────┘
│ POST /api/mcp
│ JSON-RPC 2.0
┌──────────▼───────────────────┐
│ McpProduct (Hono handler) │
│ handlers.ts │
│ ───────────────────────── │
│ initialize · tools/list │
│ tools/call → tools.ts │
└──────────┬───────────────────┘
│ direct function calls
┌──────────▼───────────────────┐
│ Server-side logic │
│ searchDirect · categories │
│ pages-data · site-scrape │
└──────────────────────────────┘
```
### Key decisions
| Decision | Choice | Rationale |
|---|---|---|
| **SDK** | None — raw JSON-RPC 2.0 | Zero deps; MCP spec is just JSON-RPC over HTTP |
| **Transport** | `POST /api/mcp` (HTTP) | Single endpoint; works with any HTTP client |
| **Auth** | `Bearer <supabase-token>` | Reuses existing `getUserCached()` — no new auth layer |
| **Code reuse** | Direct imports from `products/serving/` | No REST-over-HTTP round-trips; zero duplication |
---
## Source Files
```
server/src/products/mcp/
├── index.ts # McpProduct class (extends AbstractProduct)
├── routes.ts # POST /api/mcp route definition
├── handlers.ts # JSON-RPC 2.0 dispatcher
├── tools.ts # 17 tool definitions + handlers
└── __tests__/
└── mcp.e2e.test.ts # E2E tests
```
| File | Source | Purpose |
|---|---|---|
| [index.ts](../server/src/products/mcp/index.ts) | Product entry | Registers with platform product system |
| [routes.ts](../server/src/products/mcp/routes.ts) | Route | `POST /api/mcp` — private (auth required) |
| [handlers.ts](../server/src/products/mcp/handlers.ts) | Handler | Dispatches `initialize`, `tools/list`, `tools/call` |
| [tools.ts](../server/src/products/mcp/tools.ts) | Tools | All tool schemas + handler functions |
| [registry.ts](../server/src/products/registry.ts) | Registration | `'mcp': McpProduct` entry |
| [products.json](../server/config/products.json) | Config | `mcp` enabled, depends on `serving` |
### Upstream dependencies
| Import | Source file | Used by |
|---|---|---|
| `searchDirect()` | [db-search.ts](../server/src/products/serving/db/db-search.ts) | `search_content`, `find_pages`, `find_pictures`, `find_files` |
| `fetchCategoriesServer()` | [db-categories.ts](../server/src/products/serving/db/db-categories.ts) | `list_categories` |
| `getCategoryState()` | [db-categories.ts](../server/src/products/serving/db/db-categories.ts) | `find_by_category` |
| `filterVisibleCategories()` | [db-categories.ts](../server/src/products/serving/db/db-categories.ts) | `list_categories` |
| `getPagesState()` | [pages-data.ts](../server/src/products/serving/pages/pages-data.ts) | `get_page_content`, `find_by_category` |
| `enrichPageData()` | [pages-data.ts](../server/src/products/serving/pages/pages-data.ts) | `get_page_content` |
| `JSDOM` + `Readability` | [jsdom](https://www.npmjs.com/package/jsdom), [@mozilla/readability](https://www.npmjs.com/package/@mozilla/readability) | `markdown_scraper` |
| `getPageTranslations()` | [pages-i18n.ts](../server/src/products/serving/pages/pages-i18n.ts) | `get_page_translations`, `set_page_translations` |
| `getTranslationGaps()` | [db-i18n.ts](../server/src/products/serving/db/db-i18n.ts) | `get_translation_gaps` |
---
## Tools
### `search_content`
Full-text search across pages, posts, pictures, and VFS files.
```jsonc
{
"name": "search_content",
"inputSchema": {
"type": "object",
"properties": {
"query": { "type": "string", "description": "Search query (full-text)" },
"limit": { "type": "number", "description": "Max results (default 20, max 50)" },
"type": { "type": "string", "enum": ["all","pages","posts","pictures","files"] }
},
"required": ["query"]
}
}
```
**Returns:** `[{ id, title, description, type, rank, url, created_at }]`
**Backend:** `searchDirect({ q, limit, type, userId })`
---
### `find_pages`
Search specifically for pages.
```jsonc
{ "name": "find_pages", "inputSchema": { "properties": { "query": {}, "limit": {} }, "required": ["query"] } }
```
**Returns:** `[{ id, title, slug, description, rank, created_at }]`
---
### `find_pictures`
Search specifically for pictures/images.
```jsonc
{ "name": "find_pictures", "inputSchema": { "properties": { "query": {}, "limit": {} }, "required": ["query"] } }
```
**Returns:** `[{ id, title, description, image_url, rank, created_at }]`
---
### `find_files`
Search for files and folders in the Virtual File System (VFS).
```jsonc
{ "name": "find_files", "inputSchema": { "properties": { "query": {}, "limit": {} }, "required": ["query"] } }
```
**Returns:** `[{ id, title, path, type, url, created_at }]`
---
### `get_page_content`
Get the full content of a specific page by slug or ID.
```jsonc
{
"name": "get_page_content",
"inputSchema": {
"type": "object",
"properties": {
"slug": { "type": "string", "description": "Page slug (e.g. \"about-us\")" },
"id": { "type": "string", "description": "Page UUID (alternative to slug)" }
}
}
}
```
**Returns:** `{ id, title, slug, description, content, tags, is_public, created_at, updated_at, meta }`
---
### `list_categories`
List all content categories with hierarchy.
```jsonc
{
"name": "list_categories",
"inputSchema": {
"type": "object",
"properties": {
"parentSlug": { "type": "string", "description": "Filter children of parent" },
"includeChildren": { "type": "boolean", "description": "Include nested children (default true)" }
}
}
}
```
**Returns:** `[{ id, name, slug, description, children: [{ id, name, slug }] }]`
---
### `find_by_category`
Get all pages belonging to a category (and descendants).
```jsonc
{
"name": "find_by_category",
"inputSchema": {
"type": "object",
"properties": {
"slug": { "type": "string" },
"limit": { "type": "number", "description": "Max items (default 50)" },
"includeDescendants": { "type": "boolean", "description": "Include child categories (default true)" }
},
"required": ["slug"]
}
}
```
**Returns:** `{ category: { id, name, slug, description }, total, items: [{ id, title, slug, description, variables, created_at }] }`
---
### `markdown_scraper`
Scrape a URL and return clean Markdown.
```jsonc
{
"name": "markdown_scraper",
"inputSchema": {
"type": "object",
"properties": {
"url": { "type": "string", "description": "URL to scrape" }
},
"required": ["url"]
}
}
```
**Returns:** `{ markdown, title }` or `{ error }`
> Uses lightweight `fetch` + `Readability` + `Turndown`. For JavaScript-heavy pages, the full Scrapeless-powered endpoint at `POST /api/scrape/markdown` ([site-scrape.ts](../server/src/products/serving/site-scrape.ts)) is available separately.
---
### `get_page_translations`
Get existing translations for a page. Returns all widget translations and meta (title/description) for a specific target language.
```jsonc
{
"name": "get_page_translations",
"inputSchema": {
"type": "object",
"properties": {
"slug": { "type": "string", "description": "Page slug" },
"id": { "type": "string", "description": "Page UUID (alternative to slug)" },
"target_lang": { "type": "string", "description": "Target language code (e.g. \"es\", \"de\")" },
"source_lang": { "type": "string", "description": "Source language code (default \"en\")" }
},
"required": ["target_lang"]
}
}
```
**Returns:** `{ page_id, page_title, slug, target_lang, source_lang, translations: [{ widget_id, prop_path, source_text, translated_text, status, outdated }], summary: { total, translated, missing, outdated } }`
---
### `set_page_translations`
Save translations for a page. Batch-upserts widget translations for a target language. The LLM performs the translation — this tool persists the results.
```jsonc
{
"name": "set_page_translations",
"inputSchema": {
"type": "object",
"properties": {
"slug": { "type": "string" },
"id": { "type": "string" },
"target_lang": { "type": "string" },
"source_lang": { "type": "string", "description": "default \"en\"" },
"translations": {
"type": "array",
"items": {
"type": "object",
"properties": {
"widget_id": { "type": "string", "description": "Widget instance ID or \"__meta__\"" },
"translated_text": { "type": "string" },
"prop_path": { "type": "string", "description": "default \"content\"" },
"status": { "type": "string", "enum": ["draft","machine","reviewed","published"] }
},
"required": ["widget_id", "translated_text"]
}
}
},
"required": ["target_lang", "translations"]
}
}
```
**Returns:** `{ success, page_id, slug, target_lang, count, message }`
**Auth:** Owner only
---
### `get_translation_gaps`
Find pages/entities with missing or outdated translations for a given language.
```jsonc
{
"name": "get_translation_gaps",
"inputSchema": {
"type": "object",
"properties": {
"target_lang": { "type": "string", "description": "Target language code (e.g. \"de\")" },
"entity_type": { "type": "string", "enum": ["page","category","type"], "description": "default \"page\"" },
"mode": { "type": "string", "enum": ["missing","outdated","all"], "description": "default \"all\"" },
"source_lang": { "type": "string", "description": "default \"en\"" }
},
"required": ["target_lang"]
}
}
```
**Returns:** Array of entities with their untranslated/outdated source text
---
## Protocol
The endpoint speaks **JSON-RPC 2.0** — no MCP SDK required on either side.
### Methods
| Method | Purpose |
|---|---|
| `initialize` | Handshake — returns server info and capabilities |
| `tools/list` | Lists all 17 tools with schemas |
| `tools/call` | Execute a tool by name with arguments |
### Request format
```json
{
"jsonrpc": "2.0",
"id": 1,
"method": "tools/call",
"params": {
"name": "search_content",
"arguments": { "query": "plastic", "limit": 5 }
}
}
```
### Response format
```json
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"content": [{
"type": "text",
"text": "[{\"id\":\"...\",\"title\":\"...\"}]"
}]
}
}
```
### Error codes
| Code | Meaning |
|---|---|
| `-32700` | Parse error (malformed JSON) |
| `-32600` | Invalid request (missing jsonrpc/method) |
| `-32601` | Method/tool not found |
| `-32603` | Internal error |
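A minimal client-side wrapper over this envelope (a sketch; the endpoint, token handling, and error policy are your choice):

```typescript
interface JsonRpcResponse {
  jsonrpc: '2.0';
  id: number;
  result?: unknown;
  error?: { code: number; message: string };
}

let nextId = 1;

// Build a JSON-RPC 2.0 request envelope with an auto-incrementing id.
function buildRequest(method: string, params?: unknown) {
  return { jsonrpc: '2.0' as const, id: nextId++, method, params };
}

// POST the envelope to the MCP endpoint and unwrap result/error.
async function mcpCall(endpoint: string, token: string, method: string, params?: unknown) {
  const res = await fetch(endpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', Authorization: `Bearer ${token}` },
    body: JSON.stringify(buildRequest(method, params)),
  });
  const body = (await res.json()) as JsonRpcResponse;
  if (body.error) throw new Error(`MCP ${body.error.code}: ${body.error.message}`);
  return body.result;
}
```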
---
## Usage
### curl
```bash
# Initialize
curl -X POST http://localhost:3001/api/mcp \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_TOKEN" \
-d '{"jsonrpc":"2.0","id":1,"method":"initialize","params":{}}'
# List tools
curl -X POST http://localhost:3001/api/mcp \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_TOKEN" \
-d '{"jsonrpc":"2.0","id":2,"method":"tools/list"}'
# Search
curl -X POST http://localhost:3001/api/mcp \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_TOKEN" \
-d '{"jsonrpc":"2.0","id":3,"method":"tools/call","params":{"name":"search_content","arguments":{"query":"plastic","limit":3}}}'
```
### Claude Desktop / Cursor / Windsurf
These clients traditionally expect a stdio transport or an SSE endpoint; newer versions can also connect to an HTTP endpoint directly via the `url` field:
```json
{
"mcpServers": {
"poolypress": {
"url": "http://localhost:3001/api/mcp",
"headers": {
"Authorization": "Bearer YOUR_TOKEN"
}
}
}
}
```
> **Note:** Claude Desktop 2025+ supports HTTP MCP servers natively via the `url` field.
---
## Configuration
The MCP product is enabled in [`server/config/products.json`](../server/config/products.json):
```json
{
"name": "mcp",
"enabled": true,
"workers": 0,
"deps": ["serving"]
}
```
To disable the MCP endpoint, set `"enabled": false`.
---
## Testing
```bash
cd server
npm run test:mcp
```
Runs 13 E2E tests covering all tools, error handling, and protocol compliance.
Test file: [`server/src/products/mcp/__tests__/mcp.e2e.test.ts`](../server/src/products/mcp/__tests__/mcp.e2e.test.ts)
---
## How to Add a New Tool
1. **Define the tool** in [`tools.ts`](../server/src/products/mcp/tools.ts):
```typescript
const myNewTool: McpTool = {
name: 'my_tool',
description: 'What this tool does — shown to the LLM.',
inputSchema: {
type: 'object',
properties: {
param1: { type: 'string', description: '...' }
},
required: ['param1']
},
handler: async (args, userId) => {
// Call server-side logic directly
const result = await someServerFunction(args.param1);
return result;
}
};
```
2. **Register it** — add to the `MCP_TOOLS` array at the bottom of `tools.ts`:
```typescript
export const MCP_TOOLS: McpTool[] = [
// … existing tools …
myNewTool
];
```
That's it. The handler in `handlers.ts` auto-discovers tools via the `MCP_TOOLS_MAP`. No route changes needed.
3. **Add tests** — add a test case in [`mcp.e2e.test.ts`](../server/src/products/mcp/__tests__/mcp.e2e.test.ts) and update the tool count assertion.
### Tool design guidelines
- **Call server-side functions directly** — never make HTTP requests to your own server
- **Accept `userId`** as second argument — pass it through for visibility/ACL filtering
- **Return structured data** — the handler serializes it to JSON automatically
- **Use existing caches** — `getPagesState()`, `getCategoryState()`, etc. are all cached
- **Keep schemas minimal** — LLMs work better with fewer, well-described parameters
---
## Security
- **Auth gating**: Every tool call resolves the user from the Bearer token. Anonymous requests get limited visibility (public content only).
- **VFS ACL**: File searches respect the existing ACL layer.
- **Visibility filtering**: `searchDirect()` applies owner/public/private filtering based on `userId`.
- **Rate limiting**: Inherits the platform's `apiRateLimiter` middleware.
- **Write operations**: Content creation, editing, and translation tools require authentication and verify page ownership (`userId === page.owner`). Admin-only actions are **not** available.

# Mux Video Integration
This project integrates [Mux](https://www.mux.com) for professional video upload, processing, and streaming capabilities.
## Overview
Mux provides:
- **Video Upload**: Drag & drop or click to upload video files
- **Automatic Processing**: Videos are automatically transcoded and optimized
- **HLS Streaming**: Adaptive bitrate streaming for smooth playback
- **Thumbnail Generation**: Automatic thumbnails and poster images
- **Analytics**: Track video views and engagement (optional)
## Architecture
### Flow
1. **Client requests upload URL** → Frontend calls our Supabase Edge Function
2. **Edge Function creates upload** → Calls Mux API to generate signed upload URL
3. **User uploads video** → Mux Uploader handles the upload with progress tracking
4. **Mux processes video** → Transcodes video, creates HLS stream, generates thumbnails
5. **Get playback ID** → Poll for asset creation, retrieve playback ID
6. **Play video** → Use Vidstack player with Mux HLS stream URL
### Components
- **MuxUploader**: React component for uploading videos (`@mux/mux-uploader-react`)
- **VideoCard**: Component for displaying videos with Vidstack player
- **mux-proxy**: Supabase Edge Function that interfaces with Mux API
## Setup
### 1. Get Mux Credentials
1. Sign up at [mux.com](https://www.mux.com)
2. Navigate to **Settings** → **Access Tokens**
3. Create a new access token with permissions:
- `Mux Video` - Read and Write
4. Copy the **Token ID** and **Token Secret**
### 2. Configure Environment Variables
Add these to your Supabase Edge Function environment variables:
```bash
MUX_TOKEN_ID=your_token_id_here
MUX_TOKEN_SECRET=your_token_secret_here
```
To set them in Supabase:
```bash
# Using Supabase CLI
supabase secrets set MUX_TOKEN_ID=your_token_id
supabase secrets set MUX_TOKEN_SECRET=your_token_secret
# Or via Supabase Dashboard
# Project Settings → Edge Functions → Secrets
```
### 3. Deploy Edge Function
```bash
supabase functions deploy mux-proxy
```
## Usage
### Upload Video
```tsx
import MuxUploader from "@mux/mux-uploader-react";
import { supabase } from "@/integrations/supabase/client";
const fetchUploadUrl = async () => {
  // Retrieve the current auth session for the Bearer token
  const { data: { session } } = await supabase.auth.getSession();
  const response = await fetch(
`${supabase.supabaseUrl}/functions/v1/mux-proxy`,
{
method: 'POST',
headers: {
'Authorization': `Bearer ${session.access_token}`,
'Content-Type': 'application/json',
},
body: JSON.stringify({ action: 'create-upload' }),
}
);
const { data } = await response.json();
return data.url;
};
function VideoUpload() {
return (
<MuxUploader
endpoint={fetchUploadUrl}
onSuccess={(event) => {
console.log('Upload complete!', event.detail);
}}
/>
);
}
```
### Play Video
Once you have the playback ID from Mux, you can play the video:
```tsx
import VideoCard from "@/components/VideoCard";
function VideoPlayer({ playbackId }: { playbackId: string }) {
const videoUrl = `https://stream.mux.com/${playbackId}.m3u8`;
const thumbnailUrl = `https://image.mux.com/${playbackId}/thumbnail.jpg`;
return (
<VideoCard
videoId="123"
videoUrl={videoUrl}
thumbnailUrl={thumbnailUrl}
title="My Video"
author="User"
authorId="user-id"
likes={0}
comments={0}
/>
);
}
```
## Mux API Actions
### create-upload
Creates a new direct upload URL.
**Request:**
```json
{
"action": "create-upload"
}
```
**Response:**
```json
{
"success": true,
"data": {
"id": "upload_abc123",
"url": "https://storage.googleapis.com/...",
"status": "waiting"
}
}
```
### get-upload
Get the status of an upload and check if asset was created.
**Request:**
```json
{
"action": "get-upload",
"uploadId": "upload_abc123"
}
```
**Response:**
```json
{
"success": true,
"data": {
"id": "upload_abc123",
"status": "asset_created",
"asset_id": "asset_xyz789"
}
}
```
### get-asset
Get asset details including playback IDs.
**Request:**
```json
{
"action": "get-asset",
"assetId": "asset_xyz789"
}
```
**Response:**
```json
{
"success": true,
"data": {
"id": "asset_xyz789",
"status": "ready",
"playback_ids": [
{
"id": "playback_def456",
"policy": "public"
}
]
}
}
```
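Since the upload → asset → ready transition involves polling, the two actions above can be chained in a client-side helper. A sketch, where `callMuxProxy` is a hypothetical wrapper around the edge-function fetch shown earlier:

```typescript
// Poll get-upload until the asset exists, then get-asset until it is ready.
async function waitForPlaybackId(
  callMuxProxy: (body: object) => Promise<any>,
  uploadId: string,
  intervalMs = 3000,
  maxAttempts = 60,
): Promise<string> {
  let assetId: string | undefined;
  for (let i = 0; i < maxAttempts; i++) {
    if (!assetId) {
      const { data } = await callMuxProxy({ action: 'get-upload', uploadId });
      if (data.status === 'asset_created') assetId = data.asset_id;
    }
    if (assetId) {
      const { data } = await callMuxProxy({ action: 'get-asset', assetId });
      if (data.status === 'ready') return data.playback_ids[0].id;
    }
    await new Promise(resolve => setTimeout(resolve, intervalMs));
  }
  throw new Error('Timed out waiting for Mux asset');
}
```

For production, the webhook events described below are more efficient than polling.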
## Database Schema
Store Mux video data in your `videos` table:
```sql
CREATE TABLE videos (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
user_id UUID NOT NULL,
title TEXT NOT NULL,
description TEXT,
video_url TEXT NOT NULL, -- https://stream.mux.com/{playback_id}.m3u8
thumbnail_url TEXT, -- https://image.mux.com/{playback_id}/thumbnail.jpg
meta JSONB, -- { mux_asset_id, mux_playback_id }
created_at TIMESTAMP DEFAULT NOW()
);
```
Store in meta:
- `mux_asset_id`: For managing the asset via Mux API
- `mux_playback_id`: For generating stream/thumbnail URLs
## Mux URLs
### Stream URL (HLS)
```
https://stream.mux.com/{PLAYBACK_ID}.m3u8
```
This is an HLS stream that works with Vidstack, Mux Player, and most video players.
### Thumbnail URL
```
https://image.mux.com/{PLAYBACK_ID}/thumbnail.jpg
```
Query parameters:
- `?width=1280` - Set width
- `?height=720` - Set height
- `?time=10` - Thumbnail at 10 seconds
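For example, combining parameters:

```
https://image.mux.com/{PLAYBACK_ID}/thumbnail.jpg?width=1280&time=10
```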
### MP4 URL (if enabled)
```
https://stream.mux.com/{PLAYBACK_ID}/high.mp4
```
Available qualities: `low.mp4`, `medium.mp4`, `high.mp4`
## Webhooks (Optional)
For production, set up Mux webhooks to get notified when:
- Upload completes (`video.upload.asset_created`)
- Video is ready (`video.asset.ready`)
- Errors occur (`video.asset.errored`)
This is more efficient than polling. See [Mux Webhooks Docs](https://docs.mux.com/guides/listen-for-webhooks).
## Playground
Test the integration at `/playground/video-player`:
- **Upload tab**: Upload videos using Mux
- **Test with URL tab**: Test Vidstack player with any video URL
## Pricing
Mux charges based on:
- **Encoding**: Minutes of video processed
- **Streaming**: Minutes of video delivered
- **Storage**: GB-months of video stored
See [Mux Pricing](https://www.mux.com/pricing) for current rates.
Free tier includes:
- $20/month in free credits
- Enough for ~40 minutes of encoding + 100 hours of streaming
## Troubleshooting
### Upload fails immediately
- Check that MUX_TOKEN_ID and MUX_TOKEN_SECRET are set in Supabase
- Verify the edge function is deployed
- Check browser console for CORS errors
### Video stuck in "processing"
- Large videos can take several minutes to process
- Check Mux dashboard for asset status
- Verify the upload completed successfully
### Video won't play
- Check that playback policy is set to "public"
- Verify the HLS URL format is correct
- Check browser console for player errors
## Resources
- [Mux Documentation](https://docs.mux.com)
- [Mux Uploader Docs](https://www.mux.com/docs/guides/mux-uploader)
- [Vidstack Player Docs](https://vidstack.io)
- [Mux Dashboard](https://dashboard.mux.com)

# Nested Containers — Edit-Mode Architecture
How widgets that embed their own canvas (`TabsWidget`, `LayoutContainerWidget`) participate in the editor's data flow.
---
## Terminology
| Term | Description |
|---|---|
| **Parent page** | The main page being edited. Layout stored in `loadedPages` under `page-<uuid>`. |
| **Nested canvas** | A `GenericCanvas` rendered *inside* a widget, identified by a synthetic `pageId` (e.g., `tab-layout-1`, `nested-layout-xyz`). |
| **Widget props** | The JSON blob stored on the parent page's `WidgetInstance.props` — includes embedded `layoutData`. |
| **`loadedPages`** | In-memory `Map<string, PageLayout>` managed by `LayoutContext`. Holds layouts for both the parent page and every active nested canvas. |
---
## Data Flow Overview
```
┌──────────────────────────────────────────────────────────────────┐
│ UserPageEdit │
│ page-<uuid> in loadedPages │
│ └─ containers[] → widgets[] → TabsWidget (widgetId: "w1") │
│ props.tabs[0].layoutData ← embedded PageLayout │
│ props.tabs[0].layoutId = "tab-layout-1" │
└──────┬───────────────────────────────────────────────────────────┘
▼ GenericCanvas receives initialLayout = tabs[0].layoutData
┌──────────────────────────────────────────────────────────────────┐
│ GenericCanvasEdit (pageId = "tab-layout-1") │
│ Hydration: loadedPages.set("tab-layout-1", initialLayout) │
│ User edits → Commands against "tab-layout-1" in loadedPages │
└──────┬───────────────────────────────────────────────────────────┘
▼ Sync effect writes back
┌──────────────────────────────────────────────────────────────────┐
│ TabsWidget sync effect │
│ Reads loadedPages.get("tab-layout-1") │
│ Calls onPropsChange({ tabs: [...updated layoutData...] }) │
└──────┬───────────────────────────────────────────────────────────┘
▼ handlePropsChange in LayoutContainerEdit
┌──────────────────────────────────────────────────────────────────┐
│ updateWidgetProps(parentPageId, "w1", { tabs: [...] }) │
│ → UpdateWidgetSettingsCommand against parent page │
│ → parent page layout in loadedPages updated │
│ → HistoryManager tracks command (undo/redo) │
└──────────────────────────────────────────────────────────────────┘
```
---
## 1. Hydration — Loading the Nested Layout
**File:** [`GenericCanvasEdit.tsx`](file:///c:/Users/zx/Desktop/polymech/pm-pics/src/modules/layout/GenericCanvasEdit.tsx#L72-L89)
When a nested widget mounts in edit mode, `GenericCanvasEdit` receives `initialLayout` (the embedded `layoutData` from the widget's props) and a synthetic `pageId`.
```ts
useEffect(() => {
// First visit: copy prop into loadedPages
if (initialLayout && !layout) {
hydratePageLayout(pageId, initialLayout);
return;
}
// Staleness check: re-hydrate if prop is newer
if (initialLayout && layout
&& initialLayout.updatedAt > layout.updatedAt) {
hydratePageLayout(pageId, initialLayout);
return;
}
// Fallback: load from API (not used for embedded layouts)
if (!layout) {
loadPageLayout(pageId, pageName);
}
}, [pageId, pageName, layout, loadPageLayout, hydratePageLayout, initialLayout]);
```
**Key points:**
- `hydratePageLayout` is a simple `setLoadedPages(prev => new Map(prev).set(pageId, layout))` — no API call.
- The staleness check (comparing `updatedAt`) ensures that if the parent page is reloaded or the prop changes externally, the cached layout is replaced.
---
## 2. Editing — Commands Against the Nested Layout
All editor operations (add widget, move, remove, update settings) go through `LayoutContext` functions like `addWidgetToPage`, `updateWidgetProps`, etc. Each creates a `Command` (see [`commands.ts`](file:///c:/Users/zx/Desktop/polymech/pm-pics/src/modules/layout/commands.ts)) and executes it via `HistoryManager`.
Commands operate on `loadedPages` by `pageId`. For a nested canvas, the `pageId` is the synthetic ID (e.g., `tab-layout-1`), so commands target the nested layout directly:
```ts
// Adding a widget to a tab canvas:
addWidgetToPage("tab-layout-1", containerId, widgetInstance)
// → AddWidgetCommand reads loadedPages.get("tab-layout-1")
// → Modifies it, calls context.updateLayout("tab-layout-1", newLayout)
```
**This is fully tracked by undo/redo** — `HistoryManager` stores the command in its `past` stack.
---
## 3. Sync-Back — Writing Nested Changes to Parent Props
The nested layout edits live in `loadedPages["tab-layout-1"]`, but the parent page's save function only persists `loadedPages["page-<uuid>"]`. So the nested data must be **synced back** as widget props on the parent.
### TabsWidget
**File:** [`TabsWidget.tsx`](file:///c:/Users/zx/Desktop/polymech/pm-pics/src/components/widgets/TabsWidget.tsx#L100-L122)
```ts
useEffect(() => {
if (!isEditMode) return;
let changed = false;
const newTabs = tabs.map(t => {
const layout = loadedPages.get(t.layoutId);
if (layout) {
const propTimestamp = t.layoutData?.updatedAt || 0;
if (layout.updatedAt > propTimestamp) {
const layoutChanged =
JSON.stringify(layout) !== JSON.stringify(t.layoutData);
if (layoutChanged) {
changed = true;
return { ...t, layoutData: layout };
}
}
}
return t;
});
if (changed) {
onPropsChange({ tabs: newTabs });
}
}, [loadedPages, isEditMode, onPropsChange, tabs]);
```
**Important:** Iterates **all** tabs, not just the current one. This ensures that if you edit Tab A then switch to Tab B, Tab A's changes are still written back before save.
### LayoutContainerWidget
**File:** [`LayoutContainerWidget.tsx`](file:///c:/Users/zx/Desktop/polymech/pm-pics/src/components/widgets/LayoutContainerWidget.tsx#L56-L70)
Same pattern but simpler — only one nested layout:
```ts
useEffect(() => {
if (uniqueNestedPageId && isEditMode) {
const currentLayout = loadedPages.get(uniqueNestedPageId);
if (currentLayout) {
const propTimestamp = nestedLayoutData?.updatedAt || 0;
if (currentLayout.updatedAt > propTimestamp) {
const layoutChanged =
JSON.stringify(currentLayout) !== JSON.stringify(nestedLayoutData);
if (layoutChanged) {
onPropsChange({ nestedLayoutData: currentLayout });
}
}
}
}
}, [uniqueNestedPageId, loadedPages, isEditMode, onPropsChange, nestedLayoutData]);
```
### onPropsChange → updateWidgetProps → Command
**File:** [`LayoutContainerEdit.tsx`](file:///c:/Users/zx/Desktop/polymech/pm-pics/src/modules/layout/LayoutContainerEdit.tsx#L621-L627)
When `onPropsChange` is called, it flows to:
```ts
const handlePropsChange = async (newProps) => {
await updateWidgetProps(pageId, widget.id, newProps);
};
```
`updateWidgetProps` creates an [`UpdateWidgetSettingsCommand`](file:///c:/Users/zx/Desktop/polymech/pm-pics/src/modules/layout/commands.ts#L184-L261) against the **parent page's** `pageId`. This:
- Snapshots the old `widget.props` for undo
- Merges `newProps` into `widget.props`
- Updates `loadedPages[parentPageId]`
- Pushes to `HistoryManager.past` (undoable)
---
## 4. Saving — Persistence Path
**File:** [`usePageEditHandlers.ts`](file:///c:/Users/zx/Desktop/polymech/pm-pics/src/modules/pages/editor/hooks/usePageEditHandlers.ts#L187-L226)
```ts
const handleSave = async () => {
loadedPages.forEach((layout, id) => {
if (id.startsWith('page-')) {
// Save as page content
promises.push(updatePage(pId, { content: rootContent }));
} else if (id.startsWith('layout-')) {
// Save as standalone layout
promises.push(updateLayout(layoutId, { layout_json: layout }));
}
// IDs like "tab-layout-1" or "nested-layout-xyz"
// are INTENTIONALLY SKIPPED — their data lives inside
// the parent page's widget props (already synced by step 3)
});
};
```
**The nested layouts are NOT saved independently.** They are saved **embedded** within the parent page's widget props. This is correct because:
1. Sync effect (step 3) copies `loadedPages["tab-layout-1"]``widget.props.tabs[0].layoutData`
2. This updates `loadedPages["page-<uuid>"]` via `UpdateWidgetSettingsCommand`
3. `handleSave` writes `loadedPages["page-<uuid>"]` to the API
4. On reload, `widget.props.tabs[0].layoutData` is passed as `initialLayout`
---
## 5. Undo/Redo
All operations produce `Command` objects tracked by [`HistoryManager`](file:///c:/Users/zx/Desktop/polymech/pm-pics/src/modules/layout/HistoryManager.ts):
| Action | Command | Target `pageId` |
|---|---|---|
| Add widget to tab | `AddWidgetCommand` | `tab-layout-1` |
| Move widget in tab | `MoveWidgetCommand` | `tab-layout-1` |
| Sync tab data to parent | `UpdateWidgetSettingsCommand` | `page-<uuid>` |
**Undo** pops the last command from `past` and calls `command.undo(context)`. Because syncs also produce commands, undoing a sync restores the previous `tabs` array on the parent.
> **Note:** An undo of the sync command reverts the tab data at the *parent* level but doesn't automatically revert the *nested* layout in `loadedPages["tab-layout-1"]`. These are two separate entries. In practice, the user undoes the inner edit (which reverts the nested layout), and the next sync cycle propagates that reversion to the parent.
---
## 6. ID Conventions
| Prefix | Source | Saved by |
|---|---|---|
| `page-<uuid>` | Real page from database | `updatePage()` API |
| `layout-<uuid>` | Standalone reusable layout | `updateLayout()` API |
| `tab-layout-<N>` | TabsWidget tab | Embedded in parent page's widget props |
| `nested-layout-<widgetInstanceId>` | LayoutContainerWidget | Embedded in parent page's widget props |
---
## 7. Known Pitfalls
### Sync must cover all tabs
The sync effect must iterate **all** tabs, not just the active tab. Otherwise, switching tabs before saving drops the inactive tab's edits. (Fixed March 2026.)
### Re-hydration staleness
When `GenericCanvasEdit` detects that `initialLayout.updatedAt > layout.updatedAt`, it re-hydrates from the prop. Without this check, a reloaded page (with updated server data) would render stale cached data from a previous session.
### JSON.stringify comparison cost
Both `TabsWidget` and `LayoutContainerWidget` use `JSON.stringify` comparison as a safety check before calling `onPropsChange`. For very large layouts this could be slow — but it prevents infinite re-render loops where the timestamp guard alone isn't sufficient (e.g., when timestamps match but content differs due to manual edits).
### Nested nesting
`TabsWidget` and `LayoutContainerWidget` can be nested inside each other. Each level adds another entry to `loadedPages` and another sync effect. The same sync/hydration pattern applies recursively — each widget syncs its own nested layout back to its parent's props.

View File

@ -0,0 +1,155 @@
# Nested Layout Execution — Race Conditions & Solutions
## Problem
Widgets stored inside nested layouts (e.g. `TabsWidget` tabs) are not displayed in edit mode, despite working in view mode. The root cause is a **race condition** between layout hydration and the SYNC-BACK effect.
## Architecture Overview
```
UserPage → LayoutProvider (single instance)
└── GenericCanvas (main page)
└── GenericCanvasEdit
└── LayoutContainerEdit
└── TabsWidget (widget with nested layouts)
└── GenericCanvas (per-tab sub-layout)
└── GenericCanvasEdit (child)
```
### Key files
| File | Role |
|------|------|
| [LayoutContext.tsx](../src/modules/layout/LayoutContext.tsx) | Shared `loadedPages` state, `hydratePageLayout`, `loadPageLayout` |
| [GenericCanvas.tsx](../src/modules/layout/GenericCanvas.tsx) | Suspense wrapper — lazy-loads Edit, falls back to View |
| [GenericCanvasEdit.tsx](../src/modules/layout/GenericCanvasEdit.tsx) | Edit mode canvas — hydration useEffect (lines 73-100) |
| [GenericCanvasView.tsx](../src/modules/layout/GenericCanvasView.tsx) | View mode canvas — also hydrates from `initialLayout` |
| [TabsWidget.tsx](../src/components/widgets/TabsWidget.tsx) | Nested layout host — SYNC-BACK effect (lines 100-139) |
| [LayoutManager.ts](../src/modules/layout/LayoutManager.ts) | `getPageLayout` — creates empty defaults, `loadRootData` prefix matching |
| [LayoutContainerEdit.tsx](../src/modules/layout/LayoutContainerEdit.tsx) | Renders widgets with `{...widget.props}` spread (line 776) |
## Race Condition Sequence
```
1. Main page hydrates → TabsWidget mounts with stored `tabs[].layoutData`
2. TabsWidget renders <GenericCanvas initialLayout={tab.layoutData}> per tab
3. GenericCanvasEdit mounts — layout=undefined, initialLayout=stored data
4. BEFORE useEffect hydration runs:
├── Suspense fallback (GenericCanvasView) may call loadPageLayout()
│ └── loadRootData("tab-layout-1") → prefix mismatch → empty default
└── OR GenericCanvasEdit itself calls loadPageLayout() on wrong branch
5. Empty layout enters loadedPages with fresh Date.now() timestamp
6. SYNC-BACK fires → empty layout is "newer" → overwrites stored layoutData
7. Stored widgets are permanently lost for this session
```
### Prefix mismatch detail
[LayoutManager.ts](../src/modules/layout/LayoutManager.ts) `loadRootData` (line 153-154):
```ts
const isPage = pageId.startsWith('page-');
const isLayout = pageId.startsWith('layout-') || pageId.startsWith('tabs-');
```
`tab-layout-*` starts with `tab-` (not `tabs-`), so neither branch matches → returns empty default.
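That fall-through can be illustrated with a tiny stand-in for the branch logic (the returned tags are illustrative; the real `loadRootData` performs DB loads instead of returning a label):

```typescript
// Mirrors the prefix checks in LayoutManager.loadRootData.
function classifyPageId(pageId: string): "page" | "layout" | "empty-default" {
  if (pageId.startsWith("page-")) return "page";
  if (pageId.startsWith("layout-") || pageId.startsWith("tabs-")) return "layout";
  return "empty-default"; // tab-layout-* falls through to here
}
```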
## Current Fix (band-aid)
[TabsWidget.tsx](../src/components/widgets/TabsWidget.tsx) lines 108-116:
SYNC-BACK compares live vs stored widget counts. If live=0 and stored>0, skips the sync and re-hydrates from stored data. This is a **heuristic** guard — it doesn't generalize to all edge cases (e.g. legitimately empty nested layouts that later get widgets added).
## Proposed Solutions
### Solution 1: `hydratedIds` set in LayoutContext ⭐ recommended
Track which layout IDs have been authoritatively populated vs created as empty defaults.
**In [LayoutContext.tsx](../src/modules/layout/LayoutContext.tsx):**
```ts
// Mutable set — no reactivity needed, checked synchronously during SYNC-BACK
const [hydratedIds] = useState(() => new Set<string>());
const hydratePageLayout = useCallback((pageId: string, layout: PageLayout) => {
hydratedIds.add(pageId); // mark as authoritatively hydrated
setLoadedPages(prev => new Map(prev).set(pageId, layout));
setIsLoading(false);
}, []);
```
Expose `hydratedIds` (or an `isHydrated(id)` helper) via context.
**In [TabsWidget.tsx](../src/components/widgets/TabsWidget.tsx) (and any future nested layout widget):**
```ts
const layout = loadedPages.get(t.layoutId);
if (layout && !isHydrated(t.layoutId) && t.layoutData) {
// Not yet authoritatively populated — skip SYNC-BACK
return t;
}
```
**State machine per layout ID:**
```
UNKNOWN ──► HYDRATING ──► READY
│ ▲
└── (no initialLayout) ──┘ (loadPageLayout = also READY)
```
**Edge cases handled:**
| Case | `initialLayout` | `hydratedIds` | SYNC-BACK |
|------|-----------------|---------------|-----------|
| Stored tab with widgets | ✅ present | set on hydrate | trusts after hydrate |
| New empty tab (user just created) | ❌ undefined | set on loadPageLayout | trusts empty layout ✅ |
| Tab inside tab (deep nesting) | ✅ per level | each level independent | each SYNC-BACK checks own children |
| Implicit container auto-creation | n/a (mutation) | doesn't change flag | no effect on SYNC-BACK |
---
### Solution 2: Gate `loadPageLayout` when `initialLayout` exists
**In [GenericCanvasEdit.tsx](../src/modules/layout/GenericCanvasEdit.tsx) line 96-99:**
```diff
-if (!layout) {
+if (!layout && !initialLayout) {
loadPageLayout(pageId, pageName);
}
```
**Same change in [GenericCanvasView.tsx](../src/modules/layout/GenericCanvasView.tsx) line 38-40.**
This prevents the empty default from ever being created when `initialLayout` is provided. The canvas will stay in loading state until the hydration effect runs (next tick).
**Pros:** 2-line fix, eliminates the root cause.
**Cons:** Doesn't protect against future patterns where a nested layout might lose its `initialLayout` prop during React reconciliation.
---
### Solution 3: Fix the prefix mismatch
**In [LayoutManager.ts](../src/modules/layout/LayoutManager.ts) line 154:**
```diff
-const isLayout = pageId.startsWith('layout-') || pageId.startsWith('tabs-');
+const isLayout = pageId.startsWith('layout-') || pageId.startsWith('tabs-') || pageId.startsWith('tab-');
```
**Pros:** 1-line fix, prevents DB miss for tab sub-layouts.
**Cons:** Only addresses one symptom — any nested layout with an unrecognized prefix would still hit the same problem.
---
## Recommended Approach
Apply **all three** in order of priority:
1. **Solution 2** (gate `loadPageLayout`) — eliminates the source of empty layouts, 2 lines
2. **Solution 1** (`hydratedIds`) — semantic guard for SYNC-BACK, generalizes to any nesting depth
3. **Solution 3** (prefix fix) — defense in depth, prevents DB misses
Then **remove** the current widget-counting heuristic from TabsWidget, since `hydratedIds` makes it redundant.

View File

@ -0,0 +1,50 @@
#!/bin/bash
# Optimize a video for web background use
# Usage: ./optimize-video-bg.sh input.mp4 [output.mp4]
INPUT="$1"
OUTPUT="${2:-${INPUT%.*}-web.mp4}"
POSTER="${OUTPUT%.*}-poster.jpg"
if [ -z "$INPUT" ]; then
echo "Usage: $0 input.mp4 [output.mp4]"
exit 1
fi
echo "=== Optimizing video for web background ==="
echo "Input: $INPUT"
echo "Output: $OUTPUT"
# 1. Encode to 720p, H.264, capped at 2 Mbps, 15s max, no audio
ffmpeg -y -i "$INPUT" \
-vf "scale=1280:720:force_original_aspect_ratio=increase,crop=1280:720" \
-c:v libx264 \
-preset slow \
-crf 28 \
-maxrate 2M \
-bufsize 4M \
-profile:v main \
-level 3.1 \
-pix_fmt yuv420p \
-movflags +faststart \
-an \
-t 15 \
"$OUTPUT"
echo ""
echo "=== Extracting poster frame ==="
# 2. Extract a poster image from the 1s mark
ffmpeg -y -i "$OUTPUT" \
-ss 1 \
-frames:v 1 \
-q:v 2 \
"$POSTER"
echo ""
echo "=== Results ==="
echo "Video: $OUTPUT ($(du -h "$OUTPUT" | cut -f1))"
echo "Poster: $POSTER ($(du -h "$POSTER" | cut -f1))"
echo ""
echo "HTML usage:"
echo "<video src=\"$OUTPUT\" poster=\"$POSTER\" autoplay muted loop playsinline></video>"

View File

@ -1,53 +0,0 @@
# Master Implementation Plan
This document serves as the central roadmap, referencing tasks from:
- [`database-todos.md`](./database-todos.md) (DB)
- [`security.md`](./security.md) (SEC)
- [`caching.md`](./caching.md) (CACHE)
## Phase 1: Foundation (Schema & Data Security)
*Goal: Secure the data layer and enable collaboration primitives.*
- [ ] **[DB] Split `profiles` into `profiles_public` & `user_secrets`**
- [ ] Create table & Migrate data (Ref: [`src/integrations/supabase/types.ts`](../src/integrations/supabase/types.ts)).
- [ ] **[SEC]** Apply RLS to `user_secrets` (`user_id = auth.uid()`).
- [ ] **[DB] Create `page_collaborators` Table**
- [ ] Define columns & Unique Constraints.
- [ ] **[SEC]** Implement RLS for shared Page access (Viewer/Editor logic).
## Phase 2: Server Core & API
*Goal: Build the "Smart Proxy" layer to handle data fetching and caching.*
- [ ] **[CACHE] Implement `CacheAdapter`**
- [ ] Create Interface (Target: `server/src/commons/cache/types.ts`).
- [ ] Implement `MemoryCache` (default) & `RedisCache` (optional).
- [ ] **[DB] Implement Server Endpoints in [`ServingProduct`](../server/src/products/serving/index.ts)**
- [ ] `GET /api/feed` (Hydrated View-Ready Feed).
- [ ] `GET /api/profile/:id` (Public Profile).
- [ ] `GET /api/me/secrets` (Secure Settings access).
- [ ] **[CACHE] Apply Caching to Endpoints**
- [ ] Cache Feed (60s) & Profiles (5m).
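The `CacheAdapter` task above could be sketched as follows (synchronous here for brevity; a real adapter backing Redis would likely return Promises, and everything beyond the names in the plan is an assumption):

```typescript
// Proposed cache abstraction: one interface, swappable backends.
interface CacheAdapter {
  get<T>(key: string): T | undefined;
  set<T>(key: string, value: T, ttlSeconds: number): void;
}

// Default in-process backend with TTL-based lazy eviction.
class MemoryCache implements CacheAdapter {
  private store = new Map<string, { value: unknown; expiresAt: number }>();

  get<T>(key: string): T | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // expired: evict on read
      return undefined;
    }
    return entry.value as T;
  }

  set<T>(key: string, value: T, ttlSeconds: number): void {
    this.store.set(key, { value, expiresAt: Date.now() + ttlSeconds * 1000 });
  }
}
```

The endpoint handlers would then cache the feed with a 60-second TTL and profiles with a 5-minute TTL, per the plan.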
## Phase 3: Client Security & Refactor
*Goal: Stop leaking keys and move to the Proxy.*
- [ ] **[SEC] Critical: Remove Client-Side Key Fetching**
- [ ] Scrub `profiles` selects in [`Profile.tsx`](../src/pages/Profile.tsx) and [`db.ts`](../src/lib/db.ts).
- [ ] Remove API Key inputs from Profile UI in [`Profile.tsx`](../src/pages/Profile.tsx).
- [ ] **[DB] Client Data Layer Refactor**
- [ ] Update [`db.ts`](../src/lib/db.ts) to use `fetchFeedFromProxy` / `fetchProfileFromProxy`.
- [ ] Deprecate direct Supabase `select` calls for core content.
- [ ] **[SEC] Hardening**
- [ ] **[SEC]** Handle 404s/403s in [`Post.tsx`](../src/pages/Post.tsx) correctly.
## Phase 4: Performance & Optimization
*Goal: Instant loads and "feels native" speed.*
- [ ] **[DB] Server-Side Injection (SSR-Lite)**
- [ ] Inject `window.__INITIAL_STATE__` into `index.html` via [`ServingProduct`](../server/src/products/serving/index.ts).
- [ ] **[CACHE] Client Hydration**
- [ ] Configure React Query to hydrate from `__INITIAL_STATE__`.
- [ ] Set global `staleTime` to 5m.
- [ ] **[SEC] Rate Limiting**
- [ ] Add limits to API endpoints.

View File

@ -1,87 +0,0 @@
# Page Commands & Undo/Redo System Proposal
## Overview
To support robust Undo/Redo functionality for the User Page Builder, we propose implementing the **Command Pattern**. Every modification to the page layout (add, remove, move, resize, update settings) will be encapsulated as a `Command` object.
## Command Interface
```typescript
interface Command {
id: string;
type: string;
timestamp: number;
execute(): Promise<void>;
undo(): Promise<void>;
}
```
## Command Stack
We will maintain two stacks in the `LayoutContext` or a new `HistoryContext`:
- `past: Command[]`
- `future: Command[]`
## Proposed Commands
### 1. AddWidgetCommand
- **Execute**: Adds a widget to a specific container/index.
- **Undo**: Removes the widget with the specific ID.
### 2. RemoveWidgetCommand
- **Execute**: Removes a widget. Store the widget's state (props, ID, location) before removal.
- **Undo**: Restores the widget to its original container/index with preserved props.
### 3. MoveWidgetCommand
- **Execute**: Moves widget from (Container A, Index X) to (Container B, Index Y).
- **Undo**: Moves widget back to (Container A, Index X).
### 4. UpdateWidgetSettingsCommand
- **Execute**: Updates `widget.props` with new values. Store `previousProps`.
- **Undo**: Reverts `widget.props` to `previousProps`.
### 5. AddContainerCommand / RemoveContainerCommand
- Similar logic to widgets but for layout containers.
### 6. ResizeContainerCommand
- **Execute**: Updates container column sizes.
- **Undo**: Reverts to previous column sizes.
## Implementation Strategy
1. **Refactor `useLayout`**: Move direct state mutations into specific command classes or factory functions.
2. **Action Dispatcher**: Create a `dispatch(action)` function that:
- Creates the appropriate Command.
- Executes `command.execute()`.
- Pushes command to `past` stack.
- Clears `future` stack.
3. **Hotkeys**: Bind `Ctrl+Z` (Undo) and `Ctrl+Y / Ctrl+Shift+Z` (Redo).
## Storage Boundaries & Persistence
### 1. In-Memory Store (Primary)
- **Scope**: Current Browser Tab / Session.
- **Implementation**: React State or `useReducer` within `LayoutContext`.
- **Behavior**: Fast, synchronous updates. Cleared on page reload or navigation.
### 2. Browser Storage (localStorage)
- **Role**: Crash Recovery & Session Continuity.
- **Strategy**:
- Persist the `currentLayout` state to `localStorage` on every change (debounced).
- **Proposed**: Persist the `past` and `future` command stacks to `localStorage` as well.
- **Constraint**: All `Command` objects must be strictly JSON-serializable (no function references).
- **Key**: `page_editor_history_${pageId}`.
- **Benefit**: Users can refresh the page and still Undo their last action.
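Under the JSON-serializability constraint, commands would persist as plain data records, with behavior looked up by `type` on restore. A hypothetical sketch (field values are illustrative; only the `Command` fields come from the proposal):

```typescript
// Data-only command record: survives a localStorage round-trip because it
// holds copies of props, never functions or live object references.
type CommandRecord = {
  id: string;
  type: string;
  timestamp: number;
  payload: Record<string, unknown>;
};

const stacks = {
  past: [
    {
      id: "c1",
      type: "UpdateWidgetSettings",
      timestamp: 1700000000000,
      payload: { widgetId: "w1", previousProps: { title: "Old" } },
    },
  ] as CommandRecord[],
  future: [] as CommandRecord[],
};

// What would be written (debounced) under `page_editor_history_${pageId}`.
const serialized = JSON.stringify(stacks);
const restored = JSON.parse(serialized) as typeof stacks;
```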
### 3. Server State (Database)
- **Role**: Permanent Storage & Collaboration Source of Truth.
- **Interaction**:
- "Save" commits the current state to Supabase.
- **History Clearance**: Typically, saving *does not* clear the Undo history (allowing "Undo Save"), but navigating away does.
- **Dirty State**: If `past.length > lastSavedIndex`, the UI shows "Unsaved Changes".
### 4. Boundary Enforcement
- **Serialization**: Commands must store *copies* of data (e.g., `previousProps`), not references to live objects.
- **Isolation**: Undo operations must not trigger side effects (like API calls) other than updating the local layout state, unless explicitly designed (e.g., re-uploading a deleted image is complex; usually we just restore the *reference* to the image URL).
## Edge Cases
- **Multi-user editing**: Simple command history assumes single-player mode. Implementation complexity increases significantly with real-time collaboration (requiring OT or CRDTs). For now, we assume last-write-wins or locking.
- **Failed operations**: If `execute()` fails, the command stack should not update.

View File

@ -1,84 +0,0 @@
# UserPageEdit Refactoring Plan
The `UserPageEdit.tsx` component has grown too large (~900 lines) and handles too many responsibilities. This plan outlines the steps to decompose it into manageable, single-purpose components and custom hooks, leveraging the existing Action System for cleaner communication.
## 1. Goal
Split `UserPageEdit.tsx` to improve maintainability, enforce separation of concerns, and fully utilize the `src/actions/` system to decouple UI components (like the Ribbon) from the logic.
## 2. Proposed Architecture
### 2.1. Directories
`src/pages/editor/`
- `components/`
- `hooks/`
- `UserPageEdit.tsx` (The main entry point, simplified)
### 2.2. Custom Hooks & Logic
Move state and logic into `src/pages/editor/hooks/`. Crucially, we will use `useActions` to register capabilities.
1. **`usePageEditorState.ts`**
- Manages UI state: `isSidebarCollapsed`, `showHierarchy`, `showTypeFields`, `selectedWidgetId`.
- **Action Registration**: Registers actions like `View/ToggleSidebar`, `View/ToggleHierarchy`, `View/ToggleTypeFields`.
2. **`usePageTemplates.ts`**
- Manages template state.
- **Action Registration**: Registers `File/LoadTemplate`, `File/SaveTemplate`.
3. **`useEmailActions.ts`**
- Manages email state.
- **Action Registration**: Registers `Email/SendTest`, `Email/TogglePreview`.
4. **`useEditorActions.ts` (Core Logic)**
- Wraps `useLayout` and `useLayouts` context methods.
- **Action Registration**:
- `Edit/Undo`
- `Edit/Redo`
- `File/Save`
- `File/ImportLayout`
- `File/ExportLayout`
- `Edit/AddContainer`
- `Edit/AddWidget`
- `Edit/DeletePage`
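A framework-free stand-in for the action registry can illustrate how this decouples the Ribbon from the logic (the real `useActions` is a React hook; the `Action` shape and prefix-based grouping here are assumptions):

```typescript
type Action = { id: string; run: () => void; enabled: boolean };

class ActionRegistry {
  private actions = new Map<string, Action>();

  register(action: Action): void {
    this.actions.set(action.id, action);
  }

  // Group is the prefix before "/": "Edit/Undo" belongs to group "Edit".
  getActionsByGroup(group: string): Action[] {
    return [...this.actions.values()].filter(a => a.id.startsWith(group + "/"));
  }
}
```

The Ribbon would then fetch `getActionsByGroup("Edit")` and bind the results to buttons, instead of receiving `onUndo`/`onRedo` as props.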
### 2.3. Components (UI Extraction)
Move UI sections into `src/pages/editor/components/`:
1. **`EditorSidebar.tsx`**
- Subscribes to UI state via context or props (managed by `UserPageEdit`).
- Handlers can trigger Actions.
- Renders `HierarchyTree`.
2. **`EditorMainArea.tsx`**
- The central workspace.
- Renders `GenericCanvas`.
3. **`EditorRightPanel.tsx`**
- The properties panel.
4. **`PageRibbonBar.tsx` (Refactor)**
- **Change**: Instead of accepting 30+ props, it will use `useActions()` to retrieve registered actions (`Save`, `Undo`, `Redo`, `ToggleVisibility`, etc.) and bind them to buttons.
- Props will be minimized to just `page` (for context) and layout specific data.
5. **`EditorDialogs.tsx`**
- A container component that renders all global dialogs (Email, Settings, Templates) based on state.
## 3. Implementation Steps
1. **Setup Directory**: Create `src/pages/editor/` structure.
2. **Refactor Hooks**:
- Implement `useEditorActions` to register core actions.
- Implement `usePageEditorState` for UI toggles.
3. **Refactor `PageRibbonBar`**:
- Update it to use `useActions().getActionsByGroup('History')` etc.
- Remove prop drilling for `onUndo`, `onRedo`, `onSave`.
4. **Extract Components**:
- Move JSX to `EditorSidebar`, `EditorMainArea`, `EditorRightPanel`.
5. **Reassemble `UserPageEdit`**:
- Initialize hooks.
- Render `ActionProvider` (if not at top level) or ensure hooks run inside it.
- Pass minimal props to children.
## 4. Verification
- **Ribbon Functionality**: Verify buttons (Undo, Redo, Save) are active/disabled correctly via Action state.
- **Shortcuts**: Verify Ctrl+Z/Y work via the Action registry.
- **Layout**: Verify UI allows adding widgets/containers.

View File

@ -1,50 +0,0 @@
# Reference Image Integration in Page Generator
This document explains how user-selected reference images influence the generation process for both text and images within the AI Page Generator.
## Overview
When a user selects reference images in the AI Page Generator, these images are passed to the AI model (LLM) as part of the conversation context. This enables **multimodal generation**, where the AI can "see" the selected images and use that visual understanding to guide its output.
## Data Flow
1. **Selection**: Users select images via the `ImagePickerDialog`. These are stored as `referenceImages` state in `AIPageGenerator`.
2. **Submission**: When "Generate" is clicked, the image URLs are collected and passed through `CreationWizardPopup` -> `usePageGenerator` -> `runTools`.
3. **Context Injection**: In `src/lib/openai.ts`, the `runTools` function detects the presence of images. It constructs a **multimodal user message** for the OpenAI API:
```json
{
"role": "user",
"content": [
{ "type": "text", "text": "User's text prompt..." },
{ "type": "image_url", "image_url": { "url": "https://..." } },
{ "type": "image_url", "image_url": { "url": "https://..." } }
]
}
```
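Constructing that message from the user's prompt plus the collected URLs can be sketched as follows (the content-part shape matches the OpenAI chat format above; the function name is illustrative, not the actual `runTools` internals):

```typescript
type ContentPart =
  | { type: "text"; text: string }
  | { type: "image_url"; image_url: { url: string } };

// One text part followed by one image part per reference image.
function buildUserMessage(prompt: string, images: string[]) {
  const content: ContentPart[] = [
    { type: "text", text: prompt },
    ...images.map(url => ({ type: "image_url" as const, image_url: { url } })),
  ];
  return { role: "user" as const, content };
}
```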
## Impact on Generation
### 1. Text Generation (Direct Visual Context)
The LLM (e.g., GPT-4o) directly processes the image data. This allows it to:
* Describe the visible content of the reference images in the generated page.
* Match the tone, style, and mood of the text to the visual aesthetics of the images.
* Extract specific details (colors, objects, setting) from the images and incorporate them into the narrative.
### 2. Image Generation (Indirect Prompt Alignment)
Currently, **reference images are NOT passed as direct inputs** (img2img) to the underlying image generation tools (`generate_image` or `generate_markdown_image`).
Instead, the reference images influence image generation **indirectly via the LLM**:
1. The LLM "sees" the reference images and understands their style, composition, and subject matter.
2. When the LLM decides to generate *new* images for the page (using `generate_text_with_images`), it writes the **image generation prompts** based on this visual understanding.
3. **Result**: The newly generated images are likely to be stylistically consistent with the reference images because the prompts used to generate them were crafted by an AI that "saw" the references.
## Schema Reference
* **`runTools` (`openai.ts`)**: Accepts `images: string[]` and builds the multimodal message.
* **`generate_text_with_images` (`markdownImageTools.ts`)**: Accepts text prompts for new images, but does not accept input images.
* **`generate_image` (`openai.ts`)**: Accepts text prompts, count, and model, but does not accept input images.

View File

@ -0,0 +1,44 @@
# Page Variables
The Page Variables system allows you to toggle different UI elements of a User Page via settings defined in the page itself or its parent contexts. These variables are stored in a page's metadata under `userVariables`.
## Available System Variables
| Variable | Type | Default | Description |
|---|---|---|---|
| `showTitle` | `boolean` | `true` | Toggles the display of the main page title block. |
| `showAuthor` | `boolean` | `true` | Toggles the display of the author's name. |
| `showDate` | `boolean` | `true` | Toggles the display of the publish date. |
| `showCategories` | `boolean` | `true` | Toggles the display of the category paths. |
| `showActions` | `boolean` | `true` | Toggles the display of the top-right page actions menu. |
| `showParent` | `boolean` | `true` | Toggles the display of the parent page path above the title. |
| `showToc` | `boolean` | `true` | Toggles the display of the side table of contents and mobile TOC. |
| `showLastUpdated` | `boolean` | `true` | Toggles the display of the "Last updated" footer at the bottom of the page. |
| `showFooter` | `boolean` | `true` | Toggles the display of the global app footer site-wide. |
## Implementation References
Here is how the Page Variables system is structured and implemented across the codebase:
### 1. Variables Definition & Defaults
- [src/lib/page-variables.ts](../src/lib/page-variables.ts)
Defines `globalVariables` which sets the default fallbacks (e.g. `showTitle: true`, `showFooter: true`). Also provides the utility `mergePageVariables()` which resolves variable inheritance from a page's parent paths (organizations/categories).
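The inheritance resolution can be sketched as a layered merge (the real `mergePageVariables` signature may differ; the precedence shown, globals, then parent contexts outermost-first, then the page's own variables, is the pattern described above):

```typescript
type PageVariables = Record<string, boolean>;

// Default fallbacks, analogous to globalVariables in page-variables.ts.
const globalDefaults: PageVariables = {
  showTitle: true,
  showToc: true,
  showFooter: true,
};

// Later layers win: page-level settings override parents, which override globals.
function mergePageVariablesSketch(
  parentChain: PageVariables[], // e.g. [organization, category]
  pageVars: PageVariables,
): PageVariables {
  return Object.assign({}, globalDefaults, ...parentChain, pageVars);
}
```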
### 2. Editor & Schema
- [src/components/variables/VariablesEditor.tsx](../src/components/variables/VariablesEditor.tsx)
The editor component that allows users to toggle these settings. It defines the `SYSTEM_VARIABLE_SCHEMA` mapping keys like `showToc` and `showLastUpdated` to human-readable labels and descriptions.
### 3. Usage in Page Rendering
- [src/modules/pages/UserPage.tsx](../src/modules/pages/UserPage.tsx)
Retrieves the resolved `contextVariables` using `mergePageVariables()` and uses them to determine high-level layout elements.
- Determines `showToc` and `hasTocContent` for toggling the Sidebar and Mobile Table of Contents.
- Determines `showLastUpdated` for rendering the footer details.
- Computes `showFooter` and passes it to the global Zustand AppStore to toggle the global application footer.
- [src/modules/pages/editor/UserPageDetails.tsx](../src/modules/pages/editor/UserPageDetails.tsx)
Retrieves various variables (`showTitle`, `showAuthor`, `showDate`, `showCategories`, `showActions`, `showParent`) and conditionally renders those blocks. If all are false and the page is not in edit mode, it avoids rendering the details wrapper entirely via an early return block.
### 4. Global State (e.g., Global Footer)
- [src/store/appStore.ts](../src/store/appStore.ts)
A simple Zustand store containing `showGlobalFooter` and `setShowGlobalFooter`. Used as a communication bridge.
- [src/App.tsx](../src/App.tsx)
Reads `showGlobalFooter` from `useAppStore()` and decides whether to render the `<Footer />` component in the main application layout wrapper.
# GADM Picker Implementation details
This document covers the architectural and interaction details of the global bounds and region picker system.
The system connects an interactive `<GadmPicker />` frontend map down to a PostGIS + Martin + PMTiles mapping backend.
## Architecture & Paths
- **Main Component**: [`GadmPicker.tsx`](../../src/modules/places/gadm-picker/GadmPicker.tsx)
- **Local Searches & IO**: [`client-searches.ts`](../../src/modules/places/gadm-picker/client-searches.ts)
- **Server Application (Vite / Express)**: [`server.ts`](../../packages/gadm/server.ts)
## API Endpoints (`/api/gadm/*`)
The picker orchestrates several custom endpoints for real-time geographic data validation, mostly routed through Express in `packages/gadm/server.ts`:
- **`GET /search?q={query}&level={level}`**
Searches the PostGIS database `gadm` view for any names matching the search vector. Often utilizes Redis caching to speed up autocomplete responses.
- **`GET /hierarchy?lat={lat}&lng={lng}`**
Triggers a point-based intersection against the `gadm` multi-polygons (`ST_Intersects`). Returns the full hierarchy (Level 0 through 5) containing the given coordinate.
- **`GET /boundary?gid={gid}&targetLevel={level}&enrich={bool}`**
Returns the exact geographic boundaries of a target region as a GeoJSON FeatureCollection.
- To maintain UI performance on large selections, geometries are simplified according to the requested `targetLevel` before being returned for rendering.
- Can optionally `enrich` the returned properties with Population and Area sizing dynamically from the PG backend.
- GeoJSON responses are statically cached locally to `packages/gadm/data/boundaries/` to ensure lightning-fast subsequent fetches.
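A minimal client helper for the `/hierarchy` endpoint might look like the sketch below; the response shape and helper names are assumptions for illustration, not the actual `client-searches.ts` API.

```typescript
// One administrative level returned by /api/gadm/hierarchy (assumed shape).
interface GadmLevel {
  gid: string;
  name: string;
  level: number; // 0 = country … 5 = smallest subdivision
}

// Build the request URL for a point-based hierarchy lookup.
function hierarchyUrl(lat: number, lng: number, base = "/api/gadm"): string {
  const params = new URLSearchParams({ lat: String(lat), lng: String(lng) });
  return `${base}/hierarchy?${params.toString()}`;
}

// Resolve the full Level 0–5 hierarchy containing a coordinate.
async function fetchHierarchy(lat: number, lng: number): Promise<GadmLevel[]> {
  const res = await fetch(hierarchyUrl(lat, lng));
  if (!res.ok) throw new Error(`hierarchy lookup failed: ${res.status}`);
  return res.json();
}
```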
## The Map Inspector
The map view uses MapLibre GL JS pointing to a local Martin vector tile server serving `.pmtiles`.
- **Point Queries**: Clicking anywhere on the unselected tiles translates the event into a `lat, lng` inspection.
- **Hierarchy Render**: This invokes `/hierarchy` and generates a list of administrative boundaries encompassing that specific point (from Nation down to County).
- **Highlighting**: Hovering over any inferred hierarchy option loads its bounding box dynamically (`gadm-picker-highlight`) to review before formally "adding".
## Selection Lifecycle
The selection state `selectedRegions` tracks picked regions across the UI. Due to API speeds and GeoJSON size considerations, the component features a highly customized, safe, interruptible multi-selection architecture:
- **Single Select (Default)**
Clicking an autocomplete result or clicking to inspect on the map triggers a single-select wipe. This safely terminates any currently loading polygons, cancels queued network requests, and instantly clears all existing items from the array so the selection collapses to a single element.
- **Multi-Select Queue (`ctrl + click` / `⌘ + click`)**
If the `ctrl` key is held either on the autocomplete result, the inspector UI "Add" button, or on raw Map inspection clicks, the interactions skip cancellation logic. They are placed into a `queuedInspectionsRef` Set. Network resolutions occur concurrently and stack natively into the interface.
- **Import / Export Portability**
The `<GadmPicker />` exposes IO tools to manage large or heavily tailored multi-select combinations:
- **Copy Config**: Translates the active GIDs and target levels directly to the local clipboard.
- **Export JSON**: Creates a local Blob URL and downloads the `selectedRegions` metadata (raw polygon data is excluded to keep file sizes small).
- **Import JSON**: Triggers a hidden file input `<input type="file" />`. An import automatically wipes the active UI state and iteratively pushes all imported regions through the same high-speed multi-select queue used for `ctrl+click`, so they resolve and render concurrently.
## Boundaries (`setGeojsons`)
After an entity enters `selectedRegions`, its exact boundary is rendered on the map in the layer with ID `gadm-picker-features`.
- A background `useEffect` strictly manages sync loops formatting the multiple separate boundaries into a unified `FeatureCollection`, updating the MapLibre source in real-time.
- Regions feature small layout indicators (e.g. `L0`, `L2`) mapping directly to the `targetLevel` rendering logic determining boundary complexity.
# AI-Assisted Content Creation
PoolyPress integrates powerful AI aids directly into your publishing workspace, fundamentally augmenting how you create and assemble content.
- **Interactive Chat Panel:** Converse directly with your content using the built-in Chat interface. You can ask the AI to draft text, attach media, or even directly drop in interactive gallery widgets. The panel is fully context-aware and hooks directly into the site's own internal search engine, allowing the AI to instantly find, query, and link your existing posts, pages, and categories.
- **AI Layout Wizard:** Say goodbye to starting from a blank page. Describe what you want—like "a product page with a hero image, a 2-column features grid, and a reviews section"—and the AI Layout Wizard autonomously generates the exact widget containers needed. It reads your page's live template variables to produce highly realistic, data-driven placeholder content.
- **Contextual Inline Tools:** Within the editor itself, intelligent markdown tools let you rewrite, expand, summarize, and translate your prose on the fly—ensuring your copy is always beautifully tailored for your target audiences without ever leaving your workflow.
Binary file not shown (image, 262 KiB).
# Editable Home & User Profiles
PolyMech's home and user pages are fully customizable, avoiding rigid templates in favor of a dynamic, extensible widget system.
- **Versatile Layouts:** Seamlessly toggle between robust grid, list, and large gallery views to best showcase your feeds.
- **Extensible Widgets:** Effortlessly drag and drop interactive elements—from media sliders to text content—into your pages, powered by an integrated widget registry.
- **Mobile-First & Responsive:** Every layout is highly responsive, guaranteeing that your feeds and pages deliver a flawless viewing experience across all devices.
# Localization & i18n
Reaching a global audience requires more than just translating simple UI strings. The Poolypress platform is designed with a deep, structurally-aware translation pipeline.
- **Granular Widget Translations:** Instead of duplicating entire pages, translations are managed on a granular, widget-by-widget basis. This means you can update a single paragraph or label without disturbing the rest of the localized layout.
- **Batch Translation UI:** A powerful built-in dashboard allows you to filter and batch-translate missing content across entities (Categories, Pages, Custom Types) directly using DeepL's advanced engines.
- **DeepL Integration & Custom Glossaries:** Maintain total terminology consistency across your organization. Create bi-directional glossaries and synchronize them with DeepL to ensure brand terms and technical jargon are always translated correctly.
- **Specialized Markdown & Excel (XLSX) Support:** The translation pipeline goes beyond simple plaintext. We provide robust support for formatting-aware Specialized Markdown, and allow exporting and importing structured translations via Excel spreadsheets (XLSX).
- **Industry Standard Export (TMS):** We recognize that enterprise teams often rely on external Translation Management Systems (TMS) like Crowdin or Lokalise. The system natively supports extracting your content structure into standard formats (like XLIFF 2.0 or Flat JSON) so human translators can work in the tools they prefer, and inject the results safely right back into the platform.
# AI Image Wizard
The PoolyPress AI Image Wizard provides a unified studio for creating, editing, and managing assets using the latest generative models.
- **Multi-Provider Models:** Generate and refine images by seamlessly switching between top-tier AI models (e.g., Gemini 3.1 Flash, OpenAI, etc.).
- **Grounding & Search:** Toggle options like Google Search and Image Search grounding for enhanced visual accuracy and real-world relevancy.
- **Versioning:** Instantly spin off variations of an image (e.g., applying new styles, background removals) while maintaining a strict, non-destructive history of all iterations.
- **Intelligent Prompting:** Lean on built-in prompt templates, save your own presets, automatically optimize rough prompts using AI, and traverse your complete prompt history.
- **Workflow Automation:** Chain together repeatable actions—like generating the image, applying a specific style, drafting metadata, and prepping for immediate SEO-optimized publication—in a single click.
- **Quick Styling Presets:** Apply highly-tuned style presets (e.g., Enhance, Make Artistic, Cyberpunk, Portrait) directly via the quick action buttons underneath the editor.
- **Agentic Tool Integration:** The entire image generation pipeline is exposed as tools to the AI across the platform. This enables the AI assistant to autonomously write comprehensive articles and generate highly detailed, brand-aligned, and context-aware illustrations right where they are needed.
# Platform Overview
Built on 20 years of experience traversing the web development landscape—from traditional CMS giants like Joomla, Drupal, and WordPress, to modern static site generators such as Jekyll, Hugo, and recently Astro—PoolyPress represents the evolution of content management. It aims to provide an all-in-one, one-stop-shop experience tailored for makers, developers, and small companies. Designed as a lightweight, drop-in replacement, it operates with minimal dependencies, ensuring it runs efficiently even on low-spec devices.
PolyMech is a comprehensive, SEO-first, and highly secure full-stack media platform. It is designed to handle media management, rich content publishing, internationalization, and e-commerce, all while providing an exceptional developer and user experience.
## Core Features & Benefits
### 📸 Media Management & Delivery
Effortlessly organize and deliver high-quality media content to your audience.
* **Automated Media Pipelines:** Upload, process, and serve images and video with automatic intelligent format conversion, responsive scaling, and transcoding for smooth, adaptive playback.
* **Advanced Galleries & Versioning:** Manage multiple picture versions, utilize filmstrip navigation, and organize media through intuitive drag-and-drop.
* **Virtual File System (VFS):** A robust abstraction over local and remote storage featuring configurable mounting, personal user storage, robust access controls, and path-traversal protection.
### 📱 Seamless Mobile Experience & PWA
Offer users a native app-like experience directly from their browser.
* **Installable PWA:** Add to the home screen with offline precaching for fast, reliable access.
* **Native Device Sharing:** Leverage the Web Share Target API to share content from any mobile app straight into PolyMech for instant publishing.
* **Global Drag-and-Drop:** Intuitive upload workflows let you start a new post simply by dropping files or URLs anywhere on the page.
### 📝 Rich Content & Publishing
Create engaging, interactive long-form content and dynamic feeds.
* **Widget-Based Editor:** Build dynamic pages using an MDX-powered block editor featuring built-in galleries, video banners, carousels, and custom embeddable widgets.
* **Extensive Export Options:** Publish visually rich pages, organize them into collections, and easily export them to standalone HTML, PDF, Markdown, JSON, or Email formats.
* **Smart Discovery Feeds:** Aggregated home feeds offer powerful sorting, search functionality, advanced tagging, and category-based access control.
### 🌍 Global Internationalization
Reach a worldwide audience seamlessly with native multi-language support.
* **Automated Translation:** Leverage AI-powered integration for batch translations of your content, ensuring that formatting and structure are perfectly preserved.
* **Granular i18n Control:** Every individual widget, title, and description can be translated independently, providing a deeply localized user experience.
* **Global SEO Readiness:** Automatically injects locale-aware tags, per-language sitemaps, and translated meta descriptions to secure top search rankings across regions.
### 🔍 SEO, Discoverability, & AI-Readiness
PolyMech is built from the ground up to be fully discoverable by search engines, social media platforms, AI agents, and content readers—no external plugins required.
* **Multi-Format Content Delivery:** Every piece of content can be instantly accessed in multiple formats on the fly with zero configuration.
* **Rich Meta & Structured Data:** The server automatically generates and injects Open Graph data, Twitter Cards, and context-aware JSON-LD schemas directly into the HTML for brilliant link previews.
* **Blazing Fast Performance:** Pre-fetches hydration data, provides edge-cached responsive images, and employs smart route code splitting ensuring instant page loads.
* **Syndication Feeds:** Includes built-in RSS/Atom feeds and auto-generated Merchant XML product feeds for integrated e-commerce discoverability.
* **LLM Discoverability:** Specialized machine-readable endpoints allow AI agents (like ChatGPT or Claude) to seamlessly ingest, understand, and cite your site's content.
### 🛒 Integrated E-Commerce (Optional Module)
Turn your platform into a fully-fledged storefront.
* **Frictionless Shopping:** Complete end-to-end shopping experience featuring secure cart management, streamlined checkout, shipping integrations, and automated return handling.
### 🤖 AI-Powered Content Creation
Accelerate your creative workflow with deeply integrated AI tools.
* **Context-Aware Assistance:** Connect with an AI assistant equipped with real-world tool execution capabilities right in your publishing workspace.
* **Inline Editing Magic:** Rewrite, expand, summarize, or translate text directly inside the editor without breaking your flow.
* **Autonomous Generation:** Empower the AI to build complete, structured layout pages and generate high-quality images straight from simple text prompts.
### 🛡️ Enterprise-Grade Security
Your platform is hardened against vulnerabilities and bad actors at every level, with policies applied consistently across all endpoints.
* **Strict Authentication & Authorization:** Implements secure JWT-based authentication combined with declarative route-level access controls and strict role-based admin privileges.
* **Automated Threat Mitigation:** Actively protects your infrastructure with configurable IP-based rate limiting, manual blocklisting, and an intelligent Auto-Ban system that tracks violations to permanently block malicious actors.
* **Comprehensive Transport Protection:** Implements stringent secure headers, Content Security Policies (CSP), and Cross-Origin directives to effectively thwart XSS, clickjacking, and CSRF attacks without impacting load times.
* **Deep Observability & Auditing:** Real-time logging and background geocoding provide a detailed footprint of user activity, with live streaming available for active threat monitoring and system health tracking.

# Enterprise-Grade Security & Access Control
PoolyPress is hardened against vulnerabilities and bad actors at every level, with policies applied consistently across all endpoints. Beyond standard authentication and transport protection, the platform features a highly granular **Virtual File System (VFS) Access Control List (ACL)** system for precise resource management.
## Virtual File System (VFS) ACLs
The Storage product implements a full ACL system for its virtual file system, ensuring that every file and folder is strictly access-controlled.
- **Isolated Mounts:** Storage is divided into isolated namespaces called "Mounts." Each mount has its own set of access control rules.
- **Granular Permissions:** Access can be explicitly granted or revoked per user, per group, or per specific path within a mount. Supported permissions include `read`, `write`, `list`, `mkdir`, `delete`, `rename`, and `copy`.
- **Special Roles:**
- **Anonymous Access:** Configurable permissions for unauthenticated users (e.g., public read/list access).
- **Authenticated Users:** Configurable baseline permissions for any logged-in user.
- **Resource Chains:** Permissions are evaluated hierarchically. If a user tries to access `/docs/sub/file.txt`, the system checks grants on `/docs/sub/file.txt`, then `/docs/sub`, then `/docs`, and finally `/`. If any level grants access, the request succeeds.
- **Visual Editor:** The platform includes an intuitive ACL Editor UI (`AclEditor`) that allows administrators to easily toggle anonymous/authenticated access, grant specific permissions to individual users or groups, and revoke active permissions.
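The resource-chain evaluation described above can be sketched as a pure lookup. The grant storage shape and function names are illustrative; only the chain order and the "any level grants access" rule come from the text.

```typescript
type Permission = "read" | "write" | "list" | "mkdir" | "delete" | "rename" | "copy";
// Grants for a single principal: path -> set of permissions granted there.
type Grants = Map<string, Set<Permission>>;

// "/docs/sub/file.txt" -> ["/docs/sub/file.txt", "/docs/sub", "/docs", "/"]
function resourceChain(path: string): string[] {
  const parts = path.split("/").filter(Boolean);
  const chain: string[] = [];
  for (let i = parts.length; i > 0; i--) {
    chain.push("/" + parts.slice(0, i).join("/"));
  }
  chain.push("/");
  return chain;
}

// Access succeeds if ANY level in the chain grants the permission.
function isAllowed(grants: Grants, path: string, perm: Permission): boolean {
  return resourceChain(path).some((p) => grants.get(p)?.has(perm) ?? false);
}
```

A full implementation would evaluate the user's own grants, their groups' grants, and the anonymous/authenticated baselines with the same chain walk.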
## Comprehensive Security Posture
The VFS ACLs operate within a broader, deeply integrated security architecture:
- **Strict Authentication & Authorization:** Implements secure JWT-based authentication combined with declarative route-level access controls (`@Public`, `@Admin`).
- **Automated Threat Mitigation (Auto-Ban):** Actively protects your infrastructure by tracking rate limit violations and suspicious patterns in-memory, dynamically banning offending IP addresses, user IDs, or specific tokens.
- **Comprehensive Transport Protection:** Employs stringent secure headers, Content Security Policies (CSP), and Cross-Origin directives to effectively thwart XSS, clickjacking, and CSRF attacks without impacting load times.
- **Deep Observability & Auditing:** Real-time logging, background async geo-lookups, and SSE streams provide a detailed footprint of user activity for active threat monitoring and system health tracking.
# Plan: Support Multiple Images/Videos per Post
## Overview
Currently, the application treats every image or video as an independent entity (a `picture` or linked `video`). We want to introduce a higher-level concept of a **Post** which can contain one or more media items (images or videos).
This allows:
- Grouping related photos (e.g., a photo dump, or variations).
- A unified description and comment section for the group.
- Preserving individual interactions (likes/comments) on specific images within the post if desired (as per requirements).
## Database Schema Changes
We will introduce a new `posts` table and link existing `pictures` to it.
### 1. New Table: `posts`
This table will hold the content for the "container".
| Column | Type | Notes |
|---|---|---|
| `id` | `uuid` | Primary key, default `gen_random_uuid()` |
| `user_id` | `uuid` | FK to `auth.users` (or profiles) |
| `title` | `text` | Main title of the post |
| `description` | `text` | Description/caption for the whole post |
| `created_at` | `timestamptz` | Default `now()` |
| `updated_at` | `timestamptz` | |
| `metadata` | `jsonb` | Flexible field for extra data |
### 2. Update Table: `pictures`
We link media items to the post.
- Add column `post_id`: `uuid`, FK to `posts(id)`.
- Add column `position`: `integer`, default 0, to order images within a post.
> **Note**: Videos are stored in the `pictures` table with `type='mux-video'`, so this change covers both images and videos. The separate `videos` table in Supabase appears unused by the current frontend.
### 3. Update Tables: `comments` and `likes`
Currently, `comments` and `likes` reference `picture_id`.
- **Requirement**: "we might have also comments, and descriptions for the parent 'post'".
- **Approach**:
  - Add `post_id` to the `comments` and `likes` tables (nullable).
  - Or create `post_comments` / `post_likes` tables if that is cleaner.
- *Decision*: We will start with a simple structure where `posts` have their own `description` (already in the table). Comments may later need a unified system or direct links to posts; for now, the focus is on `posts` containing `pictures`.
## Migration Strategy (SQL)
According to user feedback, **no backfill is required**. Old pictures will simply not be displayed in the new "Post" feed, which relies on the `posts` table.
1. **Create the `posts` table.**
2. **Alter the `pictures` table**: add the `post_id` column.
## UI/UX Updates
### Feed (`PhotoGrid.tsx`)
- Query `posts` instead of `pictures`.
- Fetch the first linked picture for the thumbnail.
### Post Detail (`Post.tsx`)
- Route `/post/:id` will now accept a **Post ID**.
- Fetch the post metadata.
- Fetch the associated media items (`select * from pictures where post_id = :id order by position`).
### Creation Wizard
- Allow granular updates: "Select Multiple Files".
- Create Post → upload all files → create Picture records linked to the Post.
## Step-by-Step Implementation
1. **Supabase Migration**: Create the tables (no backfill required).
2. **Codebase - Types**: Update `types.ts` (re-run codegen).
3. **Codebase - API**: Update fetch functions to use `posts`.
4. **UI - Feed**: Switch `PhotoGrid` to use `posts`.
5. **UI - Detail**: Rewrite `Post.tsx` to handle `Post` + `Media[]`.
6. **UI - Create**: Update upload logic.
---
# Posts Module Documentation
## Overview
> **Module path:** `src/modules/posts/`
The posts module owns all post-related frontend functionality: creating, editing, viewing, and managing posts containing mixed media (images, videos, embeds, external links).
---
## Directory Structure
```
src/modules/posts/
├── EditPost.tsx          # Create & edit page (/post/new, /post/:id/edit)
├── NewPost.tsx           # Legacy new-post page (/new)
├── PostPage.tsx          # Post detail page (/post/:id)
├── client-posts.ts       # Post API client (CRUD, feed, meta)
├── client-pictures.ts    # Picture/media API client (CRUD, comments, likes, versions)
├── components/
│   ├── PostComposer.tsx  # Post editor form (title, description, settings, image list)
│   └── PostPicker.tsx    # Dialog for selecting existing posts (append-to-post flow)
└── views/
    ├── types.ts          # Core types (PostItem, PostMediaItem, PostSettings, etc.)
    ├── adapters.ts       # Data adapters (Supabase → frontend models)
    ├── PostActions.ts    # Post action utilities
    ├── usePostActions.ts # Post action hooks
    ├── utils.ts          # Shared utilities
    ├── db.ts             # DB helpers
    ├── llm.tsx           # AI/LLM integration for posts
    ├── components/
    │   ├── SmartLightbox.tsx  # Full-screen media lightbox
    │   ├── DeleteDialogs.tsx  # Post/picture delete confirmation
    │   ├── ExportDropdown.tsx # Export menu (markdown, zip, etc.)
    │   ├── TikTokDialog.tsx   # TikTok embed dialog
    │   ├── TikTokEmbed.tsx    # TikTok embed renderer
    │   └── YouTubeDialog.tsx  # YouTube embed dialog
    └── renderers/
        ├── ArticleRenderer.tsx # Blog/article-style layout
        ├── CompactRenderer.tsx # Default compact layout
        ├── EmbedRenderer.tsx   # oEmbed rendering (/embed/:id)
        ├── ThumbsRenderer.tsx  # Thumbnail grid layout
        └── components/
            ├── CompactActionToolbar.tsx # Action bar (like, share, edit, etc.)
            ├── CompactFilmStrip.tsx     # Horizontal thumbnail strip
            ├── CompactMediaDetails.tsx  # Media metadata panel
            ├── CompactMediaViewer.tsx   # Main media viewport
            ├── CompactPostHeader.tsx    # Post header (author, date, categories)
            ├── Gallery.tsx              # Gallery grid component
            ├── MobileGroupItem.tsx      # Mobile-optimized post item
            ├── MobileGroupedFeed.tsx    # Mobile grouped feed view
            └── SpyGlassImage.tsx        # Zoom/spy-glass image viewer
```
---
## Routes
| Route | Component | Description |
|-------|-----------|-------------|
| `/post/new` | `EditPost` | Create a new post |
| `/post/:id` | `PostPage` | View post detail |
| `/post/:id/edit` | `EditPost` | Edit an existing post |
| `/video/:id` | `PostPage` | View video post (same renderer) |
| `/new` | `NewPost` | Legacy new-post page |
| `/wizard` | `Wizard` | AI Image Wizard (separate module, not post-specific) |
Route definitions: [`src/App.tsx`](../src/App.tsx)
---
## Supported Media Types
Posts can contain mixed media items. Each `PostMediaItem` has a `type` field:
| Type | Description | Source |
|------|-------------|--------|
| `supabase-image` | Uploaded image (stored in Supabase Storage) | File upload / drag-drop |
| `mux-video` | Video (processed by Mux) | File upload |
| `video-intern` | Legacy internal video | Migration |
| `page-external` | External link card (with OG metadata) | URL paste |
| `youtube` | YouTube embed | YouTube dialog |
| `tiktok` | TikTok embed | TikTok dialog |
---
## Display Modes (Renderers)
Posts support multiple display modes, configured via `post.settings.display`:
| Mode | Renderer | Description |
|------|----------|-------------|
| `compact` (default) | [`CompactRenderer`](../src/modules/posts/views/renderers/CompactRenderer.tsx) | Side-panel layout: media viewer + filmstrip + details |
| `article` | [`ArticleRenderer`](../src/modules/posts/views/renderers/ArticleRenderer.tsx) | Blog-style: wide images with inline markdown text |
| `thumbs` | [`ThumbsRenderer`](../src/modules/posts/views/renderers/ThumbsRenderer.tsx) | Grid of thumbnails |
| `embed` | [`EmbedRenderer`](../src/modules/posts/views/renderers/EmbedRenderer.tsx) | For `/embed/:id` oEmbed endpoint |
The renderer is selected in [`PostPage.tsx`](../src/modules/posts/PostPage.tsx) based on the post's settings.
---
## Client APIs
### `client-posts.ts`
Post-level CRUD operations, all authenticated via Supabase session:
| Function | Method | Description |
|----------|--------|-------------|
| `fetchPostById(id)` | `GET /api/posts/:id` | Fetch single post with pictures |
| `fetchPostDetailsAPI(id)` | `GET /api/posts/:id` | Fetch with responsive image sizes/formats |
| `fetchPostsList(options)` | `GET /api/posts` | Paginated post list |
| `fetchFullPost(postId)` | Supabase RPC | Full post with all relations |
| `createPost(data)` | `POST /api/posts` | Create new post |
| `updatePostDetails(id, updates)` | `PATCH /api/posts/:id` | Update title, description, settings, meta |
| `updatePostMeta(id, meta)` | `PATCH /api/posts/:id` | Update meta only |
| `deletePost(id)` | `DELETE /api/posts/:id` | Delete post and associated pictures |
| `mapFeedPostsToMediaItems(posts)` | — | Convert feed posts to PhotoGrid-compatible format |
| `augmentFeedPosts(posts)` | — | Normalize API posts (add cover, author) |
### `client-pictures.ts`
Picture/media-level operations:
| Function | Method | Description |
|----------|--------|-------------|
| `createPicture(picture)` | `POST /api/pictures` | Create picture record |
| `updatePicture(id, updates)` | `PATCH /api/pictures/:id` | Update picture metadata |
| `deletePicture(id)` | `DELETE /api/pictures/:id` | Delete single picture |
| `deletePictures(ids)` | `POST /api/pictures/delete-batch` | Batch delete |
| `fetchPictures(options)` | `GET /api/pictures` | List pictures with filters |
| `fetchPictureById(id)` | `GET /api/pictures/:id` | Single picture |
| `fetchMediaItemsByIds(ids)` | `GET /api/media-items` | Batch fetch by IDs |
| `fetchVersions(item)` | `GET /api/pictures/versions` | Version tree |
| `toggleLike(userId, pictureId)` | Supabase | Toggle like on picture |
| `unlinkPictures(ids)` | `POST /api/pictures/unlink` | Unlink from post |
| `upsertPictures(pictures)` | `POST /api/pictures/upsert` | Batch upsert |
| `fetchCommentsAPI(pictureId)` | `GET /api/pictures/:id/comments` | Fetch comments |
| `addCommentAPI(pictureId, content)` | `POST /api/pictures/:id/comments` | Add comment |
| `editCommentAPI(pictureId, commentId, content)` | `PATCH` | Edit comment |
| `deleteCommentAPI(pictureId, commentId)` | `DELETE` | Delete comment |
| `toggleCommentLikeAPI(pictureId, commentId)` | `POST` | Toggle comment like |
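As a rough sketch of how a publish path could chain these calls (the signatures are assumed from the tables above, and the clients are injected for testability — the real `publishImage()` in `publishHandlers.ts` is considerably more involved):

```typescript
// Minimal assumed shapes for the two clients documented above.
interface PostsClient {
  createPost(data: { title: string; description?: string }): Promise<{ id: string }>;
}
interface PicturesClient {
  createPicture(p: { post_id: string; url: string; position: number }): Promise<void>;
}

// Create the post, then link each uploaded image to it, preserving
// order via `position`. Returns the new post ID for the redirect.
async function publishPost(
  posts: PostsClient,
  pictures: PicturesClient,
  title: string,
  imageUrls: string[],
): Promise<string> {
  const post = await posts.createPost({ title });
  for (const [position, url] of imageUrls.entries()) {
    await pictures.createPicture({ post_id: post.id, url, position });
  }
  return post.id; // caller navigates to /post/:id
}
```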
---
## Core Types
Defined in [`views/types.ts`](../src/modules/posts/views/types.ts):
```typescript
// A media item attached to a post
type PostMediaItem = MediaItem & {
post_id: string | null;
renderKey?: string;
};
// The post entity
interface PostItem {
id: string;
title: string;
description: string | null;
user_id: string;
created_at: string;
updated_at: string;
pictures?: PostMediaItem[];
settings?: PostSettings;
meta?: PostMeta;
}
// Post display and visibility settings
interface PostSettings {
display?: 'compact' | 'thumbs';
visibility?: 'public' | 'listed' | 'private';
link?: string; // For link posts
image_url?: string;
thumbnail_url?: string;
}
// Post metadata
interface PostMeta {
slug?: string;
categoryIds?: string[];
[key: string]: any;
}
```
---
## Post Lifecycle
### Creating a Post
1. User navigates to `/post/new` (via CreationWizardPopup or direct URL)
2. [`EditPost.tsx`](../src/modules/posts/EditPost.tsx) renders in create mode (no `id` param)
3. User fills title, description, adds images via drag-drop or file picker
4. Configures visibility/display in PostComposer settings accordion
5. On publish: `publishImage()` from [`publishHandlers.ts`](../src/components/ImageWizard/handlers/publishHandlers.ts) creates the post via `createPost()`, uploads images, links them via `createPicture()`
6. Redirects to `/post/:id`
### Editing a Post
1. User clicks "Edit Post" on [`PostPage.tsx`](../src/modules/posts/PostPage.tsx) → navigates to `/post/:id/edit`
2. [`EditPost.tsx`](../src/modules/posts/EditPost.tsx) fetches post data from `fetchPostById()`
3. Converts existing pictures to `ImageFile[]` format for PostComposer
4. User modifies content; on save: `publishImage()` calls `updatePostDetails()` and updates pictures
5. Redirects back to `/post/:id`
### Viewing a Post
1. [`PostPage.tsx`](../src/modules/posts/PostPage.tsx) fetches post via `fetchPostDetailsAPI()`
2. Resolves display mode from `post.settings.display`
3. Delegates to appropriate renderer (Compact, Article, Thumbs)
4. Renderer receives all props via `PostRendererProps` interface
---
## Integration with ImageWizard
The ImageWizard (`src/components/ImageWizard.tsx`) is a **separate module** for AI image generation. It interacts with posts in two ways:
1. **Append to Post** — Images generated in the wizard can be appended to an existing post via `PostPicker` dialog
2. **Open in Wizard** — Individual post images can be opened in the wizard for AI-powered editing (variations, upscaling, etc.)
The shared interface is the `ImageFile` type from [`ImageWizard/types.ts`](../src/components/ImageWizard/types.ts) and the `publishImage()` function from [`publishHandlers.ts`](../src/components/ImageWizard/handlers/publishHandlers.ts).
> **Note:** `publishHandlers.ts` still lives in `ImageWizard/handlers/` but is functionally post-module logic. It should be migrated to `modules/posts/` in a future cleanup.
---
## Post Settings & Visibility
| Setting | Values | Effect |
|---------|--------|--------|
| `visibility` | `public` / `listed` / `private` | Controls feed inclusion and access |
| `display` | `compact` / `article` / `thumbs` | Selects renderer |
| `link` | URL string | Attaches external link to post |
Managed via the settings accordion in [`PostComposer.tsx`](../src/modules/posts/components/PostComposer.tsx).
# Polymech Product Worker Architecture
The Polymech server uses a hybrid architecture that delegates heavy CPU-bound tasks or isolated background processing (like queues) to dedicated native Node.js **Worker Threads**, while keeping the main event loop responsive for HTTP handling.
This document details how the worker thread system is configured, initialized, and integrated via IPC.
---
## 1. Configuration & Registry ([`config/products.json`](../server/config/products.json))
Worker threads are defined in `config/products.json`. Each product specifies the number of dedicated threads it requires under the `workers` property.
```json
{
"images": {
"enabled": true,
"workers": 1,
"routes": true
}
}
```
The boot sequence is managed by [`src/products/registry.ts`](../server/src/products/registry.ts):
1. It reads `products.json`.
2. For every product with `workers > 0`, it spawns up to that many `Worker` instances.
3. The instantiated worker receives `workerData: { productName: 'images' }` so it knows which product it is responsible for.
4. The main thread maintains an in-memory array exported as `nativeWorkers` representing live worker threads.
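As a rough sketch (names like `planWorkers` and `WorkerSpec` are illustrative, not the actual registry API), the spawn plan derived from `products.json` might look like:

```typescript
// Illustrative sketch of the boot sequence described above.
interface ProductConfig {
  enabled: boolean;
  workers?: number;
  routes?: boolean;
}

interface WorkerSpec {
  productName: string;
  workerData: { productName: string };
}

// Derive the spawn plan from a parsed products.json object. In registry.ts,
// each spec would become `new Worker(entry, { workerData })` and the resulting
// handles would be collected into the exported `nativeWorkers` array.
function planWorkers(products: Record<string, ProductConfig>): WorkerSpec[] {
  const specs: WorkerSpec[] = [];
  for (const [productName, cfg] of Object.entries(products)) {
    if (!cfg.enabled) continue;
    for (let i = 0; i < (cfg.workers ?? 0); i++) {
      specs.push({ productName, workerData: { productName } });
    }
  }
  return specs;
}
```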
---
## 2. The Development Wrapper ([`worker_wrapper.mjs`](../server/src/worker_wrapper.mjs))
Because the project runs in development mode via `tsx watch` (or `vite-node`), natively spawned `Worker` threads lose the TypeScript transpilation context of the main thread.
To fix this, `registry.ts` boots workers using a hybrid wrapper approach depending on the environment:
- **Production**: Boots directly into `dist-server/worker.js`.
- **Development**: Boots into `src/worker_wrapper.mjs` instead of `worker.ts`.
`worker_wrapper.mjs` manually sets up a `vite-node` API server inside the worker context to transpile `worker.ts` on the fly, and explicitly maps path aliases (like `@/*` to `src/*`) to resolve imports exactly like the main thread.
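The environment switch itself is simple; a hypothetical `resolveWorkerEntry` helper (the paths come from the text above, the function name is invented) captures it:

```typescript
// Illustrative only: how registry.ts might pick the worker entry file
// depending on the environment, as described above.
function resolveWorkerEntry(nodeEnv: string | undefined): string {
  return nodeEnv === 'production'
    ? 'dist-server/worker.js'    // pre-built JS, no transpilation needed
    : 'src/worker_wrapper.mjs';  // wrapper boots vite-node to transpile worker.ts
}
```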
---
## 3. The Worker Entrypoint ([`src/worker.ts`](../server/src/worker.ts))
Once `worker.ts` executes, it acts purely as an agnostic bootstrap layer. It performs no business logic of its own.
Its responsibilities are:
1. Validate `workerData.productName`.
2. Instantiate the matched `AbstractProduct` subclass (e.g., `new ImagesProduct()`).
3. Set up the IPC event listener on `parentPort`.
4. Optionally boot PG-Boss (if the product requires queue management).
5. Call `instance.start()`.
**IPC Routing:**
When `worker.ts` receives `{ type: 'job' }` messages from the main thread, it delegates the work directly back to the active product instance:
```typescript
// inside worker.ts
if (msg.type === 'job') {
const { action, jobId } = msg;
try {
const result = await instance.handleJob(action, msg);
parentPort?.postMessage({ type: 'job_result', jobId, result });
  } catch (err) {
    // `err` is `unknown` under strict TS; narrow before reading `.message`
    const message = err instanceof Error ? err.message : String(err);
    parentPort?.postMessage({ type: 'job_result', jobId, error: message });
  }
}
```
---
## 4. Main-to-Worker IPC ([`src/commons/worker-ipc.ts`](../server/src/commons/worker-ipc.ts))
To communicate with the worker, the main thread uses the `worker-ipc.ts` utility.
This file provides two critical functions:
1. `hasWorker(productId: string)` — A fast, synchronous check to see if a specific product (e.g., 'images') has an active native worker.
2. `dispatchToWorker(productId, action, payload, transferList?)` — Wraps the internal Node.js `postMessage` architecture inside an easy-to-use Promise interface.
### Important: Late Binding
To avoid circular dependency cycles (`registry → images → worker-ipc → registry`), the IPC module does **not** import `nativeWorkers` at the top level. Instead, it dynamically imports it inside the function body (`await import('../products/registry.js')`).
### Important: Zero-Copy Transfers
When dispatching heavy workloads (like raw image ArrayBuffers), pass the buffer in the optional 4th parameter `transferList`. This effectively *moves* the memory block to the worker thread without taking the massive performance hit of serializing/cloning it over the IPC bridge.
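A minimal sketch of the Promise wrapper, assuming jobs round-trip as `{ type: 'job', jobId }` / `{ type: 'job_result', jobId }` messages as shown earlier. The worker handle is passed in explicitly here; the real module resolves it via the late `import()` described above:

```typescript
// Minimal interface so the mechanism stands alone (a real worker_threads
// Worker satisfies it, and so does a test stub).
interface WorkerLike {
  postMessage(msg: unknown, transferList?: ArrayBuffer[]): void;
  on(event: 'message', listener: (msg: any) => void): void;
}

let nextJobId = 0;

function dispatchTo(
  worker: WorkerLike,
  action: string,
  payload: Record<string, unknown>,
  transferList?: ArrayBuffer[],
): Promise<unknown> {
  const jobId = ++nextJobId;
  return new Promise((resolve, reject) => {
    worker.on('message', (msg) => {
      // Correlate replies to this dispatch by jobId.
      if (msg?.type !== 'job_result' || msg.jobId !== jobId) return;
      msg.error ? reject(new Error(msg.error)) : resolve(msg.result);
    });
    // transferList *moves* ArrayBuffers instead of structured-cloning them.
    worker.postMessage({ type: 'job', action, jobId, ...payload }, transferList);
  });
}
```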
---
## 5. Domain Encapsulation ([`ImagesProduct.ts`](../server/src/products/images/index.ts))
To prevent `worker.ts` from becoming a bloated monolith, all business logic lives strictly inside the individual product classes (e.g., `ImagesProduct.ts`).
Products running in hybrid mode override the `handleJob(action, msg)` method exposed by `AbstractProduct`.
```typescript
// inside ImagesProduct.ts
async handleJob(action: string, msg: any): Promise<any> {
if (action === 'process_image') {
// Runs inside the isolated worker thread
return await this.performProcessImage(...);
}
return super.handleJob(action, msg);
}
```
## 6. Hybrid Fallback ([`ImagesProduct.ts`](../server/src/products/images/index.ts))
Because the route handlers run on the *main thread*, they must account for scenarios where a worker has failed to boot, crashed, or was disabled in `config/products.json`.
Route handlers implement a Graceful Fallback Pattern using `hasWorker()`:
```typescript
// inside ImagesProduct.ts -> handlePostImage()
if (await hasWorker('images')) {
// 1. ArrayBuffer sent to worker (zero-copy)
await dispatchToWorker('images', 'process_image', { buffer, ... }, [buffer]);
// 2. Pick up the file the worker wrote to disk
processedBuffer = await fs.readFile(filepath);
} else {
// 1. Worker unavailable or disabled. Process synchronously inline.
await this.performProcessImage(inputBuffer, filepath, ...);
processedBuffer = await fs.readFile(filepath);
}
```
By ensuring that `performProcessImage` is a standalone class method, both the worker thread (`handleJob`) and the main thread fallback path execute the exact same, DRY codebase.
---
## 7. Telemetry & Real-Time Admin Monitoring
The admin interface provides live, deep telemetry into the performance and load of the native worker threads. This ensures developers can monitor memory consumption, CPU utilization, and task concurrency without external tools.
**1. Task Concurrency (`activeJobs`)**
Inside [`src/worker.ts`](../server/src/worker.ts), every incoming `'job'` IPC event increments an `activeJobs` internal counter. A `finally` block ensures this decrements when the job resolves or crashes. When the main thread requests a health check ping via `{ type: 'ping' }`, the worker thread replies with `{ type: 'pong', activeJobs }`.
**2. Node.js Native Worker Metrics**
During periodic polling in [`src/endpoints/admin.ts`](../server/src/endpoints/admin.ts), the main thread queries internal V8 and Node performance APIs for each running worker:
- **`getHeapStatistics()`**: Collects exact V8 heap memory usage (`heapUsedMB`, `heapTotalMB`).
- **`cpuUsage()`**: Produces microsecond-level CPU deltas which can be calculated into a real-time `cpuPercent`.
- **`performance.eventLoopUtilization()`**: Returns the fractional idle-vs-busy processing ratio for the worker thread, calculated into `eluPercent`.
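Conceptually, the worker's `pong` payload could be assembled from the stdlib APIs named above; the exact field wiring in `worker.ts` may differ from this sketch:

```typescript
import { getHeapStatistics } from 'node:v8';
import { performance } from 'node:perf_hooks';

// Sketch: the stats a worker could attach to its `pong` reply. Field names
// (`heapUsedMB`, `eluPercent`, ...) follow the text above.
function collectWorkerStats(activeJobs: number) {
  const heap = getHeapStatistics();
  const elu = performance.eventLoopUtilization();
  const cpu = process.cpuUsage(); // microseconds since process start
  return {
    type: 'pong' as const,
    activeJobs,
    heapUsedMB: heap.used_heap_size / 1024 / 1024,
    heapTotalMB: heap.total_heap_size / 1024 / 1024,
    eluPercent: elu.utilization * 100,
    cpuUserMicros: cpu.user, // main thread diffs successive samples for cpuPercent
  };
}
```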
**3. WebSocket Push Broadcasting (`sys-stats`)**
Rather than relying on heavy REST API polling inside the Admin frontend UI, the backend uses a generic interval loop inside `admin.ts` to actively grab `getSystemInfo()` and `getWorkerPings()`. This aggregated telemetry is pushed automatically over WebSockets to all connected clients under the event `sys-stats` using [`src/commons/websocket.ts`](../server/src/commons/websocket.ts).
On the frontend, the `SystemStats` and `WorkerThreadStats` UI components simply tap into a global `useWebSocket` React Context Hook to listen for updates, updating visuals instantly with zero repetitive HTTP overhead.

View File

@ -0,0 +1,70 @@
# FileBrowserPanel Refactoring Plan
## 1. The Problem
`FileBrowserPanel.tsx` has grown to over 1300 lines. It currently handles:
- VFS state fetching (path, mount, nodes, search queries, filter resolution).
- Keyboard shortcuts and navigation logic (global arrow keys, F3 search overlay, focus trapping).
- Selection state (`selected`, `focusIdx`, multi-select logic).
- UI display parameters (viewMode, zoom state, dual/single layout sizes).
- A heavy rendering component tree (FileGridView, FileListView, FileTree, many modals, lightboxes, AILayout wizards).
## 2. Refactoring Goals
We need to decouple the *data logic* from the *UI logic*, separate the heavy sub-views, and share common capabilities across different view renderers (`list`, `grid`, `tree`).
Crucially, the file browser UI must become a generic "Shell". The `VFS` implementation should just be one **Adapter**. By extracting the generic `FileBrowserUI`, we can plug in other data adapters in the future (like `usePageAdapter` for `UserPage.tsx`), rendering arbitrary models (Pages, Posts, Contacts) with the exact same robust `Tree/List/Grid` explorer UI.
### 2.1 Extract Custom Hooks
We should split the huge block of `useState`/`useEffect` hooks into contextual hooks:
- **`useVfsAdapter(mount, path, glob, accessToken)`**
  Acts as the data provider. Translates VFS endpoints into standardized `INode` UI models. Handles directory fetching, caching, `readme` extraction, and breadcrumb resolution.
*(Future adapters like `usePageAdapter` would conform to this exact same hook interface but fetch from `/api/pages`)*
- *Returns:* `nodes`, `sorted`, `loading`, `error`, `readmeContent`, `updatePath`, `updateMount`, `refresh()`.
- **`useSelection(sortedNodes, onSelect)`**
Handles array of selected items, focus index, and logic to select single/multiple (`ctrlKey`/`shiftKey`) elements. We can share this directly with list, tree, and thumb renderers.
- *Returns:* `selected`, `focusIdx`, `setFocusIdx`, `setSelected`, `handleItemClick`, `clearSelection`.
- **`useKeyboardNavigation(params)`**
Abstracts away global keybinds (e.g. F3 for search), container arrow navigation, copy/paste, backspace (up folder), and enter (open).
- Takes dependency on `focusIdx` and `selected` from `useSelection()`.
- **`useFilePreview(accessToken)`**
Manages the state for open lightboxes (Image/Video, Docs, Iframe, Text) rather than having all state at the root of `FileBrowserPanel`.
- *Returns:* `lightboxNode`, `setLightboxNode`, `previewComponent` (pre-rendered JSX based on what's active).
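The core multi-select logic `useSelection` would wrap can be sketched framework-free (nodes are reduced to string ids for brevity here; the real hook operates on `INode` objects):

```typescript
interface SelectionState {
  selected: string[];
  focusIdx: number;
}

// Pure click handler the hook could delegate to: plain click single-selects,
// ctrl toggles membership, shift range-selects from the focused item.
function handleItemClick(
  state: SelectionState,
  sortedIds: string[],
  idx: number,
  opts: { ctrlKey?: boolean; shiftKey?: boolean } = {},
): SelectionState {
  const id = sortedIds[idx];
  if (opts.shiftKey && state.focusIdx >= 0) {
    // Range-select between the current focus and the clicked item.
    const [lo, hi] = [Math.min(state.focusIdx, idx), Math.max(state.focusIdx, idx)];
    return { selected: sortedIds.slice(lo, hi + 1), focusIdx: idx };
  }
  if (opts.ctrlKey) {
    // Toggle membership without clearing the rest of the selection.
    const selected = state.selected.includes(id)
      ? state.selected.filter((s) => s !== id)
      : [...state.selected, id];
    return { selected, focusIdx: idx };
  }
  return { selected: [id], focusIdx: idx }; // plain click: single-select
}
```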
### 2.2 Break Out UI Sub-components
Currently, the rendered output mixes complex file-fetching logic with the actual modal HTML.
- **`FilePreviewContainer`**
Move all `Lightbox` instantiations (e.g., `ImageLightbox`, `SpreadsheetLightbox`, `ThreeDViewer`, `PdfLightbox`) into a single child component. Pass `selectedFile` or active view node to it.
- **`LayoutToolbarWrapper`**
Simplify how `FileBrowserToolbar` is rendered, connecting it purely to an abstracted state object rather than 30 independent props pass-throughs.
- **`SearchDialog` & `FilterDialog` Management**
Currently inline or tightly coupled; should be separated into a `DialogRenderer` wrapper or use a generic dialog context.
### 2.3 Universal Interface for Viewers
The `Tree`, `List`, and `Grid` renderers all replicate basic file display logic. By passing a shared Context or generic store (e.g., a `ViewerControlContext`), each one can trigger:
- `openFile(INode)`
- `goUp()`
- `selectFile(INode, multi)`
This standardizes the event-actions instead of passing 10+ props.
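As a sketch, the contract itself is framework-free; a React implementation would simply expose it through `createContext`. The `createViewerControls` factory name is hypothetical:

```typescript
// Minimal node shape for illustration; the real type is the VFS `INode`.
interface INodeRef { name: string; isDir: boolean; }

interface ViewerControls {
  openFile(node: INodeRef): void;
  goUp(): void;
  selectFile(node: INodeRef, multi: boolean): void;
}

// Bind the three actions to whatever adapter/selection state is in scope,
// so Tree/List/Grid renderers consume one object instead of 10+ props.
function createViewerControls(
  open: (n: INodeRef) => void,
  navigateUp: () => void,
  select: (n: INodeRef, multi: boolean) => void,
): ViewerControls {
  return { openFile: open, goUp: navigateUp, selectFile: select };
}
```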
## 3. Recommended Actions & Phasing
- ~~**Phase 1: State Extraction**~~
Extract `useSelection` and `useFilePreview` hooks from `FileBrowserPanel.tsx` without moving component rendering. Validate that deep link auto-open and search selections still function correctly.
- ~~**Phase 2: VFS Extraction**~~
Extract `useVFSBrowser` so directory fetching and sorting logic becomes completely separated from React rendering constraints.
- **Phase 3: Component Cleanup**
Move all `Lightbox...` conditionally rendered objects at the bottom of the file into a `<PreviewOverlayManager />` component layer.
- **Phase 4: Shared Navigation Interface**
Connect `FileTree`, `FileListView`, `FileGridView` to the shared `useSelection`/`useKeyboardNavigation` events so behavior is strictly unified without duplicate `<div onKeyDown...` code.
## 4. Edge Cases to Preserve
1. **Deep Linking (`?file=`) / `initialFile` prop:** Needs to wait for async directory fetch completion before snapping focus and dispatching `onSelect`.
2. **Escape Key Handling:** `Tree` view requires focus on `listRef`, grids/views require focus on `containerRef`. F3 global event logic uses `[data-active-panel]`. Any refactor must strictly preserve this arbitration logic.
3. **Jail Prop Security:** When a container is "jailed" to a parent directory, `updatePath` must refuse to navigate up past the root `jailPath`.
4. **VFS Sorting:** Always display folders before files, then sort according to Name/Size/Date `(SortBy)`.

View File

@ -0,0 +1,70 @@
# Migration Plan: React Router v6 to TanStack Router
Migrating a large, dynamic application like Polymech from React Router DOM to TanStack Router requires a phased approach. TanStack Router offers superior type safety, built-in scroll restoration, and powerful data loading capabilities, but fundamentally changes how routing is configured (moving away from JSX `<Route>` trees to a static route tree).
## Phase 1: Setup and Basic Configuration
- [ ] **Install Dependencies**
- Run `npm install @tanstack/react-router @tanstack/router-vite-plugin` in `pm-pics`.
- Also install in `@polymech/ecommerce` if needed, or handle it as a peer dependency.
- [ ] **Vite Configuration**
- Add `TanStackRouterVite` to `vite.config.ts`. This enables file-based routing generation or static tree compilation.
- [ ] **Define the Root Route**
- Create `src/routes/__root.tsx`.
- Migrate the layout shell from `AppWrapper` into the Root Route, including:
- `<GlobalDragDrop>`
- `<TopNavigation>` and `<Footer>`
- Context Providers (can stay in `App.tsx` surrounding the router provider, or move to the Root Route context).
- [ ] **Initialize the Router**
- In `src/App.tsx`, remove `<BrowserRouter>` and initialize `const router = createRouter({ routeTree })`.
- Render with `<RouterProvider router={router} />`.
## Phase 2: Route Declaration and Mapping
- [ ] **Convert Static Routes**
- Convert basic pages like `/auth`, `/profile`, `/settings/providers`, `/playground/*`.
- Create route files (e.g., `src/routes/auth.tsx`) or define them in a static route tree file.
- [ ] **Convert Dynamic Routes**
- Map dynamic segments:
- `/post/:id` -> `src/routes/post.$id.tsx`
- `/user/:userId` -> `src/routes/user.$userId.tsx`
- `/tags/:tag` -> `src/routes/tags.$tag.tsx`
- `/collections/:userId/:slug` -> `src/routes/collections.$userId.$slug.tsx`
- Re-wire path parameter extraction from `useParams()` to TanStack's `Route.useParams()`.
- [ ] **Convert Catch-All / Wildcard Routes (Deeply Nested Apps)**
- Routes like `/categories/*` or `/app/filebrowser/*` will need to use TanStack's splat routing (e.g., creating a `$.tsx` file).
- Inside the component, use `Route.useParams()` to grab the `_splat` variable which contains the remaining path string.
- *Refactor Opportunity*: Consider converting deeply nested paths in nested apps (like mount/file/path) into strictly typed Search Parameters (e.g., `?mount=local&path=foo`) leveraging TanStack's built-in Zod validation.
- [ ] **Handle Global Not Found (404)**
- `*` (Not Found) needs to be handled via TanStack Router's built-in `NotFoundRoute` or router configuration, not a standard route definition.
- [ ] **Migrate Standalone Packages**
- `@polymech/ecommerce` (`EcommerceBundle.tsx`) relies on `react-router-dom` (`useLocation`, `matchPath`, `Navigate`).
- It needs to be refactored either entirely to TanStack Router, or decoupled so the host app injects the routing logic.
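The path-to-filename mapping above follows a mechanical rule (`:param` becomes `$param`, `/` becomes `.`, `*` becomes `$`), sketched here as a hypothetical helper:

```typescript
// Illustrative: convert a React Router path into a TanStack file-route name.
function routeToFileName(rrPath: string): string {
  const segments = rrPath.replace(/^\//, '').split('/');
  const mapped = segments.map((seg) => {
    if (seg === '*') return '$';                        // splat route
    if (seg.startsWith(':')) return `$${seg.slice(1)}`; // dynamic param
    return seg;                                          // static segment
  });
  return `${mapped.join('.')}.tsx`;
}
```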
## Phase 3: Component Refactoring (The Long Tail)
- [ ] **Replace `useNavigate`**
- Find all instances of `useNavigate()` from `react-router-dom`.
- Replace with `useNavigate()` from `@tanstack/react-router`.
- **Crucial**: TanStack requires paths to match the route definitions strictly, which will catch broken links but requires updating all link definitions.
- [ ] **Replace `<Link>`**
- Update all `react-router-dom` `<Link to="...">` with TanStack's `<Link to="...">`.
- [ ] **Replace `useLocation` and `useSearchParams`**
- TanStack has robust typed search parameters. If `useSearchParams` is used for view modes, filtering, or tabs, define the search params schema on the Route level.
- [ ] **Replace `<Navigate>` components**
- Change programmatic `<Navigate to="..."/>` to TanStack's equivalent or throw a redirect inside a route loader.
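As an illustration of the idea (dependency-free here; the actual migration would hand a Zod schema to the route's `validateSearch` option), validated search params for the file browser might look like:

```typescript
type ViewMode = 'list' | 'grid' | 'tree';

interface BrowserSearch {
  mount: string;
  path: string;
  view: ViewMode;
}

// Coerce raw ?mount=...&path=...&view=... params into a typed, defaulted shape.
// Defaults ('local', '/', 'list') are illustrative assumptions.
function validateSearch(raw: Record<string, unknown>): BrowserSearch {
  const view = raw.view;
  return {
    mount: typeof raw.mount === 'string' ? raw.mount : 'local',
    path: typeof raw.path === 'string' ? raw.path : '/',
    view: view === 'grid' || view === 'tree' ? view : 'list',
  };
}
```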
## Phase 4: Enhancements (Optional but Recommended)
- [ ] **Implement Pre-fetching**
- Move API calls currently wrapped in `useQuery` inside `useEffect` (like home feed loading) into TanStack's `loader` functions. This eliminates waterfalls.
- [ ] **Configure Scroll Restoration**
- Remove the custom `ScrollRestoration.tsx`.
- Enable `<ScrollRestoration />` provided by TanStack in the Root Route.
- [ ] **Analytics Interception**
- Hook analytics into the router context directly (e.g., `router.subscribe`), replacing the `onRouteChange` prop we added to the custom restorer.
## Risk Assessment
- **Huge API surface change**: Touches roughly 30-50 files.
- **Ecommerce Package**: The `@polymech/ecommerce` bundle must be updated carefully so it still functions if imported independently.
- **Type errors**: Prepare for hundreds of TypeScript errors regarding route typings immediately after enabling the Vite plugin.

Binary file not shown.

After

Width:  |  Height:  |  Size: 2.1 MiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 2.4 MiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.5 MiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.1 MiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.5 MiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.3 MiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 2.9 MiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 83 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 3.1 MiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 47 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 423 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 933 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 601 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 72 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 57 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 3.1 MiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 288 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 218 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 382 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 194 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 407 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 213 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 110 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 70 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 3.1 MiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 237 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 54 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 174 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 648 KiB

File diff suppressed because it is too large

View File

@ -1,41 +1,375 @@
# Security Architecture — Polymech
# Security Improvement Plan
## 1. Secrets Management (Critical)
**Issue**: [`Profile.tsx`](../src/pages/Profile.tsx) currently fetches and exposes API keys (OpenAI, Google, etc.) to the client browser.
**Goal**: Never send raw API keys to the client unless explicitly requested for a "Settings" view context, and even then, mask them.
### A. Client Side (`Profile.tsx`)
- [ ] **Remove** all code that fetches `_api_key` columns from `profiles` in [`Profile.tsx`](../src/pages/Profile.tsx).
- [ ] **Remove** Input fields for API keys in the main Profile view in [`Profile.tsx`](../src/pages/Profile.tsx).
- [ ] **Create** a new "Provider Settings" page (or modal) protected by re-auth or strict checks (Target: `src/pages/ProviderSettings.tsx`).
- [ ] Use `/api/me/secrets` (Server Proxy) to manage keys, never direct DB Updates for secrets.
### B. Server Side
- [ ] Ensure `user_secrets` table has RLS `auth.uid() = user_id`.
- [ ] Ensure no public endpoints (like `/api/profile/:id`) return columns from `user_secrets` in [`ServingProduct`](../server/src/products/serving/index.ts).
Polymech implements a layered security model that covers authentication, authorization, threat mitigation, and observability. Every layer is configurable via environment variables and manageable through admin APIs.
---
## 2. Authorization & ACL
**Goal**: Secure multi-user collaboration and Organization access.
## Table of Contents
### A. Shared Pages
- [ ] Implement `page_collaborators` RLS.
- [ ] **Verify**: A user cannot edit a page they are only a 'viewer' on.
- [ ] **Verify**: Listing pages returns both owned and shared pages in [`PageManager.tsx`](../src/components/PageManager.tsx).
### B. Organization Impersonation
- [ ] **Middleware**: Implement `OrganizationMiddleware` in `server/src`.
- [ ] **Logic**: If `X-Org-Slug` header is present:
1. Check if `auth.uid()` is an Admin/Member of that Org.
2. If yes, scope all queries to that Organization's `collection_id` or Context.
3. (Advanced) Allow "Impersonation" where an Admin acts as a specific user. This requires a signed Token exchange or a Server-Side "Sudo" mode. **Decision**: For now, stick to Role-Based Access (Admin reads all Org data) rather than direct User Impersonation to avoid audit confusion.
- [Authentication](#authentication)
- [Authorization & Access Control](#authorization--access-control)
- [Threat Mitigation](#threat-mitigation)
- [Transport Security](#transport-security)
- [Observability & Auditing](#observability--auditing)
- [Admin API](#admin-api)
- [Configuration Reference](#configuration-reference)
---
## 3. General Hardening
- [ ] **404 Handling**: In [`Post.tsx`](../src/pages/Post.tsx), ensure 404s do not redirect blindly.
- [ ] If permission denied (Private post), show "Unauthorized" (403).
- [ ] If missing, show "Not Found" (404).
- [ ] **Rate Limiting**: Ensure `/api/feed` and `/api/search` have basic rate limiting (using `hono-rate-limiter` or Redis) to prevent scraping.
## Authentication
### JWT Bearer Tokens
All authenticated requests use Supabase-issued JWTs via the `Authorization: Bearer <token>` header. The server validates tokens through Supabase's `auth.getUser()`, with results cached in-memory to avoid repeated round-trips.
### Three Authentication Modes
The server provides three middleware layers that can be composed per-route:
| Middleware | Behavior |
|-----------|----------|
| **`authMiddleware`** | **Strict** — rejects any request without a valid Bearer token. Returns `401` immediately. |
| **`optionalAuthMiddleware`** | **Flexible** — resolves the user if a token is present, but allows unauthenticated access to public endpoints. Respects `REQUIRE_AUTH` env var for non-public routes. Also supports token via `?token=` query param (for SSE streams). |
| **`adminMiddleware`** | **Role-based** — checks `user_roles` table for `role = 'admin'`. Returns `403 Forbidden` if the user lacks admin privileges. Only applies to routes registered in `AdminEndpointRegistry`. |
### Request Flow
```
Request → CORS → Blocklist → Auto-Ban → Analytics → optionalAuthMiddleware → adminMiddleware → Rate Limiter → Body Limit → Route Handler
```
1. **CORS** validates origin against env-driven allowlist
2. **Blocklist** checks manual blocklist (`config/blocklist.json`)
3. **Auto-Ban** checks automatic ban list (`config/ban.json`)
4. **Analytics** logs the request (non-blocking)
5. **Optional Auth** resolves user identity if token present; validates JWT `exp` claim and caches for 30s
6. **Admin Check** enforces admin-only on registered admin routes
7. **Rate Limiter** enforces `RATE_LIMIT_MAX` requests per `RATE_LIMIT_WINDOW_MS` per IP/user
8. **Body Limit** enforces `MAX_UPLOAD_SIZE` (default 10MB) on all API requests
9. **Route Handler** executes with `c.get('userId')`, `c.get('user')`, and `c.get('isAdmin')` available
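The onion-style ordering above can be sketched with a generic middleware composer (Hono's `app.use` works the same way: each layer either calls `next()` or short-circuits). Types and names here are illustrative:

```typescript
type Ctx = { path: string; status?: number; trace: string[] };
type Middleware = (c: Ctx, next: () => Promise<void>) => Promise<void>;

// Compose layers so each one wraps the rest of the chain.
function compose(layers: Middleware[]): (c: Ctx) => Promise<void> {
  return async (c) => {
    let last = -1;
    const run = async (idx: number): Promise<void> => {
      if (idx <= last) throw new Error('next() called twice');
      last = idx;
      const layer = layers[idx];
      if (layer) await layer(c, () => run(idx + 1));
    };
    await run(0);
  };
}

// e.g. a blocklist layer short-circuits with 403 instead of calling next():
const blocklist: Middleware = async (c, next) => {
  c.trace.push('blocklist');
  if (c.path.startsWith('/banned')) { c.status = 403; return; }
  await next();
};
```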
---
## Authorization & Access Control
### Route-Level Access Control
Routes are classified at definition time using decorators:
```typescript
// In route definitions:
Public(route) // Registers in PublicEndpointRegistry → no auth required
Admin(route) // Registers in AdminEndpointRegistry → admin role required
```
The `PublicEndpointRegistry` and `AdminEndpointRegistry` use pattern matching (supporting `:param` and `{param}` styles) to determine access at runtime. This means authorization is declarative — defined alongside the route, not scattered across middleware.
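A sketch of the runtime matcher supporting both placeholder styles (the real registry implementation may differ):

```typescript
// True when a concrete request path matches a registered pattern.
// Supports both `:param` and `{param}` placeholder segments.
function matchesPattern(pattern: string, path: string): boolean {
  const pSegs = pattern.split('/').filter(Boolean);
  const aSegs = path.split('/').filter(Boolean);
  if (pSegs.length !== aSegs.length) return false;
  return pSegs.every((seg, i) => {
    const isParam = seg.startsWith(':') || (seg.startsWith('{') && seg.endsWith('}'));
    return isParam || seg === aSegs[i];
  });
}
```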
### Public Endpoints
All SEO and content delivery routes are public by default:
- `/feed.xml`, `/products.xml`, `/sitemap-en.xml`, `/llms.txt`
- `/post/:id.xhtml`, `/post/:id.pdf`, `/post/:id.md`, `/post/:id.json`
- `/user/:id/pages/:slug.xhtml`, `.html`, `.pdf`, `.md`, `.json`, `.email.html`
- `/api/posts/:id`, `/api/feed`, `/api/profiles`, `/api/media-items`
- `/embed/:id`, `/embed/page/:id`
### Admin-Only Endpoints
Privileged operations require both authentication and the `admin` role:
| Endpoint | Description |
|----------|-------------|
| `POST /api/admin/system/restart` | Graceful server restart |
| `GET /api/admin/bans` | View current ban list |
| `POST /api/admin/bans/unban-ip` | Remove an IP ban |
| `POST /api/admin/bans/unban-user` | Remove a user ban |
| `GET /api/admin/bans/violations` | View violation statistics |
| `POST /api/flush-cache` | Flush all server caches |
| `GET /api/analytics` | View request analytics |
| `DELETE /api/analytics` | Clear analytics data |
| `GET /api/analytics/stream` | Live analytics stream (SSE) |
### VFS (Virtual File System) ACL
The Storage product implements a full ACL system for its virtual file system:
- **Mounts** — isolated storage namespaces with per-mount access control
- **Grants** — explicit read/write permissions per user per mount
- **Revocations** — ability to revoke access without deleting the mount
- **Glob-based queries** — file listing supports `glob` patterns, scoped to authorized mounts
### Supabase RLS
Database-level security is enforced through PostgreSQL Row-Level Security:
- `user_roles` — scoped by `auth.uid() = user_id`
- `user_secrets` — API keys never exposed through public endpoints; accessed via `/api/me/secrets` proxy with masked GET and server-proxied PUT
- Content tables — owner-based access with collaboration extensions
### Secrets Management
API keys (OpenAI, Google, etc.) are stored in `user_secrets` and never returned in cleartext from any endpoint. The `/api/me/secrets` proxy returns masked values (last 4 characters only) with a `has_key` boolean indicator. Client code never accesses `user_secrets` directly.
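The masked GET shape can be sketched as follows; beyond `has_key` and the last-4 masking, the field names are assumptions:

```typescript
// Mask a stored secret for the /api/me/secrets response: never return the
// full key, only a has_key flag and the last four characters.
function maskSecret(key: string | null): { has_key: boolean; masked: string | null } {
  if (!key) return { has_key: false, masked: null };
  return { has_key: true, masked: `••••${key.slice(-4)}` };
}
```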
### CSRF Protection
Bearer token auth via `Authorization` header is inherently CSRF-proof — browsers cannot attach custom headers in cross-origin form submissions. No CSRF tokens are needed.
---
## Threat Mitigation
### Blocklist (Manual)
The `blocklist.json` file in `/config/` provides static blocking of known bad actors:
```json
{
"blockedIPs": ["203.0.113.50"],
"blockedUserIds": ["malicious-user-uuid"],
"blockedTokens": ["compromised-jwt-token"]
}
```
The blocklist is loaded on startup and checked for every API request. Blocked entities receive `403 Forbidden`.
### Auto-Ban (Automatic)
The auto-ban system tracks violations in-memory and automatically bans entities that exceed configurable thresholds:
**How it works:**
1. Rate limit violations are recorded per IP or user key
2. When violations exceed `AUTO_BAN_THRESHOLD` (default: 5) within `AUTO_BAN_WINDOW_MS` (default: 10s), the entity is permanently banned
3. Bans are persisted to `config/ban.json` and survive server restarts
4. Old violation records are cleaned up periodically (`AUTO_BAN_CLEANUP_INTERVAL_MS`)
**What gets tracked:**
- Repeated rate limit violations
- Repeated auth failures
- Suspicious request patterns
**Ban types:**
| Type | Scope |
|------|-------|
| IP ban | Blocks all requests from the IP |
| User ban | Blocks all requests from the user ID |
| Token ban | Blocks requests with a specific JWT |
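The thresholding described above can be sketched as an in-memory tracker (the clock is injectable for testability; the real module also persists bans to `config/ban.json`):

```typescript
class ViolationTracker {
  private hits = new Map<string, number[]>();
  private banned = new Set<string>();

  constructor(
    private threshold = 5,      // AUTO_BAN_THRESHOLD
    private windowMs = 10_000,  // AUTO_BAN_WINDOW_MS
    private now: () => number = Date.now,
  ) {}

  // Record one violation for an IP/user key; returns whether the key is banned.
  recordViolation(key: string): boolean {
    const t = this.now();
    // Keep only violations inside the sliding window.
    const recent = (this.hits.get(key) ?? []).filter((ts) => t - ts < this.windowMs);
    recent.push(t);
    this.hits.set(key, recent);
    if (recent.length > this.threshold) this.banned.add(key); // permanent ban
    return this.banned.has(key);
  }

  isBanned(key: string): boolean {
    return this.banned.has(key);
  }
}
```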
### Rate Limiting
Rate limiting uses `hono-rate-limiter` with configurable windows and limits:
- **Global API limiter**`RATE_LIMIT_MAX` requests per `RATE_LIMIT_WINDOW_MS` (applied to `/api/*`)
- **Custom per-endpoint limiters**`createCustomRateLimiter(limit, windowMs)` for endpoints needing different thresholds
- **Key generation** — rate limits are tracked per authenticated user (if token present) or per IP (fallback)
- **Standard headers** — responses include `RateLimit-*` headers (draft-6 spec)
- **Violation escalation** — rate limit violations are forwarded to the auto-ban system
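The keying and windowing behaviour can be sketched without the library (the server itself uses `hono-rate-limiter`, which also emits the `RateLimit-*` headers):

```typescript
// Track per authenticated user when a token is present, else per IP.
function rateLimitKey(userId: string | null, ip: string): string {
  return userId ? `user:${userId}` : `ip:${ip}`;
}

class FixedWindowLimiter {
  private counts = new Map<string, { windowStart: number; count: number }>();

  constructor(
    private limit: number,     // RATE_LIMIT_MAX
    private windowMs: number,  // RATE_LIMIT_WINDOW_MS
    private now: () => number = Date.now,
  ) {}

  // Returns false on a violation; callers would escalate that to auto-ban.
  allow(key: string): boolean {
    const t = this.now();
    const entry = this.counts.get(key);
    if (!entry || t - entry.windowStart >= this.windowMs) {
      this.counts.set(key, { windowStart: t, count: 1 });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.limit;
  }
}
```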
---
## Transport Security
### Secure Headers
Applied globally via Hono's `secureHeaders` middleware:
| Header | Value | Rationale |
|--------|-------|-----------|
| **Strict-Transport-Security** | `max-age=31536000; includeSubDomains` | 1-year HSTS, enforces HTTPS |
| **X-Frame-Options** | `SAMEORIGIN` | Clickjacking protection (relaxed for `/embed/*` routes) |
| **Referrer-Policy** | `strict-origin-when-cross-origin` | Preserves analytics referrer data same-origin, protects privacy cross-origin |
| **Permissions-Policy** | `camera=(), microphone=(), geolocation=(), payment=(self)` | Restricts unused browser features; payment allowed for Stripe |
| **Content-Security-Policy** | See below | Full directive set protecting against XSS |
| **Cross-Origin-Resource-Policy** | Disabled | Media assets served cross-origin |
| **Cross-Origin-Embedder-Policy** | Disabled | Compatibility with external image/video sources |
| **Cross-Origin-Opener-Policy** | Disabled | No popup isolation needed |
#### Embed Route Override
Routes under `/embed/*` strip `X-Frame-Options` and widen `frame-ancestors` to `*`, allowing external sites to iframe embed widgets while keeping all other routes protected against clickjacking.
#### CSP Directives
| Directive | Value | Rationale |
|-----------|-------|-----------|
| `default-src` | `'self'` | Baseline deny-all |
| `script-src` | `'self' 'nonce-<per-request>' cdn.jsdelivr.net` | Nonce-based inline script execution + Scalar UI |
| `style-src` | `'self' 'unsafe-inline' fonts.googleapis.com cdn.jsdelivr.net` | Google Fonts CSS + Scalar UI (`unsafe-inline` required for dynamic styles) |
| `font-src` | `'self' fonts.gstatic.com cdn.jsdelivr.net fonts.scalar.com` | Google Fonts + Scalar fonts |
| `img-src` | `'self' data: blob: *.supabase.co *.polymech.info` | Supabase Storage + CDN assets |
| `connect-src` | `'self' *.supabase.co wss://*.supabase.co api.openai.com assets.polymech.info cdn.jsdelivr.net proxy.scalar.com` | API, Realtime, AI, Scalar |
| `media-src` | `'self' blob: *.supabase.co assets.polymech.info stream.mux.com` | Video/audio sources |
| `frame-src` | `'self' *.supabase.co` | Supabase Auth popup |
| `frame-ancestors` | `'self'` | Default: same-origin only (relaxed to `*` for `/embed/*`) |
| `object-src` | `'none'` | Block Flash/Java |
| `base-uri` | `'self'` | Prevent base-tag hijacking |
### Compression
All responses are compressed with Brotli/gzip via `hono/compress`, reducing payload sizes and improving TTFB.
### CORS
CORS origin validation is driven by `CORS_ORIGINS` env var:
```
# Production — only listed origins get Access-Control-Allow-Origin
CORS_ORIGINS=https://service.polymech.info,https://polymech.info,https://forum.polymech.info
# Development (unset / default) — falls back to origin: '*'
```
| Setting | Production | Development |
|---------|-----------|-------------|
| **Origin** | Env-driven allowlist | `*` |
| **Methods** | GET, POST, PUT, DELETE, PATCH, OPTIONS | Same |
| **Credentials** | `true` | `false` (browsers disallow `credentials: true` with `*`) |
| **Max Preflight Cache** | 600s (10 min) | Same |
Custom headers are whitelisted for client SDK compatibility (Stainless, etc.).
---
## Observability & Auditing
### Security Logging
All security events are logged via a dedicated `securityLogger` (Pino) with structured context:
- Auth failures with IP + user agent
- Admin actions with acting user ID
- Ban/unban events with target and outcome
- Rate limit violations with key and threshold
### Analytics Middleware
Every request (except static assets, doc UIs, and widget paths) is tracked:
| Field | Source |
|-------|--------|
| Method + Path | Request |
| IP Address | Hardened extraction via `getClientIpFromHono()` — validates `socket.remoteAddress` against trusted proxy ranges before trusting `X-Forwarded-For` |
| User Agent | Request header |
| Session ID | `pm_sid` cookie (30-minute sliding expiry) |
| Geo Location | Background async lookup via BigDataCloud API |
| User ID | Resolved from JWT if present |
| Response Time | Measured end-to-end |
| Status Code | Response |
**Geo-lookup resilience:**
- Results cached in-memory + disk (`cache/geoip.json`)
- Non-blocking — resolved in background after response is sent
- Circuit breaker — after 3 consecutive failures, geo lookups are disabled for 30 seconds
- Timeout — individual lookups are capped at 2 seconds
- De-duplication — concurrent lookups for the same IP share a single request
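The resilience properties above (circuit breaker, timeout, de-duplication) combine into one wrapper pattern. The following is a minimal sketch of that pattern — function and option names are hypothetical, not the actual implementation:

```typescript
type Lookup = (ip: string) => Promise<string>;

// Wraps a geo-lookup with timeout, circuit breaker, and per-IP de-duplication.
function resilientGeoLookup(
  lookup: Lookup,
  { failureLimit = 3, cooldownMs = 30_000, timeoutMs = 2_000 } = {}
): Lookup {
  let failures = 0;
  let disabledUntil = 0;
  const inFlight = new Map<string, Promise<string>>(); // de-duplication

  return async (ip) => {
    if (Date.now() < disabledUntil) throw new Error('geo lookups disabled (circuit open)');

    const existing = inFlight.get(ip);
    if (existing) return existing; // concurrent lookups for the same IP share one request

    const attempt = (async () => {
      const timeout = new Promise<never>((_, reject) =>
        setTimeout(() => reject(new Error('geo lookup timeout')), timeoutMs)
      );
      try {
        const result = await Promise.race([lookup(ip), timeout]);
        failures = 0; // any success resets the breaker
        return result;
      } catch (err) {
        if (++failures >= failureLimit) disabledUntil = Date.now() + cooldownMs;
        throw err;
      } finally {
        inFlight.delete(ip);
      }
    })();
    inFlight.set(ip, attempt);
    return attempt;
  };
}
```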
### Real-Time Streams
Security events and analytics are available as live Server-Sent Event (SSE) streams:
```
GET /api/logs/system/stream → Live system + security logs
GET /api/analytics/stream → Live request analytics
```
---
## Admin API
All admin endpoints require authentication + admin role. Documented in OpenAPI and accessible via Swagger UI / Scalar.
### Ban Management
```
GET /api/admin/bans → View all banned IPs, users, tokens
POST /api/admin/bans/unban-ip → { "ip": "203.0.113.50" }
POST /api/admin/bans/unban-user → { "userId": "user-uuid" }
GET /api/admin/bans/violations → View current violation tracking stats
```
### System Operations
```
POST /api/admin/system/restart → Graceful restart (systemd re-spawns)
POST /api/flush-cache → Flush all in-memory + disk caches
POST /api/cache/invalidate → Selective cache invalidation by path/type
GET /api/cache/inspect → View cache state, TTLs, dependency graph
```
### Analytics
```
GET /api/analytics → Historical request data
GET /api/analytics/stream → Real-time SSE stream
DELETE /api/analytics → Clear analytics data
```
---
## Configuration Reference
All security settings are configurable via environment variables:
### Authentication
| Variable | Default | Description |
|----------|---------|-------------|
| `REQUIRE_AUTH` | `false` | When `true`, all non-public API routes require authentication |
| `CORS_ORIGINS` | `*` | Comma-separated CORS allowed origins. Falls back to `*` if unset |
### Rate Limiting
| Variable | Default | Description |
|----------|---------|-------------|
| `RATE_LIMIT_MAX` | `1` | Max requests per window |
| `RATE_LIMIT_WINDOW_MS` | `50` | Window duration in milliseconds |
### Auto-Ban
| Variable | Default | Description |
|----------|---------|-------------|
| `AUTO_BAN_THRESHOLD` | `5` | Violations before auto-ban |
| `AUTO_BAN_WINDOW_MS` | `10000` | Violation counting window (ms) |
| `AUTO_BAN_CLEANUP_INTERVAL_MS` | `60000` | How often to clean up old violation records |
### API Documentation
| Variable | Default | Description |
|----------|---------|-------------|
| `SCALAR_AUTH_TOKEN` | `''` | Pre-filled Bearer token for Scalar UI |
| `NODE_ENV` | — | When `production`, Swagger/Scalar UIs are disabled |
### Files
| File | Description |
|------|-------------|
| `config/blocklist.json` | Manual IP/user/token blocklist |
| `config/ban.json` | Auto-generated ban list (persisted auto-bans) |
| `cache/geoip.json` | Geo-IP lookup cache |
---
## TODO — Pending Improvements
### High Priority
- [ ] **Swagger/Scalar in production** — Currently disabled entirely in production. Consider enabling at a protected `/admin/reference` path behind admin auth for debugging
- [-] **Audit logging** — Admin actions (unban, restart, cache flush) log to Pino but should also persist to a dedicated `audit_log` table in the database
### Medium Priority
- [ ] **Page collaboration ACL** — Implement `page_collaborators` RLS so viewers cannot edit shared pages
- [ ] **Organization impersonation** — Add `X-Org-Slug` header middleware to scope queries to organization context with role-based access (Admin reads all, Member reads own)
- [ ] **Per-route rate limiting** — Apply stricter limits to expensive endpoints (`/api/search`, `/api/serving/site-info`, image optimization proxy) using `createCustomRateLimiter`
- [ ] **Redis-backed rate limiting** — Current rate limiter is in-memory (per-instance). For multi-instance deploys, switch to a Redis-backed store via `hono-rate-limiter`
### Low Priority / Nice-to-Have
- [ ] **API key authentication** — Support `X-API-Key` header as an alternative to Bearer tokens for third-party integrations
- [ ] **Webhook signature verification** — For incoming webhooks (Stripe, etc.), verify HMAC signatures before processing
- [ ] **Geo-blocking** — Extend blocklist to support country-level blocking using the existing geo-IP cache
- [ ] **Security headers audit** — Run [securityheaders.com](https://securityheaders.com) and [Mozilla Observatory](https://observatory.mozilla.org/) checks on production and address any findings

---

> **File:** `packages/ui/docs/seo.md`
# SEO & Discoverability on Polymech
Polymech is built as an SEO-first platform. Every piece of content — whether it's a media post, a CMS page, or a product listing — is automatically discoverable by search engines, social platforms, AI agents, and feed readers. No plugins, no external services, no config files. It's all baked in.
This document covers every SEO-related feature the platform offers.
---
## Table of Contents
- [Multi-Format Content Export](#multi-format-content-export)
- [Discovery Endpoints](#discovery-endpoints)
- [Open Graph & Social Meta](#open-graph--social-meta)
- [JSON-LD Structured Data](#json-ld-structured-data)
- [Server-Side Rendering & Initial State Injection](#server-side-rendering--initial-state-injection)
- [Responsive Image Optimization](#responsive-image-optimization)
- [Internationalization (i18n)](#internationalization-i18n)
- [Embeddable Content](#embeddable-content)
- [API-First Architecture](#api-first-architecture)
- [Developer Experience](#developer-experience)
- [Client-Side SEO & Performance](#client-side-seo--performance)
- [Route Reference](#route-reference)
---
## Multi-Format Content Export
Every content entity on Polymech (posts and pages) can be exported in multiple formats by simply changing the file extension in the URL. No API keys, no special headers — just append the extension.
> **Source:** Page exports → [pages-routes.ts](../server/src/products/serving/pages/pages-routes.ts), Post exports → [db-post-exports.ts](../server/src/products/serving/db/db-post-exports.ts)
### Pages
Pages are rich, widget-based documents built with a visual editor. They export to:
> [pages-rich-html.ts](../server/src/products/serving/pages/pages-rich-html.ts) · [pages-html.ts](../server/src/products/serving/pages/pages-html.ts) · [pages-pdf.ts](../server/src/products/serving/pages/pages-pdf.ts) · [pages-markdown.ts](../server/src/products/serving/pages/pages-markdown.ts) · [pages-email.ts](../server/src/products/serving/pages/pages-email.ts) · [pages-data.ts](../server/src/products/serving/pages/pages-data.ts)
| Format | URL Pattern | Content-Type | Description |
|--------|-------------|--------------|-------------|
| **XHTML** | `/user/:id/pages/:slug.xhtml` | `text/html` | Standalone rich HTML with Tailwind CSS styling, full meta tags, JSON-LD, and responsive layout. Ready to share or archive. |
| **HTML** | `/user/:id/pages/:slug.html` | `text/html` | SPA shell with injected Open Graph metadata for crawlers and social previews. |
| **PDF** | `/user/:id/pages/:slug.pdf` | `application/pdf` | Print-ready PDF export. Great for invoices, reports, or offline sharing. |
| **Markdown** | `/user/:id/pages/:slug.md` | `text/markdown` | Clean Markdown export of the page content. Useful for migration, backups, or feeding to other systems. |
| **JSON** | `/user/:id/pages/:slug.json` | `application/json` | Raw page data including content tree, metadata, and author profile. Perfect for headless CMS integrations. |
| **Email HTML** | `/user/:id/pages/:slug.email.html` | `text/html` | Email-client-optimized HTML with inlined styles and table-based layout. Compatible with Outlook, Gmail, Apple Mail, and others. |
### Posts
Posts are media-centric entries (photos, videos, link cards). They export to:
> [db-post-exports.ts](../server/src/products/serving/db/db-post-exports.ts) · [db-posts.ts](../server/src/products/serving/db/db-posts.ts)
| Format | URL Pattern | Content-Type | Description |
|--------|-------------|--------------|-------------|
| **XHTML** | `/post/:id.xhtml` | `text/html` | Standalone rich HTML with Tailwind CSS, responsive image gallery, OG meta, and JSON-LD structured data. |
| **PDF** | `/post/:id.pdf` | `application/pdf` | PDF export of the post with embedded images. |
| **Markdown** | `/post/:id.md` | `text/markdown` | Markdown with title, description, and linked images. |
| **JSON** | `/post/:id.json` | `application/json` | Full post data with pictures array and author profile. |
### How it works
The export system doesn't use templates or pre-rendered files. Each format is generated server-side on-the-fly from the same canonical content tree, which means:
- Exports are always up-to-date — no build step needed
- All formats share the same data pipeline — update once, export everywhere
- The widget-based content system is format-agnostic — markdown text, photo cards, galleries, tabs, and nested layouts all render correctly in every format
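The on-the-fly dispatch can be pictured as a registry keyed by file extension. This is a simplified sketch under assumptions — renderer names, the registry shape, and the stub outputs are all hypothetical:

```typescript
type Renderer = (slug: string) => string;

// Hypothetical registry: each extension maps to a content type and renderer.
const renderers: Record<string, { contentType: string; render: Renderer }> = {
  xhtml: { contentType: 'text/html', render: (s) => `<html><!-- rich export of ${s} --></html>` },
  md: { contentType: 'text/markdown', render: (s) => `# ${s}` },
  json: { contentType: 'application/json', render: (s) => JSON.stringify({ slug: s }) },
};

function exportPage(path: string) {
  const match = path.match(/^(.+)\.([a-z]+)$/);
  if (!match) throw new Error('no export extension');
  const [, slug, ext] = match;
  const renderer = renderers[ext];
  if (!renderer) throw new Error(`unsupported format: ${ext}`);
  // Every format renders from the same canonical slug/content at request time.
  return { contentType: renderer.contentType, body: renderer.render(slug) };
}
```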
---
## Discovery Endpoints
> **Source:** [content.ts](../server/src/products/serving/content.ts) · [routes.ts](../server/src/products/serving/routes.ts)
### RSS Feed — `/feed.xml`
Standard RSS 2.0 feed of the latest posts and pages. Supports filtering by category via query parameters: → [content.ts](../server/src/products/serving/content.ts) `handleGetFeedXml`
```
/feed.xml?categorySlugs=tutorials&limit=50&sortBy=latest
```
- Image enclosures with optimized proxy URLs
- Per-item author attribution
- Category filtering (by ID or slug, including descendants)
- Configurable sort order (`latest` or `top`)
### Google Merchant Feed — `/products.xml`
A Google Merchant Center compatible XML feed for products. Automatically includes only items with pricing data set through the type system: → [content.ts](../server/src/products/serving/content.ts) `handleGetMerchantFeed`
```xml
<g:id>product-uuid</g:id>
<g:title>Product Name</g:title>
<g:price>29.99 EUR</g:price>
<g:product_type>Category > Subcategory</g:product_type>
<g:image_link>https://service.polymech.info/api/images/cache/optimized.jpg</g:image_link>
```
- Automatically resolves price, currency, and condition from the type system & page variables
- Full category path hierarchy
- Optimized product images via the image proxy
- All items link to their canonical page/post URL
### Sitemap — `/sitemap-en.xml`
Auto-generated XML sitemap of all public, visible pages: → [content.ts](../server/src/products/serving/content.ts) `handleGetSitemap`
```xml
<url>
<loc>https://polymech.info/user/username/pages/my-page</loc>
<lastmod>2025-03-01T12:00:00.000Z</lastmod>
<changefreq>weekly</changefreq>
<priority>0.8</priority>
</url>
```
- Only includes public + visible pages (respects content visibility settings)
- Uses `updated_at` for accurate `<lastmod>` timestamps
- Ready to submit to Google Search Console, Bing Webmaster Tools, etc.
### LLM-Readable Content — `/llms.txt` & `/llms.md`
Following the emerging [llms.txt standard](https://llmstxt.org/), Polymech generates a machine-readable summary of the entire site at `/llms.txt` (and `/llms.md` for Markdown content-type): → [content.ts](../server/src/products/serving/content.ts) `handleGetLLMText`
```markdown
# Polymech
> A full-stack media platform...
## Pages
- [Getting Started](https://polymech.info/user/admin/pages/getting-started): Introduction to...
- [Product Catalog](https://polymech.info/user/admin/pages/catalog): Browse our...
## Posts
- [New Release](https://polymech.info/post/abc123) by admin: Announcing...
## Public API
- Post Details JSON: /api/posts/{id}
- Page XHTML Export: /user/{username}/pages/{slug}.xhtml
- RSS Feed: /feed.xml
- Sitemap: /sitemap-en.xml
```
This endpoint is designed for AI agents (ChatGPT, Claude, Perplexity, etc.) to quickly understand what the site contains and how to access it. It includes:
- Site description from `app-config.json`
- Top 20 public pages with links and descriptions
- Top 20 recent posts with author attribution
- Full public API reference with URL patterns
### OpenAPI / Scalar API Reference — `/api/reference`
Every API endpoint is documented via OpenAPI 3.0 and served through a Scalar interactive UI. This isn't just documentation — it's a live, testable interface for every route in the system.
---
## Open Graph & Social Meta
Every content URL automatically injects proper Open Graph and Twitter Card metadata into the HTML `<head>`. This happens at the server level before the SPA loads, so crawlers and social platforms always get the right preview.
> **Source:** SPA injection → [renderer.ts](../server/src/products/serving/renderer.ts), Posts → [db-post-exports.ts](../server/src/products/serving/db/db-post-exports.ts), Pages XHTML → [pages-rich-html.ts](../server/src/products/serving/pages/pages-rich-html.ts), Pages HTML → [pages-html.ts](../server/src/products/serving/pages/pages-html.ts)
### What gets injected
| Meta Tag | Source |
|----------|--------|
| `og:title` | Page title or post title with author attribution |
| `og:description` | Page description, extracted from content, or auto-generated fallback |
| `og:image` | First photo card, gallery image, or markdown image — resolved through the image optimization proxy |
| `og:type` | `article` for pages/posts, `product` for product pages |
| `og:url` | Canonical URL |
| `twitter:card` | `summary_large_image` (when image is available) |
| `twitter:title` | Same as `og:title` |
| `twitter:image` | Same as `og:image` |
### Image resolution priority
The system walks the content tree to find the best display image:
1. **Photo Card widget** — highest priority, uses picture ID for resolution
2. **Gallery widget** — uses first image from the gallery
3. **Explicit image widget** — direct image URL
4. **Markdown image** — extracted from inline markdown `![](url)`
5. **Page meta thumbnail** — fallback from page metadata
All images are proxied through the image optimization service (see below) to ensure optimal dimensions and format for social previews.
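The priority walk can be sketched as a single traversal that records the best candidate per widget type, then picks by rank. Widget type names and prop shapes below are assumptions for illustration, not the platform's actual schema:

```typescript
interface Widget { type: string; props: Record<string, any>; children?: Widget[] }

// Priority order mirrors the list above (photo card > gallery > image > markdown).
const PRIORITY = ['photo-card', 'gallery', 'image', 'markdown'] as const;

function findMetaImage(root: Widget, fallback?: string): string | undefined {
  const found: Partial<Record<string, string>> = {};
  const walk = (w: Widget) => {
    if (w.type === 'photo-card' && !found['photo-card']) found['photo-card'] = w.props.pictureId;
    if (w.type === 'gallery' && !found['gallery']) found['gallery'] = w.props.images?.[0];
    if (w.type === 'image' && !found['image']) found['image'] = w.props.src;
    if (w.type === 'markdown' && !found['markdown'])
      found['markdown'] = w.props.text?.match(/!\[[^\]]*\]\(([^)]+)\)/)?.[1];
    w.children?.forEach(walk);
  };
  walk(root);
  for (const key of PRIORITY) if (found[key]) return found[key];
  return fallback; // page meta thumbnail
}
```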
### Home Page
The home page (`/`) gets its own meta injection using site config from `app-config.json`, with optional override from the `_site/home` system page. This includes full JSON-LD with `WebSite` and `Organization` schemas, plus a `SearchAction` for sitelinks search box.
---
## JSON-LD Structured Data
Polymech generates context-appropriate JSON-LD structured data for every content type:
### Posts → `SocialMediaPosting`
```json
{
"@context": "https://schema.org",
"@type": "SocialMediaPosting",
"headline": "Post Title",
"image": ["https://...optimized.jpg"],
"datePublished": "2025-03-01T12:00:00Z",
"author": {
"@type": "Person",
"name": "Author Name"
}
}
```
### Pages → `Article`
```json
{
"@context": "https://schema.org",
"@type": "Article",
"headline": "Page Title by Author | PolyMech",
"author": { "@type": "Person", "name": "Author" },
"description": "...",
"image": "https://..."
}
```
### Product Pages → `Product` with `Offer`
When a page belongs to a `products` category, the structured data automatically switches to the `Product` schema with pricing:
```json
{
"@context": "https://schema.org",
"@type": "Product",
"name": "Product Name",
"description": "...",
"image": "https://...",
"category": "Products > Subcategory",
"offers": {
"@type": "Offer",
"price": "29.99",
"priceCurrency": "EUR",
"availability": "https://schema.org/InStock",
"itemCondition": "https://schema.org/NewCondition"
}
}
```
Price, currency, condition, and availability are resolved from the type system / page variables — no manual JSON-LD editing needed.
### Home Page → `WebSite` + `Organization`
```json
{
"@context": "https://schema.org",
"@graph": [
{
"@type": "WebSite",
"name": "PolyMech",
"url": "https://polymech.info",
"potentialAction": {
"@type": "SearchAction",
"target": "https://polymech.info/search?q={search_term_string}",
"query-input": "required name=search_term_string"
}
},
{
"@type": "Organization",
"name": "Polymech",
"url": "https://polymech.info",
"logo": "https://..."
}
]
}
```
---
## Server-Side Rendering & Initial State Injection
Polymech is a React SPA, but it doesn't sacrifice SEO for interactivity. The server pre-fetches data and injects it into the HTML before sending it to the client:
> **Source:** Home/post/embed injection → [index.ts](../server/src/products/serving/index.ts), Embed pages → [content.ts](../server/src/products/serving/content.ts), Profile injection → [db-user.ts](../server/src/products/serving/db/db-user.ts)
- **Home page** (`/`): Feed data and site home page content are fetched in parallel and injected as `window.__INITIAL_STATE__`
- **Post pages** (`/post/:id`): Post metadata is resolved and injected as OG/Twitter/JSON-LD meta tags
- **User pages** (`/user/:id/pages/:slug`): Page content, author profile, category paths, and meta image are all resolved server-side
This means:
- **Google** sees a fully populated `<head>` with title, description, image, and structured data
- **Social platforms** (Facebook, Twitter, LinkedIn, Discord, Slack) render rich link previews immediately
- **The React app** hydrates instantly without a loading spinner — the data is already there
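The injection step itself follows a standard pattern: serialize the pre-fetched data and splice it into the HTML shell before sending. A minimal sketch (the function name is hypothetical; only the `window.__INITIAL_STATE__` global is from the description above):

```typescript
function injectInitialState(html: string, state: Record<string, unknown>): string {
  // Escape "<" so a "</script>" inside the data cannot break out of the tag (XSS guard).
  const json = JSON.stringify(state).replace(/</g, '\\u003c');
  const script = `<script>window.__INITIAL_STATE__=${json}</script>`;
  return html.replace('</head>', `${script}</head>`);
}
```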
---
## Responsive Image Optimization
Every image served through Polymech's SEO routes is automatically optimized:
> **Source:** [db-pictures.ts](../server/src/products/serving/db/db-pictures.ts) · [html-generator.ts](../server/src/products/serving/pages/html-generator.ts)
- **Format negotiation**: Images are served in modern formats (AVIF, WebP) with JPEG fallback
- **Responsive srcsets**: Multiple size variants (320w, 640w, 1024w) are pre-generated and cached on disk
- **Aspect-ratio preservation**: Height is calculated from source metadata to prevent layout shift
- **LCP optimization**: The first image in any export gets `fetchpriority="high"`, subsequent images get `loading="lazy"`
- **Edge caching**: Optimized variants are served from `/api/images/cache/` after first generation
The XHTML exports use `<img>` tags with proper `loading` and `fetchpriority` attributes. The RSS and Merchant feeds use the image proxy URLs for optimized product images at 1200px width.
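The markup generation described above (size variants, LCP prioritization, lazy loading) can be sketched as follows. The cache URL pattern and function name are assumptions for illustration:

```typescript
// Builds an <img> tag with pre-generated size variants; the first image on a
// page is treated as the LCP candidate, subsequent images lazy-load.
function responsiveImg(id: string, alt: string, index: number): string {
  const widths = [320, 640, 1024];
  const srcset = widths
    .map((w) => `/api/images/cache/${id}-${w}.webp ${w}w`)
    .join(', ');
  const loadingAttrs = index === 0 ? 'fetchpriority="high"' : 'loading="lazy"';
  return `<img src="/api/images/cache/${id}-1024.webp" srcset="${srcset}" ` +
    `sizes="(max-width: 640px) 100vw, 640px" alt="${alt}" ${loadingAttrs}>`;
}
```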
---
## Internationalization (i18n)
Polymech's SEO features are fully i18n-aware, all the way down to the widget level.
> **Source:** [pages-i18n.ts](../server/src/products/serving/pages/pages-i18n.ts) · [db-i18n.ts](../server/src/products/serving/db/db-i18n.ts)
### How it works
1. **Widget-level translations** — Each widget in a page (markdown text, photo cards, tabs, etc.) can have its content translated to any language. Translations are stored per `widget_id` + `prop_path` + `target_lang`.
2. **Page meta translations** — Title and description can be translated using a special `__meta__` sentinel in the translations table.
3. **Feed translations** — The home feed widget in XHTML exports translates page titles and descriptions when a `?lang=xx` parameter is provided.
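Applying stored translations keyed by `widget_id` + `prop_path` + `target_lang` amounts to a lookup-with-fallback over the content tree. A sketch under assumptions (the flat widget/props shape here is simplified; the column names come from the description above):

```typescript
interface Translation { widget_id: string; prop_path: string; target_lang: string; value: string }

// Overlays translated values onto widget props; untranslated props keep the source text.
function applyTranslations(
  widgets: { id: string; props: Record<string, string> }[],
  translations: Translation[],
  lang: string
) {
  const byKey = new Map(
    translations
      .filter((t) => t.target_lang === lang)
      .map((t) => [`${t.widget_id}:${t.prop_path}`, t.value])
  );
  return widgets.map((w) => ({
    ...w,
    props: Object.fromEntries(
      Object.entries(w.props).map(([path, value]) => [path, byKey.get(`${w.id}:${path}`) ?? value])
    ),
  }));
}
```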
### Where i18n applies
| Feature | i18n Support |
|---------|-------------|
| XHTML page export | ✅ `?lang=de` translates all widget content, title, and description |
| XHTML rich HTML export | ✅ Feed items within home widgets are translated |
| HTML meta injection | ✅ Translated title/description used for OG tags |
| Markdown export | ✅ Widget content translated before Markdown conversion |
| Email export | ✅ Full widget translation applied before email rendering |
| RSS feed | Pages in feed use translated descriptions |
| Sitemap | URLs point to canonical (untranslated) versions |
| llms.txt | Currently English only (descriptions from source content) |
### Usage
Append `?lang=xx` to any page export URL:
```
/user/admin/pages/about.xhtml?lang=de → German rich HTML
/user/admin/pages/about.md?lang=fr → French Markdown
/user/admin/pages/about.email.html?lang=es → Spanish email
```
Translation management is handled through the platform's built-in glossary system and widget translation API, with AI-assisted translation support.
---
## Embeddable Content
Posts and pages can be embedded in external sites via iframe using the embed routes: → [content.ts](../server/src/products/serving/content.ts)
```
/embed/:postId → Embeddable post viewer
/embed/page/:pageId → Embeddable page viewer
```
Embed pages are served with injected initial state (no API call needed on load) and include proper meta for social previews when the embed URL itself is shared.
---
## API-First Architecture
All SEO endpoints are part of the OpenAPI 3.1 spec and documented at `/api/reference`. This means:

> **Source:** Route definitions → [routes.ts](../server/src/products/serving/routes.ts), Product registration → [index.ts](../server/src/products/serving/index.ts)
- Every route has proper request/response schemas
- Rate limiting and caching headers are standardized
- Third-party tools (Zapier, n8n, custom scripts) can programmatically access all content
- The API is browsable and testable through the interactive Scalar UI
### Relevant data endpoints
| Endpoint | Description |
|----------|-------------|
| `GET /api/posts/:id` | Full post data with pictures, responsive variants, and video job status |
| `GET /api/user-page/:identifier/:slug` | Full page data with content tree, profile, and metadata |
| `GET /api/feed` | Paginated feed with category filtering, sorting, and user-specific likes |
| `GET /api/profiles?ids=...` | Batch user profile lookup |
| `GET /api/media-items?ids=...` | Batch media item lookup with responsive image generation |
| `GET /api/serving/site-info?url=...` | Extract OG/JSON-LD metadata from any external URL → [site-info.ts](../server/src/products/serving/site-info.ts) |
| `GET /api/search?q=...` | Full-text search across posts and pages → [db-search.ts](../server/src/products/serving/db/db-search.ts) |
---
## Route Reference
### Content Exports
| Route | Method | Description |
|-------|--------|-------------|
| `/post/:id.xhtml` | GET | Post as standalone rich HTML |
| `/post/:id.pdf` | GET | Post as PDF |
| `/post/:id.md` | GET | Post as Markdown |
| `/post/:id.json` | GET | Post as JSON |
| `/user/:id/pages/:slug.xhtml` | GET | Page as standalone rich HTML |
| `/user/:id/pages/:slug.html` | GET | Page with OG meta injection |
| `/user/:id/pages/:slug.pdf` | GET | Page as PDF |
| `/user/:id/pages/:slug.md` | GET | Page as Markdown |
| `/user/:id/pages/:slug.json` | GET | Page as JSON |
| `/user/:id/pages/:slug.email.html` | GET | Page as email-optimized HTML |
### Discovery & Feeds
| Route | Method | Description |
|-------|--------|-------------|
| `/feed.xml` | GET | RSS 2.0 feed |
| `/products.xml` | GET | Google Merchant XML feed |
| `/sitemap-en.xml` | GET | XML Sitemap |
| `/llms.txt` | GET | LLM-readable site summary |
| `/llms.md` | GET | LLM summary (Markdown content-type) |
| `/api/reference` | GET | Interactive OpenAPI documentation |
### Meta Injection
| Route | Method | Description |
|-------|--------|-------------|
| `/` | GET | Home page with feed injection + WebSite/Organization JSON-LD |
| `/post/:id` | GET | Post page with OG/Twitter/JSON-LD injection |
| `/user/:id/pages/:slug` | GET | Page with OG/Twitter meta injection |
| `/embed/:id` | GET | Embeddable post with initial state |
| `/embed/page/:id` | GET | Embeddable page with initial state |
---
## Developer Experience
Polymech isn't just SEO-friendly for end users — it's built to be a joy for developers integrating with or extending the platform.
> **Source:** Server entry point → [index.ts](../server/src/products/serving/index.ts) · [routes.ts](../server/src/products/serving/routes.ts)
### OpenAPI 3.1 Specification — `/doc`
The entire API is described by a machine-readable OpenAPI 3.1 spec served at `/doc`. Every route — from feed endpoints to image uploads to page CRUD — is fully typed with Zod schemas that auto-generate the spec. No hand-written YAML, no drift between code and docs.
```
GET /doc → OpenAPI 3.1 JSON spec
```
This spec can be imported directly into Postman, Insomnia, or any OpenAPI-compatible tool for instant client generation.
### Swagger UI — `/ui`
Classic Swagger UI is available at `/ui` for developers who prefer the traditional interactive API explorer. It connects to the same live OpenAPI spec:
- Try-it-out for every endpoint
- Request/response schema visualization
- Bearer token authentication built in
- Auto-generated curl commands
### Scalar API Reference — `/reference` & `/api/reference`
[Scalar](https://scalar.com/) provides a modern, polished alternative to Swagger UI. Polymech serves it at both `/reference` and `/api/reference`:
- **Beautiful, searchable interface** — grouped by tag (Serving, Posts, Media, Storage, etc.)
- **Pre-authenticated** — Bearer token auto-filled from `SCALAR_AUTH_TOKEN` env var
- **Live request testing** — send requests directly from the browser with real responses
- **Code generation** — copy-paste ready snippets in curl, JavaScript, Python, Go, and more
- **Dark mode** — because of course
### Modular Product Architecture
The server is organized as a registry of **Products** — self-contained modules that each own their routes, handlers, workers, and lifecycle:
| Product | Description |
|---------|-------------|
| **Serving** | Content delivery, SEO, feeds, exports, meta injection |
| **Images** | Upload, optimization, proxy, responsive variant generation |
| **Videos** | Upload, transcoding (HLS), thumbnail extraction |
| **Email** | Page-to-email rendering, SMTP delivery, template management |
| **Storage** | Virtual file system with ACL, mounts, and glob queries |
| **OpenAI** | AI chat, image generation, markdown tools |
| **Analytics** | Request tracking, geo-lookup, real-time streaming |
| **Ecommerce** | Cart, checkout, payment integration |
Each product registers its own OpenAPI routes via `app.openapi(route, handler)`, so the spec always reflects exactly what's deployed. Adding a new product automatically exposes it in Swagger, Scalar, and `/doc`.
### Zod-Powered Schema Validation
All request and response schemas are defined with [Zod](https://zod.dev/) using `@hono/zod-openapi`. This gives you:
- **Runtime validation** — invalid requests are rejected with structured error messages before hitting business logic
- **Type safety** — TypeScript types are inferred from schemas, zero manual type definitions
- **Auto-docs** — Zod schemas feed directly into the OpenAPI spec with examples and descriptions
- **Composability** — shared schemas (e.g., pagination, media items) are reused across products
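The core idea — one schema definition driving both runtime validation and the static type — can be shown dependency-free in miniature. This is not Zod itself, just the pattern it implements at scale (the `PaginationSchema` below is a hypothetical example, not a schema from the codebase):

```typescript
// Miniature "schema": parse() validates at runtime, and the TS type is
// inferred from it — no separate hand-written type definition.
const PaginationSchema = {
  parse(input: unknown): { page: number; limit: number } {
    const o = (input ?? {}) as Record<string, unknown>;
    const page = Number(o.page ?? 1);
    const limit = Number(o.limit ?? 20);
    if (!Number.isInteger(page) || page < 1) throw new Error('page must be a positive integer');
    if (!Number.isInteger(limit) || limit < 1 || limit > 100) throw new Error('limit must be 1-100');
    return { page, limit };
  },
};

type Pagination = ReturnType<typeof PaginationSchema.parse>; // inferred, never hand-written
```

With Zod the same schema object additionally feeds the OpenAPI spec, so validation, types, and docs never drift apart.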
### Background Job Queue (PgBoss)
Long-running tasks (video transcoding, email sending, cache warming) are managed through [PgBoss](https://github.com/timgit/pg-boss), a PostgreSQL-backed job queue:
- Jobs are submittable via API: `POST /api/boss/job`
- Job status is queryable: `GET /api/boss/job/:id`
- Jobs can be cancelled, resumed, completed, or failed via dedicated endpoints
- Workers auto-register on startup and process jobs in the background
### Real-Time Log Streaming
System logs and analytics are streamable in real-time via SSE (Server-Sent Events):
```
GET /api/logs/system/stream → Live system logs
GET /api/analytics/stream → Live request analytics
```
This makes debugging in staging or production trivial — just open the stream in a browser tab or curl.
### WebSocket Support
When `ENABLE_WEBSOCKETS=true`, the server initializes a WebSocket manager for real-time features like live feed updates and collaborative editing notifications.
### Security & Middleware Stack
The server applies a layered middleware stack to all routes: → see [security.md](./security.md)
> **Source:** [auth.ts](../server/src/middleware/auth.ts) · [analytics.ts](../server/src/middleware/analytics.ts) · [rateLimiter.ts](../server/src/middleware/rateLimiter.ts) · [blocklist.ts](../server/src/middleware/blocklist.ts)
| Layer | Description |
|-------|-------------|
| **CORS** | Fully permissive for API consumption from any origin |
| **Analytics** | Request tracking with IP resolution and geo-lookup |
| **Auth** | Optional JWT-based authentication via `Authorization: Bearer` header |
| **Admin** | Role-based access control for admin-only endpoints |
| **Compression** | Brotli/gzip compression on all responses |
| **Secure Headers** | CSP, X-Frame-Options (permissive for embeds), CORP disabled for cross-origin media |
| **Rate Limiting** | Configurable per-route rate limiting (disabled by default) |
---
## Client-Side SEO & Performance
The React SPA contributes to SEO through smart hydration, code splitting, and i18n support.
> **Source:** [App.tsx](../src/App.tsx) · [i18n.tsx](../src/i18n.tsx) · [formatDetection.ts](../src/utils/formatDetection.ts)
### HelmetProvider — Dynamic `<head>` Management
The app is wrapped in `react-helmet-async`'s `<HelmetProvider>`, enabling any component to dynamically inject `<title>`, `<meta>`, and `<link>` tags into the document head. This complements the server-side meta injection — the server provides OG/Twitter tags for crawlers, while Helmet handles client-side navigation.
### Route-Based Code Splitting
25+ routes use `React.lazy()` for on-demand loading, keeping the initial bundle small for faster First Contentful Paint:
- **Eagerly loaded** (in initial bundle): `Index`, `Auth`, `Profile`, `UserProfile`, `TagPage`, `SearchResults` — the high-traffic, SEO-critical pages
- **Lazy loaded**: `Post`, `UserPage`, `Wizard`, `AdminPage`, all playground routes, `FileBrowser`, `Tetris`, ecommerce routes
This split ensures that unauthenticated, view-only visitors (including crawlers) get the fastest possible load time.
### Initial State Hydration
The client reads `window.__INITIAL_STATE__` injected by the server (see [Server-Side Rendering](#server-side-rendering--initial-state-injection)) to avoid waterfall API calls on first load. This covers:
- `feed` — Home page feed data
- `siteHomePage` — Home page CMS content
- `profile` — User profile on `/user/:id` pages
### Client-Side i18n — Language Detection & `<T>` Component
> **Source:** [i18n.tsx](../src/i18n.tsx) · JSON translations in [src/i18n/*.json](../src/i18n/)
The `<T>` component wraps translatable strings and resolves them against per-language JSON dictionaries. Language is determined via a cascading priority chain:
1. **URL parameter** (`?lang=de`) — highest priority, enables shareable translated links
2. **Cookie** (`lang=de`) — persists across navigation, set when URL param is used
3. **Browser language** (`navigator.languages`) — automatic fallback
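The cascade above can be sketched as a pure function. In the real app the inputs come from `location.search`, `document.cookie`, and `navigator.languages`; the function name and `'en'` default here are assumptions:

```typescript
// Resolves the display language: URL param > cookie > browser languages > default.
function resolveLanguage(
  urlLang: string | null,
  cookieLang: string | null,
  browserLangs: readonly string[],
  supported: readonly string[]
): string {
  const candidates = [urlLang, cookieLang, ...browserLangs.map((l) => l.split('-')[0])];
  for (const lang of candidates) {
    if (lang && supported.includes(lang)) return lang;
  }
  return 'en';
}
```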
**12 supported languages:** English, Français, Kiswahili, Deutsch, Español, Nederlands, 日本語, 한국어, Português, Русский, Türkçe, 中文
Translation dictionaries are loaded eagerly via Vite's `import.meta.glob` for instant availability. Missing keys auto-collect into localStorage for dictionary building (`downloadTranslations()` exports them as JSON).
### Format Detection
On app boot, `initFormatDetection()` probes browser support for modern image formats (AVIF, WebP). This informs the responsive image system which `<source>` elements to include in `<picture>` tags, ensuring optimal Core Web Vitals scores.
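Once support is probed, selecting which `<source>` entries to emit is a simple ranked filter. A sketch of that selection step only (the probe itself decodes tiny AVIF/WebP data URIs in the browser and isn't shown; the function name is hypothetical):

```typescript
// Given probed support flags, return the MIME types for <picture> sources,
// best format first, with a universal JPEG fallback.
function pickSources(support: { avif: boolean; webp: boolean }): string[] {
  const formats: string[] = [];
  if (support.avif) formats.push('image/avif');
  if (support.webp) formats.push('image/webp');
  formats.push('image/jpeg'); // every browser can decode this
  return formats;
}
```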
---
## Summary
Polymech treats SEO as a core platform feature, not an afterthought. Every content entity is automatically:
- **Discoverable** — via sitemap, RSS, merchant feed, and LLM endpoints
- **Previewable** — with Open Graph, Twitter Cards, and JSON-LD for rich social sharing
- **Exportable** — in 6+ formats (XHTML, HTML, PDF, Markdown, JSON, Email)
- **Translatable** — with widget-level i18n that flows through all export formats
- **Optimized** — with responsive images, lazy loading, LCP prioritization, and edge caching
- **Programmable** — with a full OpenAPI spec and interactive documentation
All of this works out of the box. No configuration needed.
---
## TODO — Pending Improvements
### Critical
- [x] **Canonical URLs** — Add `<link rel="canonical">` to all XHTML/HTML exports and SPA pages to prevent duplicate content penalties across `.xhtml`, `.html`, and SPA routes
- [ ] **robots.txt** — Serve a dynamic `robots.txt` at the root with sitemap references and crawl-delay directives. Currently missing entirely
- [x] **Hreflang tags** — Add `<link rel="alternate" hreflang="...">` tags to multi-language pages so search engines serve the correct language variant per region
- [x] **Meta description per page** — Pages and posts currently inherit a generic description. Wire the post `description` / page `meta.description` field into the `<meta name="description">` tag
### High Priority
- [x] **Structured data expansion** — Add `BreadcrumbList` schema for page navigation paths and `WebSite` schema with `SearchAction` for sitelinks search box
- [-] **Sitemap pagination** — Current sitemap is a single XML file. For large catalogs (1000+ products), split into sitemap index + per-entity sitemaps (`sitemap-posts.xml`, `sitemap-pages.xml`, `sitemap-products.xml`)
- [x] **Last-modified headers** — Set `Last-Modified` and `ETag` on all content routes (posts, pages, feeds) to support conditional requests and improve crawler efficiency
- [ ] **Dynamic OG images** — Auto-generate Open Graph images for pages/posts that don't have a cover image, using title + brand overlay
- [x] **JSON-LD for products** — Add `Product` schema with `offers`, `aggregateRating`, and `brand` to product pages for rich shopping results
### Medium Priority
- [-] **AMP pages** — Generate AMP-compliant HTML exports for posts to enable AMP carousel in Google mobile search
- [ ] **RSS per-user feeds** — Currently only a global `/feed.xml`. Add per-user feeds at `/user/:id/feed.xml` so individual creators can be subscribed to
- [ ] **Merchant feed i18n** — Product feed currently exports in the default language. Generate per-locale feeds (`/products-de.xml`, `/products-fr.xml`) using the i18n translation system
- [ ] **Preconnect / DNS-prefetch hints** — Add `<link rel="preconnect">` for known external domains (CDN, image proxy, analytics) in the SPA shell
- [ ] **llms.txt expansion** — Current `llms.txt` covers posts. Extend to include pages, products, and user profiles for broader AI agent discovery → [content.ts](../server/src/products/serving/content.ts)
- [ ] **WebSub / PubSubHubbub** — Add `<link rel="hub">` to RSS feeds and implement WebSub pings on content publish for real-time feed reader updates
### Low Priority / Nice-to-Have
- [ ] **Core Web Vitals monitoring** — Integrate CrUX API or web-vitals library to track LCP, FID, CLS and surface in analytics dashboard
- [ ] **Schema.org FAQ / HowTo** — Auto-detect FAQ-style and tutorial page content and inject corresponding structured data
- [ ] **Twitter Cards validation** — Add `twitter:site` and `twitter:creator` meta tags from user profiles for proper attribution
- [ ] **Video schema** — Add `VideoObject` JSON-LD for posts containing video media items
- [ ] **IndexNow** — Implement IndexNow API pings to Bing/Yandex on content publish for near-instant indexing
---
### AEO — Answer Engine Optimization
Optimize content to be **cited as direct answers** by AI answer engines (Google AI Overviews, Bing Copilot, Perplexity, ChatGPT).
- [ ] **Answer-first content blocks** — In XHTML/HTML exports, structure pages with concise 40-60 word answer summaries at the top of each section, before the detailed explanation. AI engines pull individual passages — clarity wins
- [ ] **FAQPage schema injection** — Auto-detect Q&A patterns in page widgets (heading + paragraph pairs) and inject `FAQPage` JSON-LD. This is the #1 schema type cited by answer engines
- [ ] **QAPage schema for posts** — When a post title is phrased as a question, wrap the body in `QAPage` structured data with `acceptedAnswer`
- [ ] **Text fragment identifiers** — Add `#:~:text=` fragment links in sitemaps and llms.txt to guide AI engines to the most relevant passage in long-form pages
- [ ] **Featured snippet optimization** — Ensure XHTML exports use `<table>`, `<ol>`, and `<dl>` for comparison content, definitions, and step-by-step guides — these are the formats Google AI Overview pulls from
- [ ] **Concise `<meta name="description">` per section** — For long pages with multiple sections, consider generating per-section meta descriptions via anchor-targeted structured data
### GEO — Generative Engine Optimization
Optimize content to be **referenced and summarized** by generative AI systems (ChatGPT, Gemini, Claude, Perplexity).
- [ ] **Entity authority via JSON-LD** — Add `Organization`, `Person`, and `WebSite` schema with consistent `@id` URIs across all pages. AI models use entity graphs to determine source authority
- [ ] **E-E-A-T signals** — Inject `author` schema with credentials, link to author profile pages, and add `datePublished` / `dateModified` to all content. Generative engines weight experience and freshness
- [ ] **Comparison and "X vs Y" pages** — Create comparison page templates that AI systems frequently pull from when users ask evaluative questions
- [ ] **Fact-dense content markers** — Add `ClaimReview` or `Dataset` schema where applicable. AI models prioritize statistically-backed and verifiable claims
- [ ] **Citation-optimized exports** — In Markdown and JSON exports, include `source_url`, `author`, `published_date`, and `license` fields so AI systems can properly attribute when citing
- [ ] **AI Share of Voice tracking** — Track brand mentions across ChatGPT, Perplexity, and Google AI Overviews to measure GEO effectiveness. Consider building an internal monitoring endpoint or integrating third-party tools
### AI Crawler Management
Control and optimize how AI training bots and inference crawlers interact with the platform.
- [ ] **Dynamic `robots.txt` with AI directives** — Serve a `robots.txt` that explicitly manages AI crawlers: allow `GPTBot`, `ClaudeBot`, `PerplexityBot` on content routes, but disallow on admin/API routes. Consider `Google-Extended` for training opt-in/out
- [ ] **`llms.txt` v2** — Expand current `llms.txt` beyond posts to include: pages with summaries, product catalog overview, author profiles, and a structured capability description. Follow the emerging llms.txt spec with Markdown formatting
- [ ] **`llms-full.txt`** — Generate a comprehensive full-content version at `/llms-full.txt` with all page content flattened into Markdown for deep AI ingestion
- [ ] **AI crawler rate limiting** — Apply custom rate limits for known AI user agents (`GPTBot`, `ClaudeBot`, `CCBot`, `PerplexityBot`) to prevent content scraping from overloading the server while still allowing indexing
- [ ] **AI access analytics** — Track and surface AI bot traffic separately in the analytics dashboard: which bots, how often, which routes, and bandwidth consumed. Use the existing user-agent parsing in [analytics.ts](../server/src/middleware/analytics.ts)
- [ ] **Structured content API for AI** — Create a dedicated `/api/content` endpoint that returns semantically structured content (title, sections, facts, entities) optimized for LLM consumption, distinct from the user-facing API
- [ ] **IETF AI Preferences compliance** — Monitor the IETF "AI Preferences Working Group" (launched 2025) for the standardized machine-readable AI access rules spec. Implement when finalized — will likely supersede or extend `robots.txt` for AI
### AI-Native Content Formats
- [ ] **Markdown-first content pipeline** — Ensure all page widgets can export clean, semantic Markdown. This is the preferred format for LLM ingestion and is used by `llms.txt`, `llms-full.txt`, and AI-friendly feeds
- [ ] **Structured knowledge base export** — Generate a `/knowledge.json` endpoint that exports the entire content catalog as a structured knowledge graph (entities, relationships, facts) for RAG pipelines and enterprise AI integrations
- [ ] **MCP (Model Context Protocol) server** — Expose platform content as an MCP resource so AI assistants (Claude, Cursor, etc.) can directly query posts, pages, and products as context — leveraging the existing REST API as the backend
- [ ] **AI-friendly RSS** — Extend RSS feed items with full content (not just excerpts), structured metadata, and `<media:content>` tags so AI feed consumers get complete context without needing to crawl

packages/ui/docs/storage.md Normal file
# Virtual File System (VFS) & Access Control List (ACL)
The Polymech Storage module implements a robust Virtual File System (VFS) with an integrated Access Control List (ACL) engine. This system presents a unified file interface across different mount points, providing advanced features like bind mounting, strict path sanitization, streaming, compression, and granular permission enforcement.
## 1. VFS Mounts
Mounts define the physical or virtual storage locations exposed by the VFS API.
### Configuration (`config/vfs.json`)
The mount config lists the available storage volumes. It can be overridden per environment (e.g. `vfs-production.json`) or via the `VFS_CONFIG` environment variable.
```json
[
{
"name": "root",
"type": "local",
"path": "./data/root"
},
{
"name": "assets",
"type": "local",
"path": "./data/assets"
}
]
```
### Special Mount Types
* **Named Mounts**: Declared directly in configured JSON files. (e.g. `root`, `assets`).
* **Home Mount (`home`)**: A virtual mount that maps to `{root_mount_path}/{user_uuid}`, giving each user an isolated personal storage area. The directory is created automatically on first access.
* **User Mount (`user:{uuid}`)**: Administrative or cross-user syntax to directly refer to a specific user's home folder.
* **UUID Mount (`{uuid}`)**: Also acts as a direct alias to a specific user's home folder, often used for sharing files anonymously or cross-user.
## 2. Bind Mounts
Bind mounts map a path from one mount into a different mount. This allows data to appear in multiple locations without duplicating files, akin to Linux `mount --bind`.
### Configuration (`config/vfs-binds.json`)
Can also be overridden by environment (e.g. `vfs-binds-production.json`).
```json
[
{
"id": "home-assets",
"source": { "mount": "assets" },
"target": { "mount": "home", "path": "assets" },
"acl": "source"
}
]
```
* **`acl: "source"`**: Permissions are evaluated based on the source mount's ACL contexts.
* **`acl: "inherit"`**: Permissions evaluate against the target mount's context (e.g., if bound to a user's home, the user owns it).
## 3. Access Control Lists (ACL)
The VFS features a granular, path-based ACL system. Every physical mount root or user's home directory maintains a `vfs-settings.json` file defining ownership and permissions.
### Settings Configuration (`vfs-settings.json`)
If a `vfs-settings.json` doesn't exist, it is auto-generated with the default owner when first accessed.
```json
{
"owner": "3bb4cfbf-318b-44d3-a9d3-35680e738421",
"groups": [],
"acl": [
{
"path": "/",
"permissions": ["read", "list"],
"userId": "anonymous"
},
{
"path": "/shared",
"permissions": ["read", "list", "write", "mkdir", "delete", "rename", "copy"],
"userId": "authenticated"
}
]
}
```
### Roles and Identifiers
* **Owner (`owner` field)**: The system owner or user UUID. Has implicit full control. System mounts use a synthetic UUID (`00000000-0000-0000-0000-000000000000`).
* **`userId` values in ACL**:
* `anonymous`: Any unauthenticated public requests.
* `authenticated`: Any user with a valid Supabase token.
* `{uuid}`: A specific Supabase user ID.
* **Permissions Options**: `read`, `list`, `write`, `mkdir`, `delete`, `rename`, `copy`.
### API ACL Management
Users or Admins can manipulate ACL records on their owned mounts via explicit routes:
* `POST /api/vfs/acl/grant/{mount}`
* `POST /api/vfs/acl/revoke/{mount}`
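A grant call might look like the following sketch. The request body shape (`path`, `userId`, `permissions`) is an assumption inferred from the ACL entry format in `vfs-settings.json` above; verify it against the actual route handler before relying on it.

```typescript
// Permission names come from the ACL options listed above.
type Permission = 'read' | 'list' | 'write' | 'mkdir' | 'delete' | 'rename' | 'copy';

interface AclGrant {
  path: string;
  userId: string;          // 'anonymous' | 'authenticated' | a user UUID
  permissions: Permission[];
}

// Hypothetical helper: builds the HTTP request for the grant route.
function buildGrantRequest(mount: string, grant: AclGrant) {
  return {
    url: `/api/vfs/acl/grant/${encodeURIComponent(mount)}`,
    method: 'POST' as const,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(grant),
  };
}

// Example: let authenticated users read and list /shared on the home mount.
const grantReq = buildGrantRequest('home', {
  path: '/shared',
  userId: 'authenticated',
  permissions: ['read', 'list'],
});
```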
## 4. API Endpoints and Interactions
All VFS endpoints follow a `/{action}/{mount}/{subpath}` format.
* **List / Glob Search**: `/api/vfs/ls/{mount}/*`
* Query params: `glob=*` (pattern matching), `includeSize=true` (recursively calculates folder sizes, cached for 5 days).
* Automatically applies `.gitignore` filtering when listing physical local mounts via `isIgnored()`.
* **Read / Serve**:
* `/api/vfs/read/{mount}/*`: Returns raw text.
* `/api/vfs/get/{mount}/*`: Serves files with correct MIME types and supports HTTP **Range requests** (`206 Partial Content`), vital for streaming large media components (video/audio).
* **Write / Upload**:
* `/api/vfs/write/{mount}/*`: Direct HTTP PUT text replacement.
* `/api/vfs/upload/{mount}/*`: Standard `multipart/form-data` uploads. Files are piped directly to disk (creating parent directories recursively) so large uploads are never loaded entirely into memory.
* **ZIP Compress**: `/api/vfs/compress/{mount}/*`
* Dynamically streams a ZIP of the targeted folder structure. Rate-limited to 3 requests per minute; anonymous requests are rejected.
* **Deep Search Engine**: `/api/vfs/search/{mount}/*`
* Implements deep recursive search.
* Can force-synchronize to Supabase full-text search systems via Admin hooks (`/api/vfs/admin/index`).
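Range support on the `get` route can be exercised from a client like this. The chunking helper below is illustrative (its name and the chunk size are assumptions); only the endpoint shape and the `206 Partial Content` behavior come from the doc above.

```typescript
// Compute the Range header for fetching byte chunk `chunkIndex` of a file.
// HTTP byte ranges are inclusive on both ends.
function rangeHeaderFor(chunkIndex: number, chunkSize: number): string {
  const start = chunkIndex * chunkSize;
  const end = start + chunkSize - 1;
  return `bytes=${start}-${end}`;
}

// A client would then issue (sketch):
//   fetch(`/api/vfs/get/${mount}/${path}`, {
//     headers: { Range: rangeHeaderFor(0, 1024 * 1024) },
//   })
// and expect a 206 response with a Content-Range header describing the slice.
```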
## 5. Security & Path Traversal Protection
The primary VFS router enforces extreme security to prevent Directory Traversal or Payload injection.
* **Bans System**:
* Detecting null-byte injection (`%00`), URL-encoded bypasses (`..%5c`), double-encoded bypasses (`%252e`), or trailing malicious slashes triggers an automatic IP ban and logging routine with a plain-text `403` response, unless the request originates from localhost for testing.
* **Path Sanitization**: `sanitizeWritePath()` strips malformed characters (accidental square brackets, extra dots, invalid symbols) from created directory and file names.

# Support Chat
Embeddable AI support chat built on `ChatPanel` with a fixed `support` preset (no header, no sidebar). System prompts are assembled from [`defaults.ts`](../src/modules/ai/defaults.ts), injecting the user's UI language and named context blocks automatically.
---
## Route
`/support-chat` — public, no auth required. Registered in `App.tsx` via the `SupportChat` page.
---
## Architecture
```
SupportChat (page/embed)
└── ChatPanel preset="support"
├── useChatEngine('support') ← isolated localStorage namespace
├── buildSupportPrompt(context) ← from defaults.ts
│ ├── BASE_SUPPORT_PROMPT
│ ├── getCurrentLang() ← from i18n.tsx
│ └── SUPPORT_CONTEXTS[key...] ← named blocks
└── seed useEffect (once on mount)
sets provider / model / tools / systemPrompt
```
### localStorage isolation
Each `ChatPanel` preset gets its own storage namespace via `useChatEngine(preset)`:
| Preset | Key prefix |
|---|---|
| `developer` / `standard` / `simple` | `developer-settings-*` etc. |
| `support` | `support-settings-*` |
This prevents PlaygroundChat from polluting SupportChat settings (and vice versa).
---
## System Prompt Assembly
**File:** `src/modules/ai/defaults.ts`
```ts
buildSupportPrompt(contexts?)
// → BASE_SUPPORT_PROMPT
// + "Always reply in Français (language code: fr)." ← auto from getCurrentLang()
// + SUPPORT_CONTEXTS['shipping-rates'] ← if in contexts
```
### Default contexts
```ts
export const DEFAULT_CONTEXTS: SupportContextKey[] = ['shipping-rates'];
```
Applied when no `context` prop is passed.
### Adding a new context block
```ts
// src/modules/ai/defaults.ts
export const SUPPORT_CONTEXTS: Record<string, string> = {
'shipping-rates': `...`,
'returns': `## Returns Policy\n...`, // ← add here
};
```
The key becomes the `SupportContextKey` union type automatically.
---
## Embedding
### Standalone page (default)
```tsx
<SupportChat />
// Uses: openrouter / gpt-4o, search tools ON, DEFAULT_CONTEXTS
```
### With custom contexts
```tsx
<SupportChat context={['shipping-rates', 'returns']} />
```
### With different model / provider
```tsx
<SupportChat
provider="openai"
model="gpt-4o-mini"
context={['shipping-rates']}
/>
```
### With live runtime context (e.g. current order)
Use `ChatPanel` directly with `getContext` — called before every send:
```tsx
<ChatPanel
preset="support"
context={['shipping-rates']}
getContext={() => `User is viewing order #${orderId}`}
/>
```
### All props
#### `SupportChat`
| Prop | Type | Default | Description |
|---|---|---|---|
| `context` | `SupportContextKey[]` | `DEFAULT_CONTEXTS` | Context blocks appended to base prompt |
| `provider` | `string` | `'openrouter'` | AI provider |
| `model` | `string` | `'openai/gpt-4o'` | AI model |
| `searchTools` | `boolean` | `true` | Site search tools |
| `pageTools` | `boolean` | `false` | Page/content tools |
| `imageTools` | `boolean` | `false` | Image generation tool |
| `vfsTools` | `boolean` | `false` | VFS file tools |
#### `ChatPanel` (additional props relevant to support embeds)
| Prop | Type | Description |
|---|---|---|
| `context` | `SupportContextKey[]` | Same as above, passed through |
| `initialSystemPrompt` | `string` | Overrides `buildSupportPrompt` entirely |
| `getContext` | `() => string \| null` | Dynamic context injected before each send |
| `extraTools` | `() => any[]` | Additional tool definitions |
---
## Navigation
A `MessageSquare` icon button is always visible in `TopNavigation` (public, no auth guard), highlighting when the route is active.
---
## Files
| File | Role |
|---|---|
| `src/pages/SupportChat.tsx` | Page component + embed wrapper |
| `src/modules/ai/defaults.ts` | Prompt registry (`BASE_SUPPORT_PROMPT`, `SUPPORT_CONTEXTS`, `buildSupportPrompt`) |
| `src/modules/ai/ChatPanel.tsx` | Panel with `support` preset + `initial*` seed props + `context` prop |
| `src/modules/ai/useChatEngine.ts` | Engine hook, namespaced via `namespace` param |
| `src/App.tsx` | `/support-chat` route |
| `src/components/TopNavigation.tsx` | Nav icon button |

packages/ui/docs/tiles.md Normal file
# GridSearch Polygon Rendering & Server Freeze Mitigation
## The Problem
When a GridSearch contains over 5,000 polygons (e.g., all towns in Spain), querying `better-sqlite3`, parsing GeoPackage WKB into JS objects, collecting them into an array, and then running `JSON.stringify()` on a 100MB+ object tree freezes the V8 engine: the garbage collector blocks the main event loop while traversing the massive object graph.
Even with asynchronous yields (e.g., `setTimeout(resolve, 0)`), constructing massive JavaScript arrays of multi-polygons locks up the Node.js thread and causes other API requests to time out.
## Architectural Options
### 1. Raw String Streaming (Lowest Effort, High Impact)
Skip building the 100MB+ V8 object tree entirely.
- **How:** Query SQLite geometries and stream the raw JSON strings directly into the HTTP response (e.g. `c.streamText()`).
- **Pros:** Peak memory drops from 500MB+ to ~1MB. The V8 engine never builds the massive object tree, preventing the GC freeze.
- **Cons:** The browser still has to download and parse a massive JSON file at once, which may freeze the frontend map rendering momentarily.
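A minimal sketch of this option, assuming the rows arrive as raw GeoJSON feature strings straight out of `better-sqlite3` (an iterator, not a materialized array). A Hono handler would pipe these chunks into its streaming response helper; the generator name is illustrative.

```typescript
// Build a FeatureCollection as a stream of small strings. V8 never holds the
// full 100MB+ object tree: only one feature string is in flight at a time.
function* featureCollectionChunks(rows: Iterable<string>): Generator<string> {
  yield '{"type":"FeatureCollection","features":[';
  let first = true;
  for (const feature of rows) {
    if (!first) yield ',';
    yield feature; // raw JSON string from SQLite, never parsed on the server
    first = false;
  }
  yield ']}';
}
```

Concatenating the chunks yields valid GeoJSON, so the browser-side contract is unchanged.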
### 2. Pre-generate Static `.geojson` Files (Best Performance)
Instead of asking the database for polygons *every time* a map requests them, generate the full polygons file once.
- **How:** When `gs.enumerate()` creates a search, it also writes a `{basename}-polygons.json` file to the `searches/` directory. The UI fetches this static file directly.
- **Pros:** Effectively zero-cost for the Node backend at request time. NGINX or Hono streams the file instantly without touching the event loop.
- **Cons:** Increases disk usage. The initial file generation still freezes the server briefly (unless offloaded to a background task like PgBoss).
### 3. Native Worker Threads (`worker_threads`)
Offload the synchronous SQLite querying to a separate Node.js thread.
- **How:** Spin up a `piscina` worker pool. The worker thread opens a separate `better-sqlite3` connection, does the parsing, stringifies it, and passes the buffer back.
- **Pros:** Main event loop remains 100% responsive.
- **Cons:** Significant architectural overhead. Transferring 100MB strings via `postMessage` still incurs a minor memory/serialization hit.
### 4. Vector Tiles / .mvt & Lazy Loading (The "Proper" GIS Way)
Maplibre GL natively supports loading data in Vector Tiles (zoom + X + Y bounding boxes) rather than pulling all 5,000 geometries at once.
- **How:** The UI requests data via `/api/locations/gridsearch/tiles/{z}/{x}/{y}`. The backend dynamically queries `better-sqlite3` strictly for polygons intersecting that tile envelope.
- **Pros:** Infinitely scalable. 5 million polygons won't freeze the server or the browser.
- **Cons:** Highest effort. Requires implementing an MVT generation proxy (using `geojson-vt` or PostGIS equivalents) and pagination in the client.
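The tile envelope math for this option is standard Web Mercator XYZ arithmetic: given `z/x/y`, compute the lon/lat bounding box to feed into the SQLite intersection query. A sketch (helper name is illustrative):

```typescript
// Convert an XYZ tile address to a WGS84 bounding box (Web Mercator scheme).
function tileToBBox(z: number, x: number, y: number) {
  const n = 2 ** z; // tiles per axis at this zoom
  const lonMin = (x / n) * 360 - 180;
  const lonMax = ((x + 1) / n) * 360 - 180;
  // Inverse Mercator: tile row -> latitude of the row's top edge
  const latFromY = (row: number) =>
    (Math.atan(Math.sinh(Math.PI * (1 - (2 * row) / n))) * 180) / Math.PI;
  return { lonMin, latMin: latFromY(y + 1), lonMax, latMax: latFromY(y) };
}
```

The backend would then select only polygons whose envelope intersects this box before MVT encoding.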
## Current Mitigation
Currently, a temporary fix uses `await new Promise(r => setTimeout(r, 0))` in the `/api/locations/gridsearch/polygons` endpoint every 50 iterations to yield to the event loop. However, moving towards **Option 2** (Static generation) or **Option 4** (Vector tiles) is strongly recommended for production stability.
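The mitigation amounts to the following pattern (the helper name is illustrative; the real endpoint inlines this loop):

```typescript
// Yield to the event loop every `every` iterations so other requests can run.
// This bounds latency spikes but does not reduce peak memory: see Options 2/4.
async function processWithYields<T>(
  items: T[],
  fn: (item: T) => void,
  every = 50,
): Promise<void> {
  for (let i = 0; i < items.length; i++) {
    fn(items[i]);
    if ((i + 1) % every === 0) {
      await new Promise((resolve) => setTimeout(resolve, 0));
    }
  }
}
```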

<div class="relative flex flex-col justify-end overflow-hidden bg-gray-100 dark:bg-gray-900 bg-cover bg-center"
style="min-height: 500px; max-height: 80vh; background-image: url('https://service.polymech.info/api/images/render?url=https%3A%2F%2Fytoadlpbdguriiccjnip.supabase.co%2Fstorage%2Fv1%2Fobject%2Fpublic%2Fpictures%2F3bb4cfbf-318b-44d3-a9d3-35680e738421%2F1772547102918-version.png&width=1280&format=avif');">
<video class="absolute inset-0 w-full h-full object-cover"
src="http://localhost:3333/api/videos/jobs/1cb36a43-e781-40da-bee8-b0e82b156bf4/hls/playlist.m3u8"
poster="https://service.polymech.info/api/images/render?url=https%3A%2F%2Fytoadlpbdguriiccjnip.supabase.co%2Fstorage%2Fv1%2Fobject%2Fpublic%2Fpictures%2F3bb4cfbf-318b-44d3-a9d3-35680e738421%2F1772547102918-version.png&width=1280&format=avif"
preload="metadata" autoplay muted loop playsinline aria-hidden="true"></video>
<div class="absolute inset-0 bg-gradient-to-t from-black/70 via-black/30 to-transparent"></div>
<!-- Pause/Play toggle (WCAG 2.2.2) -->
<button
onclick="var v=this.closest('div').querySelector('video'); if(v.paused){v.play();this.textContent='⏸'}else{v.pause();this.textContent='▶'}"
class="absolute top-3 right-3 z-20 bg-black/40 hover:bg-black/60 text-white rounded-full w-9 h-9 flex items-center justify-center text-sm transition-colors"
aria-label="Pause or play background video" title="Pause/Play video">⏸</button>
<div class="relative z-10 w-full pt-16 sm:pt-20 pb-4 sm:pb-6">
<div class="bg-black/10 dark:bg-black/60 px-5 sm:px-8 lg:px-10 py-6 sm:py-8">
<h1 class="text-2xl sm:text-3xl lg:text-4xl font-bold text-white drop-shadow-lg max-w-4xl">
Polymer Processing Equipment &amp; Engineering Solutions
</h1>
<p class="mt-2 sm:mt-3 text-sm sm:text-base lg:text-lg text-gray-200 drop-shadow max-w-3xl">
Extrusion, compounding &amp; recycling systems for education, labs, and small-scale production — backed
by our knowledgebase, custom software, and manufacturing services.
</p>
<div class="mt-4 sm:mt-5 flex flex-col sm:flex-row sm:space-x-4 space-y-3 sm:space-y-0">
<a href="/categories/products"
class="inline-block text-center bg-white/20 hover:bg-white/30 backdrop-blur-sm border border-white/30 text-white font-semibold py-2.5 sm:py-2 px-5 rounded-md transition-colors">
Explore Products
</a>
<a href="/user/cgo/pages/about-contact"
class="inline-block text-center bg-white/15 hover:bg-white/25 backdrop-blur-sm border border-white/30 text-white font-semibold py-2.5 sm:py-2 px-5 rounded-md transition-colors">
Contact Us
</a>
</div>
</div>
</div>
</div>
<!-- Reduced-motion: pause video for users who prefer less motion (WCAG) -->
<style>
@media (prefers-reduced-motion: reduce) {
video {
display: none !important;
}
}
</style>

File diff suppressed because it is too large.


@@ -5,7 +5,8 @@
"type": "module",
"types": "dist/index.d.ts",
"exports": {
-    ".": "./dist/index.js"
+    ".": "./dist/index.js",
+    "./config": "./dist/config/index.js"
},
"scripts": {
"build": "tsc",


@ -0,0 +1,332 @@
import { z } from 'zod/v4'
import { ZodMetaMap } from '../map.js';
import { extendZodWithOpenApi } from '@hono/zod-openapi';
extendZodWithOpenApi(z);
export const CompetitorSchema = z.object({
place_id: z.string(),
title: z.string(),
description: z.string().optional().nullable(),
address: z.string().optional().nullable(),
gps_coordinates: z.object({
latitude: z.number(),
longitude: z.number(),
}).optional().nullable(),
phone: z.string().optional().nullable(),
website: z.string().optional().nullable(),
operating_hours: z.record(z.string(), z.any()).optional().nullable(),
thumbnail: z.string().optional().nullable(),
types: z.array(z.string()).optional().nullable(),
continent: z.string().optional().nullable(),
country: z.string().optional().nullable(),
city: z.string().optional().nullable(),
updated_at: z.string().optional().nullable(),
raw_data: z.record(z.string(), z.any()).optional().nullable()
})
// Shared schemas
export const GpsCoordinatesSchema = z.object({
latitude: z.number(),
longitude: z.number()
})
export const OperatingHoursSchema = z.object({
sunday: z.string(),
monday: z.string(),
tuesday: z.string(),
wednesday: z.string(),
thursday: z.string(),
friday: z.string(),
saturday: z.string()
})
// Extension schemas
export const AccessibilityExtensionSchema = z.object({
accessibility: z.array(z.string())
})
export const CrowdExtensionSchema = z.object({
crowd: z.array(z.string())
})
export const ExtensionsSchema = z.array(
z.union([AccessibilityExtensionSchema, CrowdExtensionSchema])
)
// Locality info schemas
export const AdministrativeAreaFullSchema = z.object({
name: z.string(),
description: z.string(),
isoName: z.string(),
order: z.number(),
adminLevel: z.number(),
isoCode: z.string(),
wikidataId: z.string(),
geonameId: z.number()
})
export const AdministrativeAreaMinimalSchema = z.object({
name: z.string(),
order: z.number(),
adminLevel: z.number()
})
export const AdministrativeAreaWithWikidataSchema = z.object({
name: z.string(),
description: z.string(),
order: z.number(),
adminLevel: z.number(),
wikidataId: z.string()
})
export const AdministrativeAreaWithGeonameSchema = z.object({
name: z.string(),
description: z.string(),
order: z.number(),
adminLevel: z.number(),
wikidataId: z.string(),
geonameId: z.number()
})
export const InformativeAreaFullSchema = z.object({
name: z.string(),
description: z.string(),
isoName: z.string(),
order: z.number(),
isoCode: z.string(),
wikidataId: z.string(),
geonameId: z.number()
})
export const InformativeAreaBasicSchema = z.object({
name: z.string(),
description: z.string(),
order: z.number()
})
export const InformativeAreaWithGeonameSchema = z.object({
name: z.string(),
description: z.string(),
order: z.number(),
wikidataId: z.string(),
geonameId: z.number()
})
export const InformativeAreaWithWikidataSchema = z.object({
name: z.string(),
description: z.string(),
order: z.number(),
wikidataId: z.string()
})
export const InformativeAreaMinimalSchema = z.object({
name: z.string(),
order: z.number()
})
export const LocalityInfoSchema = z.object({
administrative: z.array(
z.union([
AdministrativeAreaFullSchema,
AdministrativeAreaMinimalSchema,
AdministrativeAreaWithWikidataSchema,
AdministrativeAreaWithGeonameSchema
])
),
informative: z.array(
z.union([
InformativeAreaFullSchema,
InformativeAreaBasicSchema,
InformativeAreaWithGeonameSchema,
InformativeAreaWithWikidataSchema,
InformativeAreaMinimalSchema
])
)
})
// Geo schema
export const GeoSchema = z.object({
latitude: z.number(),
longitude: z.number(),
localityLanguageRequested: z.string(),
continent: z.string(),
continentCode: z.string(),
countryName: z.string(),
countryCode: z.string(),
principalSubdivision: z.string(),
principalSubdivisionCode: z.string(),
city: z.string(),
locality: z.string(),
postcode: z.string(),
plusCode: z.string(),
localityInfo: LocalityInfoSchema
})
// Google Media schemas
export const SearchMetadataSchema = z.object({
id: z.string(),
status: z.string(),
json_endpoint: z.string(),
created_at: z.string(),
processed_at: z.string(),
google_maps_photos_url: z.string(),
raw_html_file: z.string(),
prettify_html_file: z.string(),
total_time_taken: z.number()
})
export const SearchParametersSchema = z.object({
engine: z.string(),
data_id: z.string(),
hl: z.string()
})
export const CategorySchema = z.object({
title: z.string(),
id: z.string()
})
export const PhotoSchema = z.object({
thumbnail: z.string(),
image: z.string(),
photo_meta_serpapi_link: z.string()
})
export const SerpapiPaginationSchema = z.object({
next: z.string(),
next_page_token: z.string()
})
export const GoogleMediaSchema = z.object({
search_metadata: SearchMetadataSchema,
search_parameters: SearchParametersSchema,
categories: z.array(CategorySchema),
photos: z.array(PhotoSchema),
serpapi_pagination: SerpapiPaginationSchema
})
// Raw data schema
export const LocationSchema = z.object({
position: z.number(),
title: z.string(),
description: z.string(),
place_id: z.string(),
data_id: z.string(),
data_cid: z.string(),
reviews_link: z.string(),
photos_link: z.string(),
gps_coordinates: GpsCoordinatesSchema,
place_id_search: z.string(),
provider_id: z.string(),
type: z.string(),
types: z.array(z.string()),
type_id: z.string(),
type_ids: z.array(z.string()),
address: z.string(),
open_state: z.string(),
hours: z.string(),
operating_hours: OperatingHoursSchema,
phone: z.string(),
website: z.string(),
extensions: ExtensionsSchema,
thumbnail: z.string(),
serpapi_thumbnail: z.string(),
page: z.number(),
geo: GeoSchema,
meta: z.record(z.string(), z.any()),
google_media: GoogleMediaSchema
}).partial()
// Main CompetitorSchemaFull
export const CompetitorSchemaFull = z.object({
place_id: z.string(),
title: z.string(),
address: z.string().optional().nullable(),
gps_coordinates: GpsCoordinatesSchema.optional().nullable(),
phone: z.string().optional().nullable(),
website: z.string().optional().nullable(),
operating_hours: OperatingHoursSchema.optional().nullable(),
thumbnail: z.string().optional().nullable(),
types: z.array(z.string()).optional().nullable(),
raw_data: LocationSchema.optional().nullable(),
continent: z.string().optional().nullable(),
country: z.string().optional().nullable(),
city: z.string().optional().nullable(),
updated_at: z.string().optional().nullable()
})
export const CompetitorResponseSchema = z.object({
message: z.string(),
data: z.array(CompetitorSchemaFull).optional(),
})
export const CompetitorDetailResponseSchema = z.object({
message: z.string(),
data: CompetitorSchemaFull.optional(),
})
export type Competitor = z.infer<typeof CompetitorSchema>;
export type CompetitorResponse = z.infer<typeof CompetitorResponseSchema>;
export type CompetitorDetailResponse = z.infer<typeof CompetitorDetailResponseSchema>;
export type CompetitorFull = z.infer<typeof CompetitorSchemaFull>;
export type OptionsSchemaMeta = Record<string, unknown>
let schemaMap: ZodMetaMap<OptionsSchemaMeta>;
export const CompetitorRequestSchemaMap = () => {
schemaMap = ZodMetaMap.create<OptionsSchemaMeta>()
schemaMap.add(
'location',
z.string().default('dresden, germany') as any
, { 'ui:group': 'Search Parameters' })
.add(
'query',
z.string().default('plastichub') as any
, { 'ui:group': 'Search Parameters' })
.add(
'filterCity',
z.string().optional() as any
, { 'ui:group': 'Filters' })
.add(
'filterContinent',
z.string().optional() as any
, { 'ui:group': 'Filters' })
.add(
'filterCountry',
z.string().optional() as any
, { 'ui:group': 'Filters' })
.add(
'filterType',
z.string().optional() as any
, { 'ui:group': 'Filters' })
.add(
'excludedTypes',
z.array(z.string()).optional() as any
, { 'ui:group': 'Filters' })
.add(
'concurrency',
z.string().optional().default('5') as any
, { 'ui:group': 'Settings' })
.add(
'refresh',
z.boolean().optional().default(false) as any
, { 'ui:group': 'Settings' })
.add(
'limit',
z.string().optional().default('250') as any
, { 'ui:group': 'Settings' })
return schemaMap;
}
export const CompetitorRequestSchema = CompetitorRequestSchemaMap().root() as any;
export const CompetitorUISchema = CompetitorRequestSchemaMap().getUISchema();
export type LocationType = z.infer<typeof LocationSchema>;
export type CompetitorRequest = z.infer<typeof CompetitorRequestSchema>;

Some files were not shown because too many files have changed in this diff.