supabase: commit f8570411d1 (parent f000c5fea3)
@@ -1,14 +1,14 @@
-# Server Infrastructure Migration: Ditching Supabase (Phase 2 execution)
+# Server Infrastructure Migration: Ditching Supabase (Execution Status)

 > **Status**: Active Migration / Implementation
 > **Target Architecture**: Self-hosted Postgres, Prisma ORM, Zitadel (Auth/Tenancy), S3-compatible Storage (MinIO or VFS)
 > **Prior Reference**: `ditch-supabase.md` (initial research)

-This document outlines the concrete technical next steps to complete the migration of the server and backend infrastructure away from Supabase, reflecting the architectural decisions made during initial implementation.
+This document tracks concrete migration status and the remaining work to remove Supabase dependencies from the server infrastructure.

 ---

-## 1. Database & ORM Strategy: Prisma
+## 1) Database & ORM: Prisma

 While Drizzle was initially a strong candidate, the current CLI implementation (`pm-cli-cms`) validated **Prisma** (with `@prisma/adapter-pg` and `pg.Pool`) as our query layer of choice.
@@ -17,28 +17,28 @@ While Drizzle was initially a strong candidate, the current CLI implementation (`
 - The `schema.prisma` file serves as the definitive source of truth for the database layout.
 - The CLI testbed proved we can dynamically spin up connections to different tenant databases.

-### Next Steps for Prisma:
+### Status / Next steps:
 1. **Schema Centralization**: Establish a core `schema.prisma` representing the standard tenant database.
 2. **Server-Side Integration**: Replace the ~27 server files currently making PostgREST `supabase.from()` calls with Prisma queries (`prisma.posts.findMany()`, etc.).
 3. **Connection Pooling**: Implement a caching/pooling mechanism in the server backend that holds an active Prisma client per tenant (Zitadel maps tenancy per database).

 ---
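The pooling step (item 3 above) can be sketched as a tenant-keyed client cache. This is a sketch under assumptions: `loadTenantDatabaseUrl` and the `ClientLike` shape are hypothetical stand-ins for the real registry lookup and for `PrismaClient` wired through `@prisma/adapter-pg`:

```typescript
// Sketch only: one cached ORM client per tenant, so handlers reuse live
// connections. `ClientLike` stands in for PrismaClient + adapter wiring.
type ClientLike = { databaseUrl: string };

const clientCache = new Map<string, ClientLike>();

// Hypothetical lookup of a tenant's DATABASE_URL (e.g. from a registry).
function loadTenantDatabaseUrl(tenantId: string): string {
  return `postgres://app@db.internal:5432/tenant_${tenantId}`;
}

function getTenantClient(
  tenantId: string,
  factory: (url: string) => ClientLike = (url) => ({ databaseUrl: url }),
): ClientLike {
  let client = clientCache.get(tenantId);
  if (!client) {
    client = factory(loadTenantDatabaseUrl(tenantId));
    clientCache.set(tenantId, client);
  }
  return client;
}
```

A production version would also evict idle tenants and cap the number of concurrent pools.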

-## 2. Authentication & Tenancy: Zitadel
+## 2) Authentication & Tenancy: Zitadel

 The architecture has evolved beyond simple email/password auth. We use **Zitadel** as our central Identity Provider (IdP); it is explicitly designed to handle B2B tenancy.

 **The Multi-Tenant Database Model**:
 Instead of relying on Supabase Row-Level Security (RLS) to isolate tenant data inside a single large database, we opt for **database-per-tenant** (or schema-per-tenant) isolation mapped by Zitadel.

-### Next Steps for Zitadel Auth:
+### Status / Next steps:
 1. **API Middleware**: Replace the Supabase JWT verifiers in Hono with Zitadel token verification.
 2. **Instance Provisioning**: When Zitadel registers a new application/tenant, our backend needs a webhook/event trigger that fires the `pm-cli-cms site-init` (or equivalent) workflow to provision an isolated Postgres database.
 3. **Database Routing**: The server backend must extract the tenant ID from the Zitadel JWT on incoming requests and fetch/instantiate the Prisma client bound to that tenant's `DATABASE_URL`.

 ---
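Step 3 above hinges on extracting the tenant from the token. A minimal sketch, assuming Zitadel's organization claim (`urn:zitadel:iam:org:id`; verify the exact claim name for your setup). Signature verification is deliberately elided here: real middleware must verify against Zitadel's JWKS before trusting the payload.

```typescript
// Decode a JWT payload without verifying it (verification against
// Zitadel's JWKS must happen first in real middleware).
function decodeJwtPayload(token: string): Record<string, unknown> {
  const part = token.split(".")[1];
  if (!part) throw new Error("malformed JWT");
  return JSON.parse(Buffer.from(part, "base64url").toString("utf8"));
}

// Assumed claim name; Zitadel exposes the organization id as a custom claim.
const ORG_CLAIM = "urn:zitadel:iam:org:id";

function tenantIdFromToken(token: string): string {
  const value = decodeJwtPayload(token)[ORG_CLAIM];
  if (typeof value !== "string" || value.length === 0) {
    throw new Error("token carries no tenant claim");
  }
  return value;
}
```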

-## 3. Storage Migration Strategy (VFS & Mounts)
+## 3) Storage Migration Strategy (VFS & Mounts)

 We are fully moving away from Supabase Storage (buckets). The replacement strategy relies on our mature, battle-tested **Virtual File System (VFS)** located at `server/src/products/storage/api/vfs.ts`.

@@ -46,18 +46,22 @@ We are fully moving away from Supabase Storage (buckets). The replacement strate
 - **Native File Abstraction**: The VFS supports flexible storage mounts. We can swap the underlying physical backend for an S3-compatible provider (such as MinIO) or local disk without rewriting any of our API handlers.
 - **Built-in ACL Integration**: The VFS natively integrates with `@polymech/acl` for granular permission checks (read/write/delete) against file paths, so we no longer need Supabase bucket policies.

-### Next Steps for Storage (VFS Migration):
-1. **CLI `storage-migrate` Tooling**:
-   - Write an automation script that connects to Supabase Storage, lists all buckets, and sequentially streams files down to disk or directly to the target environment.
-   - Inject these files into the physical location modeled by the new VFS `vfs_mounts`.
-   - *Consideration*: Preserve VFS index paths (`vfs_index`) during transfer so database records representing logical files correspond perfectly to the newly migrated physical files.
-2. **Server-Side Cut-Over**:
-   - Purge `@supabase/storage-js`.
-   - Implement the generic `S3Client` mount adapter directly inside the VFS if cloud flexibility is desired, or bind directly to a local high-performance hardware mount if fully self-hosting.
+### Current implemented tooling:
+1. **`pm-cli-cms backup-store`**
+   - Exports Supabase bucket files directly into the local VFS images root:
+     `pm-cli-cms backup-store --source ./server --target ./server/storage/images`
+   - Preserves `cache/...` paths and emits picture UUID aliases under `storage/<user_id>/<picture_id>.<ext>`.
+2. **`pm-cli-cms migrate-images-vfs`**
+   - Rewrites `pictures.image_url` to VFS URLs:
+     `<IMAGE_VFS_URL>/api/vfs/get/<IMAGE_VFS_STORE>/<user_id>/<picture_id>[.<ext>]`
+   - Supports direct Supabase URLs and nested `/api/images/render?url=...` wrappers.
+   - Resolves missing local files via hash matching and optional hydration from old URL content.
+3. **Server-side image compatibility**
+   - `server/src/products/images/index.ts` contains compatibility handlers for legacy thumbnail/render URL patterns during migration.

 ---
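The `migrate-images-vfs` rewrite described above can be illustrated with a simplified helper (not the CLI's actual code) that maps a source URL onto the canonical VFS shape:

```typescript
// Illustration of the rewrite performed by `migrate-images-vfs`: build the
// canonical URL <IMAGE_VFS_URL>/api/vfs/get/<IMAGE_VFS_STORE>/<user_id>/<picture_id>[.<ext>],
// taking the extension (if any) from the old Supabase URL.
function toVfsUrl(opts: {
  vfsBaseUrl: string; // IMAGE_VFS_URL
  store: string; // IMAGE_VFS_STORE
  userId: string;
  pictureId: string;
  sourceUrl: string; // old Supabase URL, used only for the extension
}): string {
  const match = /\.([a-zA-Z0-9]+)(?:\?.*)?$/.exec(opts.sourceUrl);
  const ext = match ? `.${match[1].toLowerCase()}` : "";
  return `${opts.vfsBaseUrl}/api/vfs/get/${opts.store}/${opts.userId}/${opts.pictureId}${ext}`;
}
```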

-## 4. Authorization & Security (ACL Strategy)
+## 4) Authorization & Security (ACL Strategy)

 **The major milestone: RLS is deprecated.**
 We no longer rely on Supabase Row-Level Security (`auth.uid()`) at the database level.

@@ -69,7 +73,7 @@ We no longer rely on Supabase Row-Level Security (`auth.uid()`) at the database

 ---
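With RLS deprecated, row and path scoping moves into the application layer. The grant model below is illustrative only; the project delegates this to `@polymech/acl`, whose actual API may differ:

```typescript
// Illustrative path-prefix grant check, mirroring the read/list/mkdir/write
// grants used for VFS user folders. Not the @polymech/acl API.
type Action = "read" | "list" | "mkdir" | "write" | "delete";
type Grant = { pathPrefix: string; actions: Action[] };

function isAllowed(grants: Grant[], action: Action, path: string): boolean {
  return grants.some(
    (g) =>
      g.actions.includes(action) &&
      // Exact match or strictly inside the granted folder ("/u1" does not
      // accidentally cover "/u10/...").
      (path === g.pathPrefix || path.startsWith(g.pathPrefix + "/")),
  );
}
```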

-## 5. Overall Execution Pipeline (Next 2 Weeks)
+## 5) Execution Pipeline

 ### Milestone 1: Data Migration Engine (Completed)
 - ✅ Validated raw self-hosted Postgres (`polymech.info:5432`)

@@ -78,12 +82,15 @@ We no longer rely on Supabase Row-Level Security (`auth.uid()`) at the database

 ### Milestone 2: Server ORM Cut-Over (In Progress)
 - 🚧 Bootstrap `prisma/schema.prisma` matching the clean vanilla Postgres schema.
-- 🚧 Systematically replace `supabase.from()` with Prisma Client instances in server modules.
+- 🚧 Systematically replace `supabase.from()` with ORM query paths in server modules.
+- ✅ Added Drizzle query CLI baseline: `pm-cli-cms db-query-drizzle` with query/transaction timing metrics and report output.

 ### Milestone 3: Zitadel & Routing
 - 🚧 Replace `@polymech/acl`'s dependency on `auth.uid()` or Supabase cookies with Zitadel JWT payload assertions.
 - 🚧 Implement the dynamic database connection router that selects the right Prisma instance based on the Zitadel tenant ID.

-### Milestone 4: Storage Migration Cut-Over
-- 🚧 Migrate existing Supabase Bucket objects to local MinIO/S3.
-- 🚧 Update server endpoints serving files/images to route to the new generic S3 object store.
+### Milestone 4: Storage Migration Cut-Over (In Progress)
+- ✅ VFS-backed image upload path implemented (`/api/images?forward=vfs`).
+- ✅ CLI workflows added for file backup + URL rewrite (`backup-store`, `migrate-images-vfs`).
+- 🚧 Complete migration runbooks + verification reports in production.
+- 🚧 Remove temporary legacy URL compatibility code after migration is complete.
@@ -1,202 +1,78 @@
-# Supabase -> Internal VFS (Images) Trial Plan
+# Supabase -> Internal VFS (Images) Migration

-## Goal
+## Goal (current)

-Move **new image uploads** from Supabase Storage to the internal VFS mount configured by `IMAGE_VFS_STORE` (trial default: `images`), while keeping:
-
-- existing `pictures` table workflow (`createPictureRecord` with meta)
-- ability to read old Supabase URLs
-- fast rollback to Supabase write path
-
-This is a **write-path migration first**, not a full historical backfill cutover.
+Move image storage paths from Supabase bucket URLs to internal VFS URLs:
+
+- keep the existing `pictures` workflow
+- keep historical records readable during migration
+- canonicalize to the VFS URL format for all migrated rows

 ---

-## Current State (verified)
+## Current Implemented State

-- Frontend upload goes to `POST /api/images?forward=supabase&original=true` from `src/lib/uploadUtils.ts`.
-- Server image endpoint handles `forward === 'supabase'` in `server/src/products/images/index.ts`.
+- Client upload paths are now VFS-only:
+  - `src/lib/uploadUtils.ts` always posts `forward=vfs`
+  - `src/components/ImageWizard/db.ts` uses the VFS upload flow
+  - `src/modules/ai/imageTools.ts` uses the VFS upload flow
+  - `src/modules/posts/client-pictures.ts` uses the VFS upload flow
+- Server image endpoint supports `forward=vfs` in `server/src/products/images/index.ts`:
+  - writes through VFS APIs
+  - bootstraps a per-user ACL grant (`read/list/mkdir/write` on `/<user_id>`)
+  - returns absolute VFS URLs via `IMAGE_VFS_URL`
+  - extracts metadata from the original input buffer (before Sharp strips EXIF)
+- Legacy responsive URL compatibility is implemented:
+  - nested `/api/images/render?url=...` wrappers are unwrapped for source resolution
+  - legacy `/api/images/cache/<pictureId>_thumb.jpg` lookup falls back to `pictures.image_url`
 - VFS mount already exists:
   - mount: `images`
   - path: `./storage/images`
   - config: `server/config/vfs.json`
 - VFS API exists for file reads/uploads under `server/src/products/storage/api/vfs.ts`.

 ---

-## Constraints / Requirements
+## Canonical URL Format

-- Keep `pictures` insert behavior intact (title/meta/etc).
-- New `image_url` should be a VFS-readable URL (mount=`IMAGE_VFS_STORE`, path inside the user folder).
-- Rollback must be immediate and low risk.
-- Trial should be controllable without broad code churn.
+- Stored `pictures.image_url`:
+  - `<IMAGE_VFS_URL>/api/vfs/get/<IMAGE_VFS_STORE>/<user_id>/<picture_id>[.<ext>]`
+- `token` is optional at request time for protected reads.
+- `token` is not persisted in DB URLs.

 ---
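The stored vs request-time distinction above can be expressed as two tiny helpers (a sketch; the parameter names are placeholders for the real config values):

```typescript
// The persisted URL never carries a token; protected reads append one
// only at request time.
function storedImageUrl(
  base: string, // IMAGE_VFS_URL
  store: string, // IMAGE_VFS_STORE
  userId: string,
  pictureId: string,
  ext?: string,
): string {
  const suffix = ext ? `.${ext}` : "";
  return `${base}/api/vfs/get/${store}/${userId}/${pictureId}${suffix}`;
}

function requestUrl(storedUrl: string, token?: string): string {
  return token ? `${storedUrl}?token=${encodeURIComponent(token)}` : storedUrl;
}
```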

-## URL Strategy (important)
+## Migration Commands (implemented)

-Use a stable public URL shape for VFS-hosted images:
-
-- `image_url = <IMAGE_VFS_URL>/api/vfs/get/<IMAGE_VFS_STORE>/<user_id>/<filename>`
-
-Notes:
-
-- This keeps `image_url` directly consumable by clients.
-- It avoids exposing filesystem paths.
-- It maps cleanly to VFS mount + subpath permissions.
-- Existing Supabase URLs remain untouched for historical records.
-- The `token` query param is optional at request time for protected reads, but it is **not stored** in `pictures.image_url`.
-
-`IMAGE_VFS_URL` is the canonical server base for these URLs.
+### 1) Backup Supabase storage into the local images root
+
+```bash
+pm-cli-cms backup-store --source ./server --target ./server/storage/images
+```
+
+Special behavior for this target:
+
+- writes `pictures/cache/...` directly into `server/storage/images/cache/...`
+- writes UUID aliases into `server/storage/images/storage/<user_uid>/<picture_id>.<ext>`
+
+### 2) Rewrite `pictures.image_url` to VFS URLs
+
+```bash
+pm-cli-cms migrate-images-vfs --source ./server --dry-run true
+pm-cli-cms migrate-images-vfs --source ./server --dry-run false
+```

+`migrate-images-vfs` now supports:
+
+- direct Supabase picture URLs:
+  - `/storage/v1/object/public/pictures/cache/...`
+  - `/storage/v1/object/public/pictures/<user>/<name>`
+- nested render wrappers:
+  - `/api/images/render?url=...` (including nested/encoded forms)
+- missing-file resolution:
+  - hash-match to existing local files
+  - hydrate a missing file from the old URL to the expected VFS path
+- canonical filename rewrite:
+  - target filename is `<picture_id>.<ext>` (or `<picture_id>` if no extension)
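The hash-match fallback listed above can be sketched as a content-hash index lookup (simplified; the CLI's real implementation also handles hydration from the old URL):

```typescript
import { createHash } from "node:crypto";

// Sketch of "hash-match to existing local files": when the expected VFS
// file is missing, find any local file whose content hash matches the
// original bytes, so it can be aliased to the canonical name.
function sha256(buf: Uint8Array): string {
  return createHash("sha256").update(buf).digest("hex");
}

// index: content hash -> existing local relative path
function resolveByHash(
  index: Map<string, string>,
  originalBytes: Uint8Array,
): string | undefined {
  return index.get(sha256(originalBytes));
}
```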

-## Phased Execution
-
-## Phase 0 - Safety Rails (no behavior change)
-
-1. Add env-driven upload target on server:
-   - `IMAGE_UPLOAD_TARGET=supabase|vfs` (default `supabase`)
-   - `IMAGE_VFS_STORE=images` (trial default mount name)
-2. Keep query param override support:
-   - `forward` query still accepted (`supabase`/`vfs`)
-   - precedence: explicit query param -> env default
-3. Add structured logs:
-   - chosen target, user id, filename/hash, bytes, duration, status
-
-Rollback: N/A (no functional switch yet).
-
-## Phase 1 - Add `forward=vfs` in image endpoint
-
-In `server/src/products/images/index.ts` (`handlePostImage`):
-
-1. Reuse the already-produced `processedBuffer` and `filename`.
-2. Implement an `if (forward === 'vfs')` branch:
-   - Determine the user id from auth context (same identity used today for inserts).
-   - Build the relative VFS path: `<user_id>/<filename>`.
-   - Resolve the mount from `IMAGE_VFS_STORE` (default `images`).
-   - Write bytes to the resolved mount destination (for `images`: `./storage/images/<user_id>/<filename>`), either:
-     - through the internal VFS write/upload helper (preferred), or
-     - via direct fs write to the resolved mount path if helper coupling is expensive.
-3. Return JSON in the same shape as the Supabase branch:
-   - `url`: `<IMAGE_VFS_URL>/api/vfs/get/<IMAGE_VFS_STORE>/<user_id>/<filename>`
-   - include width/height/format/size/meta as now.
-
-Safety:
-
-- Keep the current Supabase branch unchanged.
-- If the VFS write fails and trial fallback is enabled, optionally retry the Supabase write.
-
-## Phase 2 - Frontend trial toggle (small change)
-
-In `src/lib/uploadUtils.ts`:
-
-1. Replace the hardcoded `forward=supabase` with a configurable target:
-   - `VITE_IMAGE_UPLOAD_FORWARD` (`supabase` default, `vfs` for trial users/environments)
-2. Keep response handling unchanged (`{ url, meta }`).
-3. Keep `createPictureRecord` unchanged except an optional `type` marker (see below).
-
-Recommended optional marker:
-
-- set `type` to `vfs-image` or `supabase-image` for observability and easy SQL filtering.
-
-## Phase 3 - Trial rollout
-
-1. Enable `vfs` only in selected environment(s) or for selected users.
-2. Monitor:
-   - upload success/failure rate
-   - read success for generated `image_url`
-   - latency vs the Supabase path
-   - disk growth under `server/storage/images`
-3. Validate ACL behavior:
-   - only expected users can access private paths (or keep public semantics intentionally).
-
-## Phase 4 - Decide on default
-
-If the trial is healthy:
-
-- change the default upload target to `vfs`
-- keep the Supabase path available for emergency rollback for at least one release cycle
-
----
-
-## Data Model / Record Semantics
-
-No schema migration required for the trial.
-
-- Keep writing the `pictures` row as-is.
-- `image_url` stores either:
-  - the old Supabase public URL (historical rows), or
-  - the new VFS URL (`<IMAGE_VFS_URL>/api/vfs/get/<mount>/...`) for new rows.
-
-Strongly recommended:
-
-- include `storage_backend` in `meta` (`"supabase"` or `"vfs"`), or use the `type` column consistently.
-
-This enables mixed-backend rendering and easier operational debugging.
-
----
-
-## Rollback Plan (must be instant)
-
-Primary rollback switch:
-
-1. Set `IMAGE_UPLOAD_TARGET=supabase` on the server.
-2. Keep `IMAGE_VFS_STORE=images` as-is (any mount name; ignored while on the supabase target).
-3. Set `VITE_IMAGE_UPLOAD_FORWARD=supabase` on the frontend (if used).
-4. Redeploy/restart.
-
-Behavior after rollback:
-
-- New uploads go to Supabase again.
-- Existing VFS-backed records continue to load via the VFS read endpoint.
-- No data loss, no record rewrite needed.
-
-Emergency fallback option:
-
-- In the server `forward=vfs` branch, on write failure, fall back to Supabase upload and log the fallback event.
-
----
-
-## Test Plan
-
-### Unit / integration
-
-- `forward=supabase` still returns 200 and a public Supabase URL.
-- `forward=vfs` returns 200 and `<IMAGE_VFS_URL>/api/vfs/get/<IMAGE_VFS_STORE>/<uid>/<filename>`.
-- The VFS write failure path returns a clear error (or a controlled fallback).
-- Metadata extraction remains present in the response for both paths.
-
-### E2E manual
-
-1. Upload an image with `forward=vfs`.
-2. Confirm the file exists in `server/storage/images/<uid>/`.
-3. Open the returned `image_url`; verify content-type and caching headers.
-   - verify the base URL works directly; for protected ACL mode, verify request-time `?token=...` also works.
-4. Confirm `pictures.image_url` and `meta`/`type` are correct.
-5. Switch env back to Supabase and repeat to verify rollback.
-
----
-
-## Open Decisions
-
-1. **Auth model for VFS read URL**
-   - public read (Supabase-like behavior) vs authenticated read
-2. **Canonical URL**
-   - keep absolute via `IMAGE_VFS_URL` (recommended and now selected)
-3. **Collision policy**
-   - the current hash-based filename is deterministic; acceptable to overwrite the same hash
-4. **Backfill**
-   - not needed for the trial, but define later if we want full Supabase deprecation
-
----
-
-## Minimal Implementation Checklist
-
-- [ ] Add `forward === 'vfs'` branch in the image upload endpoint.
-- [ ] Add server env defaults (`IMAGE_UPLOAD_TARGET`, `IMAGE_VFS_STORE='images'`).
-- [ ] Add frontend env toggle (`VITE_IMAGE_UPLOAD_FORWARD`) instead of hardcoded `supabase`.
-- [ ] Persist backend marker (`meta.storage_backend` and/or `type`).
-- [ ] Add metrics/logging for backend choice and failures.
-- [ ] Run trial in one environment/user cohort.
-- [ ] Keep rollback toggles documented in runbook.
+## Notes
+
+- During the migration window, mixed URL styles can exist in the DB.
+- Keep legacy URL compatibility handlers in `images/index.ts` until migration reports no unresolved rows.
@@ -15,8 +15,8 @@ interface FileBrowserAppProps {
 }

 const FileBrowserApp: React.FC<FileBrowserAppProps> = ({
-  allowPanels = false,
-  mode = 'simple',
+  allowPanels = true,
+  mode = 'advanced',
   index = true,
   showRibbon = true,
 }) => {
@@ -27,7 +27,7 @@ const FileBrowserApp: React.FC<FileBrowserAppProps> = ({
     <AuthProvider>
       <MemoryRouter initialEntries={[initialEntry]}>
         <div className="flex flex-col h-full w-full bg-background text-foreground">
-          <FileBrowser allowPanels={allowPanels} mode={mode} index={index} />
+          <FileBrowser allowPanels={allowPanels} mode={mode} index={index} showRibbon={showRibbon} />
         </div>
         <Toaster />
       </MemoryRouter>
@@ -9,7 +9,6 @@ import {
   createPicture,
   updatePicture,
   fetchPictureById,
-  uploadFileToStorage,
   addCollectionPictures,
 } from "@/modules/posts/client-pictures";
 import { uploadImage } from "@/lib/uploadUtils";
@@ -19,13 +18,6 @@ import { supabase } from "@/integrations/supabase/client";
 // Re-export for backward compat
 export { getUserOpenAIKey };

-// Internal trial switch for wizard uploads.
-// - "vfs": upload through /api/images (current trial path)
-// - "supabase": legacy direct storage upload
-const WIZARD_UPLOAD_BACKEND = (import.meta.env.VITE_IMAGE_WIZARD_UPLOAD_BACKEND || 'vfs').toLowerCase() === 'supabase'
-  ? 'supabase'
-  : 'vfs';
-
 /**
  * Load saved wizard model from user_secrets.settings.wizard_model
  */
@@ -84,11 +76,7 @@ export const uploadImageToStorage = async (
 ): Promise<{ fileName: string; publicUrl: string } | null> => {
   const fileName = `${userId}/${Date.now()}-${suffix}.png`;
   const file = new File([blob], fileName, { type: 'image/png' });
-  if (WIZARD_UPLOAD_BACKEND === 'vfs') {
-    const { publicUrl } = await uploadImage(file, userId);
-    return { fileName, publicUrl };
-  }
-  const publicUrl = await uploadFileToStorage(userId, file, fileName);
+  const { publicUrl } = await uploadImage(file, userId);
   return { fileName, publicUrl };
 };
@@ -1,13 +1,9 @@
 import { supabase } from '@/integrations/supabase/client';
 import { getAuthToken, serverUrl } from './db';

-const IMAGE_UPLOAD_FORWARD_PRESET = (import.meta.env.VITE_IMAGE_UPLOAD_FORWARD || 'vfs').toLowerCase() === 'supabase'
-  ? 'supabase'
-  : 'vfs';
-
 /**
  * Uploads an image file via the server API.
- * Call sites are storage-agnostic; this module enforces the internal upload target preset.
+ * Call sites are storage-agnostic; uploads are forwarded to VFS.
  */
 export const uploadImage = async (file: File, userId: string): Promise<{ publicUrl: string, meta?: any }> => {
   if (!userId) throw new Error('User ID is required for upload');
@@ -21,7 +17,7 @@ export const uploadImage = async (file: File, userId: string): Promise<{ publicU
     headers['Authorization'] = `Bearer ${token}`;
   }

-  const response = await fetch(`${serverUrl}/api/images?forward=${IMAGE_UPLOAD_FORWARD_PRESET}&original=true`, {
+  const response = await fetch(`${serverUrl}/api/images?forward=vfs&original=true`, {
     method: 'POST',
     headers,
     body: formData,
@@ -8,7 +8,6 @@

 import { z } from 'zod';
 import { createImage as createImageRouter } from '@/lib/image-router';
-import { supabase } from '@/integrations/supabase/client';
 import { uploadGeneratedImageData } from '@/lib/uploadUtils';
 import type { RunnableToolFunctionWithParse } from 'openai/lib/RunnableFunction';

@@ -17,61 +16,20 @@ const defaultLog: LogFunction = (level, message, data) => console.log(`[IMAGE-TO

 // ── Upload helper ────────────────────────────────────────────────────────

-const AI_IMAGE_UPLOAD_BACKEND = (import.meta.env.VITE_AI_IMAGE_UPLOAD_BACKEND || 'vfs').toLowerCase() === 'supabase'
-  ? 'supabase'
-  : 'vfs';
-
-const uploadToSupabaseTempBucket = async (
-  imageData: ArrayBuffer,
-  prompt: string,
-  userId: string,
-  addLog: LogFunction,
-): Promise<string | null> => {
-  try {
-    const ts = Date.now();
-    const slug = prompt.slice(0, 20).replace(/[^a-zA-Z0-9]/g, '-');
-    const fileName = `${userId}/${ts}-${slug}.png`;
-    const uint8 = new Uint8Array(imageData);
-
-    const { error } = await supabase.storage
-      .from('temp-images')
-      .upload(fileName, uint8, { contentType: 'image/png', cacheControl: '3600' });
-
-    if (error) {
-      addLog('error', 'Upload failed', error);
-      return null;
-    }
-
-    const { data: { publicUrl } } = supabase.storage
-      .from('temp-images')
-      .getPublicUrl(fileName);
-
-    addLog('info', 'Image uploaded', { fileName, publicUrl });
-    return publicUrl;
-  } catch (err) {
-    addLog('error', 'Upload error', err);
-    return null;
-  }
-};
-
 const uploadGeneratedImage = async (
   imageData: ArrayBuffer,
   prompt: string,
   userId: string,
   addLog: LogFunction,
 ): Promise<string | null> => {
-  if (AI_IMAGE_UPLOAD_BACKEND === 'vfs') {
-    try {
-      const { publicUrl } = await uploadGeneratedImageData(imageData, prompt, userId);
-      addLog('info', 'Image uploaded via VFS flow', { publicUrl });
-      return publicUrl;
-    } catch (err) {
-      addLog('error', 'VFS upload failed', err);
-      return null;
-    }
-  }
-
-  return uploadToSupabaseTempBucket(imageData, prompt, userId, addLog);
+  try {
+    const { publicUrl } = await uploadGeneratedImageData(imageData, prompt, userId);
+    addLog('info', 'Image uploaded via VFS flow', { publicUrl });
+    return publicUrl;
+  } catch (err) {
+    addLog('error', 'VFS upload failed', err);
+    return null;
+  }
 };

 // ── Zod schemas ──────────────────────────────────────────────────────────
@@ -79,29 +79,13 @@ export const fetchUserPictures = async (userId: string) => (await fetchPictures(
 /** Convenience alias: fetch N recent pictures */
 export const fetchRecentPictures = async (limit: number = 50) => (await fetchPictures({ limit })).data;

-const POSTS_UPLOAD_BACKEND = (import.meta.env.VITE_POSTS_UPLOAD_BACKEND || 'vfs').toLowerCase() === 'supabase'
-  ? 'supabase'
-  : 'vfs';
-
 export const uploadFileToStorage = async (userId: string, file: File | Blob, fileName?: string, client?: SupabaseClient) => {
-  if (POSTS_UPLOAD_BACKEND === 'vfs') {
-    const uploadFile = file instanceof File
-      ? file
-      : new File([file], fileName || `${userId}/${Date.now()}-${Math.random().toString(36).substring(7)}`, {
-          type: file.type || 'application/octet-stream'
-        });
-    const { publicUrl } = await uploadImage(uploadFile, userId);
-    return publicUrl;
-  }
-
-  // Legacy fallback: direct Supabase storage upload.
-  const { supabase } = await import("@/integrations/supabase/client");
-  const db = client || supabase;
-  const name = fileName || `${userId}/${Date.now()}-${Math.random().toString(36).substring(7)}`;
-  const { error } = await db.storage.from('pictures').upload(name, file);
-  if (error) throw error;
-
-  const { data: { publicUrl } } = db.storage.from('pictures').getPublicUrl(name);
+  const uploadFile = file instanceof File
+    ? file
+    : new File([file], fileName || `${userId}/${Date.now()}-${Math.random().toString(36).substring(7)}`, {
+        type: file.type || 'application/octet-stream'
+      });
+  const { publicUrl } = await uploadImage(uploadFile, userId);
   return publicUrl;
 };
@@ -166,11 +166,12 @@ const FileBrowser: React.FC<{
   allowPanels?: boolean,
   mode?: 'simple' | 'advanced',
   index?: boolean,
+  showRibbon?: boolean,
   disableRoutingSync?: boolean,
   initialMount?: string,
   initialChrome?: FileBrowserChrome,
   onSelect?: (node: INode | null, mount?: string) => void
-}> = ({ allowPanels, mode, index, disableRoutingSync, initialMount: propInitialMount, initialChrome, onSelect }) => {
+}> = ({ allowPanels = true, mode, index, showRibbon = true, disableRoutingSync, initialMount: propInitialMount, initialChrome, onSelect }) => {
   const location = useLocation();

   let initialMount = propInitialMount;
@@ -229,7 +230,7 @@ const FileBrowser: React.FC<{
   } else if (ribbonParam === '0' || ribbonParam === 'false') {
     initialChromeFromUrl = 'toolbar';
   }
-  const resolvedInitialChrome = initialChrome ?? initialChromeFromUrl;
+  const resolvedInitialChrome = initialChrome ?? initialChromeFromUrl ?? (showRibbon ? 'ribbon' : 'toolbar');

   // ?file= overrides path to parent dir and pre-selects the file
   let finalPath = initialPath ? `/${initialPath}` : undefined;
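The chrome-resolution change in the hunk above amounts to a three-step precedence rule: explicit `initialChrome` prop, then the `?ribbon=` URL parameter, then the `showRibbon` default. A standalone sketch of that rule (not the component's actual code):

```typescript
// Precedence: explicit prop > value derived from the URL > showRibbon default.
type Chrome = "ribbon" | "toolbar";

function resolveChrome(
  initialChrome: Chrome | undefined,
  chromeFromUrl: Chrome | undefined,
  showRibbon: boolean,
): Chrome {
  return initialChrome ?? chromeFromUrl ?? (showRibbon ? "ribbon" : "toolbar");
}
```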