kyle.pericak.com/apps/llm-client/
Static export hosted under the main blog bucket (gs://kyle.pericak.com/apps/llm-client).
A browser chat UI that talks to any OpenAI-compatible /v1/chat/completions
endpoint — llama-server,
OpenRouter, etc. Chats and settings live in
localStorage; there is no server-side storage.
Source: apps/llm-client/.
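For orientation, here is a minimal sketch of the kind of request the UI sends. The helper name, the `model` value, and the exact payload fields are illustrative; the real code lives in apps/llm-client/src.

```ts
// Sketch only: POST one chat turn to any OpenAI-compatible server.
// sendChat and the "local" model name are hypothetical, not the app's actual identifiers.
export async function sendChat(
  messages: { role: "system" | "user" | "assistant"; content: string }[],
  baseUrl = "http://127.0.0.1:8080",
  apiKey?: string, // hosted providers like OpenRouter need a key; local llama-server does not
): Promise<string> {
  const res = await fetch(`${baseUrl}/v1/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      ...(apiKey ? { Authorization: `Bearer ${apiKey}` } : {}),
    },
    // "model" is required by the OpenAI schema; llama-server answers with whatever model it loaded.
    body: JSON.stringify({ model: "local", messages }),
  });
  if (!res.ok) throw new Error(`chat request failed: ${res.status}`);
  const data = await res.json();
  // Standard chat/completions response shape: choices[0].message.content.
  return data.choices[0].message.content as string;
}
```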
cd apps/llm-client
bin/install.sh # first time: pnpm install + playwright browsers
bin/start-dev.sh # dev server on :3100
bin/start-dev.sh 3200 # custom port
bin/kill-dev.sh # stop
bin/test.sh # unit + E2E
Requires a running llama-server (or compatible) — default http://127.0.0.1:8080,
configurable in the UI via the "Connected to" pill.
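Since all settings are client-side, the endpoint override is just a persisted value with a default fallback. A minimal sketch of that shape, assuming a hypothetical localStorage key (the real key name and helpers in the app may differ):

```ts
// Hypothetical settings helpers; the actual key and module names are in apps/llm-client/src.
const DEFAULT_BASE_URL = "http://127.0.0.1:8080"; // matches the documented default
const BASE_URL_KEY = "llm-client.baseUrl"; // assumed key, not confirmed by the source

export function getBaseUrl(): string {
  // Fall back to the local llama-server address when nothing has been saved.
  return localStorage.getItem(BASE_URL_KEY) ?? DEFAULT_BASE_URL;
}

export function setBaseUrl(url: string): void {
  // Called when the user edits the "Connected to" pill.
  localStorage.setItem(BASE_URL_KEY, url);
}
```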
cd apps/llm-client
npm run build
bin/prod-deploy.sh
bin/prod-deploy.sh rsyncs out/ to gs://kyle.pericak.com/apps/llm-client
and sets Cache-Control: no-cache,no-store,must-revalidate on changed
files so Cloudflare picks up updates immediately.
Only Kyle deploys to prod.
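bin/prod-deploy.sh itself is a shell script; purely to illustrate the Cache-Control step it performs, here is a hedged Node/TypeScript equivalent using the @google-cloud/storage client. This is not what the script runs, just the same metadata change expressed in code.

```ts
// Illustration only: set no-cache headers on the app's objects so Cloudflare
// revalidates instead of serving stale files. The real deploy uses bin/prod-deploy.sh.
import { Storage } from "@google-cloud/storage";

const storage = new Storage();
const bucket = storage.bucket("kyle.pericak.com");

async function setNoCacheHeaders(prefix = "apps/llm-client/") {
  const [files] = await bucket.getFiles({ prefix });
  for (const file of files) {
    await file.setMetadata({
      cacheControl: "no-cache,no-store,must-revalidate",
    });
  }
}

setNoCacheHeaders().catch((err) => {
  console.error(err);
  process.exit(1);
});
```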
Semver in package.json, injected at build via NEXT_PUBLIC_APP_VERSION
in next.config.ts, exposed through src/lib/version.ts. Bump patch for
fixes, minor for features, major for breaking changes.
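A sketch of how that wiring typically looks; the actual next.config.ts may differ in detail:

```ts
// next.config.ts (sketch): read the semver from package.json and bake it into the client bundle.
import type { NextConfig } from "next";
import { version } from "./package.json"; // needs resolveJsonModule, which Next's tsconfig enables

const nextConfig: NextConfig = {
  output: "export", // static export for the bucket deploy
  env: {
    NEXT_PUBLIC_APP_VERSION: version, // injected at build time
  },
};

export default nextConfig;
```

src/lib/version.ts then presumably just re-exports `process.env.NEXT_PUBLIC_APP_VERSION` with a fallback so the UI has a single place to read the version from.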