reqall.app
A project management tool built around the moment of capture — voice, photos, text — with a full AI extraction pipeline and a RAG-powered assistant that answers questions across everything you've ever logged.
Project management tools assume you're already at your desk. But the real decisions happen in the car, at the whiteboard, mid-meeting — and by the time you open Notion, half the context is gone. Existing tools either ignore this entirely or bolt on a voice note feature that just dumps a transcript and calls it done. The structured work of turning a raw capture into actual tasks, decisions, and action items still falls on you.
Build the capture layer first — reliable mobile audio recording, whiteboard photo upload, and a processing pipeline that returns structure, not just transcripts. Claude Sonnet handles the high-stakes extraction work where output quality directly determines what the user sees. Claude Haiku handles the latency-sensitive work where speed matters more than depth. Five view modes (Stream, Board, Timeline, Calendar, Canvas) and a semantic search layer sit on top, so nothing captured is ever out of reach.
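The Sonnet/Haiku split above amounts to a routing decision per pipeline job. A minimal sketch of what that routing table might look like — the task names and model identifiers here are illustrative assumptions, not Reqall's actual configuration:

```typescript
// Hypothetical model router: quality-critical jobs go to Sonnet,
// latency-sensitive jobs go to Haiku. Names are assumptions.
type PipelineTask = "extract" | "vision" | "chat" | "subtasks" | "mindmap";

const MODEL_FOR_TASK: Record<PipelineTask, string> = {
  extract: "claude-sonnet", // structured extraction — output quality is user-facing
  vision: "claude-sonnet",  // whiteboard photo understanding
  chat: "claude-haiku",     // streaming RAG chat — latency matters most
  subtasks: "claude-haiku", // quick subtask expansion
  mindmap: "claude-haiku",  // mind-map generation
};

function modelFor(task: PipelineTask): string {
  return MODEL_FOR_TASK[task];
}
```

Centralizing the mapping means a single edit re-tiers a task when cost or quality trade-offs shift.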
- Voice capture — mobile-first recording with graceful lifecycle handling across app-switching, screen locks, and intentional pauses; real-time status updates when processing completes
- Whiteboard photos — images processed through the same extraction pipeline as audio; same schema, same output shape
- AI extraction — Claude classifies each item by type, scores its confidence, resolves relative dates, and flags gaps for missing owners or deadlines
- Confidence-bucketed triage UI — output is sorted by extraction certainty so the user reviews what needs review and acts on what doesn't
- RAG chat assistant — semantic search across all captures, streaming responses, RLS-enforced so no cross-user data leaks
- Five view modes — Stream, Board, Timeline, Calendar, and a full spatial Canvas with connected blocks, lasso selection, and zoom-to-cursor
- Email-forwarding capture — send anything to your Reqall address and it enters the pipeline
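The extraction and triage bullets above can be sketched as one data shape plus a bucketing pass. Field names, the gap rules, and the 0.8 confidence threshold are assumptions for illustration, not Reqall's real schema:

```typescript
// Illustrative shape of one extracted item after classification,
// confidence scoring, and relative-date resolution.
interface ExtractedItem {
  type: "task" | "decision" | "note";
  text: string;
  confidence: number; // model's self-reported certainty, 0..1
  owner?: string;     // missing owner on a task => flagged as a gap
  dueDate?: string;   // ISO date after relative-date resolution
}

interface TriagedItem extends ExtractedItem {
  gaps: string[];     // e.g. ["owner", "dueDate"]
}

// Confidence-bucketed triage: high-confidence, gap-free items are
// auto-accepted; everything else surfaces in the review bucket.
function triage(items: ExtractedItem[], threshold = 0.8) {
  const withGaps: TriagedItem[] = items.map((it) => ({
    ...it,
    gaps:
      it.type === "task"
        ? (["owner", "dueDate"] as const).filter((f) => !it[f])
        : [],
  }));
  return {
    ready: withGaps.filter((it) => it.confidence >= threshold && it.gaps.length === 0),
    review: withGaps.filter((it) => it.confidence < threshold || it.gaps.length > 0),
  };
}
```

The point of the split is that the review bucket stays small: the user only sees items the model was unsure about or that are missing an owner or deadline.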
Voice in, structured tasks out, fully searchable — end to end in under 60 seconds.
Next.js · App Router · Server Actions · Vercel
Supabase — Postgres, RLS, Storage, Realtime, vector search
Claude Sonnet — structured output, confidence scoring, vision
Claude Haiku — streaming chat, subtask expansion, mind maps
OpenAI Whisper — language-aware transcription
OpenAI embeddings — semantic chunking and retrieval
dnd-kit · Framer Motion · Tiptap · Radix UI · Tailwind CSS
Resend — transactional + inbound capture forwarding
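The Whisper-plus-embeddings rows above imply a chunking step between transcription and retrieval. A naive sketch of sentence-aware chunking — the 200-word budget and the sentence-boundary regex are assumptions, not Reqall's actual strategy:

```typescript
// Split a transcript into chunks for embedding, keeping sentences whole
// and capping each chunk at roughly maxWords words. Hypothetical sketch.
function chunkTranscript(text: string, maxWords = 200): string[] {
  const sentences = text.split(/(?<=[.!?])\s+/);
  const chunks: string[] = [];
  let current: string[] = [];
  let count = 0;
  for (const sentence of sentences) {
    const words = sentence.split(/\s+/).length;
    // Flush the current chunk when adding this sentence would exceed the budget.
    if (count + words > maxWords && current.length > 0) {
      chunks.push(current.join(" "));
      current = [];
      count = 0;
    }
    current.push(sentence);
    count += words;
  }
  if (current.length > 0) chunks.push(current.join(" "));
  return chunks;
}
```

Chunking on sentence boundaries rather than fixed character offsets keeps each embedded chunk semantically self-contained, which is what makes retrieval answers coherent.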
The problems that don't show up in demos. Mobile recording reliability across the full device lifecycle — including the edge case where returning to the app shouldn't resume a recording the user already stopped. Partial pipeline failure tolerance so a single bad artifact doesn't abort everything downstream. A spatial Canvas that handles multi-block drag, lasso selection in canvas-space coordinates, zoom-toward-cursor, and Bézier connections that auto-create on drop — all without state management bugs. And confidence-bucketed output triage that surfaces the right items for review without overwhelming the user with everything the model touched.
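The zoom-toward-cursor behavior mentioned above reduces to one transform: keep the canvas-space point under the cursor fixed while the scale changes. A sketch, assuming a simple camera model of `screen = world * scale + offset` (the actual Canvas implementation may differ):

```typescript
// Hypothetical camera: screen = world * scale + offset.
interface Camera {
  scale: number;
  offsetX: number;
  offsetY: number;
}

// Zoom by `factor` while keeping the canvas point under the cursor stationary.
function zoomAtCursor(cam: Camera, cursorX: number, cursorY: number, factor: number): Camera {
  const scale = cam.scale * factor;
  // Canvas-space point currently under the cursor.
  const worldX = (cursorX - cam.offsetX) / cam.scale;
  const worldY = (cursorY - cam.offsetY) / cam.scale;
  // Re-solve the offset so that same point maps back to the cursor position.
  return {
    scale,
    offsetX: cursorX - worldX * scale,
    offsetY: cursorY - worldY * scale,
  };
}
```

The same screen-to-canvas conversion is what makes lasso selection work: the lasso rectangle must be hit-tested in canvas-space coordinates, not screen pixels, or selections break the moment the user zooms.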
Reqall is what you build when you take the capture-first premise seriously. Not a wrapper around a transcription API — a full extraction pipeline with its own classification logic, gap detection, failure handling, and a semantic retrieval layer that makes everything searchable. The difference between a voice memo that becomes a Notion dump and one that becomes structured work is the AI layer in between. That layer is the product.