Building a FlashCard SaaS Platform with Claude AI in One Month
Reading time: 8 minutes.
From Zero to Production in 30 Days
How AI pair programming transformed a side project into a production-ready SaaS platform on Microsoft Azure
In just 30 days, I built a production-ready flashcard SaaS platform on Azure — mastering identity, environments, cost, and observability along the way. This is how I did it.
The Challenge
To create a full-stack FlashCards application with authentication, a clean API, and modern UX.
I set two goals:
- Learn Azure for certification: identity flows with Entra, IaC, CI/CD, Web Apps, containers, and telemetry
- Follow industry best practices: clear architecture, unit tests, documentation, and clean delivery practices
Why FlashCards?
I needed a tool to help me learn various skills, from Azure certification to programming concepts — and I wanted seamless sync between my phone and PC. Existing solutions were either too basic, locked to a single platform, or lacked modern capabilities like AI-powered content generation.
I built exactly what I needed, and it required tackling several production-grade challenges:
- User authentication and multi-tenancy — users need isolated data
- CRUD operations — creating, managing, and organising decks and cards
- Cross-platform support — web, mobile, and desktop with real-time sync
- Scalable infrastructure — handle growth without re-architecture
- AI-powered generation — create complete decks from simple topic descriptions
Future enhancements, such as spaced repetition algorithms and advanced analytics, are planned, but the core platform needed to be solid first.
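To make the multi-tenancy requirement concrete, here is a minimal sketch of a user-scoped deck model in TypeScript. The type and method names are illustrative, not the platform's actual schema:

```typescript
// Illustrative domain model: every deck has an owner, and all reads
// are scoped to the authenticated user's id for tenant isolation.
interface Card {
  id: string;
  front: string; // question / prompt
  back: string;  // answer
}

interface Deck {
  id: string;
  ownerId: string; // set from the auth token, never from client input
  title: string;
  cards: Card[];
}

// Minimal in-memory store demonstrating user-scoped reads
class DeckStore {
  private decks = new Map<string, Deck>();

  add(deck: Deck): void {
    this.decks.set(deck.id, deck);
  }

  // Only return decks belonging to the requesting user
  listForUser(ownerId: string): Deck[] {
    return [...this.decks.values()].filter(d => d.ownerId === ownerId);
  }
}
```

The key design point is that `ownerId` is derived server-side from the identity token, so one user's queries can never reach another user's data.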
The Secret Weapon
I used Windows Copilot to sketch the architecture and sprint plan, then Claude AI (integrated within GitHub Copilot in Visual Studio) to scaffold and implement tightly scoped work items. I tracked everything in Azure DevOps, shipped to staging first, then production.
Claude became my pair programmer, rapid and reliable but demanding precise direction: a defined scope for each task, clear constraints on the solution, and explicit examples of success. The result was equal parts personal journey and great software.
What Claude handled:
- Deep implementation of features and APIs
- Complex refactoring across the codebase
- Documentation co-authoring (100+ pages)
- Architecture decision analysis
- Problem solving and debugging
- Code review and consistency checks
The Documentation Strategy
Two decisions proved crucial. First, a dedicated documents repository that doubled as the Azure DevOps wiki: every ADR, API contract, and coding standard lived there, accessible to both humans and AI. Second, a custom instructions file in each repository that automatically loaded project rules into every Claude session. This gave Claude persistent memory of the project's standards, with no need to re-explain conventions.
Each time Claude wrote code, I reviewed it, documented what worked (and what didn’t), and those learnings fed into the next session. This cycle built up 100+ pages of standards and kept the codebase consistent.
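As an illustration, a repository-level custom instructions file (GitHub Copilot reads `.github/copilot-instructions.md`) might look like this. The contents below are a hypothetical sketch, not the project's actual file:

```markdown
<!-- .github/copilot-instructions.md (illustrative sketch) -->
# Project conventions
- Backend: .NET Minimal APIs + MediatR; one command/query handler per file
- All endpoints require an authenticated user; always scope data by user id
- Tests: xUnit; new features need unit tests before merge
- Record significant decisions as ADRs in the docs repository
```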
Development Velocity
The commit history shows heavy documentation and backend scaffolding created upfront (Claude's strength), then steady feature development as the architecture solidified.
Lessons Learned
✅ What Surprised Me (Positively)
- Speed of Iteration — AI reduced the "blank page" problem dramatically
- Documentation Quality — Consistent technical writing across 100+ pages
- Architecture Exploration — Rapid "what if" scenario testing
- Code Consistency — Patterns reduced cognitive load
🔄 What I Would Refine
- Context Management — Window limitations required careful session management
- Verification Overhead — AI code needs careful review
- Documentation Drift — Maintaining accuracy requires discipline
What I Built
```mermaid
flowchart TB
    Web["🌐 Web"] & Mobile["📱 Mobile"]
    subgraph Azure["☁️ Azure Platform"]
        SWA["Static Web Apps<br/>⚛️ React 19"]
        Auth["Entra ID<br/>🔐 OAuth/OIDC"]
        API["Container Apps<br/>⚙️ .NET 10"]
        SQL["💾 SQL"] & AI["🤖 OpenAI"]
    end
    Web & Mobile --> SWA
    SWA --> Auth & API
    Auth -.-> API
    API --> SQL & AI
```
The result is a production-ready platform built on Microsoft Azure. The architecture follows a modular monolith pattern with Clean Architecture, internally structured for future microservices extraction, but deployed as a single unit to keep operational complexity and costs low.
Backend: .NET 10 with Minimal APIs and MediatR for CQRS, providing clean separation between commands, queries, and domain logic.
Frontend: A React 19 SPA built with Vite and TypeScript, communicating with the backend via a well-defined REST API.
Authentication: User identity flows through Microsoft Entra External ID with OAuth 2.0/OIDC.
Infrastructure: Cloud resources are defined in Bicep templates and deployed via Azure Pipelines for reproducible, automated deployments.
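The actual backend implements this split in C# with MediatR; as a language-agnostic sketch, the command/query separation looks roughly like the following TypeScript (all names here are illustrative):

```typescript
// CQRS sketch: commands mutate state, queries only read it.
// Separate request types and handlers let each side evolve independently.
interface CreateDeckCommand { ownerId: string; title: string; }
interface GetDecksQuery { ownerId: string; }

interface Deck { id: string; ownerId: string; title: string; }

class DeckService {
  private decks: Deck[] = [];
  private nextId = 1;

  // Command handler: validates, mutates state, returns only the new id
  handleCreateDeck(cmd: CreateDeckCommand): string {
    const id = String(this.nextId++);
    this.decks.push({ id, ownerId: cmd.ownerId, title: cmd.title });
    return id;
  }

  // Query handler: a pure read with no side effects
  handleGetDecks(q: GetDecksQuery): Deck[] {
    return this.decks.filter(d => d.ownerId === q.ownerId);
  }
}
```

In the real application a mediator dispatches each request type to its handler; the point of the pattern is the same either way: writes and reads never share a code path.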
- Cost — Pay only for what you use; scales to zero when idle
- Authentication — Microsoft Entra ID with OAuth 2.0/OIDC
- AI Generation — Generate flashcard decks from any topic
- Platforms — Web now, mobile and desktop coming soon
Technology Stack
| Layer | Technology | Key Benefit |
|---|---|---|
| Backend | .NET 10, Minimal APIs, MediatR | Clean Architecture with CQRS |
| Frontend | React 19, Vite, TypeScript | Monorepo with 75% code reuse |
| Database | Azure SQL (Serverless) | Auto-pause for cost savings |
| Auth | Microsoft Entra External ID | OAuth 2.0/OIDC, social logins |
| Hosting | Azure Container Apps | Scale-to-zero serverless |
| AI | Azure OpenAI (GPT-4o-mini) | Deck generation feature |
| IaC | Bicep templates | Reproducible deployments |
| CI/CD | Azure Pipelines | Automated build, test, deploy |
| Testing | xUnit, Playwright | 80%+ coverage, E2E tests |
CI/CD Pipeline
Every code change flows through automated quality gates before reaching production:
```mermaid
flowchart LR
    Code["💻 Push"] --> Build["🔨 Build"]
    Build --> Test["🧪 Test"]
    Test --> Quality["✅ Quality"]
    Quality --> Staging["📦 Staging"]
    Staging --> Prod["🚀 Production"]
```
Before any code reaches production it must pass rigorous quality gates: 80% minimum test coverage, automated security scanning, code quality analysis, and a manual approval step after the staged deployment, so every release is deliberate and verified.
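Those gates map onto a staged Azure Pipelines definition roughly like the one below. This is a simplified sketch, not the project's actual pipeline; stage names and steps are assumptions:

```yaml
# Simplified sketch of the staged pipeline (illustrative names)
trigger:
  branches:
    include: [main]

stages:
  - stage: Build
    jobs:
      - job: BuildAndTest
        steps:
          - script: dotnet build --configuration Release
          - script: dotnet test --collect:"XPlat Code Coverage"
          # A coverage check here fails the run below the 80% gate

  - stage: Staging
    dependsOn: Build
    jobs:
      - deployment: DeployStaging
        environment: staging        # deploys automatically after Build
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deploy to staging"

  - stage: Production
    dependsOn: Staging
    jobs:
      - deployment: DeployProduction
        environment: production     # manual approval configured on this environment
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deploy to production"
```

The manual approval lives on the `production` environment rather than in the YAML, which keeps the gate auditable in Azure DevOps while the pipeline definition stays declarative.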
AI-Powered Deck Generation
```mermaid
flowchart TB
    Topic["📝 Describe Topic"] --> Mode["🎯 Choose Mode"]
    Mode --> AI["🤖 AI Processing"]
    AI --> Deck["📚 Ready to Study!"]
```
The Wizard makes creating decks effortless: describe what you want to learn, choose your study mode, and the AI generates a complete deck tailored to your choice:
Stack Mode: AI generates question and answer pairs. Study by flipping through cards one by one, revealing answers to reinforce memory through repetition.
Quiz Mode: AI generates multiple-choice questions with one correct answer. Study by testing your knowledge with instant feedback; perfect for exam prep and self-assessment.
Once created, your deck is saved and ready to use. Edit cards anytime and study as often as you like; the AI runs only once, at creation, so there's no additional cost or delay when revisiting your decks. The architecture is extensible, allowing additional quiz types and study modes in future releases.
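From the client's perspective, the wizard boils down to one request at creation time. The endpoint path and field names below are assumptions for illustration, not the platform's real API:

```typescript
// Illustrative request/response shapes for the generation wizard
type StudyMode = "stack" | "quiz";

interface GenerateDeckRequest {
  topic: string;      // e.g. "AWS certification prep"
  mode: StudyMode;
  cardCount?: number; // optional deck size
}

interface GeneratedCard {
  question: string;
  answer: string;
  choices?: string[]; // present only in quiz mode
}

// Basic client-side validation before spending an AI call
function validateRequest(req: GenerateDeckRequest): string[] {
  const errors: string[] = [];
  if (!req.topic.trim()) errors.push("Topic is required");
  if (req.cardCount !== undefined && (req.cardCount < 1 || req.cardCount > 50)) {
    errors.push("Card count must be between 1 and 50");
  }
  return errors;
}

// One-time AI call at creation; afterwards the saved deck is
// served from the database with no further AI cost or latency.
async function generateDeck(req: GenerateDeckRequest): Promise<GeneratedCard[]> {
  const res = await fetch("/api/decks/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`Generation failed: ${res.status}`);
  return res.json();
}
```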
Popular topics users have generated:
- “Spanish vocabulary for travel”
- “AWS certification prep”
- “JavaScript interview questions”
- “Medical terminology basics”
Cost Optimisation
A key achievement was designing for ultra-low operational costs: the entire platform runs on Azure for less than the price of a few coffees per month:
| Service | Configuration | Monthly Cost |
|---|---|---|
| Azure Container Apps | Consumption (scale-to-zero) | £0-5 |
| Azure SQL Database | Serverless (auto-pause) | £5-10 |
| Azure Static Web Apps | Free tier | £0 |
| Entra External ID | Free tier (50k MAU) | £0 |
| Application Insights | Free tier (5GB) | £0 |
| Total Infrastructure | | £15-35/month |
This represents a 95% cost reduction compared to a traditional always-on architecture (£800-950/month), making it economically viable to offer the platform free to users while maintaining production-grade infrastructure.
Note on cold starts: the scale-to-zero approach currently introduces a 30-60 second startup delay after idle periods. I'm actively optimising this with pre-warmed instances to eliminate the delay while maintaining cost efficiency.
What’s Next
The platform is designed to grow. Potential future enhancements include:
- Spaced repetition — Implement the SM-2 algorithm to intelligently schedule card reviews at optimal intervals, maximising long-term retention while minimising study time
- Deck sharing — Enable users to publish decks publicly or share them privately with teams, fostering a community-driven library of high-quality study materials
- Analytics — Visualise learning progress with retention curves, identify weak areas, and track study streaks to understand what's working
- Native mobile — Deploy iOS and Android apps using React Native, leveraging the 75% shared codebase for a consistent cross-platform experience
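The SM-2 algorithm mentioned above is straightforward to sketch. The following is a direct transcription of the published SuperMemo 2 scheduling rules; the field names are illustrative:

```typescript
// SM-2 spaced-repetition scheduling (SuperMemo 2).
// quality: 0-5 self-assessed recall grade after each review.
interface CardSchedule {
  repetitions: number;  // consecutive successful reviews
  easeFactor: number;   // floored at 1.3
  intervalDays: number; // days until the next review
}

function sm2(prev: CardSchedule, quality: number): CardSchedule {
  if (quality < 3) {
    // Failed recall: restart the repetition count, review again tomorrow
    return { repetitions: 0, easeFactor: prev.easeFactor, intervalDays: 1 };
  }
  const repetitions = prev.repetitions + 1;
  // Fixed early intervals (1 day, then 6), then geometric growth
  const intervalDays =
    repetitions === 1 ? 1 :
    repetitions === 2 ? 6 :
    Math.round(prev.intervalDays * prev.easeFactor);
  // Ease-factor update from the SM-2 formula, floored at 1.3
  const easeFactor = Math.max(
    1.3,
    prev.easeFactor + (0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
  );
  return { repetitions, easeFactor, intervalDays };
}
```

Cards you recall easily drift toward long intervals, while difficult cards reset to short ones, which is what concentrates study time on weak areas.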
Conclusion
Building a production-ready SaaS platform in one month proved what is possible when AI augments human expertise. Claude AI served as an expert pair programmer, documentation co-author, and architectural sounding board, compressing what would traditionally be a 3-4 month project into just four weeks.
The key insight: AI doesn't replace engineering judgement; it amplifies engineering velocity.
AI handled the boilerplate code and research, freeing me to focus on what matters: architecture, user experience, and business value.
🎯 Ready to Try It?
Experience FlashStack
Everything discussed in this case study is live and free to use.
Interested in how AI can accelerate your software development projects? Contact Smarter Business Tech to discuss your requirements.