Mars City Cloud Infrastructure
This page describes a recommended cloud architecture for production deployments of Digital Asset Manager Pro ("Mars City"). It is intended as a starting point and should be adapted to your cloud provider and compliance needs.
High-level topology
- VPC with public and private subnets across multiple availability zones.
- Public-facing load balancer (ALB / Cloud Load Balancer) handling TLS termination.
- Kubernetes cluster (EKS/GKE/AKS) or managed container service for the backend, workers, and auxiliary services.
- Managed relational database (RDS / Cloud SQL) for metadata and user accounts.
- Object storage (S3 / GCS / Blob Storage) for binary assets and backups.
- Redis or managed queue for background job coordination.
Security & Secrets
- Use a secrets manager (AWS Secrets Manager, Google Secret Manager, HashiCorp Vault) for DB credentials and `FLASK_SECRET`.
- Enforce HTTPS only (redirect HTTP to HTTPS) and use HSTS.
- Use IAM roles/service accounts with least privilege for storage and DB access.
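As a concrete illustration of the secrets guidance above, here is a minimal sketch of how the backend could resolve `FLASK_SECRET` at startup. The function name `load_flask_secret` and the `fetch_remote` hook are hypothetical (they are not part of `server.py`); the idea is that the orchestrator injects the secret into the environment from the secrets manager, with an optional fallback fetch and a hard failure rather than a weak default.

```python
import os

def load_flask_secret(fetch_remote=None):
    """Return the Flask session secret.

    Prefer FLASK_SECRET from the environment (injected by the
    orchestrator from the secrets manager). Optionally fall back to a
    caller-supplied fetch function (e.g. a wrapper around a Secrets
    Manager client). Refuse to start without a secret rather than
    silently defaulting to a weak one.
    """
    secret = os.environ.get("FLASK_SECRET")
    if not secret and fetch_remote is not None:
        # fetch_remote is a hypothetical hook: name -> secret value
        secret = fetch_remote("FLASK_SECRET")
    if not secret:
        raise RuntimeError("FLASK_SECRET is not configured")
    return secret
```

Failing fast here is deliberate: a Flask app that falls back to a hard-coded secret will happily sign session cookies that anyone can forge.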
Scalability & Reliability
- Autoscaling for worker and API pods based on CPU/queue-depth.
- Multi-AZ DB with automated backups and point-in-time recovery enabled.
- Use CDN for serving static UI assets and offload bandwidth from origin.
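To make the queue-depth autoscaling bullet concrete, the sketch below shows the replica calculation a Kubernetes HPA driven by an external queue-depth metric would effectively perform. `desired_replicas` and the `jobs_per_worker` target are illustrative names, not part of this repo.

```python
import math

def desired_replicas(queue_depth, jobs_per_worker=10,
                     min_replicas=1, max_replicas=20):
    """Scale workers proportionally to backlog, clamped to a safe range.

    Mirrors the HPA formula for an external metric:
    ceil(current_metric / target_per_replica), bounded by min/max.
    """
    wanted = math.ceil(queue_depth / jobs_per_worker)
    return max(min_replicas, min(max_replicas, wanted))
```

Clamping matters in both directions: a floor keeps at least one worker alive for latency, and a ceiling protects the database and budget from a runaway backlog.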
Observability
- Centralized logging (Cloud Logging, ELK) with structured logs from the backend and workers.
- Metrics + alerting (Prometheus/Grafana or cloud-managed metrics + alerting).
- Tracing (OpenTelemetry) for request flows across frontend/backend/workers.
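"Structured logs" in the list above means one machine-parseable JSON object per line, so Cloud Logging or ELK can index fields without regex parsing. A minimal sketch using only the standard library (the `JsonFormatter` class is illustrative; the current `server.py` does not ship one):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line for Cloud Logging / ELK ingestion."""

    def format(self, record):
        payload = {
            "severity": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        return json.dumps(payload)

def make_logger(name="dam"):
    """Attach the JSON formatter to a stderr handler on the named logger."""
    handler = logging.StreamHandler()
    handler.setFormatter(JsonFormatter())
    logger = logging.getLogger(name)
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger
```

In production you would typically add request IDs and trace context to the payload so log lines correlate with OpenTelemetry spans.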
CI/CD and Deployment
- Use a CI pipeline (GitHub Actions / GitLab CI / Cloud Build) to run tests and build artifacts.
- Deploy to Kubernetes via manifests / Helm / GitOps (ArgoCD, Flux).
- Automate the MkDocs build and publish for the documentation site; if the wiki must remain protected, deploy the docs behind the same auth layer.
Backups & DR
- Regular database backups and retention policies for object storage.
- Periodic verification of backup restores.
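"Periodic verification of backup restores" can start very cheaply. Since this app uses SQLite locally, here is a sketch of a row-count comparison between a source DB and a restored copy; `verify_restore` is a hypothetical helper, and a real DR drill would also check schema and sample content, not just counts.

```python
import sqlite3

def verify_restore(source_path, restored_path, tables=("users", "assets")):
    """Sanity-check a restored DB copy against the source.

    Compares row counts per table. This will not catch subtle
    corruption, but it catches empty or partial restores immediately.
    """
    src = sqlite3.connect(source_path)
    dst = sqlite3.connect(restored_path)
    try:
        for table in tables:
            query = f"SELECT COUNT(*) FROM {table}"  # table names from a trusted list
            if src.execute(query).fetchone()[0] != dst.execute(query).fetchone()[0]:
                return False
        return True
    finally:
        src.close()
        dst.close()
```

Run this from a scheduled job right after each restore test, and alert on a `False` result.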
Cost & Governance
- Tag resources for cost allocation.
- Implement resource quotas and alerts for unexpected spend.
Next steps
- Add provider-specific deployment instructions (AWS/GCP/Azure) and example Terraform or Helm charts.
Developer mapping & local dev checklist
This section maps the high-level architecture to local files, runtime flows, and common development tasks so you can plan and track work.
What the app does (quick)
- Serves a static frontend (multiple pre-built pages in the repo) that calls a small Flask backend (`server.py`) for authentication and asset metadata.
- Stores users, settings, assets, and provider tokens in a local SQLite DB (`users.db`). The DB schema lives in `init_db.py`.
- Assets can be created via JSON, uploaded via multipart (`/api/assets/upload`), and optionally stored in GCS or other external storage via connectors in `storage/`.
- The documentation is built with MkDocs (`mkdocs.yml`) and served from the `site/` folder behind the protected `/wiki/` route.
Key files / components
- `server.py` — main Flask app and API endpoints (login, assets CRUD, upload, auth callback, wiki serving).
- `init_db.py` — DB schema and seeding (users, assets, settings, secrets).
- `add_user.py` — helper to add/update hashed passwords in `users.db`.
- `storage/` — connector scaffolding for Google Drive, OneDrive, and Google Cloud Storage (`connectors.py`) and DB helpers in `storage/db.py`.
- `uploads/` — (local) directory for temporary uploaded files (used when GCS is not configured).
- `docs/` and `mkdocs.yml` — documentation source; `site/` is the built site.
- `tests/` — pytest tests that validate endpoints and flows (runs against a local dev server at `http://127.0.0.1:8000`).
Runtime flow (asset upload example)
- User authenticates via `/sign_in.html` → `/api/login` (session cookie created).
- Frontend submits a multipart upload to `/api/assets/upload`.
- Server validates file size / MIME type, stores locally or uploads to GCS (if `GCS_BUCKET` is set), records an `assets` row, and returns an asset object with `url` (signed if GCS).
- Background workers (future) would generate thumbnails, transcode, and update metadata; currently, seeding and simple returns are included in `init_db.py` and `server.py`.
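The validation step in the flow above can be sketched as a standalone function. The exact checks in `server.py` may differ; `validate_upload`, the default limits, and the default MIME list here are assumptions chosen to show the shape of the size/MIME gate driven by `UPLOAD_MAX_BYTES` and `UPLOAD_ALLOWED_MIMETYPES`.

```python
import os

# Hypothetical defaults; the real server reads these env vars at startup.
MAX_BYTES = int(os.environ.get("UPLOAD_MAX_BYTES", 50 * 1024 * 1024))
ALLOWED_MIMETYPES = set(
    os.environ.get(
        "UPLOAD_ALLOWED_MIMETYPES", "image/png,image/jpeg,application/pdf"
    ).split(",")
)

def validate_upload(size_bytes, mimetype):
    """Return (ok, error) for an incoming multipart upload part."""
    if size_bytes > MAX_BYTES:
        return False, "file too large"
    if mimetype not in ALLOWED_MIMETYPES:
        return False, f"mimetype {mimetype!r} not allowed"
    return True, None
```

Rejecting before writing anything to `uploads/` or GCS keeps a bad request from consuming storage or a signed-URL round trip.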
Local dev commands (quick)
```shell
cd "d:\WebDev\DigitalAssetMgrPro\stitch_digital_asset_manager_pro"
python -m pip install -r requirements.txt
python init_db.py                          # one-time DB init + seed data
python server.py                           # run dev server (127.0.0.1:8000)
python -m mkdocs serve -a 127.0.0.1:8001   # live docs (optional)
python -m pytest                           # run tests (server should be running)
```
Things to map & test when extending the app
- External storage: create adapters (S3/GCS) and tests that mock provider calls; store short-lived credentials in the `secrets` table via `storage/db.py`.
- Background processing: add a worker process (RQ/Celery) and a queue (Redis) for long-running tasks (thumbnails, ingestion).
- Auth: consider migrating to JWT/session cookies consistently and protect admin APIs.
- Docs: keep `docs/` and `site/` in sync; update the `mkdocs.yml` nav when adding pages (pages in `site/` are served behind `/wiki/`).
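For the "short-lived credentials in the `secrets` table" item above, here is a sketch of the store/fetch pattern with expiry handling. The schema and the helper names (`store_token`, `get_token`) are hypothetical; the real table is created by `init_db.py` and accessed through `storage/db.py`.

```python
import sqlite3
import time

def store_token(db_path, provider, token, ttl_seconds=3600):
    """Store a short-lived provider credential with an absolute expiry."""
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS secrets "
        "(provider TEXT PRIMARY KEY, token TEXT, expires_at REAL)"
    )
    con.execute(
        "INSERT OR REPLACE INTO secrets VALUES (?, ?, ?)",
        (provider, token, time.time() + ttl_seconds),
    )
    con.commit()
    con.close()

def get_token(db_path, provider):
    """Return a stored token, or None if it is missing or expired."""
    con = sqlite3.connect(db_path)
    row = con.execute(
        "SELECT token, expires_at FROM secrets WHERE provider = ?", (provider,)
    ).fetchone()
    con.close()
    if row is None or row[1] < time.time():
        return None
    return row[0]
```

Treating an expired token the same as a missing one pushes callers toward a single "refresh on None" code path, which is also where provider-mocking tests should concentrate.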
Quick troubleshooting tips
- If `/wiki/` redirects to `/sign_in.html`, you need an authenticated session or a seeded user (run `init_db.py` and use `admin@example.com` / `password123`).
- Check the server logs — `server.py` prints helpful debug lines for `/wiki/` requests (use `?_debug=1` on wiki paths to get internal checks).
- If uploads fail, ensure the env vars `UPLOAD_MAX_BYTES`, `UPLOAD_ALLOWED_MIMETYPES`, and `GCS_BUCKET` (if using GCS) are set appropriately.
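A tiny preflight check can turn the env-var tip above into an actionable error message before the server starts. `preflight` and the required/optional split are assumptions for illustration, not existing code in this repo.

```python
import os

# Hypothetical grouping: vars the upload path always needs vs. GCS-only.
REQUIRED = ["UPLOAD_MAX_BYTES", "UPLOAD_ALLOWED_MIMETYPES"]
OPTIONAL = ["GCS_BUCKET"]  # only needed when storing assets in GCS

def preflight(env=None):
    """Return the list of required upload-related env vars that are unset."""
    if env is None:
        env = os.environ
    return [name for name in REQUIRED if not env.get(name)]
```

Calling this at the top of the server's entry point (and logging the result) makes "uploads fail" diagnosable from the first log line instead of from a 500 response.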