Compare commits

...

14 Commits

Author SHA1 Message Date
Trey T
4ec4bbbfe8 Auto-seed lookups + admin + templates on first API boot
Some checks failed
Backend CI / Test (push) Has been cancelled
Backend CI / Contract Tests (push) Has been cancelled
Backend CI / Lint (push) Has been cancelled
Backend CI / Secret Scanning (push) Has been cancelled
Backend CI / Build (push) Has been cancelled
Add a data_migration that runs seeds/001_lookups.sql,
seeds/003_admin_user.sql, and seeds/003_task_templates.sql exactly
once on startup and invalidates the Redis seeded_data cache afterwards
so /api/static_data/ returns fresh results. Removes the need to
remember `./dev.sh seed-all`; the data_migrations tracking row prevents
re-runs, and each INSERT uses ON CONFLICT DO UPDATE so re-execution is
safe.
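A minimal sketch of the run-once guard described above: a seed file executes only when no tracking row exists yet, and re-execution stays safe because the INSERTs use ON CONFLICT DO UPDATE. The names (`pendingSeeds`, the seed-file list shape) are illustrative, not the project's actual API.

```go
package main

import "fmt"

// seedFiles mirrors the three seed files named in the commit message.
var seedFiles = []string{
	"seeds/001_lookups.sql",
	"seeds/003_admin_user.sql",
	"seeds/003_task_templates.sql",
}

// pendingSeeds returns the seed files that have no data_migrations
// tracking row yet; on a second boot this is empty, so nothing re-runs.
func pendingSeeds(applied map[string]bool) []string {
	var out []string
	for _, f := range seedFiles {
		if !applied[f] {
			out = append(out, f)
		}
	}
	return out
}

func main() {
	// First boot: no tracking rows, all three seeds run.
	fmt.Println(len(pendingSeeds(map[string]bool{})))
	// Later boots: tracking rows exist, nothing runs.
	fmt.Println(len(pendingSeeds(map[string]bool{
		"seeds/001_lookups.sql":       true,
		"seeds/003_admin_user.sql":    true,
		"seeds/003_task_templates.sql": true,
	})))
}
```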
2026-04-15 08:37:55 -05:00
Trey T
58e6997eee Fix migration numbering collision and bump Dockerfile to Go 1.25
The `000016_task_template_id` and `000017_drop_task_template_regions_join`
migrations introduced on Gitea collided with the existing unpadded 016/017
migrations (authtoken_created_at, fk_indexes). Renamed them to 021/022 so
they extend the shipped sequence instead of replacing real migrations.
Also removed the padded 000012-000015 files which were duplicate content
of the shipped 012-015 unpadded migrations.

Dockerfile builder image bumped from golang:1.24-alpine to 1.25-alpine to
match go.mod's `go 1.25` directive.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 16:17:23 -05:00
Trey T
237c6b84ee Onboarding: template backlink, bulk-create endpoint, climate-region scoring
Clients that send users through a multi-task onboarding step no longer
loop N POST /api/tasks/ calls and no longer create "orphan" tasks with
no reference to the TaskTemplate they came from.

Task model
- New task_template_id column + GORM FK (migration 000016)
- CreateTaskRequest.template_id, TaskResponse.template_id
- task_service.CreateTask persists the backlink

Bulk endpoint
- POST /api/tasks/bulk/ — 1-50 tasks in a single transaction,
  returns every created row + TotalSummary. Single residence access
  check, per-entry residence_id is overridden with batch value
- task_handler.BulkCreateTasks + task_service.BulkCreateTasks using
  db.Transaction; task_repo.CreateTx + FindByIDTx helpers
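The batch rules above can be sketched as a small pure function: reject anything outside the 1-50 cap, and stamp the batch-level residence_id onto every entry. Names (`prepareBulk`, `taskEntry`) are hypothetical, not the service's actual types.

```go
package main

import (
	"errors"
	"fmt"
)

// taskEntry is an illustrative stand-in for one bulk-create request entry.
type taskEntry struct {
	Title       string
	ResidenceID uint
}

// prepareBulk enforces the 1-50 cap and overrides each entry's
// residence_id with the batch value, matching the single residence
// access check described in the commit message.
func prepareBulk(batchResidence uint, entries []taskEntry) ([]taskEntry, error) {
	if len(entries) < 1 || len(entries) > 50 {
		return nil, errors.New("bulk create accepts 1-50 tasks")
	}
	out := make([]taskEntry, len(entries))
	for i, e := range entries {
		e.ResidenceID = batchResidence // per-entry value is ignored
		out[i] = e
	}
	return out, nil
}

func main() {
	got, err := prepareBulk(7, []taskEntry{{Title: "Change filter", ResidenceID: 99}})
	fmt.Println(err == nil, got[0].ResidenceID)
	_, err = prepareBulk(7, nil)
	fmt.Println(err != nil)
}
```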

Climate-region scoring
- templateConditions gains ClimateRegionID; suggestion_service scores
  residence.PostalCode -> ZipToState -> GetClimateRegionIDByState against
  the template's conditions JSON (no penalty on mismatch / unknown ZIP)
- regionMatchBonus 0.35, totalProfileFields 14 -> 15
- Standalone GET /api/tasks/templates/by-region/ removed; legacy
  task_tasktemplate_regions many-to-many dropped (migration 000017).
  Region affinity now lives entirely in the template's conditions JSON
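The region rule above reduces to a small scoring function: add regionMatchBonus only on a confirmed match, and never penalise a mismatch or an unresolvable ZIP. A sketch under that reading (region IDs and the zero-means-unknown convention are assumptions):

```go
package main

import "fmt"

const regionMatchBonus = 0.35

// regionScore returns the climate-region contribution to a template's
// suggestion score. A zero ID means the template has no region condition
// or the residence ZIP could not be resolved; neither case penalises.
func regionScore(templateRegion, residenceRegion uint) float64 {
	if templateRegion == 0 || residenceRegion == 0 {
		return 0
	}
	if templateRegion == residenceRegion {
		return regionMatchBonus
	}
	return 0 // mismatch: no penalty
}

func main() {
	fmt.Println(regionScore(3, 3)) // match
	fmt.Println(regionScore(3, 5)) // mismatch
	fmt.Println(regionScore(3, 0)) // unknown ZIP
}
```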

Tests
- +11 cases across task_service_test, task_handler_test, suggestion_
  service_test: template_id persistence, bulk rollback + cap + auth,
  region match / mismatch / no-ZIP / unknown-ZIP / stacks-with-others

Docs
- docs/openapi.yaml: /tasks/bulk/ + BulkCreateTasks schemas, template_id
  on TaskResponse + CreateTaskRequest, /templates/by-region/ removed

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 15:23:57 -05:00
Trey T
33eee812b6 Harden prod deploy: versioned secrets, healthchecks, migration lock, dry-run
Swarm stack
- Resource limits on all services, stop_grace_period 60s on api/worker/admin
- Dozzle bound to manager loopback only (ssh -L required for access)
- Worker health server on :6060, admin /api/health endpoint
- Redis 200M LRU cap, B2/S3 env vars wired through to api service

Deploy script
- DRY_RUN=1 prints plan + exits
- Auto-rollback on failed healthcheck, docker logout at end
- Versioned-secret pruning keeps last SECRET_KEEP_VERSIONS (default 3)
- PUSH_LATEST_TAG default flipped to false
- B2 all-or-none validation before deploy

Code
- cmd/api takes pg_advisory_lock on a dedicated connection before
  AutoMigrate, serialising boot-time migrations across replicas
- cmd/worker exposes an HTTP /health endpoint with graceful shutdown
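A sketch of the advisory-lock pattern above, assuming the lock key is derived by hashing a fixed name (the real key derivation may differ). The SQL calls are shown as comments since they need a live connection:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// lockKey derives a stable int64 advisory-lock key from a name, so every
// replica computes the same key and pg_advisory_lock serialises them.
func lockKey(name string) int64 {
	h := fnv.New64a()
	_, _ = h.Write([]byte(name))
	return int64(h.Sum64())
}

func main() {
	key := lockKey("honeydue:migrations") // hypothetical lock name
	// On a dedicated *sql.Conn, before AutoMigrate:
	//   conn.ExecContext(ctx, "SELECT pg_advisory_lock($1)", key)
	//   ... run migrations ...
	//   conn.ExecContext(ctx, "SELECT pg_advisory_unlock($1)", key)
	fmt.Println(key == lockKey("honeydue:migrations")) // deterministic across replicas
}
```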

Docs
- deploy/DEPLOYING.md: step-by-step walkthrough for a real deploy
- deploy/shit_deploy_cant_do.md: manual prerequisites + recurring ops
- deploy/README.md updated with storage toggle, worker-replica caveat,
  multi-arch recipe, connection-pool tuning, renumbered sections

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-14 15:22:43 -05:00
Trey T
ca818e8478 Merge branch 'master' of github.com:akatreyt/MyCribAPI_GO
2026-04-01 20:45:43 -05:00
Trey T
bec880886b Coverage priorities 1-5: test pure functions, extract interfaces, mock-based handler tests
- Priority 1: Test NewSendEmailTask + NewSendPushTask (5 tests)
- Priority 2: Test customHTTPErrorHandler — all 15+ branches (21 tests)
- Priority 3: Extract Enqueuer interface + payload builders in worker pkg (5 tests)
- Priority 4: Extract ClassifyFile/ComputeRelPath in migrate-encrypt (6 tests)
- Priority 5: Define Handler interfaces, refactor to accept them, mock-based tests (14 tests)
- Fix .gitignore: /worker instead of worker to stop ignoring internal/worker/

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-01 20:30:09 -05:00
Trey T
2e10822e5a Add S3-compatible storage backend (B2, MinIO, AWS S3)
Introduces a StorageBackend interface with local filesystem and S3
implementations. The StorageService delegates raw I/O to the backend
while keeping validation, encryption, and URL generation unchanged.

Backend selection is config-driven: set B2_ENDPOINT + B2_KEY_ID +
B2_APP_KEY + B2_BUCKET_NAME for S3 mode, or STORAGE_UPLOAD_DIR for
local mode. STORAGE_USE_SSL=false for in-cluster MinIO (HTTP).

All existing tests pass unchanged — the local backend preserves
identical behavior to the previous direct-filesystem implementation.
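The config-driven selection above can be sketched as an all-or-none check on the four B2_* values, falling back to local mode when STORAGE_UPLOAD_DIR is set. Field and function names here mirror the env vars, not the project's actual config struct:

```go
package main

import "fmt"

// storageConfig is an illustrative stand-in for the storage section of
// the app config; fields mirror the env vars named in the commit message.
type storageConfig struct {
	B2Endpoint, B2KeyID, B2AppKey, B2Bucket string
	UploadDir                               string
}

// isS3 requires all four S3 values, matching "all-or-none" selection.
func (c storageConfig) isS3() bool {
	return c.B2Endpoint != "" && c.B2KeyID != "" && c.B2AppKey != "" && c.B2Bucket != ""
}

func backend(c storageConfig) string {
	switch {
	case c.isS3():
		return "s3"
	case c.UploadDir != "":
		return "local"
	default:
		return "disabled"
	}
}

func main() {
	fmt.Println(backend(storageConfig{B2Endpoint: "e", B2KeyID: "k", B2AppKey: "a", B2Bucket: "b"}))
	fmt.Println(backend(storageConfig{UploadDir: "/uploads"}))
}
```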

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-30 21:31:24 -05:00
Trey T
34553f3bec Add K3s dev deployment setup for single-node VPS
Mirrors the prod deploy-k3s/ setup but runs all services in-cluster
on a single node: PostgreSQL (replaces Neon), MinIO S3-compatible
storage (replaces B2), Redis, API, worker, and admin.

Includes fully automated setup scripts (00-init through 04-verify),
server hardening (SSH, fail2ban, ufw), Let's Encrypt TLS via Traefik,
network policies, RBAC, and security contexts matching prod.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-30 21:30:39 -05:00
Trey T
00fd674b56 Remove dead climate region code from suggestion engine
Suggestion engine now purely uses home profile features (heating,
cooling, pool, etc.) for template matching. Climate region field
and matching block removed — ZIP code is no longer collected.
2026-03-30 11:19:04 -05:00
Trey T
cb7080c460 Smart onboarding: residence home profile + suggestion engine
14 new optional residence fields (heating, cooling, water heater, roof,
pool, sprinkler, septic, fireplace, garage, basement, attic, exterior,
flooring, landscaping) with JSONB conditions on templates.

Suggestion engine scores templates against home profile: string match
+0.25, bool +0.3, property type +0.15, universal base 0.3. Graceful
degradation from minimal to full profile info.

GET /api/tasks/suggestions/?residence_id=X returns ranked templates.
54 template conditions across 44 templates in seed data.
8 suggestion service tests.
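The scoring weights above can be sketched as a simple sum: universal base 0.3, plus 0.25 per matching string field, 0.3 per matching bool field, and 0.15 for a property-type match. The function shape is illustrative; the real engine evaluates JSONB conditions:

```go
package main

import "fmt"

// scoreTemplate applies the stated weights to already-counted matches.
func scoreTemplate(base float64, stringMatches, boolMatches int, propertyTypeMatch bool) float64 {
	s := base + 0.25*float64(stringMatches) + 0.3*float64(boolMatches)
	if propertyTypeMatch {
		s += 0.15
	}
	return s
}

func main() {
	// One string match (heating), one bool match (pool), right property type.
	fmt.Printf("%.2f\n", scoreTemplate(0.3, 1, 1, true))
	// Minimal profile: only the universal base applies.
	fmt.Printf("%.2f\n", scoreTemplate(0.3, 0, 0, false))
}
```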
2026-03-30 09:02:03 -05:00
Trey T
4c9a818bd9 Comprehensive TDD test suite for task logic — ~80 new tests
Predicates (20 cases): IsRecurring, IsOneTime, IsDueSoon,
HasCompletions, GetCompletionCount, IsUpcoming edge cases

Task creation (10): NextDueDate initialization, all frequency types,
past dates, all optional fields, access validation

One-time completion (8): NextDueDate→nil, InProgress reset,
notes/cost/rating, double completion, backdated completed_at

Recurring completion (16): Daily/Weekly/BiWeekly/Monthly/Quarterly/
Yearly/Custom frequencies, late/early completion timing, multiple
sequential completions, no-original-DueDate, CompletedFromColumn capture

QuickComplete (5): one-time, recurring, widget notes, 404, 403

State transitions (10): Cancel→Complete, Archive→Complete, InProgress
cycles, recurring full lifecycle, Archive→Unarchive column restore

Kanban column priority (7): verify chain priority order for all columns

Optimistic locking (7): correct/stale version, conflict on complete/
cancel/archive/mark-in-progress, rollback verification

Deletion (5): single/multi/middle completion deletion, NextDueDate
recalculation, InProgress restore behavior documented

Edge cases (9): boundary dates, late/early recurring, nil/zero frequency
days, custom intervals, version conflicts

Handler validation (4): rating bounds, title/description length,
custom interval validation

All 679 tests pass.
2026-03-26 17:36:50 -05:00
Trey T
7f0300cc95 Add custom_interval_days to TaskResponse DTO
Field existed in Task model but was missing from API response.
Aligns Go API contract with KMM mobile model.
2026-03-26 17:06:34 -05:00
Trey T
6df27f203b Add rate limit response headers (X-RateLimit-*, Retry-After)
Custom rate limiter replacing Echo built-in, with per-IP token bucket.
Every response includes X-RateLimit-Limit, Remaining, Reset headers.
429 responses additionally include Retry-After (seconds).
CORS updated to expose rate limit headers to mobile clients.
4 unit tests for header behavior and per-IP isolation.
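A minimal per-IP token-bucket sketch of the limiter described above, with refill omitted for brevity (the real limiter refills over time and derives Retry-After from the refill schedule; all names here are hypothetical):

```go
package main

import "fmt"

type bucket struct{ tokens int }

type limiter struct {
	byIP  map[string]*bucket
	limit int
}

func newLimiter(limit int) *limiter {
	return &limiter{byIP: map[string]*bucket{}, limit: limit}
}

// allow consumes one token for ip and reports the remaining count,
// which would populate the X-RateLimit-Remaining header; exhaustion
// maps to a 429 response with Retry-After.
func (l *limiter) allow(ip string) (ok bool, remaining int) {
	b, found := l.byIP[ip]
	if !found {
		b = &bucket{tokens: l.limit}
		l.byIP[ip] = b
	}
	if b.tokens == 0 {
		return false, 0
	}
	b.tokens--
	return true, b.tokens
}

func main() {
	l := newLimiter(2)
	ok, rem := l.allow("10.0.0.1")
	fmt.Println(ok, rem)
	l.allow("10.0.0.1")
	ok, _ = l.allow("10.0.0.1")
	fmt.Println(ok)
	// Per-IP isolation: a different client still has a full bucket.
	ok, rem = l.allow("10.0.0.2")
	fmt.Println(ok, rem)
}
```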
2026-03-26 14:36:48 -05:00
Trey T
b679f28e55 Production hardening: security, resilience, observability, and compliance
Password complexity: custom validator requiring uppercase, lowercase, digit (min 8 chars)
Token expiry: 90-day token lifetime with refresh endpoint (60-90 day renewal window)
Health check: /api/health/ now pings Postgres + Redis, returns 503 on failure
Audit logging: async audit_log table for auth events (login, register, delete, etc.)
Circuit breaker: APNs/FCM push sends wrapped with 5-failure threshold, 30s recovery
FK indexes: 27 missing foreign key indexes across all tables (migration 017)
CSP header: default-src 'none'; frame-ancestors 'none'
Gzip compression: level 5 with media endpoint skipper
Prometheus metrics: /metrics endpoint using existing monitoring service
External timeouts: 15s push, 30s SMTP, context timeouts on all external calls

Migrations: 016 (token created_at), 017 (FK indexes), 018 (audit_log)
Tests: circuit breaker (15), audit service (8), token refresh (7), health (4),
       middleware expiry (5), validator (new)
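The password rule above (min 8 chars, with uppercase, lowercase, and a digit) admits a compact check; this is a plausible implementation, not the project's actual validator:

```go
package main

import (
	"fmt"
	"unicode"
)

// validPassword enforces the stated complexity rule: length >= 8 plus
// at least one uppercase letter, one lowercase letter, and one digit.
func validPassword(p string) bool {
	if len(p) < 8 {
		return false
	}
	var upper, lower, digit bool
	for _, r := range p {
		switch {
		case unicode.IsUpper(r):
			upper = true
		case unicode.IsLower(r):
			lower = true
		case unicode.IsDigit(r):
			digit = true
		}
	}
	return upper && lower && digit
}

func main() {
	fmt.Println(validPassword("Secur3pass")) // all classes present
	fmt.Println(validPassword("short1A"))    // only 7 characters
	fmt.Println(validPassword("alllower1"))  // no uppercase
}
```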
2026-03-26 14:05:28 -05:00
222 changed files with 34154 additions and 1153 deletions

View File

@@ -12,7 +12,9 @@
 "Bash(git add:*)",
 "Bash(docker ps:*)",
 "Bash(git commit:*)",
-"Bash(git push:*)"
+"Bash(git push:*)",
+"Bash(docker info:*)",
+"Bash(curl:*)"
 ]
 },
 "enableAllProjectMcpServers": true,

54
.dockerignore Normal file
View File

@@ -0,0 +1,54 @@
# Git
.git
.gitignore
.gitattributes
.github
.gitea
# Deploy inputs (never bake into images)
deploy/*.env
deploy/secrets/*.txt
deploy/secrets/*.p8
deploy/scripts/
# Local env files
.env
.env.*
!.env.example
# Node (admin)
admin/node_modules
admin/.next
admin/out
admin/.turbo
admin/.vercel
admin/npm-debug.log*
# Go build artifacts
bin/
dist/
tmp/
*.test
*.out
coverage.out
coverage.html
# Tooling / editor
.vscode
.idea
*.swp
*.swo
.DS_Store
# Logs
*.log
logs/
# Tests / docs (not needed at runtime)
docs/
*.md
!README.md
# CI/compose locals (not needed for swarm image build)
docker-compose*.yml
Makefile

2
.gitignore vendored
View File

@@ -6,7 +6,7 @@
 # Binaries
 bin/
 api
-worker
+/worker
 /admin
 !admin/
 *.exe

View File

@@ -16,7 +16,7 @@ COPY admin/ .
 RUN npm run build
 # Go build stage
-FROM --platform=$BUILDPLATFORM golang:1.24-alpine AS builder
+FROM --platform=$BUILDPLATFORM golang:1.25-alpine AS builder
 ARG TARGETARCH
 # Install build dependencies

View File

@@ -65,8 +65,10 @@ func main() {
 log.Error().Err(dbErr).Msg("Failed to connect to database - API will start but database operations will fail")
 } else {
 defer database.Close()
-// Run database migrations only if connected
-if err := database.Migrate(); err != nil {
+// Run database migrations only if connected.
+// MigrateWithLock serialises parallel replica starts via a Postgres
+// advisory lock so concurrent AutoMigrate calls don't race on DDL.
+if err := database.MigrateWithLock(); err != nil {
 log.Error().Err(err).Msg("Failed to run database migrations")
 }
 }
@@ -79,6 +81,13 @@ func main() {
 cache = nil
 } else {
 defer cache.Close()
+if database.SeedInitialDataApplied {
+if err := cache.InvalidateSeededData(context.Background()); err != nil {
+log.Warn().Err(err).Msg("Failed to invalidate seeded data cache after initial seed")
+} else {
+log.Info().Msg("Invalidated seeded_data cache after initial seed migration")
+}
+}
 }
 // Initialize monitoring service (if Redis is available)
@@ -122,19 +131,13 @@ func main() {
 Msg("Email service not configured - emails will not be sent")
 }
-// Initialize storage service for file uploads
+// Initialize storage service for file uploads (local filesystem or S3-compatible)
 var storageService *services.StorageService
-if cfg.Storage.UploadDir != "" {
+if cfg.Storage.UploadDir != "" || cfg.Storage.IsS3() {
 storageService, err = services.NewStorageService(&cfg.Storage)
 if err != nil {
 log.Warn().Err(err).Msg("Failed to initialize storage service - uploads disabled")
 } else {
-log.Info().
-Str("upload_dir", cfg.Storage.UploadDir).
-Str("base_url", cfg.Storage.BaseURL).
-Int64("max_file_size", cfg.Storage.MaxFileSize).
-Msg("Storage service initialized")
 // Initialize file encryption at rest if configured
 if cfg.Storage.EncryptionKey != "" {
 encSvc, encErr := services.NewEncryptionService(cfg.Storage.EncryptionKey)

View File

@@ -0,0 +1,61 @@
package main
import (
"testing"
"time"
)
func TestClassifyCompletion_CompletedAfterDue_ReturnsOverdue(t *testing.T) {
due := time.Date(2025, 6, 1, 0, 0, 0, 0, time.UTC)
completed := time.Date(2025, 6, 5, 14, 0, 0, 0, time.UTC)
got := classifyCompletion(completed, due, 7)
if got != "overdue_tasks" {
t.Errorf("got %q, want overdue_tasks", got)
}
}
func TestClassifyCompletion_CompletedOnDueDate_ReturnsDueSoon(t *testing.T) {
due := time.Date(2025, 6, 1, 0, 0, 0, 0, time.UTC)
completed := time.Date(2025, 6, 1, 10, 0, 0, 0, time.UTC)
got := classifyCompletion(completed, due, 7)
if got != "due_soon_tasks" {
t.Errorf("got %q, want due_soon_tasks", got)
}
}
func TestClassifyCompletion_CompletedWithinThreshold_ReturnsDueSoon(t *testing.T) {
due := time.Date(2025, 6, 10, 0, 0, 0, 0, time.UTC)
completed := time.Date(2025, 6, 5, 0, 0, 0, 0, time.UTC) // 5 days before due, threshold 7
got := classifyCompletion(completed, due, 7)
if got != "due_soon_tasks" {
t.Errorf("got %q, want due_soon_tasks", got)
}
}
func TestClassifyCompletion_CompletedAtExactThreshold_ReturnsDueSoon(t *testing.T) {
due := time.Date(2025, 6, 10, 0, 0, 0, 0, time.UTC)
completed := time.Date(2025, 6, 3, 0, 0, 0, 0, time.UTC) // exactly 7 days before due
got := classifyCompletion(completed, due, 7)
if got != "due_soon_tasks" {
t.Errorf("got %q, want due_soon_tasks", got)
}
}
func TestClassifyCompletion_CompletedBeyondThreshold_ReturnsUpcoming(t *testing.T) {
due := time.Date(2025, 6, 30, 0, 0, 0, 0, time.UTC)
completed := time.Date(2025, 6, 1, 0, 0, 0, 0, time.UTC) // 29 days before due, threshold 7
got := classifyCompletion(completed, due, 7)
if got != "upcoming_tasks" {
t.Errorf("got %q, want upcoming_tasks", got)
}
}
func TestClassifyCompletion_TimeNormalization_SameDayDifferentTimes(t *testing.T) {
due := time.Date(2025, 6, 1, 23, 59, 59, 0, time.UTC)
completed := time.Date(2025, 6, 1, 0, 0, 1, 0, time.UTC) // same day, different times
got := classifyCompletion(completed, due, 7)
// Same day → daysBefore == 0 → within threshold → due_soon
if got != "due_soon_tasks" {
t.Errorf("got %q, want due_soon_tasks", got)
}
}

View File

@@ -0,0 +1,50 @@
package main
import (
"path/filepath"
"strings"
)
// isEncrypted checks if a file path ends with .enc
func isEncrypted(path string) bool {
return strings.HasSuffix(path, ".enc")
}
// encryptedPath appends .enc to the file path.
func encryptedPath(path string) string {
return path + ".enc"
}
// shouldProcessFile returns true if the file should be encrypted.
func shouldProcessFile(isDir bool, path string) bool {
return !isDir && !isEncrypted(path)
}
// FileAction represents the decision about what to do with a file during encryption migration.
type FileAction int
const (
ActionSkipDir FileAction = iota // Directory, skip
ActionSkipEncrypted // Already encrypted, skip
ActionDryRun // Would encrypt (dry run mode)
ActionEncrypt // Should encrypt
)
// ClassifyFile determines what action to take for a file during the walk.
func ClassifyFile(isDir bool, path string, dryRun bool) FileAction {
if isDir {
return ActionSkipDir
}
if isEncrypted(path) {
return ActionSkipEncrypted
}
if dryRun {
return ActionDryRun
}
return ActionEncrypt
}
// ComputeRelPath computes the relative path from base to path.
func ComputeRelPath(base, path string) (string, error) {
return filepath.Rel(base, path)
}

View File

@@ -0,0 +1,96 @@
package main
import "testing"
func TestIsEncrypted_EncFile_True(t *testing.T) {
if !isEncrypted("photo.jpg.enc") {
t.Error("expected true for .enc file")
}
}
func TestIsEncrypted_PdfFile_False(t *testing.T) {
if isEncrypted("doc.pdf") {
t.Error("expected false for .pdf file")
}
}
func TestIsEncrypted_DotEncOnly_True(t *testing.T) {
if !isEncrypted(".enc") {
t.Error("expected true for '.enc'")
}
}
func TestEncryptedPath_AppendsDotEnc(t *testing.T) {
got := encryptedPath("uploads/photo.jpg")
want := "uploads/photo.jpg.enc"
if got != want {
t.Errorf("got %q, want %q", got, want)
}
}
func TestShouldProcessFile_RegularFile_True(t *testing.T) {
if !shouldProcessFile(false, "photo.jpg") {
t.Error("expected true for regular file")
}
}
func TestShouldProcessFile_Directory_False(t *testing.T) {
if shouldProcessFile(true, "uploads") {
t.Error("expected false for directory")
}
}
func TestShouldProcessFile_AlreadyEncrypted_False(t *testing.T) {
if shouldProcessFile(false, "photo.jpg.enc") {
t.Error("expected false for already encrypted file")
}
}
// --- ClassifyFile ---
func TestClassifyFile_Directory_SkipDir(t *testing.T) {
if got := ClassifyFile(true, "uploads", false); got != ActionSkipDir {
t.Errorf("got %d, want ActionSkipDir", got)
}
}
func TestClassifyFile_EncryptedFile_SkipEncrypted(t *testing.T) {
if got := ClassifyFile(false, "photo.jpg.enc", false); got != ActionSkipEncrypted {
t.Errorf("got %d, want ActionSkipEncrypted", got)
}
}
func TestClassifyFile_DryRun_DryRun(t *testing.T) {
if got := ClassifyFile(false, "photo.jpg", true); got != ActionDryRun {
t.Errorf("got %d, want ActionDryRun", got)
}
}
func TestClassifyFile_Normal_Encrypt(t *testing.T) {
if got := ClassifyFile(false, "photo.jpg", false); got != ActionEncrypt {
t.Errorf("got %d, want ActionEncrypt", got)
}
}
// --- ComputeRelPath ---
func TestComputeRelPath_Valid(t *testing.T) {
got, err := ComputeRelPath("/uploads", "/uploads/photo.jpg")
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if got != "photo.jpg" {
t.Errorf("got %q, want %q", got, "photo.jpg")
}
}
func TestComputeRelPath_NestedPath(t *testing.T) {
got, err := ComputeRelPath("/uploads", "/uploads/2024/01/photo.jpg")
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
want := "2024/01/photo.jpg"
if got != want {
t.Errorf("got %q, want %q", got, want)
}
}

View File

@@ -13,7 +13,6 @@ import (
 "flag"
 "os"
 "path/filepath"
-"strings"
 "time"

 "github.com/rs/zerolog"
@@ -87,13 +86,11 @@ func main() {
 return nil
 }
-// Skip directories
-if info.IsDir() {
+action := ClassifyFile(info.IsDir(), path, *dryRun)
+switch action {
+case ActionSkipDir:
 return nil
-}
-// Skip files already encrypted
-if strings.HasSuffix(path, ".enc") {
+case ActionSkipEncrypted:
 skipped++
 return nil
 }
@@ -101,14 +98,14 @@ func main() {
 totalFiles++
 // Compute the relative path from upload dir
-relPath, err := filepath.Rel(absUploadDir, path)
+relPath, err := ComputeRelPath(absUploadDir, path)
 if err != nil {
 log.Warn().Err(err).Str("path", path).Msg("Failed to compute relative path")
 errCount++
 return nil
 }
-if *dryRun {
+if action == ActionDryRun {
 log.Info().Str("file", relPath).Msg("[DRY RUN] Would encrypt")
 return nil
 }

View File

@@ -2,9 +2,11 @@ package main
 import (
 "context"
+"net/http"
 "os"
 "os/signal"
 "syscall"
+"time"

 "github.com/hibiken/asynq"
 "github.com/redis/go-redis/v9"
@@ -20,6 +22,8 @@ import (
 "github.com/treytartt/honeydue-api/pkg/utils"
 )

+const workerHealthAddr = ":6060"
+
 func main() {
// Initialize logger
utils.InitLogger(true)
@@ -188,6 +192,25 @@ func main() {
 quit := make(chan os.Signal, 1)
 signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
+// Health server (for container healthchecks; not externally published)
+healthMux := http.NewServeMux()
+healthMux.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
+w.Header().Set("Content-Type", "application/json")
+w.WriteHeader(http.StatusOK)
+_, _ = w.Write([]byte(`{"status":"ok"}`))
+})
+healthSrv := &http.Server{
+Addr: workerHealthAddr,
+Handler: healthMux,
+ReadHeaderTimeout: 5 * time.Second,
+}
+go func() {
+log.Info().Str("addr", workerHealthAddr).Msg("Health server listening")
+if err := healthSrv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
+log.Warn().Err(err).Msg("Health server terminated")
+}
+}()
// Start scheduler in goroutine
go func() {
if err := scheduler.Run(); err != nil {
@@ -207,6 +230,9 @@ func main() {
 log.Info().Msg("Shutting down worker...")
 // Graceful shutdown
+shutdownCtx, shutdownCancel := context.WithTimeout(context.Background(), 5*time.Second)
+defer shutdownCancel()
+_ = healthSrv.Shutdown(shutdownCtx)
 srv.Shutdown()
 scheduler.Shutdown()

24
cmd/worker/startup.go Normal file
View File

@@ -0,0 +1,24 @@
package main
import "github.com/treytartt/honeydue-api/internal/worker/jobs"
// queuePriorities returns the Asynq queue priority map.
func queuePriorities() map[string]int {
return map[string]int{
"critical": 6,
"default": 3,
"low": 1,
}
}
// allJobTypes returns all registered job type strings.
func allJobTypes() []string {
return []string{
jobs.TypeSmartReminder,
jobs.TypeDailyDigest,
jobs.TypeSendEmail,
jobs.TypeSendPush,
jobs.TypeOnboardingEmails,
jobs.TypeReminderLogCleanup,
}
}

View File

@@ -0,0 +1,45 @@
package main
import (
"testing"
)
func TestQueuePriorities_CriticalHighest(t *testing.T) {
p := queuePriorities()
if p["critical"] <= p["default"] || p["critical"] <= p["low"] {
t.Errorf("critical (%d) should be highest", p["critical"])
}
}
func TestQueuePriorities_ThreeQueues(t *testing.T) {
p := queuePriorities()
if len(p) != 3 {
t.Errorf("len = %d, want 3", len(p))
}
}
func TestAllJobTypes_Count(t *testing.T) {
types := allJobTypes()
if len(types) != 6 {
t.Errorf("len = %d, want 6", len(types))
}
}
func TestAllJobTypes_NoDuplicates(t *testing.T) {
types := allJobTypes()
seen := make(map[string]bool)
for _, typ := range types {
if seen[typ] {
t.Errorf("duplicate job type: %q", typ)
}
seen[typ] = true
}
}
func TestAllJobTypes_AllNonEmpty(t *testing.T) {
for _, typ := range allJobTypes() {
if typ == "" {
t.Error("found empty job type")
}
}
}

13
deploy-k3s-dev/.gitignore vendored Normal file
View File

@@ -0,0 +1,13 @@
# Single config file (contains tokens and credentials)
config.yaml
# Generated files
kubeconfig
# Secret files
secrets/*.txt
secrets/*.p8
secrets/*.pem
secrets/*.key
secrets/*.crt
!secrets/README.md

78
deploy-k3s-dev/README.md Normal file
View File

@@ -0,0 +1,78 @@
# honeyDue — K3s Dev Deployment
Single-node K3s dev environment that replicates the production setup with all services running locally.
**Architecture**: 1-node K3s, in-cluster PostgreSQL + Redis + MinIO (S3-compatible), Let's Encrypt TLS.
**Domains**: `devapi.myhoneydue.com`, `devadmin.myhoneydue.com`
---
## Quick Start
```bash
cd honeyDueAPI-go/deploy-k3s-dev
# 1. Fill in config
cp config.yaml.example config.yaml
# Edit config.yaml — fill in ALL empty values
# 2. Create secret files (see secrets/README.md)
echo "your-postgres-password" > secrets/postgres_password.txt
openssl rand -base64 48 > secrets/secret_key.txt
echo "your-smtp-password" > secrets/email_host_password.txt
echo "your-fcm-key" > secrets/fcm_server_key.txt
openssl rand -base64 24 > secrets/minio_root_password.txt
cp /path/to/AuthKey.p8 secrets/apns_auth_key.p8
# 3. Install K3s → Create secrets → Deploy
./scripts/01-setup-k3s.sh
./scripts/02-setup-secrets.sh
./scripts/03-deploy.sh
# 4. Point DNS at the server IP, then verify
./scripts/04-verify.sh
curl https://devapi.myhoneydue.com/api/health/
```
## Prod vs Dev
| Component | Prod (`deploy-k3s/`) | Dev (`deploy-k3s-dev/`) |
|---|---|---|
| Nodes | 3x CX33 (HA etcd) | 1 node (any VPS) |
| PostgreSQL | Neon (managed) | In-cluster container |
| File storage | Backblaze B2 | MinIO (S3-compatible) |
| Redis | In-cluster | In-cluster (identical) |
| TLS | Cloudflare origin cert | Let's Encrypt (or Cloudflare) |
| Replicas | api=3, worker=2 | All 1 |
| HPA/PDB | Enabled | Not deployed |
| Network policies | Same | Same + postgres/minio rules |
| Security contexts | Same | Same (except postgres) |
| Deploy workflow | Same scripts | Same scripts |
| Docker images | Same | Same |
## TLS Modes
**Let's Encrypt** (default): Traefik auto-provisions certs. Set `tls.letsencrypt_email` in config.yaml.
**Cloudflare**: Same as prod. Set `tls.mode: cloudflare`, add origin cert files to `secrets/`.
## Storage Note
MinIO provides the same S3-compatible API as Backblaze B2. The Go API uses the same env vars (`B2_KEY_ID`, `B2_APP_KEY`, `B2_BUCKET_NAME`, `B2_ENDPOINT`) — it connects to MinIO instead of B2 without code changes.
An additional env var `STORAGE_USE_SSL=false` is set since MinIO runs in-cluster over HTTP. If the Go storage service hardcodes HTTPS, it may need a small change to respect this flag.
## Monitoring
```bash
stern -n honeydue . # All logs
kubectl logs -n honeydue deploy/api -f # API logs
kubectl top pods -n honeydue # Resource usage
```
## Rollback
```bash
./scripts/rollback.sh
```

View File

@@ -0,0 +1,103 @@
# config.yaml — single source of truth for honeyDue K3s DEV deployment
# Copy to config.yaml, fill in all empty values, then run scripts in order.
# This file is gitignored — never commit it with real values.
# --- Server ---
server:
host: "" # Server IP or SSH config alias
user: root # SSH user
ssh_key: ~/.ssh/id_ed25519
# --- Domains ---
domains:
api: devapi.myhoneydue.com
admin: devadmin.myhoneydue.com
base: dev.myhoneydue.com
# --- Container Registry (GHCR) ---
registry:
server: ghcr.io
namespace: "" # GitHub username or org
username: "" # GitHub username
token: "" # PAT with read:packages, write:packages
# --- Database (in-cluster PostgreSQL) ---
database:
name: honeydue_dev
user: honeydue
# password goes in secrets/postgres_password.txt
max_open_conns: 10
max_idle_conns: 5
max_lifetime: "600s"
# --- Email (Fastmail) ---
email:
host: smtp.fastmail.com
port: 587
user: "" # Fastmail email address
from: "honeyDue DEV <noreply@myhoneydue.com>"
use_tls: true
# --- Push Notifications ---
push:
apns_key_id: ""
apns_team_id: ""
apns_topic: com.tt.honeyDue
apns_production: false
apns_use_sandbox: true # Sandbox for dev
# --- Object Storage (in-cluster MinIO — S3-compatible, replaces B2) ---
storage:
minio_root_user: honeydue # MinIO access key
# minio_root_password goes in secrets/minio_root_password.txt
bucket: honeydue-dev
max_file_size: 10485760
allowed_types: "image/jpeg,image/png,image/gif,image/webp,application/pdf"
# --- Worker Schedules (UTC hours) ---
worker:
task_reminder_hour: 14
overdue_reminder_hour: 15
daily_digest_hour: 3
# --- Feature Flags ---
features:
push_enabled: true
email_enabled: false # Disabled for dev by default
webhooks_enabled: false
onboarding_emails_enabled: false
pdf_reports_enabled: true
worker_enabled: true
# --- Redis ---
redis:
password: "" # Set a strong password
# --- Admin Panel ---
admin:
basic_auth_user: "" # HTTP basic auth username
basic_auth_password: "" # HTTP basic auth password
# --- TLS ---
tls:
mode: letsencrypt # "letsencrypt" or "cloudflare"
letsencrypt_email: "" # Required if mode=letsencrypt
# If mode=cloudflare, create secrets/cloudflare-origin.crt and .key
# --- Apple Auth / IAP (optional) ---
apple_auth:
client_id: ""
team_id: ""
iap_key_id: ""
iap_issuer_id: ""
iap_bundle_id: ""
iap_key_path: ""
iap_sandbox: true
# --- Google Auth / IAP (optional) ---
google_auth:
client_id: ""
android_client_id: ""
ios_client_id: ""
iap_package_name: ""
iap_service_account_path: ""

View File

@@ -0,0 +1,94 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: admin
namespace: honeydue
labels:
app.kubernetes.io/name: admin
app.kubernetes.io/part-of: honeydue
spec:
replicas: 1
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 0
maxSurge: 1
selector:
matchLabels:
app.kubernetes.io/name: admin
template:
metadata:
labels:
app.kubernetes.io/name: admin
app.kubernetes.io/part-of: honeydue
spec:
serviceAccountName: admin
imagePullSecrets:
- name: ghcr-credentials
securityContext:
runAsNonRoot: true
runAsUser: 1001
runAsGroup: 1001
fsGroup: 1001
seccompProfile:
type: RuntimeDefault
containers:
- name: admin
image: IMAGE_PLACEHOLDER # Replaced by 03-deploy.sh
ports:
- containerPort: 3000
protocol: TCP
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
capabilities:
drop: ["ALL"]
env:
- name: PORT
value: "3000"
- name: HOSTNAME
value: "0.0.0.0"
- name: NEXT_PUBLIC_API_URL
valueFrom:
configMapKeyRef:
name: honeydue-config
key: NEXT_PUBLIC_API_URL
volumeMounts:
- name: nextjs-cache
mountPath: /app/.next/cache
- name: tmp
mountPath: /tmp
resources:
requests:
cpu: 50m
memory: 64Mi
limits:
cpu: 500m
memory: 256Mi
startupProbe:
httpGet:
path: /admin/
port: 3000
failureThreshold: 12
periodSeconds: 5
readinessProbe:
httpGet:
path: /admin/
port: 3000
initialDelaySeconds: 5
periodSeconds: 10
timeoutSeconds: 5
livenessProbe:
httpGet:
path: /admin/
port: 3000
initialDelaySeconds: 30
periodSeconds: 30
timeoutSeconds: 10
volumes:
- name: nextjs-cache
emptyDir:
sizeLimit: 256Mi
- name: tmp
emptyDir:
sizeLimit: 64Mi

View File

@@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
name: admin
namespace: honeydue
labels:
app.kubernetes.io/name: admin
app.kubernetes.io/part-of: honeydue
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: admin
ports:
- port: 3000
targetPort: 3000
protocol: TCP

View File

@@ -0,0 +1,56 @@
# API Ingress — TLS via Let's Encrypt (default) or Cloudflare origin cert
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: honeydue-api
  namespace: honeydue
  labels:
    app.kubernetes.io/part-of: honeydue
  annotations:
    # TLS_ANNOTATIONS_PLACEHOLDER — replaced by 03-deploy.sh based on tls.mode
    traefik.ingress.kubernetes.io/router.middlewares: honeydue-security-headers@kubernetescrd,honeydue-rate-limit@kubernetescrd
spec:
  tls:
  - hosts:
    - API_DOMAIN_PLACEHOLDER
    secretName: TLS_SECRET_PLACEHOLDER
  rules:
  - host: API_DOMAIN_PLACEHOLDER
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api
            port:
              number: 8000
---
# Admin Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: honeydue-admin
  namespace: honeydue
  labels:
    app.kubernetes.io/part-of: honeydue
  annotations:
    # TLS_ANNOTATIONS_PLACEHOLDER — replaced by 03-deploy.sh based on tls.mode
    traefik.ingress.kubernetes.io/router.middlewares: honeydue-security-headers@kubernetescrd,honeydue-rate-limit@kubernetescrd,honeydue-admin-auth@kubernetescrd
spec:
  tls:
  - hosts:
    - ADMIN_DOMAIN_PLACEHOLDER
    secretName: TLS_SECRET_PLACEHOLDER
  rules:
  - host: ADMIN_DOMAIN_PLACEHOLDER
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: admin
            port:
              number: 3000


@@ -0,0 +1,45 @@
# Traefik CRD middleware for rate limiting
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: rate-limit
  namespace: honeydue
spec:
  rateLimit:
    average: 100
    burst: 200
    period: 1m
---
# Security headers
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: security-headers
  namespace: honeydue
spec:
  headers:
    frameDeny: true
    contentTypeNosniff: true
    browserXssFilter: true
    referrerPolicy: "strict-origin-when-cross-origin"
    customResponseHeaders:
      X-Content-Type-Options: "nosniff"
      X-Frame-Options: "DENY"
      Strict-Transport-Security: "max-age=31536000; includeSubDomains"
      Content-Security-Policy: "default-src 'self'; frame-ancestors 'none'"
      Permissions-Policy: "camera=(), microphone=(), geolocation=()"
      X-Permitted-Cross-Domain-Policies: "none"
---
# Admin basic auth — additional auth layer for admin panel
# Secret created by 02-setup-secrets.sh from config.yaml credentials
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: admin-auth
  namespace: honeydue
spec:
  basicAuth:
    secret: admin-basic-auth
    realm: "honeyDue Admin"
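
The `admin-auth` middleware references a Secret named `admin-basic-auth` that must exist before the middleware can authenticate anything; 02-setup-secrets.sh creates it from the config.yaml credentials using `htpasswd`. A sketch of the Secret's expected shape, assuming the `users` key that Traefik's `basicAuth` reads by default (the hash below is a placeholder, not a real credential):

```yaml
# Illustrative only — 02-setup-secrets.sh generates the real value with:
#   htpasswd -nb "$ADMIN_AUTH_USER" "$ADMIN_AUTH_PASSWORD"
apiVersion: v1
kind: Secret
metadata:
  name: admin-basic-auth
  namespace: honeydue
type: Opaque
stringData:
  users: "admin:$apr1$PLACEHOLDERHASH"  # htpasswd-format line; hash is a placeholder
```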


@@ -0,0 +1,81 @@
# One-shot job to create the default bucket in MinIO.
# Applied by 03-deploy.sh after MinIO is running.
# Re-running is safe — mc mb --ignore-existing is idempotent.
apiVersion: batch/v1
kind: Job
metadata:
  name: minio-create-bucket
  namespace: honeydue
  labels:
    app.kubernetes.io/name: minio
    app.kubernetes.io/part-of: honeydue
spec:
  ttlSecondsAfterFinished: 300
  backoffLimit: 5
  template:
    metadata:
      labels:
        app.kubernetes.io/name: minio-init
        app.kubernetes.io/part-of: honeydue
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        runAsGroup: 1000
        seccompProfile:
          type: RuntimeDefault
      containers:
      - name: mc
        image: minio/mc:latest
        command:
        - sh
        - -c
        - |
          echo "Waiting for MinIO to be ready..."
          until mc alias set honeydue http://minio.honeydue.svc.cluster.local:9000 "$MINIO_ROOT_USER" "$MINIO_ROOT_PASSWORD" 2>/dev/null; do
            sleep 2
          done
          echo "Creating bucket: $BUCKET_NAME"
          mc mb --ignore-existing "honeydue/$BUCKET_NAME"
          echo "Bucket ready."
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop: ["ALL"]
        env:
        - name: MINIO_ROOT_USER
          valueFrom:
            configMapKeyRef:
              name: honeydue-config
              key: MINIO_ROOT_USER
        - name: MINIO_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: honeydue-secrets
              key: MINIO_ROOT_PASSWORD
        - name: BUCKET_NAME
          valueFrom:
            configMapKeyRef:
              name: honeydue-config
              key: B2_BUCKET_NAME
        volumeMounts:
        - name: tmp
          mountPath: /tmp
        - name: mc-config
          mountPath: /.mc
        resources:
          requests:
            cpu: 50m
            memory: 32Mi
          limits:
            cpu: 200m
            memory: 64Mi
      volumes:
      - name: tmp
        emptyDir:
          sizeLimit: 16Mi
      - name: mc-config
        emptyDir:
          sizeLimit: 16Mi
      restartPolicy: OnFailure


@@ -0,0 +1,89 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio
  namespace: honeydue
  labels:
    app.kubernetes.io/name: minio
    app.kubernetes.io/part-of: honeydue
spec:
  replicas: 1
  strategy:
    type: Recreate # ReadWriteOnce PVC — can't attach to two pods
  selector:
    matchLabels:
      app.kubernetes.io/name: minio
  template:
    metadata:
      labels:
        app.kubernetes.io/name: minio
        app.kubernetes.io/part-of: honeydue
    spec:
      serviceAccountName: minio
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
        seccompProfile:
          type: RuntimeDefault
      containers:
      - name: minio
        image: minio/minio:latest
        args: ["server", "/data", "--console-address", ":9001"]
        ports:
        - name: api
          containerPort: 9000
          protocol: TCP
        - name: console
          containerPort: 9001
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop: ["ALL"]
        env:
        - name: MINIO_ROOT_USER
          valueFrom:
            configMapKeyRef:
              name: honeydue-config
              key: MINIO_ROOT_USER
        - name: MINIO_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: honeydue-secrets
              key: MINIO_ROOT_PASSWORD
        volumeMounts:
        - name: minio-data
          mountPath: /data
        - name: tmp
          mountPath: /tmp
        resources:
          requests:
            cpu: 50m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 512Mi
        readinessProbe:
          httpGet:
            path: /minio/health/ready
            port: 9000
          initialDelaySeconds: 5
          periodSeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          httpGet:
            path: /minio/health/live
            port: 9000
          initialDelaySeconds: 15
          periodSeconds: 30
          timeoutSeconds: 5
      volumes:
      - name: minio-data
        persistentVolumeClaim:
          claimName: minio-data
      - name: tmp
        emptyDir:
          sizeLimit: 64Mi


@@ -0,0 +1,15 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-data
  namespace: honeydue
  labels:
    app.kubernetes.io/name: minio
    app.kubernetes.io/part-of: honeydue
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 10Gi


@@ -0,0 +1,21 @@
apiVersion: v1
kind: Service
metadata:
  name: minio
  namespace: honeydue
  labels:
    app.kubernetes.io/name: minio
    app.kubernetes.io/part-of: honeydue
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: minio
  ports:
  - name: api
    port: 9000
    targetPort: 9000
    protocol: TCP
  - name: console
    port: 9001
    targetPort: 9001
    protocol: TCP


@@ -0,0 +1,6 @@
apiVersion: v1
kind: Namespace
metadata:
  name: honeydue
  labels:
    app.kubernetes.io/part-of: honeydue


@@ -0,0 +1,305 @@
# Network Policies — default-deny with explicit allows
# Same pattern as prod, with added rules for in-cluster postgres and minio.

# --- Default deny all ingress and egress ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: honeydue
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
---
# --- Allow DNS for all pods (required for service discovery) ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: honeydue
spec:
  podSelector: {}
  policyTypes:
  - Egress
  egress:
  - to: []
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
---
# --- API: allow ingress from Traefik (kube-system namespace) ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-to-api
  namespace: honeydue
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: api
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    ports:
    - protocol: TCP
      port: 8000
---
# --- Admin: allow ingress from Traefik (kube-system namespace) ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-to-admin
  namespace: honeydue
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: admin
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
    ports:
    - protocol: TCP
      port: 3000
---
# --- Redis: allow ingress ONLY from api + worker pods ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-to-redis
  namespace: honeydue
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: redis
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app.kubernetes.io/name: api
    - podSelector:
        matchLabels:
          app.kubernetes.io/name: worker
    ports:
    - protocol: TCP
      port: 6379
---
# --- PostgreSQL: allow ingress ONLY from api + worker pods ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-to-postgres
  namespace: honeydue
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: postgres
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app.kubernetes.io/name: api
    - podSelector:
        matchLabels:
          app.kubernetes.io/name: worker
    ports:
    - protocol: TCP
      port: 5432
---
# --- MinIO: allow ingress from api + worker + minio-init job pods ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-to-minio
  namespace: honeydue
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: minio
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app.kubernetes.io/name: api
    - podSelector:
        matchLabels:
          app.kubernetes.io/name: worker
    - podSelector:
        matchLabels:
          app.kubernetes.io/name: minio-init
    ports:
    - protocol: TCP
      port: 9000
    - protocol: TCP
      port: 9001
---
# --- API: allow egress to Redis, PostgreSQL, MinIO, external services ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-from-api
  namespace: honeydue
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: api
  policyTypes:
  - Egress
  egress:
  # Redis (in-cluster)
  - to:
    - podSelector:
        matchLabels:
          app.kubernetes.io/name: redis
    ports:
    - protocol: TCP
      port: 6379
  # PostgreSQL (in-cluster)
  - to:
    - podSelector:
        matchLabels:
          app.kubernetes.io/name: postgres
    ports:
    - protocol: TCP
      port: 5432
  # MinIO (in-cluster)
  - to:
    - podSelector:
        matchLabels:
          app.kubernetes.io/name: minio
    ports:
    - protocol: TCP
      port: 9000
  # External services: SMTP (587), HTTPS (443 — APNs, FCM, PostHog)
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8
        - 172.16.0.0/12
        - 192.168.0.0/16
    ports:
    - protocol: TCP
      port: 587
    - protocol: TCP
      port: 443
---
# --- Worker: allow egress to Redis, PostgreSQL, MinIO, external services ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-from-worker
  namespace: honeydue
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: worker
  policyTypes:
  - Egress
  egress:
  # Redis (in-cluster)
  - to:
    - podSelector:
        matchLabels:
          app.kubernetes.io/name: redis
    ports:
    - protocol: TCP
      port: 6379
  # PostgreSQL (in-cluster)
  - to:
    - podSelector:
        matchLabels:
          app.kubernetes.io/name: postgres
    ports:
    - protocol: TCP
      port: 5432
  # MinIO (in-cluster)
  - to:
    - podSelector:
        matchLabels:
          app.kubernetes.io/name: minio
    ports:
    - protocol: TCP
      port: 9000
  # External services: SMTP (587), HTTPS (443 — APNs, FCM)
  - to:
    - ipBlock:
        cidr: 0.0.0.0/0
        except:
        - 10.0.0.0/8
        - 172.16.0.0/12
        - 192.168.0.0/16
    ports:
    - protocol: TCP
      port: 587
    - protocol: TCP
      port: 443
---
# --- Admin: allow egress to API (internal) for SSR ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-from-admin
  namespace: honeydue
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: admin
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app.kubernetes.io/name: api
    ports:
    - protocol: TCP
      port: 8000
---
# --- MinIO init job: allow egress to MinIO ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-from-minio-init
  namespace: honeydue
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: minio-init
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app.kubernetes.io/name: minio
    ports:
    - protocol: TCP
      port: 9000
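
Under the default-deny baseline above, any workload later added to the namespace is cut off until it gets its own explicit allow rules (DNS aside, which the namespace-wide allow-dns policy already covers). A sketch of extending the same pattern for a hypothetical new component — the `metrics` name, label, and target are illustrative only, not part of this change:

```yaml
# Hypothetical example — a new "metrics" pod would need a policy like this
# (plus a matching ingress allow on its targets) before it could reach the API.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-from-metrics
  namespace: honeydue
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: metrics
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app.kubernetes.io/name: api
    ports:
    - protocol: TCP
      port: 8000
```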


@@ -0,0 +1,93 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: honeydue
  labels:
    app.kubernetes.io/name: postgres
    app.kubernetes.io/part-of: honeydue
spec:
  replicas: 1
  strategy:
    type: Recreate # ReadWriteOnce PVC — can't attach to two pods
  selector:
    matchLabels:
      app.kubernetes.io/name: postgres
  template:
    metadata:
      labels:
        app.kubernetes.io/name: postgres
        app.kubernetes.io/part-of: honeydue
    spec:
      serviceAccountName: postgres
      # Note: postgres image entrypoint requires root initially to set up
      # permissions, then drops to the postgres user. runAsNonRoot is not set
      # here because of this requirement. This differs from prod, which uses
      # managed Neon PostgreSQL (no container to secure).
      securityContext:
        fsGroup: 999
        seccompProfile:
          type: RuntimeDefault
      containers:
      - name: postgres
        image: postgres:17-alpine
        ports:
        - containerPort: 5432
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop: ["ALL"]
        env:
        - name: POSTGRES_DB
          valueFrom:
            configMapKeyRef:
              name: honeydue-config
              key: POSTGRES_DB
        - name: POSTGRES_USER
          valueFrom:
            configMapKeyRef:
              name: honeydue-config
              key: POSTGRES_USER
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: honeydue-secrets
              key: POSTGRES_PASSWORD
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        volumeMounts:
        - name: postgres-data
          mountPath: /var/lib/postgresql/data
        - name: run
          mountPath: /var/run/postgresql
        - name: tmp
          mountPath: /tmp
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: "1"
            memory: 1Gi
        readinessProbe:
          exec:
            command: ["pg_isready", "-U", "honeydue"]
          initialDelaySeconds: 5
          periodSeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command: ["pg_isready", "-U", "honeydue"]
          initialDelaySeconds: 30
          periodSeconds: 30
          timeoutSeconds: 5
      volumes:
      - name: postgres-data
        persistentVolumeClaim:
          claimName: postgres-data
      - name: run
        emptyDir: {}
      - name: tmp
        emptyDir:
          sizeLimit: 64Mi


@@ -0,0 +1,15 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
  namespace: honeydue
  labels:
    app.kubernetes.io/name: postgres
    app.kubernetes.io/part-of: honeydue
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 10Gi


@@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: honeydue
  labels:
    app.kubernetes.io/name: postgres
    app.kubernetes.io/part-of: honeydue
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: postgres
  ports:
  - port: 5432
    targetPort: 5432
    protocol: TCP


@@ -0,0 +1,68 @@
# RBAC — Dedicated service accounts with no K8s API access
# Each pod gets its own SA with automountServiceAccountToken: false,
# so a compromised pod cannot query the Kubernetes API.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: api
  namespace: honeydue
  labels:
    app.kubernetes.io/name: api
    app.kubernetes.io/part-of: honeydue
automountServiceAccountToken: false
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: worker
  namespace: honeydue
  labels:
    app.kubernetes.io/name: worker
    app.kubernetes.io/part-of: honeydue
automountServiceAccountToken: false
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: honeydue
  labels:
    app.kubernetes.io/name: admin
    app.kubernetes.io/part-of: honeydue
automountServiceAccountToken: false
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: redis
  namespace: honeydue
  labels:
    app.kubernetes.io/name: redis
    app.kubernetes.io/part-of: honeydue
automountServiceAccountToken: false
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: postgres
  namespace: honeydue
  labels:
    app.kubernetes.io/name: postgres
    app.kubernetes.io/part-of: honeydue
automountServiceAccountToken: false
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: minio
  namespace: honeydue
  labels:
    app.kubernetes.io/name: minio
    app.kubernetes.io/part-of: honeydue
automountServiceAccountToken: false


@@ -0,0 +1,105 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: honeydue
  labels:
    app.kubernetes.io/name: redis
    app.kubernetes.io/part-of: honeydue
spec:
  replicas: 1
  strategy:
    type: Recreate # ReadWriteOnce PVC — can't attach to two pods
  selector:
    matchLabels:
      app.kubernetes.io/name: redis
  template:
    metadata:
      labels:
        app.kubernetes.io/name: redis
        app.kubernetes.io/part-of: honeydue
    spec:
      serviceAccountName: redis
      # No nodeSelector — single node dev cluster
      securityContext:
        runAsNonRoot: true
        runAsUser: 999
        runAsGroup: 999
        fsGroup: 999
        seccompProfile:
          type: RuntimeDefault
      containers:
      - name: redis
        image: redis:7-alpine
        command:
        - sh
        - -c
        - |
          ARGS="--appendonly yes --appendfsync everysec --maxmemory 256mb --maxmemory-policy noeviction"
          if [ -n "$REDIS_PASSWORD" ]; then
            ARGS="$ARGS --requirepass $REDIS_PASSWORD"
          fi
          exec redis-server $ARGS
        ports:
        - containerPort: 6379
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop: ["ALL"]
        env:
        - name: REDIS_PASSWORD
          valueFrom:
            secretKeyRef:
              name: honeydue-secrets
              key: REDIS_PASSWORD
              optional: true
        volumeMounts:
        - name: redis-data
          mountPath: /data
        - name: tmp
          mountPath: /tmp
        resources:
          requests:
            cpu: 50m
            memory: 64Mi
          limits:
            cpu: 500m
            memory: 512Mi
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - |
              if [ -n "$REDIS_PASSWORD" ]; then
                redis-cli -a "$REDIS_PASSWORD" ping 2>/dev/null | grep -q PONG
              else
                redis-cli ping | grep -q PONG
              fi
          initialDelaySeconds: 5
          periodSeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - |
              if [ -n "$REDIS_PASSWORD" ]; then
                redis-cli -a "$REDIS_PASSWORD" ping 2>/dev/null | grep -q PONG
              else
                redis-cli ping | grep -q PONG
              fi
          initialDelaySeconds: 15
          periodSeconds: 20
          timeoutSeconds: 5
      volumes:
      - name: redis-data
        persistentVolumeClaim:
          claimName: redis-data
      - name: tmp
        emptyDir:
          medium: Memory
          sizeLimit: 64Mi


@@ -0,0 +1,15 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data
  namespace: honeydue
  labels:
    app.kubernetes.io/name: redis
    app.kubernetes.io/part-of: honeydue
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 5Gi


@@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: honeydue
  labels:
    app.kubernetes.io/name: redis
    app.kubernetes.io/part-of: honeydue
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: redis
  ports:
  - port: 6379
    targetPort: 6379
    protocol: TCP


@@ -0,0 +1,16 @@
# Configure K3s's built-in Traefik with Let's Encrypt ACME.
# Applied by 03-deploy.sh only when tls.mode=letsencrypt.
# The email placeholder is replaced by the deploy script.
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    additionalArguments:
      - "--certificatesresolvers.letsencrypt.acme.email=LETSENCRYPT_EMAIL_PLACEHOLDER"
      - "--certificatesresolvers.letsencrypt.acme.storage=/data/acme.json"
      - "--certificatesresolvers.letsencrypt.acme.tlschallenge=true"
    persistence:
      enabled: true
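
Registering the resolver alone does not attach it to any route; the Ingresses must opt in. Presumably that is what the TLS_ANNOTATIONS_PLACEHOLDER in ingress.yaml expands to in letsencrypt mode. A sketch of the likely substitution, using the standard Traefik router annotations (the exact text comes from 03-deploy.sh, which is not shown in this hunk):

```yaml
# Assumed expansion of TLS_ANNOTATIONS_PLACEHOLDER when tls.mode=letsencrypt:
annotations:
  traefik.ingress.kubernetes.io/router.tls: "true"
  traefik.ingress.kubernetes.io/router.tls.certresolver: letsencrypt
```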

deploy-k3s-dev/scripts/00-init.sh (executable file)

@@ -0,0 +1,235 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
DEPLOY_DIR="$(cd "${SCRIPT_DIR}/.." && pwd)"
SECRETS_DIR="${DEPLOY_DIR}/secrets"
CONFIG_FILE="${DEPLOY_DIR}/config.yaml"
log() { printf '[init] %s\n' "$*"; }
warn() { printf '[init][warn] %s\n' "$*" >&2; }
die() { printf '[init][error] %s\n' "$*" >&2; exit 1; }
# --- Prerequisites ---
command -v openssl >/dev/null 2>&1 || die "Missing: openssl"
command -v python3 >/dev/null 2>&1 || die "Missing: python3"
echo ""
echo "============================================"
echo " honeyDue Dev Server — Initial Setup"
echo "============================================"
echo ""
echo "This script will:"
echo " 1. Generate any missing random secrets"
echo " 2. Ask for anything not already filled in"
echo " 3. Create config.yaml with everything filled in"
echo ""
mkdir -p "${SECRETS_DIR}"
# --- Generate random secrets (skip if already exist) ---
generate_if_missing() {
  local file="$1" label="$2" cmd="$3"
  if [[ -f "${file}" && -s "${file}" ]]; then
    log "  ${label} — already exists, keeping"
  else
    eval "${cmd}" > "${file}"
    log "  ${label} — generated"
  fi
}
log "Checking secrets..."
generate_if_missing "${SECRETS_DIR}/secret_key.txt" "secrets/secret_key.txt" "openssl rand -base64 48"
generate_if_missing "${SECRETS_DIR}/postgres_password.txt" "secrets/postgres_password.txt" "openssl rand -base64 24"
generate_if_missing "${SECRETS_DIR}/minio_root_password.txt" "secrets/minio_root_password.txt" "openssl rand -base64 24"
generate_if_missing "${SECRETS_DIR}/email_host_password.txt" "secrets/email_host_password.txt" "echo PLACEHOLDER"
log " secrets/fcm_server_key.txt — skipped (Android not ready)"
generate_if_missing "${SECRETS_DIR}/apns_auth_key.p8" "secrets/apns_auth_key.p8" "echo ''"
REDIS_PW="$(openssl rand -base64 24)"
log " Redis password — generated"
# --- Collect only what's missing ---
ask() {
  local var_name="$1" prompt="$2" default="${3:-}"
  local val
  if [[ -n "${default}" ]]; then
    read -rp "${prompt} [${default}]: " val
    val="${val:-${default}}"
  else
    read -rp "${prompt}: " val
  fi
  # printf -v avoids the quoting bug of eval "${var_name}='${val}'",
  # which breaks if the answer contains a single quote.
  printf -v "${var_name}" '%s' "${val}"
}
echo ""
echo "--- Server ---"
ask SERVER_HOST "Server IP or SSH alias" "honeyDueDevUpdate"
[[ -n "${SERVER_HOST}" ]] || die "Server host is required"
ask SERVER_USER "SSH user" "root"
ask SSH_KEY "SSH key path" "~/.ssh/id_ed25519"
echo ""
echo "--- Container Registry (GHCR) ---"
ask GHCR_USER "GitHub username" "treytartt"
[[ -n "${GHCR_USER}" ]] || die "GitHub username is required"
ask GHCR_TOKEN "GitHub PAT (read:packages, write:packages)"
[[ -n "${GHCR_TOKEN}" ]] || die "GitHub PAT is required"
echo ""
echo "--- TLS ---"
ask LE_EMAIL "Let's Encrypt email" "treytartt@fastmail.com"
echo ""
echo "--- Admin Panel ---"
ask ADMIN_USER "Admin basic auth username" "admin"
ADMIN_PW="$(openssl rand -base64 16)"
# --- Known values from existing Dokku setup ---
EMAIL_USER="treytartt@fastmail.com"
APNS_KEY_ID="9R5Q7ZX874"
APNS_TEAM_ID="V3PF3M6B6U"
log ""
log "Pre-filled from existing dev server:"
log " Email user: ${EMAIL_USER}"
log " APNS Key ID: ${APNS_KEY_ID}"
log " APNS Team ID: ${APNS_TEAM_ID}"
# --- Generate config.yaml ---
log "Generating config.yaml..."
cat > "${CONFIG_FILE}" <<YAML
# config.yaml — auto-generated by 00-init.sh
# This file is gitignored — never commit it with real values.

# --- Server ---
server:
  host: "${SERVER_HOST}"
  user: "${SERVER_USER}"
  ssh_key: "${SSH_KEY}"

# --- Domains ---
domains:
  api: devapi.myhoneydue.com
  admin: devadmin.myhoneydue.com
  base: dev.myhoneydue.com

# --- Container Registry (GHCR) ---
registry:
  server: ghcr.io
  namespace: "${GHCR_USER}"
  username: "${GHCR_USER}"
  token: "${GHCR_TOKEN}"

# --- Database (in-cluster PostgreSQL) ---
database:
  name: honeydue_dev
  user: honeydue
  max_open_conns: 10
  max_idle_conns: 5
  max_lifetime: "600s"

# --- Email (Fastmail) ---
email:
  host: smtp.fastmail.com
  port: 587
  user: "${EMAIL_USER}"
  from: "honeyDue DEV <${EMAIL_USER}>"
  use_tls: true

# --- Push Notifications ---
push:
  apns_key_id: "${APNS_KEY_ID}"
  apns_team_id: "${APNS_TEAM_ID}"
  apns_topic: com.tt.honeyDue
  apns_production: false
  apns_use_sandbox: true

# --- Object Storage (in-cluster MinIO) ---
storage:
  minio_root_user: honeydue
  bucket: honeydue-dev
  max_file_size: 10485760
  allowed_types: "image/jpeg,image/png,image/gif,image/webp,application/pdf"

# --- Worker Schedules (UTC hours) ---
worker:
  task_reminder_hour: 14
  overdue_reminder_hour: 15
  daily_digest_hour: 3

# --- Feature Flags ---
features:
  push_enabled: true
  email_enabled: false
  webhooks_enabled: false
  onboarding_emails_enabled: false
  pdf_reports_enabled: true
  worker_enabled: true

# --- Redis ---
redis:
  password: "${REDIS_PW}"

# --- Admin Panel ---
admin:
  basic_auth_user: "${ADMIN_USER}"
  basic_auth_password: "${ADMIN_PW}"

# --- TLS ---
tls:
  mode: letsencrypt
  letsencrypt_email: "${LE_EMAIL}"

# --- Apple Auth / IAP ---
apple_auth:
  client_id: "com.tt.honeyDue"
  team_id: "${APNS_TEAM_ID}"
  iap_key_id: ""
  iap_issuer_id: ""
  iap_bundle_id: ""
  iap_key_path: ""
  iap_sandbox: true

# --- Google Auth / IAP ---
google_auth:
  client_id: ""
  android_client_id: ""
  ios_client_id: ""
  iap_package_name: ""
  iap_service_account_path: ""
YAML
# --- Summary ---
echo ""
echo "============================================"
echo " Setup Complete"
echo "============================================"
echo ""
echo "Generated:"
echo " config.yaml"
echo " secrets/secret_key.txt"
echo " secrets/postgres_password.txt"
echo " secrets/minio_root_password.txt"
echo " secrets/email_host_password.txt"
echo " secrets/fcm_server_key.txt"
echo " secrets/apns_auth_key.p8"
echo ""
echo "Admin panel credentials:"
echo " Username: ${ADMIN_USER}"
echo " Password: ${ADMIN_PW}"
echo " (save these — they won't be shown again)"
echo ""
echo "Next steps:"
echo " ./scripts/01-setup-k3s.sh"
echo " ./scripts/02-setup-secrets.sh"
echo " ./scripts/03-deploy.sh"
echo " ./scripts/04-verify.sh"
echo ""


@@ -0,0 +1,146 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=_config.sh
source "${SCRIPT_DIR}/_config.sh"
log() { printf '[setup] %s\n' "$*"; }
die() { printf '[setup][error] %s\n' "$*" >&2; exit 1; }
# --- Local prerequisites ---
command -v kubectl >/dev/null 2>&1 || die "Missing locally: kubectl (https://kubernetes.io/docs/tasks/tools/)"
# --- Server connection ---
SERVER_HOST="$(cfg_require server.host "Server IP or SSH alias")"
SERVER_USER="$(cfg server.user)"
SERVER_USER="${SERVER_USER:-root}"
SSH_KEY="$(cfg server.ssh_key | sed "s|~|${HOME}|g")"
SSH_OPTS=()
if [[ -n "${SSH_KEY}" && -f "${SSH_KEY}" ]]; then
  SSH_OPTS+=(-i "${SSH_KEY}")
fi
SSH_OPTS+=(-o StrictHostKeyChecking=accept-new)
ssh_cmd() {
  ssh "${SSH_OPTS[@]}" "${SERVER_USER}@${SERVER_HOST}" "$@"
}
log "Testing SSH connection to ${SERVER_USER}@${SERVER_HOST}..."
ssh_cmd "echo 'SSH connection OK'" || die "Cannot SSH into ${SERVER_HOST}"
# --- Server prerequisites ---
log "Setting up server prerequisites..."
ssh_cmd 'bash -s' <<'REMOTE_SETUP'
set -euo pipefail
log() { printf '[setup][remote] %s\n' "$*"; }
# --- System updates ---
log "Updating system packages..."
export DEBIAN_FRONTEND=noninteractive
apt-get update -qq
apt-get upgrade -y -qq
# --- SSH hardening ---
log "Hardening SSH..."
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin prohibit-password/' /etc/ssh/sshd_config
systemctl reload sshd 2>/dev/null || systemctl reload ssh 2>/dev/null || true
# --- fail2ban ---
if ! command -v fail2ban-client >/dev/null 2>&1; then
  log "Installing fail2ban..."
  apt-get install -y -qq fail2ban
  systemctl enable --now fail2ban
else
  log "fail2ban already installed"
fi
# --- Unattended security upgrades ---
if ! dpkg -l | grep -q unattended-upgrades; then
  log "Installing unattended-upgrades..."
  apt-get install -y -qq unattended-upgrades
  dpkg-reconfigure -plow unattended-upgrades
else
  log "unattended-upgrades already installed"
fi
# --- Firewall (ufw) ---
if command -v ufw >/dev/null 2>&1; then
  log "Configuring firewall..."
  ufw default deny incoming
  ufw default allow outgoing
  ufw allow 22/tcp   # SSH
  ufw allow 443/tcp  # HTTPS (Traefik)
  ufw allow 6443/tcp # K3s API
  ufw allow 80/tcp   # HTTP (Let's Encrypt ACME challenge)
  ufw --force enable
else
  log "Installing ufw..."
  apt-get install -y -qq ufw
  ufw default deny incoming
  ufw default allow outgoing
  ufw allow 22/tcp
  ufw allow 443/tcp
  ufw allow 6443/tcp
  ufw allow 80/tcp
  ufw --force enable
fi
log "Server prerequisites complete."
REMOTE_SETUP
# --- Install K3s ---
log "Installing K3s on ${SERVER_HOST}..."
log " This takes about 1-2 minutes."
ssh_cmd "curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='server --secrets-encryption' sh -"
# --- Wait for K3s to be ready ---
log "Waiting for K3s to be ready..."
ssh_cmd "until kubectl get nodes >/dev/null 2>&1; do sleep 2; done"
# --- Copy kubeconfig ---
KUBECONFIG_PATH="${DEPLOY_DIR}/kubeconfig"
log "Copying kubeconfig..."
ssh_cmd "sudo cat /etc/rancher/k3s/k3s.yaml" > "${KUBECONFIG_PATH}"
# Replace 127.0.0.1 with the server's actual IP/hostname
# If SERVER_HOST is an SSH alias, resolve the actual IP
ACTUAL_HOST="${SERVER_HOST}"
if ! echo "${SERVER_HOST}" | grep -qE '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$'; then
  # Try to resolve from SSH config
  RESOLVED="$(ssh -G "${SERVER_HOST}" 2>/dev/null | awk '/^hostname / {print $2}')"
  if [[ -n "${RESOLVED}" && "${RESOLVED}" != "${SERVER_HOST}" ]]; then
    ACTUAL_HOST="${RESOLVED}"
  fi
fi
sed -i.bak "s|https://127.0.0.1:6443|https://${ACTUAL_HOST}:6443|g" "${KUBECONFIG_PATH}"
rm -f "${KUBECONFIG_PATH}.bak"
chmod 600 "${KUBECONFIG_PATH}"
# --- Verify ---
export KUBECONFIG="${KUBECONFIG_PATH}"
log "Verifying cluster..."
kubectl get nodes
log ""
log "K3s installed successfully on ${SERVER_HOST}."
log "Server hardened: SSH key-only, fail2ban, ufw firewall, unattended-upgrades."
log ""
log "Next steps:"
log " export KUBECONFIG=${KUBECONFIG_PATH}"
log " kubectl get nodes"
log " ./scripts/02-setup-secrets.sh"


@@ -0,0 +1,153 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=_config.sh
source "${SCRIPT_DIR}/_config.sh"
SECRETS_DIR="${DEPLOY_DIR}/secrets"
NAMESPACE="honeydue"
log() { printf '[secrets] %s\n' "$*"; }
warn() { printf '[secrets][warn] %s\n' "$*" >&2; }
die() { printf '[secrets][error] %s\n' "$*" >&2; exit 1; }
# --- Prerequisites ---
command -v kubectl >/dev/null 2>&1 || die "Missing: kubectl"
kubectl get namespace "${NAMESPACE}" >/dev/null 2>&1 || {
  log "Creating namespace ${NAMESPACE}..."
  kubectl apply -f "${DEPLOY_DIR}/manifests/namespace.yaml"
}
# --- Validate secret files ---
require_file() {
  local path="$1" label="$2"
  [[ -f "${path}" ]] || die "Missing: ${path} (${label})"
  [[ -s "${path}" ]] || die "Empty: ${path} (${label})"
}
require_file "${SECRETS_DIR}/postgres_password.txt" "Postgres password"
require_file "${SECRETS_DIR}/secret_key.txt" "SECRET_KEY"
require_file "${SECRETS_DIR}/email_host_password.txt" "SMTP password"
# FCM server key is optional (Android not yet ready)
if [[ -f "${SECRETS_DIR}/fcm_server_key.txt" && -s "${SECRETS_DIR}/fcm_server_key.txt" ]]; then
  FCM_CONTENT="$(tr -d '\r\n' < "${SECRETS_DIR}/fcm_server_key.txt")"
  if [[ "${FCM_CONTENT}" == "PLACEHOLDER" ]]; then
    warn "fcm_server_key.txt is a placeholder — FCM push disabled"
    FCM_CONTENT=""
  fi
else
  warn "fcm_server_key.txt not found — FCM push disabled"
  FCM_CONTENT=""
fi
require_file "${SECRETS_DIR}/apns_auth_key.p8" "APNS private key"
require_file "${SECRETS_DIR}/minio_root_password.txt" "MinIO root password"
# Validate APNS key format
if ! grep -q "BEGIN PRIVATE KEY" "${SECRETS_DIR}/apns_auth_key.p8"; then
  die "APNS key file does not look like a private key: ${SECRETS_DIR}/apns_auth_key.p8"
fi
# Validate secret_key length (minimum 32 chars)
SECRET_KEY_LEN="$(tr -d '\r\n' < "${SECRETS_DIR}/secret_key.txt" | wc -c | tr -d ' ')"
if (( SECRET_KEY_LEN < 32 )); then
  die "secret_key.txt must be at least 32 characters (got ${SECRET_KEY_LEN})."
fi
# Validate MinIO password length (minimum 8 chars)
MINIO_PW_LEN="$(tr -d '\r\n' < "${SECRETS_DIR}/minio_root_password.txt" | wc -c | tr -d ' ')"
if (( MINIO_PW_LEN < 8 )); then
  die "minio_root_password.txt must be at least 8 characters (got ${MINIO_PW_LEN})."
fi
# --- Read optional config values ---
REDIS_PASSWORD="$(cfg redis.password 2>/dev/null || true)"
ADMIN_AUTH_USER="$(cfg admin.basic_auth_user 2>/dev/null || true)"
ADMIN_AUTH_PASSWORD="$(cfg admin.basic_auth_password 2>/dev/null || true)"
TLS_MODE="$(cfg tls.mode 2>/dev/null || echo "letsencrypt")"
# --- Create app secrets ---
log "Creating honeydue-secrets..."
SECRET_ARGS=(
  --namespace="${NAMESPACE}"
  --from-literal="POSTGRES_PASSWORD=$(tr -d '\r\n' < "${SECRETS_DIR}/postgres_password.txt")"
  --from-literal="SECRET_KEY=$(tr -d '\r\n' < "${SECRETS_DIR}/secret_key.txt")"
  --from-literal="EMAIL_HOST_PASSWORD=$(tr -d '\r\n' < "${SECRETS_DIR}/email_host_password.txt")"
  --from-literal="FCM_SERVER_KEY=${FCM_CONTENT}"
  --from-literal="MINIO_ROOT_PASSWORD=$(tr -d '\r\n' < "${SECRETS_DIR}/minio_root_password.txt")"
)
if [[ -n "${REDIS_PASSWORD}" ]]; then
  log "  Including REDIS_PASSWORD in secrets"
  SECRET_ARGS+=(--from-literal="REDIS_PASSWORD=${REDIS_PASSWORD}")
fi
kubectl create secret generic honeydue-secrets \
  "${SECRET_ARGS[@]}" \
  --dry-run=client -o yaml | kubectl apply -f -
# --- Create APNS key secret ---
log "Creating honeydue-apns-key..."
kubectl create secret generic honeydue-apns-key \
  --namespace="${NAMESPACE}" \
  --from-file="apns_auth_key.p8=${SECRETS_DIR}/apns_auth_key.p8" \
  --dry-run=client -o yaml | kubectl apply -f -
# --- Create GHCR registry credentials ---
REGISTRY_SERVER="$(cfg registry.server)"
REGISTRY_USER="$(cfg registry.username)"
REGISTRY_TOKEN="$(cfg registry.token)"
if [[ -n "${REGISTRY_SERVER}" && -n "${REGISTRY_USER}" && -n "${REGISTRY_TOKEN}" ]]; then
  log "Creating ghcr-credentials..."
  kubectl create secret docker-registry ghcr-credentials \
    --namespace="${NAMESPACE}" \
    --docker-server="${REGISTRY_SERVER}" \
    --docker-username="${REGISTRY_USER}" \
    --docker-password="${REGISTRY_TOKEN}" \
    --dry-run=client -o yaml | kubectl apply -f -
else
  warn "Registry credentials incomplete in config.yaml — skipping ghcr-credentials."
fi
# --- Create Cloudflare origin cert (only if cloudflare mode) ---
if [[ "${TLS_MODE}" == "cloudflare" ]]; then
require_file "${SECRETS_DIR}/cloudflare-origin.crt" "Cloudflare origin cert"
require_file "${SECRETS_DIR}/cloudflare-origin.key" "Cloudflare origin key"
log "Creating cloudflare-origin-cert..."
kubectl create secret tls cloudflare-origin-cert \
--namespace="${NAMESPACE}" \
--cert="${SECRETS_DIR}/cloudflare-origin.crt" \
--key="${SECRETS_DIR}/cloudflare-origin.key" \
--dry-run=client -o yaml | kubectl apply -f -
fi
# --- Create admin basic auth secret ---
if [[ -n "${ADMIN_AUTH_USER}" && -n "${ADMIN_AUTH_PASSWORD}" ]]; then
command -v htpasswd >/dev/null 2>&1 || die "Missing: htpasswd (install apache2-utils)"
log "Creating admin-basic-auth secret..."
HTPASSWD="$(htpasswd -nb "${ADMIN_AUTH_USER}" "${ADMIN_AUTH_PASSWORD}")"
kubectl create secret generic admin-basic-auth \
--namespace="${NAMESPACE}" \
--from-literal=users="${HTPASSWD}" \
--dry-run=client -o yaml | kubectl apply -f -
else
warn "admin.basic_auth_user/password not set in config.yaml — skipping admin-basic-auth."
warn "Admin panel will NOT have basic auth protection."
fi
# --- Done ---
log ""
log "All secrets created in namespace '${NAMESPACE}'."
log "Verify: kubectl get secrets -n ${NAMESPACE}"


@@ -0,0 +1,193 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=_config.sh
source "${SCRIPT_DIR}/_config.sh"
REPO_DIR="$(cd "${DEPLOY_DIR}/.." && pwd)"
NAMESPACE="honeydue"
MANIFESTS="${DEPLOY_DIR}/manifests"
log() { printf '[deploy] %s\n' "$*"; }
warn() { printf '[deploy][warn] %s\n' "$*" >&2; }
die() { printf '[deploy][error] %s\n' "$*" >&2; exit 1; }
# --- Parse arguments ---
SKIP_BUILD=false
DEPLOY_TAG=""
while (( $# > 0 )); do
case "$1" in
--skip-build) SKIP_BUILD=true; shift ;;
--tag)
[[ -n "${2:-}" ]] || die "--tag requires a value"
DEPLOY_TAG="$2"; shift 2 ;;
-h|--help)
cat <<'EOF'
Usage: ./scripts/03-deploy.sh [OPTIONS]
Options:
--skip-build Skip Docker build/push, use existing images
--tag <tag> Image tag (default: git short SHA)
-h, --help Show this help
EOF
exit 0 ;;
*) die "Unknown argument: $1" ;;
esac
done
# --- Prerequisites ---
command -v kubectl >/dev/null 2>&1 || die "Missing: kubectl"
command -v docker >/dev/null 2>&1 || die "Missing: docker"
if [[ -z "${DEPLOY_TAG}" ]]; then
DEPLOY_TAG="$(git -C "${REPO_DIR}" rev-parse --short HEAD 2>/dev/null || echo "latest")"
fi
# --- Read config ---
REGISTRY_SERVER="$(cfg_require registry.server "Container registry server")"
REGISTRY_NS="$(cfg_require registry.namespace "Registry namespace")"
REGISTRY_USER="$(cfg_require registry.username "Registry username")"
REGISTRY_TOKEN="$(cfg_require registry.token "Registry token")"
TLS_MODE="$(cfg tls.mode 2>/dev/null || echo "letsencrypt")"
API_DOMAIN="$(cfg_require domains.api "API domain")"
ADMIN_DOMAIN="$(cfg_require domains.admin "Admin domain")"
REGISTRY_PREFIX="${REGISTRY_SERVER%/}/${REGISTRY_NS#/}"
API_IMAGE="${REGISTRY_PREFIX}/honeydue-api:${DEPLOY_TAG}"
WORKER_IMAGE="${REGISTRY_PREFIX}/honeydue-worker:${DEPLOY_TAG}"
ADMIN_IMAGE="${REGISTRY_PREFIX}/honeydue-admin:${DEPLOY_TAG}"
# --- Build and push ---
if [[ "${SKIP_BUILD}" == "false" ]]; then
log "Logging in to ${REGISTRY_SERVER}..."
printf '%s' "${REGISTRY_TOKEN}" | docker login "${REGISTRY_SERVER}" -u "${REGISTRY_USER}" --password-stdin >/dev/null
log "Building API image: ${API_IMAGE}"
docker build --target api -t "${API_IMAGE}" "${REPO_DIR}"
log "Building Worker image: ${WORKER_IMAGE}"
docker build --target worker -t "${WORKER_IMAGE}" "${REPO_DIR}"
log "Building Admin image: ${ADMIN_IMAGE}"
docker build --target admin -t "${ADMIN_IMAGE}" "${REPO_DIR}"
log "Pushing images..."
docker push "${API_IMAGE}"
docker push "${WORKER_IMAGE}"
docker push "${ADMIN_IMAGE}"
# Also tag and push :latest
docker tag "${API_IMAGE}" "${REGISTRY_PREFIX}/honeydue-api:latest"
docker tag "${WORKER_IMAGE}" "${REGISTRY_PREFIX}/honeydue-worker:latest"
docker tag "${ADMIN_IMAGE}" "${REGISTRY_PREFIX}/honeydue-admin:latest"
docker push "${REGISTRY_PREFIX}/honeydue-api:latest"
docker push "${REGISTRY_PREFIX}/honeydue-worker:latest"
docker push "${REGISTRY_PREFIX}/honeydue-admin:latest"
else
warn "Skipping build. Using images for tag: ${DEPLOY_TAG}"
fi
# --- Generate and apply ConfigMap from config.yaml ---
log "Generating env from config.yaml..."
ENV_FILE="$(mktemp)"
trap 'rm -f "${ENV_FILE}"' EXIT
generate_env > "${ENV_FILE}"
log "Creating ConfigMap..."
kubectl create configmap honeydue-config \
--namespace="${NAMESPACE}" \
--from-env-file="${ENV_FILE}" \
--dry-run=client -o yaml | kubectl apply -f -
# --- Configure TLS ---
if [[ "${TLS_MODE}" == "letsencrypt" ]]; then
LE_EMAIL="$(cfg_require tls.letsencrypt_email "Let's Encrypt email")"
log "Configuring Traefik with Let's Encrypt (${LE_EMAIL})..."
sed "s|LETSENCRYPT_EMAIL_PLACEHOLDER|${LE_EMAIL}|" \
"${MANIFESTS}/traefik/helmchartconfig.yaml" | kubectl apply -f -
TLS_SECRET="letsencrypt-cert"
TLS_ANNOTATION="traefik.ingress.kubernetes.io/router.tls.certresolver: letsencrypt"
elif [[ "${TLS_MODE}" == "cloudflare" ]]; then
log "Using Cloudflare origin cert for TLS..."
TLS_SECRET="cloudflare-origin-cert"
TLS_ANNOTATION=""
else
die "Unknown tls.mode: ${TLS_MODE} (expected: letsencrypt or cloudflare)"
fi
# --- Apply manifests ---
log "Applying manifests..."
kubectl apply -f "${MANIFESTS}/namespace.yaml"
kubectl apply -f "${MANIFESTS}/rbac.yaml"
kubectl apply -f "${MANIFESTS}/postgres/"
kubectl apply -f "${MANIFESTS}/redis/"
kubectl apply -f "${MANIFESTS}/minio/deployment.yaml"
kubectl apply -f "${MANIFESTS}/minio/pvc.yaml"
kubectl apply -f "${MANIFESTS}/minio/service.yaml"
kubectl apply -f "${MANIFESTS}/ingress/middleware.yaml"
# Apply ingress with domain and TLS substitution
sed -e "s|API_DOMAIN_PLACEHOLDER|${API_DOMAIN}|g" \
-e "s|ADMIN_DOMAIN_PLACEHOLDER|${ADMIN_DOMAIN}|g" \
-e "s|TLS_SECRET_PLACEHOLDER|${TLS_SECRET}|g" \
-e "s|# TLS_ANNOTATIONS_PLACEHOLDER|${TLS_ANNOTATION}|g" \
"${MANIFESTS}/ingress/ingress.yaml" | kubectl apply -f -
# Apply app deployments with image substitution
sed "s|image: IMAGE_PLACEHOLDER|image: ${API_IMAGE}|" "${MANIFESTS}/api/deployment.yaml" | kubectl apply -f -
kubectl apply -f "${MANIFESTS}/api/service.yaml"
sed "s|image: IMAGE_PLACEHOLDER|image: ${WORKER_IMAGE}|" "${MANIFESTS}/worker/deployment.yaml" | kubectl apply -f -
sed "s|image: IMAGE_PLACEHOLDER|image: ${ADMIN_IMAGE}|" "${MANIFESTS}/admin/deployment.yaml" | kubectl apply -f -
kubectl apply -f "${MANIFESTS}/admin/service.yaml"
# Apply network policies
kubectl apply -f "${MANIFESTS}/network-policies.yaml"
# --- Wait for infrastructure rollouts ---
log "Waiting for infrastructure rollouts..."
kubectl rollout status deployment/postgres -n "${NAMESPACE}" --timeout=120s
kubectl rollout status deployment/redis -n "${NAMESPACE}" --timeout=120s
kubectl rollout status deployment/minio -n "${NAMESPACE}" --timeout=120s
# --- Create MinIO bucket ---
log "Creating MinIO bucket..."
# Delete previous job run if it exists (jobs are immutable)
kubectl delete job minio-create-bucket -n "${NAMESPACE}" 2>/dev/null || true
kubectl apply -f "${MANIFESTS}/minio/create-bucket-job.yaml"
kubectl wait --for=condition=complete job/minio-create-bucket -n "${NAMESPACE}" --timeout=120s
# --- Wait for app rollouts ---
log "Waiting for app rollouts..."
kubectl rollout status deployment/api -n "${NAMESPACE}" --timeout=300s
kubectl rollout status deployment/worker -n "${NAMESPACE}" --timeout=300s
kubectl rollout status deployment/admin -n "${NAMESPACE}" --timeout=300s
# --- Done ---
log ""
log "Deploy completed successfully."
log "Tag: ${DEPLOY_TAG}"
log "TLS: ${TLS_MODE}"
log "Images:"
log " API: ${API_IMAGE}"
log " Worker: ${WORKER_IMAGE}"
log " Admin: ${ADMIN_IMAGE}"
log ""
log "Run ./scripts/04-verify.sh to check cluster health."


@@ -0,0 +1,161 @@
#!/usr/bin/env bash
set -euo pipefail
NAMESPACE="honeydue"
log() { printf '[verify] %s\n' "$*"; }
sep() { printf '\n%s\n' "--- $1 ---"; }
ok() { printf '[verify] ✓ %s\n' "$*"; }
fail() { printf '[verify] ✗ %s\n' "$*"; }
command -v kubectl >/dev/null 2>&1 || { echo "Missing: kubectl" >&2; exit 1; }
sep "Node"
kubectl get nodes -o wide
sep "Pods"
kubectl get pods -n "${NAMESPACE}" -o wide
sep "Services"
kubectl get svc -n "${NAMESPACE}"
sep "Ingress"
kubectl get ingress -n "${NAMESPACE}"
sep "PVCs"
kubectl get pvc -n "${NAMESPACE}"
sep "Secrets (names only)"
kubectl get secrets -n "${NAMESPACE}"
sep "ConfigMap keys"
kubectl get configmap honeydue-config -n "${NAMESPACE}" -o jsonpath='{.data}' 2>/dev/null | python3 -c "
import json, sys
try:
d = json.load(sys.stdin)
for k in sorted(d.keys()):
v = d[k]
if any(s in k.upper() for s in ['PASSWORD', 'SECRET', 'TOKEN', 'KEY']):
v = '***REDACTED***'
print(f' {k}={v}')
except Exception:
print(' (could not parse)')
" 2>/dev/null || log "ConfigMap not found or not parseable"
sep "Warning Events (last 15 min)"
kubectl get events -n "${NAMESPACE}" --field-selector type=Warning --sort-by='.lastTimestamp' 2>/dev/null | tail -20 || log "No warning events"
sep "Pod Restart Counts"
kubectl get pods -n "${NAMESPACE}" -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{range .status.containerStatuses[*]}{.restartCount}{end}{"\n"}{end}' 2>/dev/null || true
# =============================================================================
# Infrastructure Health
# =============================================================================
sep "PostgreSQL Health"
PG_POD="$(kubectl get pods -n "${NAMESPACE}" -l app.kubernetes.io/name=postgres -o jsonpath='{.items[0].metadata.name}' 2>/dev/null || true)"
if [[ -n "${PG_POD}" ]]; then
kubectl exec -n "${NAMESPACE}" "${PG_POD}" -- pg_isready -U honeydue 2>/dev/null && ok "PostgreSQL is ready" || fail "PostgreSQL is NOT ready"
else
fail "No PostgreSQL pod found"
fi
sep "Redis Health"
REDIS_POD="$(kubectl get pods -n "${NAMESPACE}" -l app.kubernetes.io/name=redis -o jsonpath='{.items[0].metadata.name}' 2>/dev/null || true)"
if [[ -n "${REDIS_POD}" ]]; then
kubectl exec -n "${NAMESPACE}" "${REDIS_POD}" -- sh -c 'if [ -n "$REDIS_PASSWORD" ]; then redis-cli -a "$REDIS_PASSWORD" ping 2>/dev/null; else redis-cli ping; fi' 2>/dev/null | grep -q PONG && ok "Redis is ready" || fail "Redis is NOT ready"
else
fail "No Redis pod found"
fi
sep "MinIO Health"
MINIO_POD="$(kubectl get pods -n "${NAMESPACE}" -l app.kubernetes.io/name=minio -o jsonpath='{.items[0].metadata.name}' 2>/dev/null || true)"
if [[ -n "${MINIO_POD}" ]]; then
kubectl exec -n "${NAMESPACE}" "${MINIO_POD}" -- curl -sf http://localhost:9000/minio/health/ready 2>/dev/null && ok "MinIO is ready" || fail "MinIO is NOT ready"
else
fail "No MinIO pod found"
fi
sep "API Health Check"
API_POD="$(kubectl get pods -n "${NAMESPACE}" -l app.kubernetes.io/name=api -o jsonpath='{.items[0].metadata.name}' 2>/dev/null || true)"
if [[ -n "${API_POD}" ]]; then
kubectl exec -n "${NAMESPACE}" "${API_POD}" -- curl -sf http://localhost:8000/api/health/ 2>/dev/null && ok "API health check passed" || fail "API health check FAILED"
else
fail "No API pod found"
fi
sep "Resource Usage"
kubectl top pods -n "${NAMESPACE}" 2>/dev/null || log "Metrics server not available (install with: kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml)"
# =============================================================================
# Security Verification
# =============================================================================
sep "Security: Network Policies"
NP_COUNT="$(kubectl get networkpolicy -n "${NAMESPACE}" --no-headers 2>/dev/null | wc -l | tr -d ' ')"
if (( NP_COUNT >= 5 )); then
ok "Found ${NP_COUNT} network policies"
kubectl get networkpolicy -n "${NAMESPACE}" --no-headers 2>/dev/null | while read -r line; do
echo " ${line}"
done
else
fail "Expected 5+ network policies, found ${NP_COUNT}"
fi
sep "Security: Service Accounts"
SA_COUNT="$(kubectl get sa -n "${NAMESPACE}" --no-headers 2>/dev/null | grep -cv default | tr -d ' ')"
if (( SA_COUNT >= 6 )); then
ok "Found ${SA_COUNT} custom service accounts (api, worker, admin, redis, postgres, minio)"
else
fail "Expected 6 custom service accounts, found ${SA_COUNT}"
fi
kubectl get sa -n "${NAMESPACE}" --no-headers 2>/dev/null | while read -r line; do
echo " ${line}"
done
sep "Security: Pod Security Contexts"
PODS_WITHOUT_SECURITY="$(kubectl get pods -n "${NAMESPACE}" -o json 2>/dev/null | python3 -c "
import json, sys
try:
data = json.load(sys.stdin)
issues = []
for pod in data.get('items', []):
name = pod['metadata']['name']
spec = pod['spec']
sc = spec.get('securityContext', {})
# Postgres is exempt from runAsNonRoot (entrypoint requirement)
is_postgres = any('postgres' in c.get('image', '') for c in spec.get('containers', []))
if not sc.get('runAsNonRoot') and not is_postgres:
issues.append(f'{name}: missing runAsNonRoot')
for c in spec.get('containers', []):
csc = c.get('securityContext', {})
if csc.get('allowPrivilegeEscalation', True):
issues.append(f'{name}/{c[\"name\"]}: allowPrivilegeEscalation not false')
if issues:
for i in issues:
print(i)
else:
print('OK')
except Exception as e:
print(f'Error: {e}')
" 2>/dev/null || echo "Error parsing pod specs")"
if [[ "${PODS_WITHOUT_SECURITY}" == "OK" ]]; then
ok "All pods have proper security contexts"
else
fail "Pod security context issues:"
echo "${PODS_WITHOUT_SECURITY}" | while read -r line; do
echo " ${line}"
done
fi
sep "Security: Admin Basic Auth"
ADMIN_AUTH="$(kubectl get secret admin-basic-auth -n "${NAMESPACE}" -o name 2>/dev/null || true)"
if [[ -n "${ADMIN_AUTH}" ]]; then
ok "admin-basic-auth secret exists"
else
fail "admin-basic-auth secret not found — admin panel has no additional auth layer"
fi
echo ""
log "Verification complete."

deploy-k3s-dev/scripts/_config.sh Executable file

@@ -0,0 +1,152 @@
#!/usr/bin/env bash
# Shared config helper — sourced by all deploy scripts.
# Provides cfg() to read values from config.yaml.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
DEPLOY_DIR="$(cd "${SCRIPT_DIR}/.." && pwd)"
CONFIG_FILE="${DEPLOY_DIR}/config.yaml"
if [[ ! -f "${CONFIG_FILE}" ]]; then
if [[ -f "${CONFIG_FILE}.example" ]]; then
echo "[error] config.yaml not found. Run: cp config.yaml.example config.yaml" >&2
else
echo "[error] config.yaml not found." >&2
fi
exit 1
fi
# cfg "dotted.key.path" — reads a value from config.yaml
cfg() {
python3 -c "
import yaml, json, sys
with open(sys.argv[1]) as f:
c = yaml.safe_load(f)
keys = sys.argv[2].split('.')
v = c
for k in keys:
if isinstance(v, list):
v = v[int(k)]
else:
v = v[k]
if isinstance(v, bool):
print(str(v).lower())
elif isinstance(v, (dict, list)):
print(json.dumps(v))
else:
print('' if v is None else v)
" "${CONFIG_FILE}" "$1" 2>/dev/null
}
# cfg_require "key" "label" — reads value and dies if empty
cfg_require() {
local val
val="$(cfg "$1")"
if [[ -z "${val}" ]]; then
echo "[error] Missing required config: $1 ($2)" >&2
exit 1
fi
printf '%s' "${val}"
}
# generate_env — writes the flat env file the app expects to stdout
# Points DB at in-cluster PostgreSQL, storage at in-cluster MinIO
generate_env() {
python3 -c "
import yaml
with open('${CONFIG_FILE}') as f:
c = yaml.safe_load(f)
d = c['domains']
db = c['database']
em = c['email']
ps = c['push']
st = c['storage']
wk = c['worker']
ft = c['features']
aa = c.get('apple_auth', {})
ga = c.get('google_auth', {})
rd = c.get('redis', {})
def b(v):
return str(v).lower() if isinstance(v, bool) else str(v)
def val(v):
return '' if v is None else str(v)
lines = [
# API
'DEBUG=true',
f\"ALLOWED_HOSTS={d['api']},{d['base']},localhost\",
f\"CORS_ALLOWED_ORIGINS=https://{d['base']},https://{d['admin']}\",
'TIMEZONE=UTC',
f\"BASE_URL=https://{d['base']}\",
'PORT=8000',
# Admin
f\"NEXT_PUBLIC_API_URL=https://{d['api']}\",
f\"ADMIN_PANEL_URL=https://{d['admin']}\",
# Database (in-cluster PostgreSQL)
'DB_HOST=postgres.honeydue.svc.cluster.local',
'DB_PORT=5432',
f\"POSTGRES_USER={val(db['user'])}\",
f\"POSTGRES_DB={db['name']}\",
'DB_SSLMODE=disable',
f\"DB_MAX_OPEN_CONNS={db['max_open_conns']}\",
f\"DB_MAX_IDLE_CONNS={db['max_idle_conns']}\",
f\"DB_MAX_LIFETIME={db['max_lifetime']}\",
# Redis (in-cluster)
f\"REDIS_URL=redis://{':%s@' % val(rd.get('password')) if rd.get('password') else ''}redis.honeydue.svc.cluster.local:6379/0\",
'REDIS_DB=0',
# Email
f\"EMAIL_HOST={em['host']}\",
f\"EMAIL_PORT={em['port']}\",
f\"EMAIL_USE_TLS={b(em['use_tls'])}\",
f\"EMAIL_HOST_USER={val(em['user'])}\",
f\"DEFAULT_FROM_EMAIL={val(em['from'])}\",
# Push
'APNS_AUTH_KEY_PATH=/secrets/apns/apns_auth_key.p8',
f\"APNS_AUTH_KEY_ID={val(ps['apns_key_id'])}\",
f\"APNS_TEAM_ID={val(ps['apns_team_id'])}\",
f\"APNS_TOPIC={ps['apns_topic']}\",
f\"APNS_USE_SANDBOX={b(ps['apns_use_sandbox'])}\",
f\"APNS_PRODUCTION={b(ps['apns_production'])}\",
# Worker
f\"TASK_REMINDER_HOUR={wk['task_reminder_hour']}\",
f\"OVERDUE_REMINDER_HOUR={wk['overdue_reminder_hour']}\",
f\"DAILY_DIGEST_HOUR={wk['daily_digest_hour']}\",
# Storage (in-cluster MinIO — S3-compatible, same env vars as B2)
f\"B2_KEY_ID={val(st['minio_root_user'])}\",
# B2_APP_KEY injected from secret (MINIO_ROOT_PASSWORD)
f\"B2_BUCKET_NAME={val(st['bucket'])}\",
'B2_ENDPOINT=minio.honeydue.svc.cluster.local:9000',
'STORAGE_USE_SSL=false',
f\"STORAGE_MAX_FILE_SIZE={st['max_file_size']}\",
f\"STORAGE_ALLOWED_TYPES={st['allowed_types']}\",
# MinIO root user (for MinIO deployment + bucket init job)
f\"MINIO_ROOT_USER={val(st['minio_root_user'])}\",
# Features
f\"FEATURE_PUSH_ENABLED={b(ft['push_enabled'])}\",
f\"FEATURE_EMAIL_ENABLED={b(ft['email_enabled'])}\",
f\"FEATURE_WEBHOOKS_ENABLED={b(ft['webhooks_enabled'])}\",
f\"FEATURE_ONBOARDING_EMAILS_ENABLED={b(ft['onboarding_emails_enabled'])}\",
f\"FEATURE_PDF_REPORTS_ENABLED={b(ft['pdf_reports_enabled'])}\",
f\"FEATURE_WORKER_ENABLED={b(ft['worker_enabled'])}\",
# Apple auth/IAP
f\"APPLE_CLIENT_ID={val(aa.get('client_id'))}\",
f\"APPLE_TEAM_ID={val(aa.get('team_id'))}\",
f\"APPLE_IAP_KEY_ID={val(aa.get('iap_key_id'))}\",
f\"APPLE_IAP_ISSUER_ID={val(aa.get('iap_issuer_id'))}\",
f\"APPLE_IAP_BUNDLE_ID={val(aa.get('iap_bundle_id'))}\",
f\"APPLE_IAP_KEY_PATH={val(aa.get('iap_key_path'))}\",
f\"APPLE_IAP_SANDBOX={b(aa.get('iap_sandbox', True))}\",
# Google auth/IAP
f\"GOOGLE_CLIENT_ID={val(ga.get('client_id'))}\",
f\"GOOGLE_ANDROID_CLIENT_ID={val(ga.get('android_client_id'))}\",
f\"GOOGLE_IOS_CLIENT_ID={val(ga.get('ios_client_id'))}\",
f\"GOOGLE_IAP_PACKAGE_NAME={val(ga.get('iap_package_name'))}\",
f\"GOOGLE_IAP_SERVICE_ACCOUNT_PATH={val(ga.get('iap_service_account_path'))}\",
]
print('\n'.join(lines))
"
}


@@ -0,0 +1,61 @@
#!/usr/bin/env bash
set -euo pipefail
NAMESPACE="honeydue"
log() { printf '[rollback] %s\n' "$*"; }
die() { printf '[rollback][error] %s\n' "$*" >&2; exit 1; }
command -v kubectl >/dev/null 2>&1 || die "Missing: kubectl"
DEPLOYMENTS=("api" "worker" "admin")
# --- Show current state ---
echo "=== Current Rollout History ==="
for deploy in "${DEPLOYMENTS[@]}"; do
echo ""
echo "--- ${deploy} ---"
kubectl rollout history deployment/"${deploy}" -n "${NAMESPACE}" 2>/dev/null || echo " (not found)"
done
echo ""
echo "=== Current Images ==="
for deploy in "${DEPLOYMENTS[@]}"; do
IMAGE="$(kubectl get deployment "${deploy}" -n "${NAMESPACE}" -o jsonpath='{.spec.template.spec.containers[0].image}' 2>/dev/null || echo "n/a")"
echo " ${deploy}: ${IMAGE}"
done
# --- Confirm ---
echo ""
read -rp "Roll back all deployments to previous revision? [y/N] " confirm
if [[ "${confirm}" != "y" && "${confirm}" != "Y" ]]; then
log "Aborted."
exit 0
fi
# --- Rollback ---
for deploy in "${DEPLOYMENTS[@]}"; do
log "Rolling back ${deploy}..."
kubectl rollout undo deployment/"${deploy}" -n "${NAMESPACE}" 2>/dev/null || log "Skipping ${deploy} (not found or no previous revision)"
done
# --- Wait ---
log "Waiting for rollouts..."
for deploy in "${DEPLOYMENTS[@]}"; do
kubectl rollout status deployment/"${deploy}" -n "${NAMESPACE}" --timeout=300s 2>/dev/null || true
done
# --- Verify ---
echo ""
echo "=== Post-Rollback Images ==="
for deploy in "${DEPLOYMENTS[@]}"; do
IMAGE="$(kubectl get deployment "${deploy}" -n "${NAMESPACE}" -o jsonpath='{.spec.template.spec.containers[0].image}' 2>/dev/null || echo "n/a")"
echo " ${deploy}: ${IMAGE}"
done
log "Rollback complete. Run ./scripts/04-verify.sh to check health."


@@ -0,0 +1,22 @@
# Secrets Directory
Create these files before running `scripts/02-setup-secrets.sh`:
| File | Purpose |
|------|---------|
| `postgres_password.txt` | In-cluster PostgreSQL password |
| `secret_key.txt` | App signing secret (minimum 32 characters) |
| `email_host_password.txt` | SMTP password (Fastmail app password) |
| `fcm_server_key.txt` | Firebase Cloud Messaging server key (optional — Android not yet ready) |
| `apns_auth_key.p8` | Apple Push Notification private key |
| `minio_root_password.txt` | MinIO root password (minimum 8 characters) |
Optional (only if `tls.mode: cloudflare` in config.yaml):
| File | Purpose |
|------|---------|
| `cloudflare-origin.crt` | Cloudflare origin certificate (PEM) |
| `cloudflare-origin.key` | Cloudflare origin certificate key (PEM) |
All string config (registry token, domains, etc.) goes in `config.yaml` instead.
These files are gitignored and should never be committed.

deploy-k3s/.gitignore vendored Normal file

@@ -0,0 +1,20 @@
# Single config file (contains tokens and credentials)
config.yaml
# Generated files
kubeconfig
cluster-config.yaml
prod.env
# Secret files
secrets/*.txt
secrets/*.p8
secrets/*.pem
secrets/*.key
secrets/*.crt
!secrets/README.md
# Terraform / Hetzner state
*.tfstate
*.tfstate.backup
.terraform/

deploy-k3s/README.md Normal file

@@ -0,0 +1,391 @@
# honeyDue — K3s Production Deployment
Production Kubernetes deployment for honeyDue on Hetzner Cloud using K3s.
**Architecture**: 3-node HA K3s cluster (CX33), Neon Postgres, Redis (in-cluster), Backblaze B2 (uploads), Cloudflare CDN/TLS.
**Domains**: `api.myhoneydue.com`, `admin.myhoneydue.com`
---
## Quick Start
```bash
cd honeyDueAPI-go/deploy-k3s
# 1. Fill in the single config file
cp config.yaml.example config.yaml
# Edit config.yaml — fill in ALL empty values
# 2. Create secret files
# See secrets/README.md for the full list
echo "your-neon-password" > secrets/postgres_password.txt
openssl rand -base64 48 > secrets/secret_key.txt
echo "your-smtp-password" > secrets/email_host_password.txt
echo "your-fcm-key" > secrets/fcm_server_key.txt
cp /path/to/AuthKey.p8 secrets/apns_auth_key.p8
cp /path/to/origin.pem secrets/cloudflare-origin.crt
cp /path/to/origin-key.pem secrets/cloudflare-origin.key
# 3. Provision → Secrets → Deploy
./scripts/01-provision-cluster.sh
./scripts/02-setup-secrets.sh
./scripts/03-deploy.sh
# 4. Set up Hetzner LB + Cloudflare DNS (see sections below)
# 5. Verify
./scripts/04-verify.sh
curl https://api.myhoneydue.com/api/health/
```
That's it. Everything reads from `config.yaml` + `secrets/`.
---
## Table of Contents
1. [Prerequisites](#1-prerequisites)
2. [Configuration](#2-configuration)
3. [Provision Cluster](#3-provision-cluster)
4. [Create Secrets](#4-create-secrets)
5. [Deploy](#5-deploy)
6. [Configure Load Balancer & DNS](#6-configure-load-balancer--dns)
7. [Verify](#7-verify)
8. [Monitoring & Logs](#8-monitoring--logs)
9. [Scaling](#9-scaling)
10. [Rollback](#10-rollback)
11. [Backup & DR](#11-backup--dr)
12. [Security Checklist](#12-security-checklist)
13. [Troubleshooting](#13-troubleshooting)
---
## 1. Prerequisites
| Tool | Install | Purpose |
|------|---------|---------|
| `hetzner-k3s` | `gem install hetzner-k3s` | Cluster provisioning |
| `kubectl` | https://kubernetes.io/docs/tasks/tools/ | Cluster management |
| `helm` | https://helm.sh/docs/intro/install/ | Optional: Prometheus/Grafana |
| `stern` | `brew install stern` | Multi-pod log tailing |
| `docker` | https://docs.docker.com/get-docker/ | Image building |
| `python3` | Pre-installed on macOS | Config parsing |
| `htpasswd` | `brew install httpd` or `apt install apache2-utils` | Admin basic auth secret |
Verify:
```bash
hetzner-k3s version && kubectl version --client && docker version && python3 --version
```
## 2. Configuration
There are two things to fill in:
### config.yaml — all string configuration
```bash
cp config.yaml.example config.yaml
```
Open `config.yaml` and fill in every empty `""` value:
| Section | What to fill in |
|---------|----------------|
| `cluster.hcloud_token` | Hetzner API token (Read/Write) — generate at console.hetzner.cloud |
| `registry.*` | GHCR credentials (same as Docker Swarm setup) |
| `database.host`, `database.user` | Neon PostgreSQL connection info |
| `email.user` | Fastmail email address |
| `push.apns_key_id`, `push.apns_team_id` | Apple Push Notification identifiers |
| `storage.b2_*` | Backblaze B2 bucket and credentials |
| `redis.password` | Strong password for Redis authentication (required for production) |
| `admin.basic_auth_user` | HTTP basic auth username for admin panel |
| `admin.basic_auth_password` | HTTP basic auth password for admin panel |
Everything else has sensible defaults. `config.yaml` is gitignored.
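The deploy scripts resolve these values through the `cfg` helper in `scripts/_config.sh`, which walks a dotted key path through the parsed YAML. A minimal self-contained sketch of that lookup (the demo file, key names, and `cfg_demo` wrapper are illustrative, not part of the repo; like the real helper, it assumes `python3` with PyYAML installed):

```shell
# Illustrative only: mimics the core of cfg() from scripts/_config.sh
# against a throwaway config file.
cat > /tmp/config-demo.yaml <<'EOF'
domains:
  api: api.myhoneydue.com
registry:
  server: ghcr.io
EOF

cfg_demo() {
  python3 - "$1" <<'PY'
import sys
import yaml

with open('/tmp/config-demo.yaml') as f:
    c = yaml.safe_load(f)
v = c
# Walk each dotted-path segment one dict level at a time.
for k in sys.argv[1].split('.'):
    v = v[k]
print('' if v is None else v)
PY
}

cfg_demo domains.api        # -> api.myhoneydue.com
cfg_demo registry.server    # -> ghcr.io
```

The real `cfg` additionally handles list indices, booleans, and nested dict/list values; see `scripts/_config.sh` for the full version.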
### secrets/ — file-based secrets
These are binary or multi-line files that can't go in YAML:
| File | Source |
|------|--------|
| `secrets/postgres_password.txt` | Your Neon database password |
| `secrets/secret_key.txt` | `openssl rand -base64 48` (min 32 chars) |
| `secrets/email_host_password.txt` | Fastmail app password |
| `secrets/fcm_server_key.txt` | Firebase console → Project Settings → Cloud Messaging |
| `secrets/apns_auth_key.p8` | Apple Developer → Keys → APNs key |
| `secrets/cloudflare-origin.crt` | Cloudflare → SSL/TLS → Origin Server → Create Certificate |
| `secrets/cloudflare-origin.key` | (saved with the certificate above) |
## 3. Provision Cluster
```bash
export KUBECONFIG=$(pwd)/kubeconfig
./scripts/01-provision-cluster.sh
```
This script:
1. Reads cluster config from `config.yaml`
2. Generates `cluster-config.yaml` for hetzner-k3s
3. Provisions 3x CX33 nodes with HA etcd (5-10 minutes)
4. Writes node IPs back into `config.yaml`
5. Labels the Redis node
After provisioning:
```bash
kubectl get nodes
```
## 4. Create Secrets
```bash
./scripts/02-setup-secrets.sh
```
This reads `config.yaml` for registry credentials and creates all Kubernetes Secrets from the `secrets/` files:
- `honeydue-secrets` — DB password, app secret, email password, FCM key, Redis password (if configured)
- `honeydue-apns-key` — APNS .p8 key (mounted as volume in pods)
- `ghcr-credentials` — GHCR image pull credentials
- `cloudflare-origin-cert` — TLS certificate for Ingress
- `admin-basic-auth` — htpasswd secret for admin panel basic auth (if configured)
## 5. Deploy
**Full deploy** (build + push + apply):
```bash
./scripts/03-deploy.sh
```
**Deploy pre-built images** (skip build):
```bash
./scripts/03-deploy.sh --skip-build --tag abc1234
```
The script:
1. Reads registry config from `config.yaml`
2. Builds and pushes 3 Docker images to GHCR
3. Generates a Kubernetes ConfigMap from `config.yaml` (converts to flat env vars)
4. Applies all manifests with image tag substitution
5. Waits for all rollouts to complete
## 6. Configure Load Balancer & DNS
### Hetzner Load Balancer
1. [Hetzner Console](https://console.hetzner.cloud/) → **Load Balancers → Create**
2. Location: **fsn1**, add all 3 nodes as targets
3. Service: TCP 443 → 443, health check on TCP 443
4. Note the LB IP and update `load_balancer_ip` in `config.yaml`
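If you prefer the CLI over the console, the same load balancer can be sketched with `hcloud` (the LB name, type, and node names below are assumptions; verify flags against your `hcloud` version):

```bash
# Sketch only — assumes the hcloud CLI is authenticated (HCLOUD_TOKEN set).
hcloud load-balancer create --name honeydue-lb --type lb11 --location fsn1
hcloud load-balancer add-service honeydue-lb \
  --protocol tcp --listen-port 443 --destination-port 443
hcloud load-balancer add-target honeydue-lb --server <node-1-name>   # repeat per node
hcloud load-balancer describe honeydue-lb   # note the public IP for config.yaml
```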
### Cloudflare DNS
1. [Cloudflare Dashboard](https://dash.cloudflare.com/) → `myhoneydue.com` → **DNS**
| Type | Name | Content | Proxy |
|------|------|---------|-------|
| A | `api` | `<LB_IP>` | Proxied (orange cloud) |
| A | `admin` | `<LB_IP>` | Proxied (orange cloud) |
2. **SSL/TLS → Overview** → Set mode to **Full (Strict)**
3. If you haven't generated the origin cert yet:
**SSL/TLS → Origin Server → Create Certificate**
- Hostnames: `*.myhoneydue.com`, `myhoneydue.com`
- Validity: 15 years
- Save to `secrets/cloudflare-origin.crt` and `secrets/cloudflare-origin.key`
- Re-run `./scripts/02-setup-secrets.sh`
## 7. Verify
```bash
# Automated cluster health check
./scripts/04-verify.sh
# External health check (after DNS propagation)
curl -v https://api.myhoneydue.com/api/health/
```
Expected: `{"status": "ok"}` with HTTP 200.
## 8. Monitoring & Logs
### Logs with stern
```bash
stern -n honeydue api # All API pod logs
stern -n honeydue worker # All worker logs
stern -n honeydue . # Everything
stern -n honeydue api | grep ERROR # Filter
```
### kubectl logs
```bash
kubectl logs -n honeydue deployment/api -f
kubectl logs -n honeydue <pod-name> --previous # Crashed container
```
### Resource usage
```bash
kubectl top pods -n honeydue
kubectl top nodes
```
### Optional: Prometheus + Grafana
```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack \
--namespace monitoring --create-namespace \
--set grafana.adminPassword=your-password
# Access Grafana
kubectl port-forward -n monitoring svc/monitoring-grafana 3001:80
# Open http://localhost:3001
```
## 9. Scaling
### Manual
```bash
kubectl scale deployment/api -n honeydue --replicas=5
kubectl scale deployment/worker -n honeydue --replicas=3
```
### HPA (auto-scaling)
API auto-scales 3→6 replicas on CPU > 70% or memory > 80%:
```bash
kubectl get hpa -n honeydue
kubectl describe hpa api -n honeydue
```
### Adding nodes
Edit `config.yaml` to add nodes, then re-run provisioning:
```bash
./scripts/01-provision-cluster.sh
```
## 10. Rollback
```bash
./scripts/rollback.sh
```
Shows rollout history, asks for confirmation, rolls back all deployments to previous revision.
Single deployment rollback:
```bash
kubectl rollout undo deployment/api -n honeydue
```
## 11. Backup & DR
| Component | Strategy | Action Required |
|-----------|----------|-----------------|
| PostgreSQL | Neon PITR (automatic) | None |
| Redis | Reconstructible cache + Asynq queue | None |
| etcd | K3s auto-snapshots (12h, keeps 5) | None |
| B2 Storage | B2 versioning + lifecycle rules | Enable in B2 settings |
| Secrets | Local `secrets/` + `config.yaml` | Keep secure offline backup |
**Disaster recovery**: Re-provision → re-create secrets → re-deploy. Database recovers via Neon PITR.
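In script terms, recovery is simply the normal bring-up re-run against fresh infrastructure (the tag placeholder stands for whatever image tag you last shipped):

```bash
./scripts/01-provision-cluster.sh
./scripts/02-setup-secrets.sh
./scripts/03-deploy.sh --skip-build --tag <last-good-tag>
./scripts/04-verify.sh
```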
## 12. Security
See **[SECURITY.md](SECURITY.md)** for the comprehensive hardening guide, incident response playbooks, and full compliance checklist.
### Summary of deployed security controls
| Control | Status | Manifests |
|---------|--------|-----------|
| Pod security contexts (non-root, read-only FS, no caps) | Applied | All `deployment.yaml` |
| Network policies (default-deny + explicit allows) | Applied | `manifests/network-policies.yaml` |
| RBAC (dedicated SAs, no K8s API access) | Applied | `manifests/rbac.yaml` |
| Pod disruption budgets | Applied | `manifests/pod-disruption-budgets.yaml` |
| Redis authentication | Applied (if `redis.password` set) | `redis/deployment.yaml` |
| Cloudflare-only origin lockdown | Applied | `ingress/ingress.yaml` |
| Admin basic auth | Applied (if `admin.*` set) | `ingress/middleware.yaml` |
| Security headers (HSTS, CSP, Permissions-Policy) | Applied | `ingress/middleware.yaml` |
| Secret encryption at rest | K3s config | `--secrets-encryption` |
### Quick checklist
- [ ] Hetzner Firewall: allow only 22, 443, 6443 from your IP
- [ ] SSH: key-only auth (`PasswordAuthentication no`)
- [ ] `redis.password` set in `config.yaml`
- [ ] `admin.basic_auth_user` and `admin.basic_auth_password` set in `config.yaml`
- [ ] `kubeconfig`: `chmod 600 kubeconfig`, never commit
- [ ] `config.yaml`: contains tokens — never commit, keep secure backup
- [ ] Image scanning: `trivy image` or `docker scout cves` before deploy
- [ ] Run `./scripts/04-verify.sh` — includes automated security checks
## 13. Troubleshooting
### ImagePullBackOff
```bash
kubectl describe pod <pod-name> -n honeydue
# Check: image name, GHCR credentials, image exists
```
Fix: verify `registry.*` in config.yaml, re-run `02-setup-secrets.sh`.
### CrashLoopBackOff
```bash
kubectl logs <pod-name> -n honeydue --previous
# Common: missing env vars, DB connection failure, invalid APNS key
```
### Redis connection refused / NOAUTH
```bash
kubectl get pods -n honeydue -l app.kubernetes.io/name=redis
# If redis.password is set, you must authenticate:
kubectl exec -it deploy/redis -n honeydue -- redis-cli -a "$REDIS_PASSWORD" ping
# Without -a: (error) NOAUTH Authentication required.
```
### Health check failures
```bash
kubectl exec -it deploy/api -n honeydue -- curl -v http://localhost:8000/api/health/
kubectl exec -it deploy/api -n honeydue -- env | sort
```
### Pods stuck in Pending
```bash
kubectl describe pod <pod-name> -n honeydue
# For Redis: ensure a node has label honeydue/redis=true
kubectl get nodes --show-labels | grep redis
```
### DNS not resolving
```bash
dig api.myhoneydue.com +short
# Verify LB IP matches what's in config.yaml
```
### Certificate / TLS errors
```bash
kubectl get secret cloudflare-origin-cert -n honeydue
kubectl describe ingress honeydue -n honeydue
curl -vk --resolve api.myhoneydue.com:443:<NODE_IP> https://api.myhoneydue.com/api/health/
```

deploy-k3s/SECURITY.md

@@ -0,0 +1,813 @@
# honeyDue — Production Security Hardening Guide
Comprehensive security documentation for the honeyDue K3s deployment. Covers every layer from cloud provider to application.
**Last updated**: 2026-03-28
---
## Table of Contents
1. [Threat Model](#1-threat-model)
2. [Hetzner Cloud (Host)](#2-hetzner-cloud-host)
3. [K3s Cluster](#3-k3s-cluster)
4. [Pod Security](#4-pod-security)
5. [Network Segmentation](#5-network-segmentation)
6. [Redis](#6-redis)
7. [PostgreSQL (Neon)](#7-postgresql-neon)
8. [Cloudflare](#8-cloudflare)
9. [Container Images](#9-container-images)
10. [Secrets Management](#10-secrets-management)
11. [B2 Object Storage](#11-b2-object-storage)
12. [Monitoring & Alerting](#12-monitoring--alerting)
13. [Incident Response](#13-incident-response)
14. [Compliance Checklist](#14-compliance-checklist)
---
## 1. Threat Model
### What We're Protecting
| Asset | Impact if Compromised |
|-------|----------------------|
| User credentials (bcrypt hashes) | Account takeover, password reuse attacks |
| Auth tokens | Session hijacking |
| Personal data (email, name, residences) | Privacy violation, regulatory exposure |
| Push notification keys (APNs, FCM) | Spam push to all users, key revocation |
| Cloudflare origin cert | Direct TLS impersonation |
| Database credentials | Full data exfiltration |
| Redis data | Session replay, job queue manipulation |
| B2 storage keys | Document theft or deletion |
### Attack Surface
```
Internet
   ▼
Cloudflare (WAF, DDoS protection, TLS termination)
   ▼  (origin cert, Full (Strict))
Hetzner Cloud Firewall (ports 22, 443, 6443)
   ▼
K3s Traefik Ingress (Cloudflare-only IP allowlist)
   ├──► API pods (Go) ──► Neon PostgreSQL (external, TLS)
   │                  ──► Redis (internal, authenticated)
   │                  ──► APNs/FCM (external, TLS)
   │                  ──► B2 Storage (external, TLS)
   │                  ──► SMTP (external, TLS)
   ├──► Admin pods (Next.js) ──► API pods (internal)
   └──► Worker pods (Go) ──► same as API
```
### Trust Boundaries
1. **Internet → Cloudflare**: Untrusted. Cloudflare handles DDoS, WAF, TLS.
2. **Cloudflare → Origin**: Semi-trusted. Origin cert validates, IP allowlist enforces.
3. **Ingress → Pods**: Trusted network, but segmented by NetworkPolicy.
4. **Pods → External Services**: Outbound only, TLS required, credentials scoped.
5. **Pods → K8s API**: Denied. Service accounts have no permissions.
---
## 2. Hetzner Cloud (Host)
### Firewall Rules
Only three ports should be open on the Hetzner Cloud Firewall:
| Port | Protocol | Source | Purpose |
|------|----------|--------|---------|
| 22 | TCP | Your IP(s) only | SSH management |
| 443 | TCP | Cloudflare IPs only | HTTPS traffic |
| 6443 | TCP | Your IP(s) only | K3s API (kubectl) |
```bash
# Verify Hetzner firewall rules (Hetzner CLI)
hcloud firewall describe honeydue-fw
```
### SSH Hardening
- **Key-only authentication** — password auth disabled in `/etc/ssh/sshd_config`
- **Root login disabled** — `PermitRootLogin no`
- **fail2ban active** — auto-bans IPs after 5 failed SSH attempts
```bash
# Verify SSH config on each node
ssh user@NODE_IP "grep -E 'PasswordAuthentication|PermitRootLogin' /etc/ssh/sshd_config"
# Expected: PasswordAuthentication no, PermitRootLogin no
# Check fail2ban status
ssh user@NODE_IP "sudo fail2ban-client status sshd"
```
### OS Updates
```bash
# Enable unattended security updates (Ubuntu 24.04)
ssh user@NODE_IP "sudo apt install unattended-upgrades && sudo dpkg-reconfigure -plow unattended-upgrades"
```
---
## 3. K3s Cluster
### Secret Encryption at Rest
K3s is configured with `secrets-encryption: true` in the server config. This encrypts all Secret resources in etcd using AES-CBC.
```bash
# Verify encryption is active
k3s secrets-encrypt status
# Expected: Encryption Status: Enabled
# Rotate encryption keys (do periodically)
k3s secrets-encrypt rotate-keys
k3s secrets-encrypt reencrypt
```
### RBAC
Each workload has a dedicated ServiceAccount with `automountServiceAccountToken: false`:
| ServiceAccount | Used By | K8s API Access |
|---------------|---------|----------------|
| `api` | API deployment | None |
| `worker` | Worker deployment | None |
| `admin` | Admin deployment | None |
| `redis` | Redis deployment | None |
No Roles or RoleBindings are created — pods have zero K8s API access.
```bash
# Verify service accounts exist
kubectl get sa -n honeydue
# Verify no roles are bound
kubectl get rolebindings -n honeydue
kubectl get clusterrolebindings | grep honeydue
# Expected: no results
```
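The pattern above can be sketched as a manifest (illustrative only — `manifests/rbac.yaml` is the deployed source of truth):

```yaml
# Sketch of one dedicated ServiceAccount; names/labels assumed from this doc.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: api
  namespace: honeydue
  labels:
    app.kubernetes.io/part-of: honeydue
# No token mounted into pods; with no Role/RoleBinding, the SA has zero API access.
automountServiceAccountToken: false
```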
### Pod Disruption Budgets
Prevent node maintenance from taking down all replicas:
| Workload | Replicas | minAvailable |
|----------|----------|-------------|
| API | 3 | 2 |
| Worker | 2 | 1 |
```bash
# Verify PDBs
kubectl get pdb -n honeydue
```
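A PDB matching the API row above can be sketched like this (illustrative — `manifests/pod-disruption-budgets.yaml` is authoritative):

```yaml
# Sketch: with 3 API replicas, minAvailable: 2 lets node drains evict
# at most one API pod at a time.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: api
  namespace: honeydue
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: api
```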
### Audit Logging (Optional Enhancement)
K3s supports audit logging for API server requests:
```yaml
# Add to K3s server config for detailed audit logging
# /etc/rancher/k3s/audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: Metadata
    resources:
      - group: ""
        resources: ["secrets", "configmaps"]
  - level: RequestResponse
    users: ["system:anonymous"]
  - level: None
    resources:
      - group: ""
        resources: ["events"]
```
### WireGuard (Optional Enhancement)
K3s supports WireGuard for encrypting inter-node traffic:
```bash
# Enable WireGuard on K3s (add to server args)
# --flannel-backend=wireguard-native
```
---
## 4. Pod Security
### Security Contexts
Every pod runs with these security restrictions:
**Pod-level:**
```yaml
securityContext:
  runAsNonRoot: true
  runAsUser: <uid>        # 1000 (api/worker), 1001 (admin), 999 (redis)
  runAsGroup: <gid>
  fsGroup: <gid>
  seccompProfile:
    type: RuntimeDefault  # Linux kernel syscall filtering
```
**Container-level:**
```yaml
securityContext:
  allowPrivilegeEscalation: false  # Cannot gain more privileges than parent
  readOnlyRootFilesystem: true     # Filesystem is immutable
  capabilities:
    drop: ["ALL"]                  # No Linux capabilities
```
### Writable Directories
With `readOnlyRootFilesystem: true`, writable paths use emptyDir volumes:
| Pod | Path | Purpose | Backing |
|-----|------|---------|---------|
| API | `/tmp` | Temp files | emptyDir (64Mi) |
| Worker | `/tmp` | Temp files | emptyDir (64Mi) |
| Admin | `/app/.next/cache` | Next.js ISR cache | emptyDir (256Mi) |
| Admin | `/tmp` | Temp files | emptyDir (64Mi) |
| Redis | `/data` | Persistence | PVC (5Gi) |
| Redis | `/tmp` | AOF rewrite temp | emptyDir tmpfs (64Mi) |
### User IDs
| Container | UID:GID | Source |
|-----------|---------|--------|
| API | 1000:1000 | Dockerfile `app` user |
| Worker | 1000:1000 | Dockerfile `app` user |
| Admin | 1001:1001 | Dockerfile `nextjs` user |
| Redis | 999:999 | Alpine `redis` user |
```bash
# Verify all pods run as non-root
kubectl get pods -n honeydue -o jsonpath='{range .items[*]}{.metadata.name}{" runAsNonRoot="}{.spec.securityContext.runAsNonRoot}{"\n"}{end}'
```
---
## 5. Network Segmentation
### Default-Deny Policy
All ingress and egress traffic in the `honeydue` namespace is denied by default. Explicit NetworkPolicy rules allow only necessary traffic.
### Allowed Traffic
```
             ┌──────────────┐
             │   Traefik    │
             │ (kube-system)│
             └──────┬───────┘
          ┌─────────┼─────────┐
          │         │         │
          ▼         ▼         │
     ┌────────┐ ┌────────┐    │
     │  API   │ │ Admin  │    │
     │ :8000  │ │ :3000  │    │
     └───┬────┘ └────┬───┘    │
         │           │        │
 ┌───────┤           │        │
 │       │           │        │
 ▼       ▼           ▼        │
┌───────┐ ┌────────┐ ┌────────┐ │
│ Redis │ │External│ │  API   │ │
│ :6379 │ │Services│ │(in-clr)│ │
└───────┘ └────────┘ └────────┘ │
    ▲                           │
    │        ┌────────┐         │
    └────────│ Worker │─────────┘
             └────────┘
```
| Policy | From | To | Ports |
|--------|------|----|-------|
| `default-deny-all` | all | all | none |
| `allow-dns` | all pods | kube-dns | 53 UDP/TCP |
| `allow-ingress-to-api` | Traefik (kube-system) | API pods | 8000 |
| `allow-ingress-to-admin` | Traefik (kube-system) | Admin pods | 3000 |
| `allow-ingress-to-redis` | API + Worker pods | Redis | 6379 |
| `allow-egress-from-api` | API pods | Redis, external (443, 5432, 587) | various |
| `allow-egress-from-worker` | Worker pods | Redis, external (443, 5432, 587) | various |
| `allow-egress-from-admin` | Admin pods | API pods (in-cluster) | 8000 |
**Key restrictions:**
- Redis is reachable ONLY from API and Worker pods
- Admin can ONLY reach the API service (no direct DB/Redis access)
- No pod can reach private IP ranges except in-cluster services
- External egress limited to specific ports (443, 5432, 587)
```bash
# Verify network policies
kubectl get networkpolicy -n honeydue
# Test: admin pod should NOT be able to reach Redis
kubectl exec -n honeydue deploy/admin -- nc -zv redis.honeydue.svc.cluster.local 6379
# Expected: timeout/refused
```
---
## 6. Redis
### Authentication
Redis requires a password when `redis.password` is set in `config.yaml`:
- Password passed via `REDIS_PASSWORD` environment variable from `honeydue-secrets`
- Redis starts with `--requirepass $REDIS_PASSWORD`
- Health probes authenticate with `-a $REDIS_PASSWORD`
- Go API connects via `redis://:PASSWORD@redis.honeydue.svc.cluster.local:6379/0`
### Network Isolation
- Redis has **no Ingress** — not exposed outside the cluster
- NetworkPolicy restricts access to API and Worker pods only
- Admin pods cannot reach Redis
### Memory Limits
- `--maxmemory 256mb` — hard cap on Redis memory
- `--maxmemory-policy noeviction` — returns errors rather than silently evicting data
- K8s resource limit: 512Mi (headroom for AOF rewrite)
### Dangerous Command Renaming (Optional Enhancement)
For additional protection, rename dangerous commands in a custom `redis.conf`:
```
rename-command FLUSHDB ""
rename-command FLUSHALL ""
rename-command DEBUG ""
rename-command CONFIG "HONEYDUE_CONFIG_a7f3b"
```
```bash
# Verify Redis auth is required
kubectl exec -n honeydue deploy/redis -- redis-cli ping
# Expected: (error) NOAUTH Authentication required.
kubectl exec -n honeydue deploy/redis -- redis-cli -a "$REDIS_PASSWORD" ping
# Expected: PONG
```
---
## 7. PostgreSQL (Neon)
### Connection Security
- **SSL required**: `sslmode=require` in connection string
- **Connection limits**: `max_open_conns=25`, `max_idle_conns=10`
- **Scoped credentials**: Database user has access only to `honeydue` database
- **Password rotation**: Change in Neon dashboard, update `secrets/postgres_password.txt`, re-run `02-setup-secrets.sh`
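Assembled, the DSN looks like this (a sketch — host and user are the same placeholders used in the config example, not real values):

```
postgres://USER:PASSWORD@ep-xxx.us-east-2.aws.neon.tech:5432/honeydue?sslmode=require
```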
### Access Control
- Only API and Worker pods have egress to port 5432 (NetworkPolicy enforced)
- Admin pods cannot reach the database directly
- Redis pods have no external egress
```bash
# Verify only API/Worker can reach Neon
kubectl exec -n honeydue deploy/admin -- nc -zv ep-xxx.us-east-2.aws.neon.tech 5432
# Expected: timeout (blocked by network policy)
```
### Query Safety
- GORM uses parameterized queries (SQL injection prevention)
- No raw SQL in handlers — all queries go through repositories
- Decimal fields use `shopspring/decimal` (no floating-point errors)
---
## 8. Cloudflare
### TLS Configuration
- **Mode**: Full (Strict) — Cloudflare validates the origin certificate
- **Origin cert**: Stored as K8s Secret `cloudflare-origin-cert`
- **Minimum TLS**: 1.2 (set in Cloudflare dashboard)
- **HSTS**: Enabled via security headers middleware
### Origin Lockdown
The `cloudflare-only` Traefik middleware restricts all ingress to Cloudflare IP ranges only. Direct requests to the origin IP are rejected with 403.
```bash
# Test: direct request to origin should fail
curl -k https://ORIGIN_IP/api/health/
# Expected: 403 Forbidden
# Test: request through Cloudflare should work
curl https://api.myhoneydue.com/api/health/
# Expected: 200 OK
```
### Cloudflare IP Range Updates
Cloudflare IP ranges change infrequently but should be checked periodically:
```bash
# Compare current ranges with deployed middleware
diff <(curl -s https://www.cloudflare.com/ips-v4; curl -s https://www.cloudflare.com/ips-v6) \
<(kubectl get middleware cloudflare-only -n honeydue -o jsonpath='{.spec.ipAllowList.sourceRange[*]}' | tr ' ' '\n')
```
### WAF & Rate Limiting
- **Cloudflare WAF**: Enable managed rulesets in dashboard (OWASP Core, Cloudflare Specials)
- **Rate limiting**: Traefik middleware (100 req/min, burst 200) + Go API auth rate limiting
- **Bot management**: Enable in Cloudflare dashboard for API routes
### Security Headers
Applied via Traefik middleware to all responses:
| Header | Value |
|--------|-------|
| `Strict-Transport-Security` | `max-age=31536000; includeSubDomains` |
| `X-Frame-Options` | `DENY` |
| `X-Content-Type-Options` | `nosniff` |
| `X-XSS-Protection` | `1; mode=block` |
| `Referrer-Policy` | `strict-origin-when-cross-origin` |
| `Content-Security-Policy` | `default-src 'self'; frame-ancestors 'none'` |
| `Permissions-Policy` | `camera=(), microphone=(), geolocation=()` |
| `X-Permitted-Cross-Domain-Policies` | `none` |
---
## 9. Container Images
### Build Security
- **Multi-stage builds**: Build stage discarded, only runtime artifacts copied
- **Alpine base**: Minimal attack surface (~5MB base)
- **Non-root users**: `app:1000` (Go), `nextjs:1001` (admin)
- **Stripped binaries**: Go binaries built with `-ldflags "-s -w"` (no debug symbols)
- **No shell in final image** (Go containers): Only the binary + CA certs
### Image Scanning (Recommended)
Add image scanning to CI/CD before pushing to GHCR:
```bash
# Trivy scan (run in CI)
trivy image --severity HIGH,CRITICAL --exit-code 1 ghcr.io/NAMESPACE/honeydue-api:latest
# Grype alternative
grype ghcr.io/NAMESPACE/honeydue-api:latest --fail-on high
```
### Version Pinning
- Redis image: `redis:7-alpine` (pin to specific tag in production, e.g., `redis:7.4.2-alpine`)
- Go base: pinned in Dockerfile
- Node base: pinned in admin Dockerfile
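Tag-plus-digest pinning can be sketched as below; the `<digest>` values are placeholders to resolve yourself (e.g. with `docker buildx imagetools inspect <image>`), not real digests:

```dockerfile
# Pin by tag AND digest so rebuilds are byte-for-byte reproducible.
FROM golang:1.25-alpine@sha256:<digest> AS builder
# ...build stage...
FROM alpine@sha256:<digest>
# ...runtime stage...
```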
---
## 10. Secrets Management
### At-Rest Encryption
K3s encrypts all Secret resources in etcd with AES-CBC (`--secrets-encryption` flag).
### Secret Inventory
| Secret | Contains | Rotation Procedure |
|--------|----------|--------------------|
| `honeydue-secrets` | DB password, SECRET_KEY, SMTP password, FCM key, Redis password | Update source files + re-run `02-setup-secrets.sh` |
| `honeydue-apns-key` | APNs .p8 private key | Replace file + re-run `02-setup-secrets.sh` |
| `cloudflare-origin-cert` | TLS cert + key | Regenerate in Cloudflare + re-run `02-setup-secrets.sh` |
| `ghcr-credentials` | Registry PAT | Regenerate GitHub PAT + re-run `02-setup-secrets.sh` |
| `admin-basic-auth` | htpasswd hash | Update config.yaml + re-run `02-setup-secrets.sh` |
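The `admin-basic-auth` secret holds an htpasswd-format `user:hash` entry. A local way to generate one is sketched below (illustrative — `02-setup-secrets.sh` derives the real entry from `config.yaml`; USER/PASS here are example values):

```shell
#!/bin/sh
# Build an htpasswd-style basic-auth entry using openssl's apr1 (MD5-crypt) scheme,
# which Traefik's basicAuth middleware accepts.
USER=admin
PASS='S3cure-Pass'
HASH=$(openssl passwd -apr1 "$PASS")
echo "${USER}:${HASH}"
# Output has the shape: admin:$apr1$<salt>$<hash>
```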
### Rotation Procedure
```bash
# 1. Update the secret source (file or config.yaml value)
# 2. Re-run the secrets script
./scripts/02-setup-secrets.sh
# 3. Restart affected pods to pick up new secret values
kubectl rollout restart deployment/api deployment/worker -n honeydue
# 4. Verify pods are healthy
kubectl get pods -n honeydue -w
```
### Secret Hygiene
- `secrets/` directory is gitignored — never committed
- `config.yaml` is gitignored — never committed
- Scripts validate secret files exist and aren't empty before creating K8s secrets
- `SECRET_KEY` requires minimum 32 characters
- ConfigMap redacts sensitive values in `04-verify.sh` output
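Generating a compliant `SECRET_KEY` is a one-liner; the sketch below also enforces the 32-character minimum before writing the file (the output path matches the rotation playbook in §13):

```shell
#!/bin/sh
# Generate a 64-hex-char SECRET_KEY (well above the 32-char minimum).
OUT=${1:-secrets/secret_key.txt}   # pass another path to try it outside the repo
KEY=$(openssl rand -hex 32)
# Sanity check: refuse to write anything shorter than expected.
[ "${#KEY}" -eq 64 ] || { echo "unexpected key length" >&2; exit 1; }
mkdir -p "$(dirname "$OUT")"
printf '%s\n' "$KEY" > "$OUT"
echo "wrote ${#KEY}-char key to $OUT"
```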
---
## 11. B2 Object Storage
### Access Control
- **Scoped application key**: Create a B2 key with access to only the `honeydue` bucket
- **Permissions**: Read + Write only (no `deleteFiles`, no `listAllBucketNames`)
- **Bucket-only**: Key cannot access other buckets in the account
```bash
# Create scoped B2 key (Backblaze CLI)
b2 create-key --bucket BUCKET_NAME honeydue-api readFiles,writeFiles,listFiles
```
### Upload Validation (Go API)
- File size limit: `STORAGE_MAX_FILE_SIZE` (10MB default)
- Allowed MIME types: `STORAGE_ALLOWED_TYPES` (images + PDF only)
- Path traversal protection in upload handler
- Files served via authenticated proxy (`media_handler`) — no direct B2 URLs exposed to clients
### Versioning
B2 buckets keep prior file versions by default; confirm that no lifecycle rule purges old versions, so accidental deletions stay recoverable:
```bash
# Inspect the bucket's settings and lifecycle rules (Backblaze CLI)
b2 get-bucket BUCKET_NAME
```
---
## 12. Monitoring & Alerting
### Log Aggregation
K3s logs are available via `kubectl logs`. For persistent log aggregation:
```bash
# View API logs
kubectl logs -n honeydue -l app.kubernetes.io/name=api --tail=100 -f
# View worker logs
kubectl logs -n honeydue -l app.kubernetes.io/name=worker --tail=100 -f
# View all warning events
kubectl get events -n honeydue --field-selector type=Warning --sort-by='.lastTimestamp'
```
**Recommended**: Deploy Loki + Grafana for persistent log search and alerting.
### Health Monitoring
```bash
# Continuous health monitoring
watch -n 10 "kubectl get pods -n honeydue -o wide && echo && kubectl top pods -n honeydue 2>/dev/null"
# Check pod restart counts (indicator of crashes)
kubectl get pods -n honeydue -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{range .status.containerStatuses[*]}{.restartCount}{end}{"\n"}{end}'
```
### Alerting Thresholds
| Metric | Warning | Critical | Check Command |
|--------|---------|----------|---------------|
| Pod restarts | > 3 in 1h | > 10 in 1h | `kubectl get pods` |
| API response time | > 500ms p95 | > 2s p95 | Cloudflare Analytics |
| Memory usage | > 80% limit | > 95% limit | `kubectl top pods` |
| Redis memory | > 200MB | > 250MB | `redis-cli info memory` |
| Disk (PVC) | > 80% | > 95% | `kubectl exec ... df -h` |
| Certificate expiry | < 30 days | < 7 days | Cloudflare dashboard |
### Audit Trail
- **K8s events**: `kubectl get events -n honeydue` (auto-pruned after 1h)
- **Go API**: zerolog structured logging with credential masking
- **Cloudflare**: Access logs, WAF logs, rate limiting logs in dashboard
- **Hetzner**: SSH auth logs in `/var/log/auth.log`
---
## 13. Incident Response
### Playbook: Compromised API Token
```bash
# 1. Rotate SECRET_KEY to invalidate ALL tokens
echo "$(openssl rand -hex 32)" > secrets/secret_key.txt
./scripts/02-setup-secrets.sh
kubectl rollout restart deployment/api deployment/worker -n honeydue
# 2. All users will need to re-authenticate
```
### Playbook: Compromised Database Credentials
```bash
# 1. Rotate password in Neon dashboard
# 2. Update local secret file
echo "NEW_PASSWORD" > secrets/postgres_password.txt
./scripts/02-setup-secrets.sh
kubectl rollout restart deployment/api deployment/worker -n honeydue
# 3. Monitor for connection errors
kubectl logs -n honeydue -l app.kubernetes.io/name=api --tail=50 -f
```
### Playbook: Compromised Push Notification Keys
```bash
# APNs: Revoke key in Apple Developer Console, generate new .p8
cp new_key.p8 secrets/apns_auth_key.p8
./scripts/02-setup-secrets.sh
kubectl rollout restart deployment/api deployment/worker -n honeydue
# FCM: Rotate server key in Firebase Console
echo "NEW_FCM_KEY" > secrets/fcm_server_key.txt
./scripts/02-setup-secrets.sh
kubectl rollout restart deployment/api deployment/worker -n honeydue
```
### Playbook: Suspicious Pod Behavior
```bash
# 1. Isolate the pod (remove from service)
kubectl label pod SUSPICIOUS_POD -n honeydue app.kubernetes.io/name-
# 2. Capture state for investigation
kubectl logs SUSPICIOUS_POD -n honeydue > /tmp/suspicious-logs.txt
kubectl describe pod SUSPICIOUS_POD -n honeydue > /tmp/suspicious-describe.txt
# 3. Delete and let deployment recreate
kubectl delete pod SUSPICIOUS_POD -n honeydue
```
### Communication Plan
1. **Internal**: Document incident timeline in a private channel
2. **Users**: If data breach — notify affected users within 72 hours
3. **Vendors**: Revoke/rotate all potentially compromised credentials
4. **Post-mortem**: Document root cause, timeline, remediation, prevention
---
## 14. Compliance Checklist
Run through this checklist before production launch and periodically thereafter.
### Infrastructure
- [ ] Hetzner firewall allows only ports 22, 443, 6443
- [ ] SSH password auth disabled on all nodes
- [ ] fail2ban active on all nodes
- [ ] OS security updates enabled (unattended-upgrades)
```bash
# Verify
hcloud firewall describe honeydue-fw
ssh user@NODE "grep PasswordAuthentication /etc/ssh/sshd_config"
ssh user@NODE "sudo fail2ban-client status sshd"
```
### K3s Cluster
- [ ] Secret encryption enabled
- [ ] Service accounts created with no API access
- [ ] Pod disruption budgets deployed
- [ ] No default service account used by workloads
```bash
# Verify
k3s secrets-encrypt status
kubectl get sa -n honeydue
kubectl get pdb -n honeydue
kubectl get pods -n honeydue -o jsonpath='{range .items[*]}{.metadata.name}{" sa="}{.spec.serviceAccountName}{"\n"}{end}'
```
### Pod Security
- [ ] All pods: `runAsNonRoot: true`
- [ ] All containers: `allowPrivilegeEscalation: false`
- [ ] All containers: `readOnlyRootFilesystem: true`
- [ ] All containers: `capabilities.drop: ["ALL"]`
- [ ] All pods: `seccompProfile.type: RuntimeDefault`
```bash
# Verify (automated check in 04-verify.sh)
./scripts/04-verify.sh
```
### Network
- [ ] Default-deny NetworkPolicy applied
- [ ] 8+ explicit allow policies deployed
- [ ] Redis only reachable from API + Worker
- [ ] Admin only reaches API service
- [ ] Cloudflare-only middleware applied to all ingress
```bash
# Verify
kubectl get networkpolicy -n honeydue
kubectl get ingress -n honeydue -o yaml | grep cloudflare-only
```
### Authentication & Authorization
- [ ] Redis requires password
- [ ] Admin panel has basic auth layer
- [ ] API uses bcrypt for passwords
- [ ] Auth tokens have expiration
- [ ] Rate limiting on auth endpoints
```bash
# Verify Redis auth
kubectl exec -n honeydue deploy/redis -- redis-cli ping
# Expected: NOAUTH error
# Verify admin auth
kubectl get secret admin-basic-auth -n honeydue
```
### Secrets
- [ ] All secrets stored as K8s Secrets (not ConfigMap)
- [ ] Secrets encrypted at rest (K3s)
- [ ] No secrets in git history
- [ ] SECRET_KEY >= 32 characters
- [ ] Secret rotation documented
```bash
# Verify no secrets in ConfigMap
kubectl get configmap honeydue-config -n honeydue -o yaml | grep -iE 'password|secret|token|key'
# Should show only non-sensitive config keys (EMAIL_HOST, APNS_KEY_ID, etc.)
```
### TLS & Headers
- [ ] Cloudflare Full (Strict) mode enabled
- [ ] Origin cert valid and not expired
- [ ] HSTS header present with includeSubDomains
- [ ] CSP header: `default-src 'self'; frame-ancestors 'none'`
- [ ] Permissions-Policy blocks camera/mic/geo
- [ ] X-Frame-Options: DENY
```bash
# Verify headers (via Cloudflare)
curl -sI https://api.myhoneydue.com/api/health/ | grep -iE 'strict-transport|content-security|permissions-policy|x-frame'
```
### Container Images
- [ ] Multi-stage Dockerfile (no build tools in runtime)
- [ ] Non-root user in all images
- [ ] Alpine base (minimal surface)
- [ ] No secrets baked into images
```bash
# Verify non-root
kubectl get pods -n honeydue -o jsonpath='{range .items[*]}{.metadata.name}{" uid="}{.spec.securityContext.runAsUser}{"\n"}{end}'
```
### External Services
- [ ] PostgreSQL: `sslmode=require`
- [ ] B2: Scoped application key (single bucket)
- [ ] APNs: .p8 key (not .p12 certificate)
- [ ] SMTP: TLS enabled (`use_tls: true`)
---
## Quick Reference Commands
```bash
# Full security verification
./scripts/04-verify.sh
# Rotate all secrets
./scripts/02-setup-secrets.sh && \
kubectl rollout restart deployment/api deployment/worker deployment/admin -n honeydue
# Check for security events
kubectl get events -n honeydue --field-selector type=Warning
# Emergency: scale down everything
kubectl scale deployment --all -n honeydue --replicas=0
# Emergency: restore
kubectl scale deployment api -n honeydue --replicas=3
kubectl scale deployment worker -n honeydue --replicas=2
kubectl scale deployment admin -n honeydue --replicas=1
kubectl scale deployment redis -n honeydue --replicas=1
```


@@ -0,0 +1,118 @@
# config.yaml — single source of truth for honeyDue K3s deployment
# Copy to config.yaml, fill in all empty values, then run scripts in order.
# This file is gitignored — never commit it with real values.
# --- Hetzner Cloud ---
cluster:
  hcloud_token: ""     # Hetzner API token (Read/Write)
  ssh_public_key: ~/.ssh/id_ed25519.pub
  ssh_private_key: ~/.ssh/id_ed25519
  k3s_version: v1.31.4+k3s1
  location: fsn1       # Hetzner datacenter
  instance_type: cx33  # 4 vCPU, 16GB RAM
  # Filled by 01-provision-cluster.sh, or manually after creating servers
  nodes:
    - name: honeydue-master1
      ip: ""
      roles: [master, redis]  # 'redis' = pin Redis PVC here
    - name: honeydue-master2
      ip: ""
      roles: [master]
    - name: honeydue-master3
      ip: ""
      roles: [master]
  # Hetzner Load Balancer IP (created in console after provisioning)
  load_balancer_ip: ""
# --- Domains ---
domains:
  api: api.myhoneydue.com
  admin: admin.myhoneydue.com
  base: myhoneydue.com
# --- Container Registry (GHCR) ---
registry:
  server: ghcr.io
  namespace: ""  # GitHub username or org
  username: ""   # GitHub username
  token: ""      # PAT with read:packages, write:packages
# --- Database (Neon PostgreSQL) ---
database:
  host: ""  # e.g. ep-xxx.us-east-2.aws.neon.tech
  port: 5432
  user: ""
  name: honeydue
  sslmode: require
  max_open_conns: 25
  max_idle_conns: 10
  max_lifetime: "600s"
# --- Email (Fastmail) ---
email:
  host: smtp.fastmail.com
  port: 587
  user: ""  # Fastmail email address
  from: "honeyDue <noreply@myhoneydue.com>"
  use_tls: true
# --- Push Notifications ---
push:
  apns_key_id: ""
  apns_team_id: ""
  apns_topic: com.tt.honeyDue
  apns_production: true
  apns_use_sandbox: false
# --- B2 Object Storage ---
storage:
  b2_key_id: ""
  b2_app_key: ""
  b2_bucket: ""
  b2_endpoint: ""  # e.g. s3.us-west-004.backblazeb2.com
  max_file_size: 10485760
  allowed_types: "image/jpeg,image/png,image/gif,image/webp,application/pdf"
# --- Worker Schedules (UTC hours) ---
worker:
  task_reminder_hour: 14
  overdue_reminder_hour: 15
  daily_digest_hour: 3
# --- Feature Flags ---
features:
  push_enabled: true
  email_enabled: true
  webhooks_enabled: true
  onboarding_emails_enabled: true
  pdf_reports_enabled: true
  worker_enabled: true
# --- Redis ---
redis:
  password: ""  # Set a strong password; leave empty for no auth (NOT recommended for production)
# --- Admin Panel ---
admin:
  basic_auth_user: ""      # HTTP basic auth username for admin panel
  basic_auth_password: ""  # HTTP basic auth password for admin panel
# --- Apple Auth / IAP (optional, leave empty if unused) ---
apple_auth:
  client_id: ""
  team_id: ""
  iap_key_id: ""
  iap_issuer_id: ""
  iap_bundle_id: ""
  iap_key_path: ""
  iap_sandbox: false
# --- Google Auth / IAP (optional, leave empty if unused) ---
google_auth:
  client_id: ""
  android_client_id: ""
  ios_client_id: ""
  iap_package_name: ""
  iap_service_account_path: ""


@@ -0,0 +1,94 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: admin
  namespace: honeydue
  labels:
    app.kubernetes.io/name: admin
    app.kubernetes.io/part-of: honeydue
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: admin
  template:
    metadata:
      labels:
        app.kubernetes.io/name: admin
        app.kubernetes.io/part-of: honeydue
    spec:
      serviceAccountName: admin
      imagePullSecrets:
        - name: ghcr-credentials
      securityContext:
        runAsNonRoot: true
        runAsUser: 1001
        runAsGroup: 1001
        fsGroup: 1001
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: admin
          image: IMAGE_PLACEHOLDER  # Replaced by 03-deploy.sh
          ports:
            - containerPort: 3000
              protocol: TCP
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop: ["ALL"]
          env:
            - name: PORT
              value: "3000"
            - name: HOSTNAME
              value: "0.0.0.0"
            - name: NEXT_PUBLIC_API_URL
              valueFrom:
                configMapKeyRef:
                  name: honeydue-config
                  key: NEXT_PUBLIC_API_URL
          volumeMounts:
            - name: nextjs-cache
              mountPath: /app/.next/cache
            - name: tmp
              mountPath: /tmp
          resources:
            requests:
              cpu: 50m
              memory: 64Mi
            limits:
              cpu: 500m
              memory: 256Mi
          startupProbe:
            httpGet:
              path: /admin/
              port: 3000
            failureThreshold: 12
            periodSeconds: 5
          readinessProbe:
            httpGet:
              path: /admin/
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 5
          livenessProbe:
            httpGet:
              path: /admin/
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 30
            timeoutSeconds: 10
      volumes:
        - name: nextjs-cache
          emptyDir:
            sizeLimit: 256Mi
        - name: tmp
          emptyDir:
            sizeLimit: 64Mi


@@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
  name: admin
  namespace: honeydue
  labels:
    app.kubernetes.io/name: admin
    app.kubernetes.io/part-of: honeydue
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: admin
  ports:
    - port: 3000
      targetPort: 3000
      protocol: TCP


@@ -0,0 +1,54 @@
# API Ingress — Cloudflare-only + security headers + rate limiting
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: honeydue-api
  namespace: honeydue
  labels:
    app.kubernetes.io/part-of: honeydue
  annotations:
    traefik.ingress.kubernetes.io/router.middlewares: honeydue-cloudflare-only@kubernetescrd,honeydue-security-headers@kubernetescrd,honeydue-rate-limit@kubernetescrd
spec:
  tls:
    - hosts:
        - api.myhoneydue.com
      secretName: cloudflare-origin-cert
  rules:
    - host: api.myhoneydue.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 8000
---
# Admin Ingress — Cloudflare-only + security headers + rate limiting + basic auth
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: honeydue-admin
  namespace: honeydue
  labels:
    app.kubernetes.io/part-of: honeydue
  annotations:
    traefik.ingress.kubernetes.io/router.middlewares: honeydue-cloudflare-only@kubernetescrd,honeydue-security-headers@kubernetescrd,honeydue-rate-limit@kubernetescrd,honeydue-admin-auth@kubernetescrd
spec:
  tls:
    - hosts:
        - admin.myhoneydue.com
      secretName: cloudflare-origin-cert
  rules:
    - host: admin.myhoneydue.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: admin
                port:
                  number: 3000


@@ -0,0 +1,82 @@
# Traefik CRD middleware for rate limiting
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: rate-limit
  namespace: honeydue
spec:
  rateLimit:
    average: 100
    burst: 200
    period: 1m
---
# Security headers
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: security-headers
  namespace: honeydue
spec:
  headers:
    frameDeny: true
    contentTypeNosniff: true
    browserXssFilter: true
    referrerPolicy: "strict-origin-when-cross-origin"
    customResponseHeaders:
      X-Content-Type-Options: "nosniff"
      X-Frame-Options: "DENY"
      Strict-Transport-Security: "max-age=31536000; includeSubDomains"
      Content-Security-Policy: "default-src 'self'; frame-ancestors 'none'"
      Permissions-Policy: "camera=(), microphone=(), geolocation=()"
      X-Permitted-Cross-Domain-Policies: "none"
---
# Cloudflare IP allowlist (restrict origin to Cloudflare only)
# https://www.cloudflare.com/ips-v4 and /ips-v6
# Update periodically: curl -s https://www.cloudflare.com/ips-v4 && curl -s https://www.cloudflare.com/ips-v6
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: cloudflare-only
  namespace: honeydue
spec:
  ipAllowList:
    sourceRange:
      # Cloudflare IPv4 ranges
      - 173.245.48.0/20
      - 103.21.244.0/22
      - 103.22.200.0/22
      - 103.31.4.0/22
      - 141.101.64.0/18
      - 108.162.192.0/18
      - 190.93.240.0/20
      - 188.114.96.0/20
      - 197.234.240.0/22
      - 198.41.128.0/17
      - 162.158.0.0/15
      - 104.16.0.0/13
      - 104.24.0.0/14
      - 172.64.0.0/13
      - 131.0.72.0/22
      # Cloudflare IPv6 ranges
      - 2400:cb00::/32
      - 2606:4700::/32
      - 2803:f800::/32
      - 2405:b500::/32
      - 2405:8100::/32
      - 2a06:98c0::/29
      - 2c0f:f248::/32
---
# Admin basic auth — additional auth layer for admin panel
# Secret created by 02-setup-secrets.sh from config.yaml credentials
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: admin-auth
  namespace: honeydue
spec:
  basicAuth:
    secret: admin-basic-auth
    realm: "honeyDue Admin"
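
The `ipAllowList` middleware simply checks whether the connecting IP falls inside any of the listed CIDR ranges. A minimal sketch of that matching in Python (using a subset of the ranges above; the function name is ours, not Traefik's):

```python
import ipaddress

# Subset of the Cloudflare ranges from the cloudflare-only middleware above.
CLOUDFLARE_RANGES = [
    "173.245.48.0/20",
    "103.21.244.0/22",
    "104.16.0.0/13",
    "172.64.0.0/13",
]

def is_cloudflare(ip: str) -> bool:
    """Return True if ip falls inside any allowed CIDR range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(cidr) for cidr in CLOUDFLARE_RANGES)

print(is_cloudflare("104.16.1.1"))  # inside 104.16.0.0/13 → True
print(is_cloudflare("8.8.8.8"))     # not a Cloudflare range → False
```

Any client that bypasses Cloudflare and hits the origin directly fails this check and never reaches the backend.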


@@ -0,0 +1,6 @@
apiVersion: v1
kind: Namespace
metadata:
  name: honeydue
  labels:
    app.kubernetes.io/part-of: honeydue


@@ -0,0 +1,202 @@
# Network Policies — default-deny with explicit allows
# Apply AFTER namespace and deployments are created.
# Verify: kubectl get networkpolicy -n honeydue
# --- Default deny all ingress and egress ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: honeydue
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
---
# --- Allow DNS for all pods (required for service discovery) ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: honeydue
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to: []
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
---
# --- API: allow ingress from Traefik (kube-system namespace) ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-to-api
  namespace: honeydue
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: TCP
          port: 8000
---
# --- Admin: allow ingress from Traefik (kube-system namespace) ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-to-admin
  namespace: honeydue
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: admin
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: TCP
          port: 3000
---
# --- Redis: allow ingress ONLY from api + worker pods ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-to-redis
  namespace: honeydue
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: redis
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: api
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: worker
      ports:
        - protocol: TCP
          port: 6379
---
# --- API: allow egress to Redis, external services (Neon DB, APNs, FCM, B2, SMTP) ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-from-api
  namespace: honeydue
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: api
  policyTypes:
    - Egress
  egress:
    # Redis (in-cluster)
    - to:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: redis
      ports:
        - protocol: TCP
          port: 6379
    # External services: Neon DB (5432), SMTP (587), HTTPS (443 — APNs, FCM, B2, PostHog)
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 10.0.0.0/8
              - 172.16.0.0/12
              - 192.168.0.0/16
      ports:
        - protocol: TCP
          port: 5432
        - protocol: TCP
          port: 587
        - protocol: TCP
          port: 443
---
# --- Worker: allow egress to Redis, external services ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-from-worker
  namespace: honeydue
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: worker
  policyTypes:
    - Egress
  egress:
    # Redis (in-cluster)
    - to:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: redis
      ports:
        - protocol: TCP
          port: 6379
    # External services: Neon DB (5432), SMTP (587), HTTPS (443 — APNs, FCM, B2)
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 10.0.0.0/8
              - 172.16.0.0/12
              - 192.168.0.0/16
      ports:
        - protocol: TCP
          port: 5432
        - protocol: TCP
          port: 587
        - protocol: TCP
          port: 443
---
# --- Admin: allow egress to API (internal) for SSR ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-from-admin
  namespace: honeydue
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: admin
  policyTypes:
    - Egress
  egress:
    # API service (in-cluster, for server-side API calls)
    - to:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: api
      ports:
        - protocol: TCP
          port: 8000
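
The external-egress rule combines an `ipBlock` with `except` CIDRs and an explicit port set: any public IPv4 destination is reachable, but only on 5432, 587, and 443, and never on RFC 1918 ranges. A sketch of the effective decision (our own helper, not Kubernetes code):

```python
import ipaddress

# The except-CIDRs and ports from the allow-egress-from-api/worker policies.
PRIVATE = [ipaddress.ip_network(c) for c in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
ALLOWED_PORTS = {5432, 587, 443}

def egress_allowed(ip: str, port: int) -> bool:
    """Mimic the api/worker external-egress rule: public IPv4 only, selected ports only."""
    addr = ipaddress.ip_address(ip)
    if any(addr in net for net in PRIVATE):
        return False  # carved out by the except: list
    return port in ALLOWED_PORTS
```

Note that in-cluster Redis traffic is not covered by this rule; it is allowed separately via the pod-selector egress entry.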


@@ -0,0 +1,32 @@
# Pod Disruption Budgets — prevent node maintenance from killing all replicas
# API: at least 2 of 3 replicas must stay up during voluntary disruptions
# Worker: at least 1 of 2 replicas must stay up
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: api-pdb
  namespace: honeydue
  labels:
    app.kubernetes.io/name: api
    app.kubernetes.io/part-of: honeydue
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app.kubernetes.io/name: api
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: worker-pdb
  namespace: honeydue
  labels:
    app.kubernetes.io/name: worker
    app.kubernetes.io/part-of: honeydue
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: worker
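
With `minAvailable`, the number of pods a voluntary disruption (drain, upgrade) may evict at once is simply healthy replicas minus the floor. A one-liner to make the arithmetic explicit (helper name is ours):

```python
def allowed_disruptions(replicas: int, min_available: int) -> int:
    """How many pods a voluntary disruption may evict at once under a minAvailable PDB."""
    return max(replicas - min_available, 0)

print(allowed_disruptions(3, 2))  # API: 3 replicas, minAvailable 2 → 1
print(allowed_disruptions(2, 1))  # Worker: 2 replicas, minAvailable 1 → 1
```

So a node drain can take down at most one API pod and one worker pod at a time.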


@@ -0,0 +1,46 @@
# RBAC — Dedicated service accounts with no K8s API access
# Each pod gets its own SA with automountServiceAccountToken: false,
# so a compromised pod cannot query the Kubernetes API.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: api
  namespace: honeydue
  labels:
    app.kubernetes.io/name: api
    app.kubernetes.io/part-of: honeydue
automountServiceAccountToken: false
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: worker
  namespace: honeydue
  labels:
    app.kubernetes.io/name: worker
    app.kubernetes.io/part-of: honeydue
automountServiceAccountToken: false
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: honeydue
  labels:
    app.kubernetes.io/name: admin
    app.kubernetes.io/part-of: honeydue
automountServiceAccountToken: false
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: redis
  namespace: honeydue
  labels:
    app.kubernetes.io/name: redis
    app.kubernetes.io/part-of: honeydue
automountServiceAccountToken: false


@@ -0,0 +1,106 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: honeydue
  labels:
    app.kubernetes.io/name: redis
    app.kubernetes.io/part-of: honeydue
spec:
  replicas: 1
  strategy:
    type: Recreate # ReadWriteOnce PVC — can't attach to two pods
  selector:
    matchLabels:
      app.kubernetes.io/name: redis
  template:
    metadata:
      labels:
        app.kubernetes.io/name: redis
        app.kubernetes.io/part-of: honeydue
    spec:
      serviceAccountName: redis
      nodeSelector:
        honeydue/redis: "true"
      securityContext:
        runAsNonRoot: true
        runAsUser: 999
        runAsGroup: 999
        fsGroup: 999
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: redis
          image: redis:7-alpine
          command:
            - sh
            - -c
            - |
              ARGS="--appendonly yes --appendfsync everysec --maxmemory 256mb --maxmemory-policy noeviction"
              if [ -n "$REDIS_PASSWORD" ]; then
                ARGS="$ARGS --requirepass $REDIS_PASSWORD"
              fi
              exec redis-server $ARGS
          ports:
            - containerPort: 6379
              protocol: TCP
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop: ["ALL"]
          env:
            - name: REDIS_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: honeydue-secrets
                  key: REDIS_PASSWORD
                  optional: true
          volumeMounts:
            - name: redis-data
              mountPath: /data
            - name: tmp
              mountPath: /tmp
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 512Mi
          readinessProbe:
            exec:
              command:
                - sh
                - -c
                - |
                  if [ -n "$REDIS_PASSWORD" ]; then
                    redis-cli -a "$REDIS_PASSWORD" ping 2>/dev/null | grep -q PONG
                  else
                    redis-cli ping | grep -q PONG
                  fi
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 5
          livenessProbe:
            exec:
              command:
                - sh
                - -c
                - |
                  if [ -n "$REDIS_PASSWORD" ]; then
                    redis-cli -a "$REDIS_PASSWORD" ping 2>/dev/null | grep -q PONG
                  else
                    redis-cli ping | grep -q PONG
                  fi
            initialDelaySeconds: 15
            periodSeconds: 20
            timeoutSeconds: 5
      volumes:
        - name: redis-data
          persistentVolumeClaim:
            claimName: redis-data
        - name: tmp
          emptyDir:
            medium: Memory
            sizeLimit: 64Mi
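
The container's entrypoint builds the redis-server flags at runtime so `--requirepass` is only appended when the optional `REDIS_PASSWORD` secret key exists. The same logic in isolation (with `echo` standing in for `exec redis-server`):

```shell
# Mirror of the redis container entrypoint above.
# build_redis_args is our name for the snippet; the real manifest inlines it.
build_redis_args() {
  ARGS="--appendonly yes --appendfsync everysec --maxmemory 256mb --maxmemory-policy noeviction"
  if [ -n "${REDIS_PASSWORD:-}" ]; then
    ARGS="$ARGS --requirepass $REDIS_PASSWORD"
  fi
  echo "$ARGS"
}

REDIS_PASSWORD= build_redis_args        # no auth flag
REDIS_PASSWORD=s3cret build_redis_args  # appends --requirepass s3cret
```

Because the secret key is marked `optional: true`, the pod still starts (unauthenticated) when no password is configured, matching the probes' two-branch `redis-cli ping`.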


@@ -0,0 +1,15 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data
  namespace: honeydue
  labels:
    app.kubernetes.io/name: redis
    app.kubernetes.io/part-of: honeydue
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 5Gi


@@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: honeydue
  labels:
    app.kubernetes.io/name: redis
    app.kubernetes.io/part-of: honeydue
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: redis
  ports:
    - port: 6379
      targetPort: 6379
      protocol: TCP


@@ -0,0 +1,47 @@
# EXAMPLE ONLY — never commit real values.
# Secrets are created by scripts/02-setup-secrets.sh.
# This file shows the expected structure for reference.
---
apiVersion: v1
kind: Secret
metadata:
  name: honeydue-secrets
  namespace: honeydue
type: Opaque
stringData:
  POSTGRES_PASSWORD: "CHANGEME"
  SECRET_KEY: "CHANGEME_MIN_32_CHARS"
  EMAIL_HOST_PASSWORD: "CHANGEME"
  FCM_SERVER_KEY: "CHANGEME"
---
apiVersion: v1
kind: Secret
metadata:
  name: honeydue-apns-key
  namespace: honeydue
type: Opaque
data:
  apns_auth_key.p8: "" # base64-encoded .p8 file contents
---
apiVersion: v1
kind: Secret
metadata:
  name: ghcr-credentials
  namespace: honeydue
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: "" # base64-encoded Docker config
---
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-origin-cert
  namespace: honeydue
type: kubernetes.io/tls
data:
  tls.crt: "" # base64-encoded origin certificate
  tls.key: "" # base64-encoded origin private key
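
Note the split in the example above: `stringData:` takes plain text, while `data:` fields (the .p8 key, the TLS cert/key, the Docker config) must be base64-encoded. A quick sketch of producing such a value (`encode_secret` is our helper name; the payload is a placeholder, not a real key):

```python
import base64

def encode_secret(raw: bytes) -> str:
    """Base64-encode bytes for a Kubernetes Secret's `data:` field."""
    return base64.b64encode(raw).decode("ascii")

# Placeholder payload standing in for apns_auth_key.p8 contents.
p8 = b"-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n"
encoded = encode_secret(p8)
assert base64.b64decode(encoded) == p8  # round-trips losslessly
```

This is the same transformation `kubectl create secret --from-file` performs for you, which is why 02-setup-secrets.sh never touches base64 directly.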


@@ -0,0 +1,124 @@
#!/usr/bin/env bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=_config.sh
source "${SCRIPT_DIR}/_config.sh"

log() { printf '[provision] %s\n' "$*"; }
die() { printf '[provision][error] %s\n' "$*" >&2; exit 1; }

# --- Prerequisites ---
command -v hetzner-k3s >/dev/null 2>&1 || die "Missing: hetzner-k3s CLI. Install: https://github.com/vitobotta/hetzner-k3s"
command -v kubectl >/dev/null 2>&1 || die "Missing: kubectl"

HCLOUD_TOKEN="$(cfg_require cluster.hcloud_token "Hetzner API token")"
export HCLOUD_TOKEN

# Validate SSH keys
SSH_PUB="$(cfg cluster.ssh_public_key | sed "s|~|${HOME}|g")"
SSH_PRIV="$(cfg cluster.ssh_private_key | sed "s|~|${HOME}|g")"
[[ -f "${SSH_PUB}" ]] || die "SSH public key not found: ${SSH_PUB}"
[[ -f "${SSH_PRIV}" ]] || die "SSH private key not found: ${SSH_PRIV}"

# --- Generate hetzner-k3s cluster config from config.yaml ---
CLUSTER_CONFIG="${DEPLOY_DIR}/cluster-config.yaml"
log "Generating cluster-config.yaml from config.yaml..."
generate_cluster_config > "${CLUSTER_CONFIG}"

# --- Provision ---
INSTANCE_TYPE="$(cfg cluster.instance_type)"
LOCATION="$(cfg cluster.location)"
NODE_COUNT="$(node_count)"
log "Provisioning K3s cluster on Hetzner Cloud..."
log "  Nodes: ${NODE_COUNT}x ${INSTANCE_TYPE} in ${LOCATION}"
log "  This takes about 5-10 minutes."
echo ""
hetzner-k3s create --config "${CLUSTER_CONFIG}"

KUBECONFIG_PATH="${DEPLOY_DIR}/kubeconfig"
if [[ ! -f "${KUBECONFIG_PATH}" ]]; then
  die "Provisioning completed but kubeconfig not found. Check hetzner-k3s output."
fi

# --- Write node IPs back to config.yaml ---
log "Querying node IPs..."
export KUBECONFIG="${KUBECONFIG_PATH}"
python3 -c "
import yaml, subprocess, json

# Get node info from kubectl
result = subprocess.run(
    ['kubectl', 'get', 'nodes', '-o', 'json'],
    capture_output=True, text=True
)
nodes_json = json.loads(result.stdout)

# Build name -> IP map, preferring ExternalIP over InternalIP
ip_map = {}
for node in nodes_json.get('items', []):
    name = node['metadata']['name']
    for addr in node.get('status', {}).get('addresses', []):
        if addr['type'] == 'ExternalIP':
            ip_map[name] = addr['address']
            break
    else:
        for addr in node.get('status', {}).get('addresses', []):
            if addr['type'] == 'InternalIP':
                ip_map[name] = addr['address']
                break

# Update config.yaml with IPs
with open('${CONFIG_FILE}') as f:
    config = yaml.safe_load(f)
updated = 0
for i, node in enumerate(config.get('nodes', [])):
    for real_name, ip in ip_map.items():
        if node['name'] in real_name or real_name in node['name']:
            config['nodes'][i]['ip'] = ip
            config['nodes'][i]['name'] = real_name
            updated += 1
            break
if updated == 0 and ip_map:
    # Names didn't match — assign by index
    for i, (name, ip) in enumerate(sorted(ip_map.items())):
        if i < len(config['nodes']):
            config['nodes'][i]['name'] = name
            config['nodes'][i]['ip'] = ip
            updated += 1
with open('${CONFIG_FILE}', 'w') as f:
    yaml.dump(config, f, default_flow_style=False, sort_keys=False)
print(f'Updated {updated} node IPs in config.yaml')
for name, ip in sorted(ip_map.items()):
    print(f'  {name}: {ip}')
"

# --- Label Redis node ---
REDIS_NODE="$(nodes_with_role redis | head -1)"
if [[ -n "${REDIS_NODE}" ]]; then
  # Use the first cluster node (matching config names to real node names is best-effort)
  ACTUAL_NODE="$(kubectl get nodes -o jsonpath='{.items[*].metadata.name}' | tr ' ' '\n' | head -1)"
  log "Labeling node ${ACTUAL_NODE} for Redis..."
  kubectl label node "${ACTUAL_NODE}" honeydue/redis=true --overwrite
fi

log ""
log "Cluster provisioned successfully."
log ""
log "Next steps:"
log "  export KUBECONFIG=${KUBECONFIG_PATH}"
log "  kubectl get nodes"
log "  ./scripts/02-setup-secrets.sh"
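
The write-back step has two phases: a fuzzy substring match between configured node names and the real Kubernetes node names, then an index-order fallback when nothing matches. Extracted into a standalone, testable function (the function name is ours; the logic mirrors the inline Python in the script):

```python
def match_node_ips(config_nodes: list[dict], ip_map: dict[str, str]) -> int:
    """Assign real node names/IPs onto configured nodes.

    Substring-match first; if no configured name matches any real name,
    fall back to assigning sorted real nodes by index.
    Returns the number of config entries updated.
    """
    updated = 0
    for node in config_nodes:
        for real_name, ip in ip_map.items():
            if node["name"] in real_name or real_name in node["name"]:
                node["ip"] = ip
                node["name"] = real_name
                updated += 1
                break
    if updated == 0 and ip_map:
        # Names didn't match — assign by index
        for i, (name, ip) in enumerate(sorted(ip_map.items())):
            if i < len(config_nodes):
                config_nodes[i]["name"] = name
                config_nodes[i]["ip"] = ip
                updated += 1
    return updated
```

hetzner-k3s typically prefixes node names (e.g. a configured `node1` becomes something like `cluster-node1`), which is why plain equality would miss and the substring test is used.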


@@ -0,0 +1,131 @@
#!/usr/bin/env bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=_config.sh
source "${SCRIPT_DIR}/_config.sh"

SECRETS_DIR="${DEPLOY_DIR}/secrets"
NAMESPACE="honeydue"

log() { printf '[secrets] %s\n' "$*"; }
warn() { printf '[secrets][warn] %s\n' "$*" >&2; }
die() { printf '[secrets][error] %s\n' "$*" >&2; exit 1; }

# --- Prerequisites ---
command -v kubectl >/dev/null 2>&1 || die "Missing: kubectl"
kubectl get namespace "${NAMESPACE}" >/dev/null 2>&1 || {
  log "Creating namespace ${NAMESPACE}..."
  kubectl apply -f "${DEPLOY_DIR}/manifests/namespace.yaml"
}

# --- Validate secret files ---
require_file() {
  local path="$1" label="$2"
  [[ -f "${path}" ]] || die "Missing: ${path} (${label})"
  [[ -s "${path}" ]] || die "Empty: ${path} (${label})"
}
require_file "${SECRETS_DIR}/postgres_password.txt" "Postgres password"
require_file "${SECRETS_DIR}/secret_key.txt" "SECRET_KEY"
require_file "${SECRETS_DIR}/email_host_password.txt" "SMTP password"
require_file "${SECRETS_DIR}/fcm_server_key.txt" "FCM server key"
require_file "${SECRETS_DIR}/apns_auth_key.p8" "APNS private key"
require_file "${SECRETS_DIR}/cloudflare-origin.crt" "Cloudflare origin cert"
require_file "${SECRETS_DIR}/cloudflare-origin.key" "Cloudflare origin key"

# Validate APNS key format
if ! grep -q "BEGIN PRIVATE KEY" "${SECRETS_DIR}/apns_auth_key.p8"; then
  die "APNS key file does not look like a private key: ${SECRETS_DIR}/apns_auth_key.p8"
fi

# Validate secret_key length (minimum 32 chars)
SECRET_KEY_LEN="$(tr -d '\r\n' < "${SECRETS_DIR}/secret_key.txt" | wc -c | tr -d ' ')"
if (( SECRET_KEY_LEN < 32 )); then
  die "secret_key.txt must be at least 32 characters (got ${SECRET_KEY_LEN})."
fi

# --- Read optional config values ---
REDIS_PASSWORD="$(cfg redis.password 2>/dev/null || true)"
ADMIN_AUTH_USER="$(cfg admin.basic_auth_user 2>/dev/null || true)"
ADMIN_AUTH_PASSWORD="$(cfg admin.basic_auth_password 2>/dev/null || true)"

# --- Create app secrets ---
log "Creating honeydue-secrets..."
SECRET_ARGS=(
  --namespace="${NAMESPACE}"
  --from-literal="POSTGRES_PASSWORD=$(tr -d '\r\n' < "${SECRETS_DIR}/postgres_password.txt")"
  --from-literal="SECRET_KEY=$(tr -d '\r\n' < "${SECRETS_DIR}/secret_key.txt")"
  --from-literal="EMAIL_HOST_PASSWORD=$(tr -d '\r\n' < "${SECRETS_DIR}/email_host_password.txt")"
  --from-literal="FCM_SERVER_KEY=$(tr -d '\r\n' < "${SECRETS_DIR}/fcm_server_key.txt")"
)
if [[ -n "${REDIS_PASSWORD}" ]]; then
  log "  Including REDIS_PASSWORD in secrets"
  SECRET_ARGS+=(--from-literal="REDIS_PASSWORD=${REDIS_PASSWORD}")
fi
kubectl create secret generic honeydue-secrets \
  "${SECRET_ARGS[@]}" \
  --dry-run=client -o yaml | kubectl apply -f -

# --- Create APNS key secret ---
log "Creating honeydue-apns-key..."
kubectl create secret generic honeydue-apns-key \
  --namespace="${NAMESPACE}" \
  --from-file="apns_auth_key.p8=${SECRETS_DIR}/apns_auth_key.p8" \
  --dry-run=client -o yaml | kubectl apply -f -

# --- Create GHCR registry credentials ---
REGISTRY_SERVER="$(cfg registry.server)"
REGISTRY_USER="$(cfg registry.username)"
REGISTRY_TOKEN="$(cfg registry.token)"
if [[ -n "${REGISTRY_SERVER}" && -n "${REGISTRY_USER}" && -n "${REGISTRY_TOKEN}" ]]; then
  log "Creating ghcr-credentials..."
  kubectl create secret docker-registry ghcr-credentials \
    --namespace="${NAMESPACE}" \
    --docker-server="${REGISTRY_SERVER}" \
    --docker-username="${REGISTRY_USER}" \
    --docker-password="${REGISTRY_TOKEN}" \
    --dry-run=client -o yaml | kubectl apply -f -
else
  warn "Registry credentials incomplete in config.yaml — skipping ghcr-credentials."
fi

# --- Create Cloudflare origin cert ---
log "Creating cloudflare-origin-cert..."
kubectl create secret tls cloudflare-origin-cert \
  --namespace="${NAMESPACE}" \
  --cert="${SECRETS_DIR}/cloudflare-origin.crt" \
  --key="${SECRETS_DIR}/cloudflare-origin.key" \
  --dry-run=client -o yaml | kubectl apply -f -

# --- Create admin basic auth secret ---
if [[ -n "${ADMIN_AUTH_USER}" && -n "${ADMIN_AUTH_PASSWORD}" ]]; then
  command -v htpasswd >/dev/null 2>&1 || die "Missing: htpasswd (install apache2-utils)"
  log "Creating admin-basic-auth secret..."
  HTPASSWD="$(htpasswd -nb "${ADMIN_AUTH_USER}" "${ADMIN_AUTH_PASSWORD}")"
  kubectl create secret generic admin-basic-auth \
    --namespace="${NAMESPACE}" \
    --from-literal=users="${HTPASSWD}" \
    --dry-run=client -o yaml | kubectl apply -f -
else
  warn "admin.basic_auth_user/password not set in config.yaml — skipping admin-basic-auth."
  warn "Admin panel will NOT have basic auth protection."
fi

# --- Done ---
log ""
log "All secrets created in namespace '${NAMESPACE}'."
log "Verify: kubectl get secrets -n ${NAMESPACE}"
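
The secret-key length check strips CR/LF before counting, so a trailing newline (which most editors add) doesn't inflate the byte count. The same pipeline as a small function (our name, for illustration):

```shell
# Count payload bytes after stripping CR/LF, as the script does for secret_key.txt.
# (tr -d ' ' normalizes wc output, which is space-padded on some platforms.)
secret_len() {
  tr -d '\r\n' | wc -c | tr -d ' '
}

printf 'abc\r\n' | secret_len   # counts 3, not 5
```

A file containing a 31-character key plus a newline would otherwise pass a naive `wc -c` check at 32 bytes; this version correctly rejects it.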

deploy-k3s/scripts/03-deploy.sh Executable file

@@ -0,0 +1,143 @@
#!/usr/bin/env bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=_config.sh
source "${SCRIPT_DIR}/_config.sh"

REPO_DIR="$(cd "${DEPLOY_DIR}/.." && pwd)"
NAMESPACE="honeydue"
MANIFESTS="${DEPLOY_DIR}/manifests"

log() { printf '[deploy] %s\n' "$*"; }
warn() { printf '[deploy][warn] %s\n' "$*" >&2; }
die() { printf '[deploy][error] %s\n' "$*" >&2; exit 1; }

# --- Parse arguments ---
SKIP_BUILD=false
DEPLOY_TAG=""
while (( $# > 0 )); do
  case "$1" in
    --skip-build) SKIP_BUILD=true; shift ;;
    --tag)
      [[ -n "${2:-}" ]] || die "--tag requires a value"
      DEPLOY_TAG="$2"; shift 2 ;;
    -h|--help)
      cat <<'EOF'
Usage: ./scripts/03-deploy.sh [OPTIONS]

Options:
  --skip-build   Skip Docker build/push, use existing images
  --tag <tag>    Image tag (default: git short SHA)
  -h, --help     Show this help
EOF
      exit 0 ;;
    *) die "Unknown argument: $1" ;;
  esac
done

# --- Prerequisites ---
command -v kubectl >/dev/null 2>&1 || die "Missing: kubectl"
command -v docker >/dev/null 2>&1 || die "Missing: docker"

if [[ -z "${DEPLOY_TAG}" ]]; then
  DEPLOY_TAG="$(git -C "${REPO_DIR}" rev-parse --short HEAD 2>/dev/null || echo "latest")"
fi

# --- Read registry config ---
REGISTRY_SERVER="$(cfg_require registry.server "Container registry server")"
REGISTRY_NS="$(cfg_require registry.namespace "Registry namespace")"
REGISTRY_USER="$(cfg_require registry.username "Registry username")"
REGISTRY_TOKEN="$(cfg_require registry.token "Registry token")"
REGISTRY_PREFIX="${REGISTRY_SERVER%/}/${REGISTRY_NS#/}"
API_IMAGE="${REGISTRY_PREFIX}/honeydue-api:${DEPLOY_TAG}"
WORKER_IMAGE="${REGISTRY_PREFIX}/honeydue-worker:${DEPLOY_TAG}"
ADMIN_IMAGE="${REGISTRY_PREFIX}/honeydue-admin:${DEPLOY_TAG}"

# --- Build and push ---
if [[ "${SKIP_BUILD}" == "false" ]]; then
  log "Logging in to ${REGISTRY_SERVER}..."
  printf '%s' "${REGISTRY_TOKEN}" | docker login "${REGISTRY_SERVER}" -u "${REGISTRY_USER}" --password-stdin >/dev/null
  log "Building API image: ${API_IMAGE}"
  docker build --target api -t "${API_IMAGE}" "${REPO_DIR}"
  log "Building Worker image: ${WORKER_IMAGE}"
  docker build --target worker -t "${WORKER_IMAGE}" "${REPO_DIR}"
  log "Building Admin image: ${ADMIN_IMAGE}"
  docker build --target admin -t "${ADMIN_IMAGE}" "${REPO_DIR}"
  log "Pushing images..."
  docker push "${API_IMAGE}"
  docker push "${WORKER_IMAGE}"
  docker push "${ADMIN_IMAGE}"
  # Also tag and push :latest
  docker tag "${API_IMAGE}" "${REGISTRY_PREFIX}/honeydue-api:latest"
  docker tag "${WORKER_IMAGE}" "${REGISTRY_PREFIX}/honeydue-worker:latest"
  docker tag "${ADMIN_IMAGE}" "${REGISTRY_PREFIX}/honeydue-admin:latest"
  docker push "${REGISTRY_PREFIX}/honeydue-api:latest"
  docker push "${REGISTRY_PREFIX}/honeydue-worker:latest"
  docker push "${REGISTRY_PREFIX}/honeydue-admin:latest"
else
  warn "Skipping build. Using images for tag: ${DEPLOY_TAG}"
fi

# --- Generate and apply ConfigMap from config.yaml ---
log "Generating env from config.yaml..."
ENV_FILE="$(mktemp)"
trap 'rm -f "${ENV_FILE}"' EXIT
generate_env > "${ENV_FILE}"
log "Creating ConfigMap..."
kubectl create configmap honeydue-config \
  --namespace="${NAMESPACE}" \
  --from-env-file="${ENV_FILE}" \
  --dry-run=client -o yaml | kubectl apply -f -

# --- Apply manifests ---
log "Applying manifests..."
kubectl apply -f "${MANIFESTS}/namespace.yaml"
kubectl apply -f "${MANIFESTS}/redis/"
kubectl apply -f "${MANIFESTS}/ingress/"
# Apply deployments with image substitution
sed "s|image: IMAGE_PLACEHOLDER|image: ${API_IMAGE}|" "${MANIFESTS}/api/deployment.yaml" | kubectl apply -f -
kubectl apply -f "${MANIFESTS}/api/service.yaml"
kubectl apply -f "${MANIFESTS}/api/hpa.yaml"
sed "s|image: IMAGE_PLACEHOLDER|image: ${WORKER_IMAGE}|" "${MANIFESTS}/worker/deployment.yaml" | kubectl apply -f -
sed "s|image: IMAGE_PLACEHOLDER|image: ${ADMIN_IMAGE}|" "${MANIFESTS}/admin/deployment.yaml" | kubectl apply -f -
kubectl apply -f "${MANIFESTS}/admin/service.yaml"

# --- Wait for rollouts ---
log "Waiting for rollouts..."
kubectl rollout status deployment/redis -n "${NAMESPACE}" --timeout=120s
kubectl rollout status deployment/api -n "${NAMESPACE}" --timeout=300s
kubectl rollout status deployment/worker -n "${NAMESPACE}" --timeout=300s
kubectl rollout status deployment/admin -n "${NAMESPACE}" --timeout=300s

# --- Done ---
log ""
log "Deploy completed successfully."
log "Tag: ${DEPLOY_TAG}"
log "Images:"
log "  API:    ${API_IMAGE}"
log "  Worker: ${WORKER_IMAGE}"
log "  Admin:  ${ADMIN_IMAGE}"
log ""
log "Run ./scripts/04-verify.sh to check cluster health."
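
The deployment manifests ship with a literal `image: IMAGE_PLACEHOLDER` line, and the script rewrites it at apply time with `sed` (using `|` as the delimiter since image refs contain `/` and `:`). In isolation, with a hypothetical image ref:

```shell
# Same substitution 03-deploy.sh pipes into kubectl apply.
# The image ref below is a made-up example, not a real registry path.
API_IMAGE="ghcr.io/example/honeydue-api:abc1234"
printf '%s\n' '        image: IMAGE_PLACEHOLDER' \
  | sed "s|image: IMAGE_PLACEHOLDER|image: ${API_IMAGE}|"
```

Keeping the placeholder in the committed manifest means the YAML stays diff-friendly while the tag (git short SHA by default) is injected per deploy.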

deploy-k3s/scripts/04-verify.sh Executable file

@@ -0,0 +1,180 @@
#!/usr/bin/env bash
set -euo pipefail

NAMESPACE="honeydue"

log() { printf '[verify] %s\n' "$*"; }
sep() { printf '\n%s\n' "--- $1 ---"; }
ok() { printf '[verify] ✓ %s\n' "$*"; }
fail() { printf '[verify] ✗ %s\n' "$*"; }

command -v kubectl >/dev/null 2>&1 || { echo "Missing: kubectl" >&2; exit 1; }

sep "Nodes"
kubectl get nodes -o wide
sep "Pods"
kubectl get pods -n "${NAMESPACE}" -o wide
sep "Services"
kubectl get svc -n "${NAMESPACE}"
sep "Ingress"
kubectl get ingress -n "${NAMESPACE}"
sep "HPA"
kubectl get hpa -n "${NAMESPACE}"
sep "PVCs"
kubectl get pvc -n "${NAMESPACE}"
sep "Secrets (names only)"
kubectl get secrets -n "${NAMESPACE}"

sep "ConfigMap keys"
kubectl get configmap honeydue-config -n "${NAMESPACE}" -o jsonpath='{.data}' 2>/dev/null | python3 -c "
import json, sys
try:
    d = json.load(sys.stdin)
    for k in sorted(d.keys()):
        v = d[k]
        if any(s in k.upper() for s in ['PASSWORD', 'SECRET', 'TOKEN', 'KEY']):
            v = '***REDACTED***'
        print(f'  {k}={v}')
except:
    print('  (could not parse)')
" 2>/dev/null || log "ConfigMap not found or not parseable"

sep "Warning Events (last 15 min)"
kubectl get events -n "${NAMESPACE}" --field-selector type=Warning --sort-by='.lastTimestamp' 2>/dev/null | tail -20 || log "No warning events"

sep "Pod Restart Counts"
kubectl get pods -n "${NAMESPACE}" -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{range .status.containerStatuses[*]}{.restartCount}{end}{"\n"}{end}' 2>/dev/null || true

sep "In-Cluster Health Check"
API_POD="$(kubectl get pods -n "${NAMESPACE}" -l app.kubernetes.io/name=api -o jsonpath='{.items[0].metadata.name}' 2>/dev/null || true)"
if [[ -n "${API_POD}" ]]; then
  log "Running health check from pod ${API_POD}..."
  kubectl exec -n "${NAMESPACE}" "${API_POD}" -- curl -sf http://localhost:8000/api/health/ 2>/dev/null && log "Health check: OK" || log "Health check: FAILED"
else
  log "No API pod found — skipping in-cluster health check"
fi

sep "Resource Usage"
kubectl top pods -n "${NAMESPACE}" 2>/dev/null || log "Metrics server not available (install with: kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml)"

# =============================================================================
# Security Verification
# =============================================================================
sep "Security: Secret Encryption"
# Check that secrets-encryption is configured on the K3s server
if kubectl get nodes -o jsonpath='{.items[0].metadata.name}' >/dev/null 2>&1; then
  # Verify secrets are stored encrypted by checking the encryption config exists
  if kubectl -n kube-system get cm k3s-config -o yaml 2>/dev/null | grep -q "secrets-encryption"; then
    ok "secrets-encryption found in K3s config"
  else
    # Alternative: check if etcd stores encrypted data
    ENCRYPTED_CHECK="$(kubectl get secret honeydue-secrets -n "${NAMESPACE}" -o jsonpath='{.metadata.name}' 2>/dev/null || true)"
    if [[ -n "${ENCRYPTED_CHECK}" ]]; then
      ok "honeydue-secrets exists (verify encryption with: k3s secrets-encrypt status)"
    else
      fail "Cannot verify secret encryption — run 'k3s secrets-encrypt status' on the server"
    fi
  fi
else
  fail "Cannot reach cluster to verify secret encryption"
fi

sep "Security: Network Policies"
NP_COUNT="$(kubectl get networkpolicy -n "${NAMESPACE}" --no-headers 2>/dev/null | wc -l | tr -d ' ')"
if (( NP_COUNT >= 5 )); then
  ok "Found ${NP_COUNT} network policies"
  kubectl get networkpolicy -n "${NAMESPACE}" --no-headers 2>/dev/null | while read -r line; do
    echo "  ${line}"
  done
else
  fail "Expected 5+ network policies, found ${NP_COUNT}"
fi

sep "Security: Service Accounts"
SA_COUNT="$(kubectl get sa -n "${NAMESPACE}" --no-headers 2>/dev/null | grep -cv default | tr -d ' ')"
if (( SA_COUNT >= 4 )); then
  ok "Found ${SA_COUNT} custom service accounts (api, worker, admin, redis)"
else
  fail "Expected 4 custom service accounts, found ${SA_COUNT}"
fi
kubectl get sa -n "${NAMESPACE}" --no-headers 2>/dev/null | while read -r line; do
  echo "  ${line}"
done

sep "Security: Pod Security Contexts"
PODS_WITHOUT_SECURITY="$(kubectl get pods -n "${NAMESPACE}" -o json 2>/dev/null | python3 -c "
import json, sys
try:
    data = json.load(sys.stdin)
    issues = []
    for pod in data.get('items', []):
        name = pod['metadata']['name']
        spec = pod['spec']
        sc = spec.get('securityContext', {})
        if not sc.get('runAsNonRoot'):
            issues.append(f'{name}: missing runAsNonRoot')
        for c in spec.get('containers', []):
            csc = c.get('securityContext', {})
            if csc.get('allowPrivilegeEscalation', True):
                issues.append(f'{name}/{c[\"name\"]}: allowPrivilegeEscalation not false')
            if not csc.get('readOnlyRootFilesystem'):
                issues.append(f'{name}/{c[\"name\"]}: readOnlyRootFilesystem not true')
    if issues:
        for i in issues:
            print(i)
    else:
        print('OK')
except Exception as e:
    print(f'Error: {e}')
" 2>/dev/null || echo "Error parsing pod specs")"
if [[ "${PODS_WITHOUT_SECURITY}" == "OK" ]]; then
  ok "All pods have proper security contexts"
else
  fail "Pod security context issues:"
  echo "${PODS_WITHOUT_SECURITY}" | while read -r line; do
    echo "  ${line}"
  done
fi

sep "Security: Pod Disruption Budgets"
PDB_COUNT="$(kubectl get pdb -n "${NAMESPACE}" --no-headers 2>/dev/null | wc -l | tr -d ' ')"
if (( PDB_COUNT >= 2 )); then
  ok "Found ${PDB_COUNT} pod disruption budgets"
else
  fail "Expected 2+ PDBs, found ${PDB_COUNT}"
fi
kubectl get pdb -n "${NAMESPACE}" 2>/dev/null || true

sep "Security: Cloudflare-Only Middleware"
CF_MIDDLEWARE="$(kubectl get middleware cloudflare-only -n "${NAMESPACE}" -o name 2>/dev/null || true)"
if [[ -n "${CF_MIDDLEWARE}" ]]; then
  ok "cloudflare-only middleware exists"
  # Check ingress annotations reference it
  INGRESS_ANNOTATIONS="$(kubectl get ingress -n "${NAMESPACE}" -o jsonpath='{.items[*].metadata.annotations.traefik\.ingress\.kubernetes\.io/router\.middlewares}' 2>/dev/null || true)"
  if echo "${INGRESS_ANNOTATIONS}" | grep -q "cloudflare-only"; then
    ok "Ingress references cloudflare-only middleware"
  else
    fail "Ingress does NOT reference cloudflare-only middleware"
  fi
else
  fail "cloudflare-only middleware not found"
fi

sep "Security: Admin Basic Auth"
ADMIN_AUTH="$(kubectl get secret admin-basic-auth -n "${NAMESPACE}" -o name 2>/dev/null || true)"
if [[ -n "${ADMIN_AUTH}" ]]; then
  ok "admin-basic-auth secret exists"
else
  fail "admin-basic-auth secret not found — admin panel has no additional auth layer"
fi

echo ""
log "Verification complete."
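
The ConfigMap section masks any value whose key name looks sensitive before printing. The same redaction as a standalone function (our name; the script inlines it):

```python
def redact(config: dict) -> dict:
    """Mask values whose key names contain a sensitive marker, as the
    ConfigMap-printing step does."""
    sensitive = ("PASSWORD", "SECRET", "TOKEN", "KEY")
    return {
        k: "***REDACTED***" if any(s in k.upper() for s in sensitive) else v
        for k, v in config.items()
    }

print(redact({"DB_HOST": "db.example", "SECRET_KEY": "hunter2"}))
```

Note the substring match is intentionally broad: a benign key like `APNS_AUTH_KEY_PATH` is also masked, which errs on the safe side for a diagnostic dump.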

deploy-k3s/scripts/_config.sh Executable file

@@ -0,0 +1,214 @@
#!/usr/bin/env bash
# Shared config helper — sourced by all deploy scripts.
# Provides cfg() to read values from config.yaml.

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
DEPLOY_DIR="$(cd "${SCRIPT_DIR}/.." && pwd)"
CONFIG_FILE="${DEPLOY_DIR}/config.yaml"

if [[ ! -f "${CONFIG_FILE}" ]]; then
  if [[ -f "${CONFIG_FILE}.example" ]]; then
    echo "[error] config.yaml not found. Run: cp config.yaml.example config.yaml" >&2
  else
    echo "[error] config.yaml not found." >&2
  fi
  exit 1
fi

# cfg "dotted.key.path" — reads a value from config.yaml
# Examples: cfg database.host, cfg nodes.0.ip, cfg features.push_enabled
cfg() {
  python3 -c "
import yaml, json, sys
with open(sys.argv[1]) as f:
    c = yaml.safe_load(f)
keys = sys.argv[2].split('.')
v = c
for k in keys:
    if isinstance(v, list):
        v = v[int(k)]
    else:
        v = v[k]
if isinstance(v, bool):
    print(str(v).lower())
elif isinstance(v, (dict, list)):
    print(json.dumps(v))
else:
    print('' if v is None else v)
" "${CONFIG_FILE}" "$1" 2>/dev/null
}

# cfg_require "key" "label" — reads value and dies if empty
cfg_require() {
  local val
  val="$(cfg "$1")"
  if [[ -z "${val}" ]]; then
    echo "[error] Missing required config: $1 ($2)" >&2
    exit 1
  fi
  printf '%s' "${val}"
}

# node_count — returns number of nodes
node_count() {
  python3 -c "
import yaml
with open('${CONFIG_FILE}') as f:
    c = yaml.safe_load(f)
print(len(c.get('nodes', [])))
"
}

# nodes_with_role "role" — returns node names with a given role
nodes_with_role() {
  python3 -c "
import yaml
with open('${CONFIG_FILE}') as f:
    c = yaml.safe_load(f)
for n in c.get('nodes', []):
    if '$1' in n.get('roles', []):
        print(n['name'])
"
}

# generate_env — writes the flat env file the app expects to stdout
generate_env() {
  python3 -c "
import yaml
with open('${CONFIG_FILE}') as f:
    c = yaml.safe_load(f)
d = c['domains']
db = c['database']
em = c['email']
ps = c['push']
st = c['storage']
wk = c['worker']
ft = c['features']
aa = c.get('apple_auth', {})
ga = c.get('google_auth', {})
rd = c.get('redis', {})

def b(v):
    return str(v).lower() if isinstance(v, bool) else str(v)

def val(v):
    return '' if v is None else str(v)

lines = [
    # API
    'DEBUG=false',
    f\"ALLOWED_HOSTS={d['api']},{d['base']}\",
    f\"CORS_ALLOWED_ORIGINS=https://{d['base']},https://{d['admin']}\",
    'TIMEZONE=UTC',
    f\"BASE_URL=https://{d['base']}\",
    'PORT=8000',
    # Admin
    f\"NEXT_PUBLIC_API_URL=https://{d['api']}\",
    f\"ADMIN_PANEL_URL=https://{d['admin']}\",
    # Database
    f\"DB_HOST={val(db['host'])}\",
    f\"DB_PORT={db['port']}\",
    f\"POSTGRES_USER={val(db['user'])}\",
    f\"POSTGRES_DB={db['name']}\",
    f\"DB_SSLMODE={db['sslmode']}\",
    f\"DB_MAX_OPEN_CONNS={db['max_open_conns']}\",
    f\"DB_MAX_IDLE_CONNS={db['max_idle_conns']}\",
    f\"DB_MAX_LIFETIME={db['max_lifetime']}\",
    # Redis (K8s internal DNS — password injected if configured)
    f\"REDIS_URL=redis://{':%s@' % val(rd.get('password')) if rd.get('password') else ''}redis.honeydue.svc.cluster.local:6379/0\",
    'REDIS_DB=0',
    # Email
    f\"EMAIL_HOST={em['host']}\",
    f\"EMAIL_PORT={em['port']}\",
    f\"EMAIL_USE_TLS={b(em['use_tls'])}\",
    f\"EMAIL_HOST_USER={val(em['user'])}\",
    f\"DEFAULT_FROM_EMAIL={val(em['from'])}\",
    # Push
    'APNS_AUTH_KEY_PATH=/secrets/apns/apns_auth_key.p8',
    f\"APNS_AUTH_KEY_ID={val(ps['apns_key_id'])}\",
    f\"APNS_TEAM_ID={val(ps['apns_team_id'])}\",
    f\"APNS_TOPIC={ps['apns_topic']}\",
    f\"APNS_USE_SANDBOX={b(ps['apns_use_sandbox'])}\",
    f\"APNS_PRODUCTION={b(ps['apns_production'])}\",
    # Worker
    f\"TASK_REMINDER_HOUR={wk['task_reminder_hour']}\",
    f\"OVERDUE_REMINDER_HOUR={wk['overdue_reminder_hour']}\",
    f\"DAILY_DIGEST_HOUR={wk['daily_digest_hour']}\",
    # B2 Storage
    f\"B2_KEY_ID={val(st['b2_key_id'])}\",
    f\"B2_APP_KEY={val(st['b2_app_key'])}\",
    f\"B2_BUCKET_NAME={val(st['b2_bucket'])}\",
    f\"B2_ENDPOINT={val(st['b2_endpoint'])}\",
    f\"STORAGE_MAX_FILE_SIZE={st['max_file_size']}\",
    f\"STORAGE_ALLOWED_TYPES={st['allowed_types']}\",
    # Features
    f\"FEATURE_PUSH_ENABLED={b(ft['push_enabled'])}\",
    f\"FEATURE_EMAIL_ENABLED={b(ft['email_enabled'])}\",
    f\"FEATURE_WEBHOOKS_ENABLED={b(ft['webhooks_enabled'])}\",
    f\"FEATURE_ONBOARDING_EMAILS_ENABLED={b(ft['onboarding_emails_enabled'])}\",
    f\"FEATURE_PDF_REPORTS_ENABLED={b(ft['pdf_reports_enabled'])}\",
    f\"FEATURE_WORKER_ENABLED={b(ft['worker_enabled'])}\",
    # Apple auth/IAP
    f\"APPLE_CLIENT_ID={val(aa.get('client_id'))}\",
    f\"APPLE_TEAM_ID={val(aa.get('team_id'))}\",
    f\"APPLE_IAP_KEY_ID={val(aa.get('iap_key_id'))}\",
    f\"APPLE_IAP_ISSUER_ID={val(aa.get('iap_issuer_id'))}\",
    f\"APPLE_IAP_BUNDLE_ID={val(aa.get('iap_bundle_id'))}\",
    f\"APPLE_IAP_KEY_PATH={val(aa.get('iap_key_path'))}\",
    f\"APPLE_IAP_SANDBOX={b(aa.get('iap_sandbox', False))}\",
    # Google auth/IAP
    f\"GOOGLE_CLIENT_ID={val(ga.get('client_id'))}\",
    f\"GOOGLE_ANDROID_CLIENT_ID={val(ga.get('android_client_id'))}\",
    f\"GOOGLE_IOS_CLIENT_ID={val(ga.get('ios_client_id'))}\",
    f\"GOOGLE_IAP_PACKAGE_NAME={val(ga.get('iap_package_name'))}\",
    f\"GOOGLE_IAP_SERVICE_ACCOUNT_PATH={val(ga.get('iap_service_account_path'))}\",
]
print('\n'.join(lines))
"
}
# generate_cluster_config — writes hetzner-k3s YAML to stdout
generate_cluster_config() {
python3 -c "
import yaml
with open('${CONFIG_FILE}') as f:
c = yaml.safe_load(f)
cl = c['cluster']
config = {
'cluster_name': 'honeydue',
'kubeconfig_path': './kubeconfig',
'k3s_version': cl['k3s_version'],
'networking': {
'ssh': {
'port': 22,
'use_agent': False,
'public_key_path': cl['ssh_public_key'],
'private_key_path': cl['ssh_private_key'],
},
'allowed_networks': {
'ssh': ['0.0.0.0/0'],
'api': ['0.0.0.0/0'],
},
},
'api_server_hostname': '',
'schedule_workloads_on_masters': True,
'masters_pool': {
'instance_type': cl['instance_type'],
'instance_count': len(c.get('nodes', [])),
'location': cl['location'],
'image': 'ubuntu-24.04',
},
'additional_packages': ['open-iscsi'],
'post_create_commands': ['sudo systemctl enable --now iscsid'],
'k3s_config_file': 'secrets-encryption: true\n',
}
print(yaml.dump(config, default_flow_style=False, sort_keys=False))
"
}

deploy-k3s/scripts/rollback.sh Executable file

@@ -0,0 +1,61 @@
#!/usr/bin/env bash
set -euo pipefail
NAMESPACE="honeydue"
log() { printf '[rollback] %s\n' "$*"; }
die() { printf '[rollback][error] %s\n' "$*" >&2; exit 1; }
command -v kubectl >/dev/null 2>&1 || die "Missing: kubectl"
DEPLOYMENTS=("api" "worker" "admin")
# --- Show current state ---
echo "=== Current Rollout History ==="
for deploy in "${DEPLOYMENTS[@]}"; do
echo ""
echo "--- ${deploy} ---"
kubectl rollout history deployment/"${deploy}" -n "${NAMESPACE}" 2>/dev/null || echo " (not found)"
done
echo ""
echo "=== Current Images ==="
for deploy in "${DEPLOYMENTS[@]}"; do
IMAGE="$(kubectl get deployment "${deploy}" -n "${NAMESPACE}" -o jsonpath='{.spec.template.spec.containers[0].image}' 2>/dev/null || echo "n/a")"
echo " ${deploy}: ${IMAGE}"
done
# --- Confirm ---
echo ""
read -rp "Roll back all deployments to previous revision? [y/N] " confirm
if [[ "${confirm}" != "y" && "${confirm}" != "Y" ]]; then
log "Aborted."
exit 0
fi
# --- Rollback ---
for deploy in "${DEPLOYMENTS[@]}"; do
log "Rolling back ${deploy}..."
kubectl rollout undo deployment/"${deploy}" -n "${NAMESPACE}" 2>/dev/null || log "Skipping ${deploy} (not found or no previous revision)"
done
# --- Wait ---
log "Waiting for rollouts..."
for deploy in "${DEPLOYMENTS[@]}"; do
kubectl rollout status deployment/"${deploy}" -n "${NAMESPACE}" --timeout=300s 2>/dev/null || true
done
# --- Verify ---
echo ""
echo "=== Post-Rollback Images ==="
for deploy in "${DEPLOYMENTS[@]}"; do
IMAGE="$(kubectl get deployment "${deploy}" -n "${NAMESPACE}" -o jsonpath='{.spec.template.spec.containers[0].image}' 2>/dev/null || echo "n/a")"
echo " ${deploy}: ${IMAGE}"
done
log "Rollback complete. Run ./scripts/04-verify.sh to check health."


@@ -0,0 +1,19 @@
# Secrets Directory
Create these files before running `scripts/02-setup-secrets.sh`:
| File | Purpose |
|------|---------|
| `postgres_password.txt` | Neon PostgreSQL password |
| `secret_key.txt` | App signing secret (minimum 32 characters) |
| `email_host_password.txt` | SMTP password (Fastmail app password) |
| `fcm_server_key.txt` | Firebase Cloud Messaging server key |
| `apns_auth_key.p8` | Apple Push Notification private key |
| `cloudflare-origin.crt` | Cloudflare origin certificate (PEM) |
| `cloudflare-origin.key` | Cloudflare origin certificate key (PEM) |
The first five files are the same format as the Docker Swarm `deploy/secrets/` directory.
The Cloudflare files are new for K3s (TLS termination at the ingress).
All string config (database host, registry token, etc.) goes in `config.yaml` instead.
These files are gitignored and should never be committed.
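A minimal bootstrap sketch for the one locally generated value, assuming `openssl` is available — the provider-issued files (Neon password, SMTP app password, FCM key, the APNS `.p8`, Cloudflare cert/key) still have to be pasted in by hand:

```shell
# Sketch: create the directory and generate the app signing secret.
# Everything else in the table above comes from an external provider.
mkdir -p secrets
umask 077                                          # keep the file owner-readable only
openssl rand -base64 48 > secrets/secret_key.txt   # 64 chars, well over the 32-char minimum
chars=$(tr -d '\n' < secrets/secret_key.txt | wc -c)
echo "secret_key.txt length: ${chars}"
```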

deploy/DEPLOYING.md Normal file

@@ -0,0 +1,126 @@
# Deploying Right Now
Practical walkthrough for a prod deploy against the current Swarm stack.
Assumes infrastructure and cloud services already exist — if not, work
through [`shit_deploy_cant_do.md`](./shit_deploy_cant_do.md) first.
See [`README.md`](./README.md) for the reference docs that back each step.
---
## 0. Pre-flight — check local state
```bash
cd honeyDueAPI-go
git status # clean working tree?
git log -1 --oneline # deploying this SHA
ls deploy/cluster.env deploy/registry.env deploy/prod.env
ls deploy/secrets/*.txt deploy/secrets/*.p8
```
## 1. Reconcile your envs with current defaults
The first two values **must** be right — the script does not enforce them:
```bash
# deploy/cluster.env
WORKER_REPLICAS=1 # >1 → duplicate cron jobs (Asynq scheduler is a singleton)
PUSH_LATEST_TAG=false # keeps prod images SHA-pinned
SECRET_KEEP_VERSIONS=3 # optional; 3 is the default
```
Decide storage backend in `deploy/prod.env`:
- **Multi-replica safe (recommended):** set all four of `B2_ENDPOINT`,
`B2_KEY_ID`, `B2_APP_KEY`, `B2_BUCKET_NAME`. Uploads go to B2.
- **Single-node ok:** leave all four empty. Script will warn. In this
mode you must also set `API_REPLICAS=1` — otherwise uploads are
invisible from 2/3 of requests.
## 2. Dry run
```bash
DRY_RUN=1 ./.deploy_prod
```
Confirm in the output:
- `Storage backend: S3 (...)` OR the `LOCAL VOLUME` warning matches intent
- `Replicas: api=3, worker=1, admin=1` (or `api=1` if local storage)
- Image SHA matches `git rev-parse --short HEAD`
- `Manager:` host is correct
- `Secret retention: 3 versions`
Fix envs and re-run until the plan looks right. Nothing touches the cluster yet.
## 3. Real deploy
```bash
./.deploy_prod
```
Do **not** pass `SKIP_BUILD=1` after code changes — the worker's health
server and `MigrateWithLock` both require a fresh build.
End-to-end: ~38 minutes. The script prints each phase.
## 4. Post-deploy verification
```bash
# Stack health (replicas X/X = desired)
ssh <manager> docker stack services honeydue
# API smoke
curl -fsS https://api.<domain>/api/health/ && echo OK
# Logs via Dozzle (loopback-bound, needs SSH tunnel)
ssh -p <port> -L 9999:127.0.0.1:9999 <user>@<manager>
# Then browse http://localhost:9999
```
What the logs should show on a healthy boot:
- `api`: one replica logs `Migration advisory lock acquired` immediately and
runs the migrations; the remaining replicas log the same line only after
waiting their turn, then `released`.
- `worker`: `Health server listening addr=:6060`, `Starting worker server...`,
four `Registered ... job` lines.
- No `Failed to connect to Redis` / `Failed to connect to database`.
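The same expectations can be checked mechanically. A sample-driven sketch — the log excerpt below is illustrative, not a live `docker service logs` capture:

```shell
# Hypothetical healthy-boot excerpt; in practice pipe real service logs in.
sample_logs='Migration advisory lock acquired
Health server listening addr=:6060
Starting worker server...
Registered daily_digest job'
checks=0
grep -q 'Migration advisory lock acquired' <<<"${sample_logs}" && checks=$((checks + 1))
grep -q 'Health server listening'          <<<"${sample_logs}" && checks=$((checks + 1))
grep -q 'Failed to connect to'             <<<"${sample_logs}" || checks=$((checks + 1))
echo "boot checks passed: ${checks}/3"
```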
## 5. If it goes wrong
Auto-rollback triggers when `DEPLOY_HEALTHCHECK_URL` fails — every service
is rolled back to its previous spec, script exits non-zero.
Triage:
```bash
ssh <manager> docker service logs --tail 200 honeydue_api
ssh <manager> docker service ps honeydue_api --no-trunc
```
Manual rollback (if auto didn't catch it):
```bash
ssh <manager> bash -c '
for svc in $(docker stack services honeydue --format "{{.Name}}"); do
docker service rollback "$svc"
done'
```
Redeploy a known-good SHA:
```bash
DEPLOY_TAG=<older-sha> SKIP_BUILD=1 ./.deploy_prod
# Only valid if that image was previously pushed to the registry.
```
## 6. Pre-deploy honesty check
Before pulling the trigger:
- [ ] Tested Neon PITR restore (not just "backups exist")?
- [ ] `WORKER_REPLICAS=1` — otherwise duplicate push notifications next cron tick
- [ ] Cloudflare-only firewall rule on 80/443 — otherwise origin IP is on the public internet
- [ ] If storage is LOCAL, `API_REPLICAS=1` too
- [ ] Last deploy's secrets still valid (rotation hasn't expired any creds)


@@ -2,13 +2,18 @@
This folder is the full production deploy toolkit for `honeyDueAPI-go`.
**Recommended flow — always dry-run first:**
```bash
DRY_RUN=1 ./.deploy_prod # validates everything, prints the plan, no changes
./.deploy_prod # then the real deploy
```
The script refuses to run until all required values are set.
- Step-by-step walkthrough for a real deploy: [`DEPLOYING.md`](./DEPLOYING.md)
- Manual prerequisites the script cannot automate (Swarm init, firewall,
Cloudflare, Neon, APNS, etc.): [`shit_deploy_cant_do.md`](./shit_deploy_cant_do.md)
## First-Time Prerequisite: Create The Swarm Cluster
@@ -84,16 +89,159 @@ AllowUsers deploy
### 6) Dozzle Hardening
- Keep Dozzle private (no public DNS/ingress).
Dozzle exposes the full Docker log stream with no built-in auth — logs contain
secrets, tokens, and user data. The stack binds Dozzle to `127.0.0.1` on the
manager node only (`mode: host`, `host_ip: 127.0.0.1`), so it is **not
reachable from the public internet or from other Swarm nodes**.
To view logs, open an SSH tunnel from your workstation:
```bash
ssh -p "${DEPLOY_MANAGER_SSH_PORT}" \
-L "${DOZZLE_PORT}:127.0.0.1:${DOZZLE_PORT}" \
"${DEPLOY_MANAGER_USER}@${DEPLOY_MANAGER_HOST}"
# Then browse http://localhost:${DOZZLE_PORT}
```
Additional hardening if you ever need to expose Dozzle over a network:
- Put auth/SSO in front (Cloudflare Access or equivalent).
- Replace the raw `/var/run/docker.sock` mount with a Docker socket proxy
limited to read-only log endpoints.
- Prefer a persistent log aggregator (Loki, Datadog, CloudWatch) for prod —
Dozzle is ephemeral and not a substitute for audit trails.
### 7) Backup + Restore Readiness
Treat this as a pre-launch checklist. Nothing below is automated by
`./.deploy_prod`.
- [ ] Postgres PITR path tested in staging (restore a real dump, validate app boots).
- [x] Redis AOF persistence enabled (`appendonly yes --appendfsync everysec` in stack).
- [ ] Redis restore path tested (verify AOF replays on a fresh node).
- [ ] Written runbook for restore + secret rotation (see §4 and `shit_deploy_cant_do.md`).
- [ ] Named owner for incident response.
- [ ] Uploads bucket (Backblaze B2) lifecycle / versioning reviewed — deletes are
handled by the app, not by retention rules.
### 8) Storage Backend (Uploads)
The stack supports two storage backends. The choice is **runtime-only** — the
same image runs in both modes, selected by env vars in `prod.env`:
| Mode | When to use | Config |
|---|---|---|
| **Local volume** | Dev / single-node prod | Leave all `B2_*` empty. Files land on `/app/uploads` via the named volume. |
| **S3-compatible** (B2, MinIO) | Multi-replica prod | Set all four of `B2_ENDPOINT`, `B2_KEY_ID`, `B2_APP_KEY`, `B2_BUCKET_NAME`. |
The deploy script enforces **all-or-none** for the B2 vars — a partial config
fails fast rather than silently falling back to the local volume.
**Why this matters:** Docker Swarm named volumes are **per-node**. With 3 API
replicas spread across nodes, an upload written on node A is invisible to
replicas on nodes B and C (the client sees a random 404 two-thirds of the
time). In multi-replica prod you **must** use S3-compatible storage.
The `uploads:` volume is still declared as a harmless fallback: when B2 is
configured, nothing writes to it. `./.deploy_prod` prints the selected
backend at the start of each run.
### 9) Worker Replicas & Scheduler
Keep `WORKER_REPLICAS=1` in `cluster.env` until Asynq `PeriodicTaskManager`
is wired up. The current `asynq.Scheduler` in `cmd/worker/main.go` has no
Redis-based leader election, so each replica independently enqueues the
same cron task — users see duplicate daily digests / onboarding emails.
Asynq workers (task consumers) are already safe to scale horizontally; it's
only the scheduler singleton that is constrained. Future work: migrate to
`asynq.NewPeriodicTaskManager(...)` with `PeriodicTaskConfigProvider` so
multiple scheduler replicas coordinate via Redis.
### 10) Database Migrations
`cmd/api/main.go` runs `database.MigrateWithLock()` on startup, which takes a
Postgres session-level `pg_advisory_lock` on a dedicated connection before
calling `AutoMigrate`. This serialises boot-time migrations across all API
replicas — the first replica migrates, the rest wait, then each sees an
already-current schema and `AutoMigrate` is a no-op.
The lock is released on connection close, so a crashed replica can't leave
a stale lock behind.
For very large schema changes, run migrations as a separate pre-deploy
step (there is no dedicated `cmd/migrate` binary today — this is a future
improvement).
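A local analogy of that serialisation, using `flock` as a stand-in for `pg_advisory_lock` (the real lock lives inside Postgres, not on disk — this only illustrates the first-one-migrates, rest-wait shape):

```shell
# Two "replicas" race for the same lock; whoever wins runs the "migration",
# the loser blocks, then finds the schema already current.
boot_replica() {
  (
    flock 9                          # blocks until the holder's fd closes
    if [[ ! -f /tmp/schema.done ]]; then
      echo "replica $1: acquired lock, running migrations"
      touch /tmp/schema.done
    else
      echo "replica $1: acquired lock, schema already current (no-op)"
    fi
  ) 9>/tmp/migrate.lock
}
rm -f /tmp/schema.done
boot_replica A & boot_replica B & wait
```

Exactly one replica prints "running migrations"; the other sees the no-op path, mirroring how `AutoMigrate` behaves once the schema is current.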
### 11) Redis Redundancy
Redis runs as a **single replica** with an AOF-persisted named volume. If
the node running Redis dies, Swarm reschedules the container but the named
volume is per-node — the new Redis boots **empty**.
Impact:
- **Cache** (ETag lookups, static data): regenerates on first request.
- **Asynq queue**: in-flight jobs at the moment of the crash are lost; Asynq
retry semantics cover most re-enqueues. Scheduled-but-not-yet-fired cron
events are re-triggered on the next cron tick.
- **Sessions / auth tokens**: not stored in Redis, so unaffected.
This is an accepted limitation today. Options to harden later: Redis
Sentinel, a managed Redis (Upstash, Dragonfly Cloud), or restoring from the
AOF on a pinned node.
### 12) Multi-Arch Builds
`./.deploy_prod` builds images for the **host** architecture of the machine
running the script. If your Swarm nodes are a different arch (e.g. ARM64
Ampere VMs), use `docker buildx` explicitly:
```bash
docker buildx create --use
docker buildx build --platform linux/arm64 --target api -t <image> --push .
# repeat for worker, admin
SKIP_BUILD=1 ./.deploy_prod # then deploy the already-pushed images
```
The Go stages cross-compile cleanly (`TARGETARCH` is already honoured).
The Node/admin stages require QEMU emulation (`docker run --privileged --rm
tonistiigi/binfmt --install all` on the build host) since native deps may
need to be rebuilt for the target arch.
### 13) Connection Pool & TLS Tuning
Because Postgres is external (Neon/RDS), each replica opens its own pool.
Sizing matters: total open connections across the cluster must stay under
the database's configured limit. Defaults in `prod.env.example`:
| Setting | Default | Notes |
|---|---|---|
| `DB_SSLMODE` | `require` | Never set to `disable` in prod. For Neon use `require`. |
| `DB_MAX_OPEN_CONNS` | `25` | Per-replica cap. Worst case: 25 × (API+worker replicas). |
| `DB_MAX_IDLE_CONNS` | `10` | Keep warm connections ready without exhausting the pool. |
| `DB_MAX_LIFETIME` | `600s` | Recycle long-lived connections. Neon's idle disconnect is typically 5 min, so on Neon set this to `300s` or lower. |
Worked example with default replicas (3 API + 1 worker — see §9 for why
worker is pinned to 1):
```
3 × 25 + 1 × 25 = 100 peak open connections
```
That lands exactly on Neon's free-tier ceiling (100 concurrent connections),
which is risky with even one transient spike. For Neon free tier drop
`DB_MAX_OPEN_CONNS=15` (→ 60 peak). Paid tiers (Neon Scale, 1000+
connections) can keep the default or raise it.
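The sizing rule as runnable arithmetic — the three inputs are the knobs to adjust for your cluster:

```shell
api_replicas=3
worker_replicas=1            # see §9: the scheduler must stay a singleton
db_max_open_conns=25         # per-replica DB_MAX_OPEN_CONNS
peak=$(( (api_replicas + worker_replicas) * db_max_open_conns ))
echo "peak open connections: ${peak}"
```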
Operational checklist:
- Confirm Neon IP allowlist includes every Swarm node IP.
- After changing pool sizes, redeploy and watch `pg_stat_activity` /
Neon metrics for saturation.
- Keep `DB_MAX_LIFETIME` ≤ Neon idle timeout to avoid "terminating
connection due to administrator command" errors in the API logs.
- For read-heavy workloads, consider a Neon read replica and split
query traffic at the application layer.
## Files You Fill In
@@ -113,20 +261,51 @@ If one is missing, the deploy script auto-copies it from its `.example` template
## What `./.deploy_prod` Does
1. Validates all required config files and credentials.
2. Validates the storage-backend toggle (all-or-none for `B2_*`). Prints
the selected backend (S3 or local volume) before continuing.
3. Builds and pushes `api`, `worker`, and `admin` images (skip with
`SKIP_BUILD=1`).
4. Uploads deploy bundle to your Swarm manager over SSH.
5. Creates versioned Docker secrets on the manager.
6. Deploys the stack with `docker stack deploy --with-registry-auth`.
7. Waits until service replicas converge.
8. Prunes old secret versions, keeping the last `SECRET_KEEP_VERSIONS`
(default 3).
9. Runs an HTTP health check (if `DEPLOY_HEALTHCHECK_URL` is set). **On
failure, automatically runs `docker service rollback` for every service
in the stack and exits non-zero.**
10. Logs out of the registry on both the dev host and the manager so the
token doesn't linger in `~/.docker/config.json`.
## Useful Flags
Environment flags:
- `DRY_RUN=1 ./.deploy_prod` — validate config and print the deploy plan
without building, pushing, or touching the cluster. Use this before every
production deploy to review images, replicas, and secret names.
- `SKIP_BUILD=1 ./.deploy_prod` — deploy already-pushed images.
- `SKIP_HEALTHCHECK=1 ./.deploy_prod` — skip final URL check.
- `DEPLOY_TAG=<tag> ./.deploy_prod` — deploy a specific image tag.
- `PUSH_LATEST_TAG=true ./.deploy_prod` — also push `:latest` to the registry
(default is `false` so prod pins to the SHA tag and stays reproducible).
- `SECRET_KEEP_VERSIONS=<n> ./.deploy_prod` — how many versions of each
Swarm secret to retain after deploy (default: 3). Older unused versions
are pruned automatically once the stack converges.
## Secret Versioning & Pruning
Each deploy creates a fresh set of Swarm secrets named
`<stack>_<secret>_<deploy_id>` (for example
`honeydue_secret_key_abc1234_20260413120000`). The stack file references the
current names via `${POSTGRES_PASSWORD_SECRET}` etc., so rolling updates never
reuse a secret that a running task still holds open.
After the new stack converges, `./.deploy_prod` SSHes to the manager and
prunes old versions per base name, keeping the most recent
`SECRET_KEEP_VERSIONS` (default 3). Anything still referenced by a running
task is left alone (Docker refuses to delete in-use secrets) and will be
pruned on the next deploy.
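The retention rule itself can be shown offline, no Docker needed. Assuming a newest-first listing (the names below are illustrative), everything past the first `SECRET_KEEP_VERSIONS` entries is a prune candidate:

```shell
# Illustrative version names, already sorted newest-first.
versions='honeydue_secret_key_c3f9_20260415
honeydue_secret_key_b7e2_20260414
honeydue_secret_key_a1d8_20260413'
keep=2
prune_candidates="$(printf '%s\n' "${versions}" | tail -n +$((keep + 1)))"
echo "would prune: ${prune_candidates}"
```

In the real script, names still held open by a running task survive the `docker secret rm` and are retried on the next deploy.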
## Important


@@ -12,11 +12,21 @@ DEPLOY_HEALTHCHECK_URL=https://api.honeyDue.treytartt.com/api/health/
# Replicas and published ports
API_REPLICAS=3
# IMPORTANT: keep WORKER_REPLICAS=1 until Asynq PeriodicTaskManager is wired.
# The current asynq.Scheduler in cmd/worker/main.go has no Redis-based
# leader election, so running >1 replica fires every cron task once per
# replica → duplicate daily digests / onboarding emails / etc.
WORKER_REPLICAS=1
ADMIN_REPLICAS=1
API_PORT=8000
ADMIN_PORT=3000
DOZZLE_PORT=9999
# Build behavior
# PUSH_LATEST_TAG=true also tags and pushes :latest on the registry.
# Leave false in production to keep image tags immutable (SHA-pinned only).
PUSH_LATEST_TAG=false
# Secret retention: number of versioned Swarm secrets to keep per name after each deploy.
# Older unused versions are pruned post-convergence. Default: 3.
SECRET_KEEP_VERSIONS=3


@@ -50,6 +50,27 @@ STORAGE_BASE_URL=/uploads
STORAGE_MAX_FILE_SIZE=10485760
STORAGE_ALLOWED_TYPES=image/jpeg,image/png,image/gif,image/webp,application/pdf
# Storage backend (S3-compatible: Backblaze B2 or MinIO)
#
# Leave all B2_* vars empty to use the local filesystem at STORAGE_UPLOAD_DIR.
# - Safe for single-node setups (dev / single-VPS prod).
# - NOT SAFE for multi-replica prod: named volumes are per-node in Swarm,
# so uploads written on one node are invisible to the other replicas.
#
# Set ALL FOUR of B2_ENDPOINT, B2_KEY_ID, B2_APP_KEY, B2_BUCKET_NAME to
# switch to S3-compatible storage. The deploy script enforces all-or-none.
#
# Example for Backblaze B2 (us-west-004):
# B2_ENDPOINT=s3.us-west-004.backblazeb2.com
# B2_USE_SSL=true
# B2_REGION=us-west-004
B2_ENDPOINT=
B2_KEY_ID=
B2_APP_KEY=
B2_BUCKET_NAME=
B2_USE_SSL=true
B2_REGION=us-east-1
# Feature flags
FEATURE_PUSH_ENABLED=true
FEATURE_EMAIL_ENABLED=true


@@ -18,6 +18,8 @@ SECRET_APNS_KEY="${DEPLOY_DIR}/secrets/apns_auth_key.p8"
SKIP_BUILD="${SKIP_BUILD:-0}"
SKIP_HEALTHCHECK="${SKIP_HEALTHCHECK:-0}"
DRY_RUN="${DRY_RUN:-0}"
SECRET_KEEP_VERSIONS="${SECRET_KEEP_VERSIONS:-3}"
log() {
printf '[deploy] %s\n' "$*"
@@ -91,9 +93,13 @@ Usage:
./.deploy_prod
Optional environment flags:
DRY_RUN=1 Print the deployment plan and exit without changes.
SKIP_BUILD=1 Deploy existing image tags without rebuilding/pushing.
SKIP_HEALTHCHECK=1 Skip final HTTP health check.
DEPLOY_TAG=<tag> Override image tag (default: git short sha).
PUSH_LATEST_TAG=true|false Also tag/push :latest (default: false — SHA only).
SECRET_KEEP_VERSIONS=<n> How many versions of each Swarm secret to retain
(default: 3). Older unused versions are pruned.
EOF
}
@@ -144,7 +150,7 @@ DEPLOY_STACK_NAME="${DEPLOY_STACK_NAME:-honeydue}"
DEPLOY_REMOTE_DIR="${DEPLOY_REMOTE_DIR:-/opt/honeydue/deploy}"
DEPLOY_WAIT_SECONDS="${DEPLOY_WAIT_SECONDS:-420}"
DEPLOY_TAG="${DEPLOY_TAG:-$(git -C "${REPO_DIR}" rev-parse --short HEAD)}"
PUSH_LATEST_TAG="${PUSH_LATEST_TAG:-false}"
require_var DEPLOY_MANAGER_HOST
require_var DEPLOY_MANAGER_USER
@@ -173,6 +179,27 @@ require_var APNS_AUTH_KEY_ID
require_var APNS_TEAM_ID
require_var APNS_TOPIC
# Storage backend validation: B2 is all-or-none. If any var is filled with
# a real value, require all four core vars. Empty means "use local volume".
b2_any_set=0
b2_all_set=1
for b2_var in B2_ENDPOINT B2_KEY_ID B2_APP_KEY B2_BUCKET_NAME; do
val="${!b2_var:-}"
if [[ -n "${val}" ]] && ! contains_placeholder "${val}"; then
b2_any_set=1
else
b2_all_set=0
fi
done
if (( b2_any_set == 1 && b2_all_set == 0 )); then
die "Partial B2 configuration detected. Set all four of B2_ENDPOINT, B2_KEY_ID, B2_APP_KEY, B2_BUCKET_NAME, or leave all four empty to use the local volume."
fi
if (( b2_all_set == 1 )); then
log "Storage backend: S3 (${B2_ENDPOINT} / bucket=${B2_BUCKET_NAME})"
else
warn "Storage backend: LOCAL VOLUME. This is not safe for multi-replica prod — uploads will only exist on one node. Set B2_* in prod.env to use object storage."
fi
if [[ ! "$(tr -d '\r\n' < "${SECRET_APNS_KEY}")" =~ BEGIN[[:space:]]+PRIVATE[[:space:]]+KEY ]]; then
die "APNS key file does not look like a private key: ${SECRET_APNS_KEY}"
fi
@@ -200,6 +227,50 @@ if [[ -n "${SSH_KEY_PATH}" ]]; then
SCP_OPTS+=(-i "${SSH_KEY_PATH}")
fi
if [[ "${DRY_RUN}" == "1" ]]; then
cat <<EOF
==================== DRY RUN ====================
Validation passed. Would deploy:
Stack name: ${DEPLOY_STACK_NAME}
Manager: ${SSH_TARGET}:${DEPLOY_MANAGER_SSH_PORT}
Remote dir: ${DEPLOY_REMOTE_DIR}
Deploy tag: ${DEPLOY_TAG}
Push :latest: ${PUSH_LATEST_TAG}
Skip build: ${SKIP_BUILD}
Skip healthcheck: ${SKIP_HEALTHCHECK}
Secret retention: ${SECRET_KEEP_VERSIONS} versions per name
Images that would be built and pushed:
${API_IMAGE}
${WORKER_IMAGE}
${ADMIN_IMAGE}
Replicas:
api: ${API_REPLICAS:-3}
worker: ${WORKER_REPLICAS:-1}
admin: ${ADMIN_REPLICAS:-1}
Published ports:
api: ${API_PORT:-8000} (ingress)
admin: ${ADMIN_PORT:-3000} (ingress)
dozzle: ${DOZZLE_PORT:-9999} (manager loopback only — SSH tunnel required)
Versioned secrets that would be created on this deploy:
${DEPLOY_STACK_NAME}_postgres_password_<deploy_id>
${DEPLOY_STACK_NAME}_secret_key_<deploy_id>
${DEPLOY_STACK_NAME}_email_host_password_<deploy_id>
${DEPLOY_STACK_NAME}_fcm_server_key_<deploy_id>
${DEPLOY_STACK_NAME}_apns_auth_key_<deploy_id>
No changes made. Re-run without DRY_RUN=1 to deploy.
=================================================
EOF
exit 0
fi
log "Validating SSH access to ${SSH_TARGET}"
if ! ssh "${SSH_OPTS[@]}" "${SSH_TARGET}" "echo ok" >/dev/null 2>&1; then
die "SSH connection failed to ${SSH_TARGET}"
@@ -384,11 +455,77 @@ while true; do
sleep 10
done
log "Pruning old secret versions (keeping last ${SECRET_KEEP_VERSIONS})"
ssh "${SSH_OPTS[@]}" "${SSH_TARGET}" "bash -s -- '${DEPLOY_STACK_NAME}' '${SECRET_KEEP_VERSIONS}'" <<'EOF' || warn "Secret pruning reported errors (non-fatal)"
set -euo pipefail
STACK_NAME="$1"
KEEP="$2"
prune_prefix() {
local prefix="$1"
# List matching secrets with creation time, sorted newest-first.
local all
all="$(docker secret ls --format '{{.CreatedAt}}|{{.Name}}' 2>/dev/null \
| grep "|${prefix}_" \
| sort -r \
|| true)"
if [[ -z "${all}" ]]; then
return 0
fi
local total
total="$(printf '%s\n' "${all}" | wc -l | tr -d ' ')"
if (( total <= KEEP )); then
echo "[cleanup] ${prefix}: ${total} version(s) — nothing to prune"
return 0
fi
local to_remove
to_remove="$(printf '%s\n' "${all}" | tail -n +$((KEEP + 1)) | awk -F'|' '{print $2}')"
while IFS= read -r name; do
[[ -z "${name}" ]] && continue
if docker secret rm "${name}" >/dev/null 2>&1; then
echo "[cleanup] removed: ${name}"
else
echo "[cleanup] in-use (kept): ${name}"
fi
done <<< "${to_remove}"
}
for base in postgres_password secret_key email_host_password fcm_server_key apns_auth_key; do
prune_prefix "${STACK_NAME}_${base}"
done
EOF
rollback_stack() {
warn "Rolling back stack ${DEPLOY_STACK_NAME} on ${SSH_TARGET}"
ssh "${SSH_OPTS[@]}" "${SSH_TARGET}" "bash -s -- '${DEPLOY_STACK_NAME}'" <<'EOF' || true
set +e
STACK="$1"
for svc in $(docker stack services "${STACK}" --format '{{.Name}}'); do
echo "[rollback] ${svc}"
docker service rollback "${svc}" || echo "[rollback] ${svc}: nothing to roll back"
done
EOF
}
if [[ "${SKIP_HEALTHCHECK}" != "1" && -n "${DEPLOY_HEALTHCHECK_URL:-}" ]]; then
log "Running health check: ${DEPLOY_HEALTHCHECK_URL}"
if ! curl -fsS --max-time 20 "${DEPLOY_HEALTHCHECK_URL}" >/dev/null; then
warn "Health check FAILED for ${DEPLOY_HEALTHCHECK_URL}"
rollback_stack
die "Deploy rolled back due to failed health check."
fi
fi
# Best-effort registry logout — the token should not linger in
# ~/.docker/config.json after deploy completes. Failures are non-fatal.
log "Logging out of registry (local + remote)"
docker logout "${REGISTRY}" >/dev/null 2>&1 || true
ssh "${SSH_OPTS[@]}" "${SSH_TARGET}" "docker logout '${REGISTRY}' >/dev/null 2>&1 || true"
log "Deploy completed successfully."
log "Stack: ${DEPLOY_STACK_NAME}"
log "Images:"


@@ -0,0 +1,208 @@
# Shit `./.deploy_prod` Can't Do
Everything listed here is **manual**. The deploy script orchestrates builds,
secrets, and the stack — it does not provision infrastructure, touch DNS,
configure Cloudflare, or rotate external credentials. Work through this list
once before your first prod deploy, then revisit after every cloud-side
change.
See [`README.md`](./README.md) for the security checklist that complements
this file.
---
## One-Time: Infrastructure
### Swarm Cluster
- [ ] Provision manager + worker VMs (Hetzner, DO, etc.).
- [ ] `docker swarm init --advertise-addr <manager-private-ip>` on manager #1.
- [ ] `docker swarm join-token {manager,worker}` → join additional nodes.
- [ ] `docker node ls` to verify — all nodes `Ready` and `Active`.
- [ ] Label nodes if you want placement constraints beyond the defaults.
### Node Hardening (every node)
- [ ] SSH: non-default port, key-only auth, no root login — see README §2.
- [ ] Firewall: allow SSH (22 or your custom port) from trusted admin IPs;
80/443 from Cloudflare IPs only; 2377/tcp, 7946/tcp+udp, 4789/udp from
Swarm nodes only; block the rest — see README §1.
- [ ] Install unattended-upgrades (or equivalent) for security patches.
- [ ] Disable password auth in `/etc/ssh/sshd_config`.
- [ ] Create the `deploy` user (`AllowUsers deploy` in sshd_config).
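One possible `sshd_config` drop-in covering the checklist — the port and username are examples, match them to your own setup, install under `/etc/ssh/sshd_config.d/`, and reload `sshd`:

```shell
# Write the drop-in locally first; copy it to /etc/ssh/sshd_config.d/ on each node.
cat > 99-hardening.conf <<'CONF'
Port 2222
PermitRootLogin no
PasswordAuthentication no
KbdInteractiveAuthentication no
PubkeyAuthentication yes
AllowUsers deploy
CONF
cat 99-hardening.conf
```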
### DNS + Cloudflare
- [ ] Add A records for `api.<domain>`, `admin.<domain>` pointing to the LB
or manager IPs. Keep them **proxied** (orange cloud).
- [ ] Create a Cloudflare tunnel or enable "Authenticated Origin Pulls" if
you want to lock the origin to CF only.
- [ ] Firewall rule on the nodes: only accept 80/443 from Cloudflare IP ranges
(<https://www.cloudflare.com/ips/>).
- [ ] Configure CF Access (or equivalent SSO) in front of admin panel if
exposing it publicly.
---
## One-Time: External Services
### Postgres (Neon)
- [ ] Create project + database (`honeydue`).
- [ ] Create a dedicated DB user with least privilege — not the project owner.
- [ ] Enable IP allowlist, add every Swarm node's egress IP.
- [ ] Verify `DB_SSLMODE=require` works end-to-end.
- [ ] Turn on PITR (paid tier) or schedule automated `pg_dump` backups.
- [ ] Do one restore drill — boot a staging stack from a real backup. If you
haven't done this, you do not have backups.
### Redis
- Redis runs **inside** the stack on a named volume. No external setup
needed today. See README §11 — this is an accepted SPOF.
- [ ] If you move Redis external (Upstash, Dragonfly Cloud): update
`REDIS_URL` in `prod.env`, remove the `redis` service + volume from
the stack.
### Backblaze B2 (or MinIO)
Skip this section if you're running a single-node prod and are OK with
uploads on a local volume. Required for multi-replica prod — see README §8.
- [ ] Create B2 account + bucket (private).
- [ ] Create a **scoped** application key bound to that single bucket —
not the master key.
- [ ] Set lifecycle rules: keep only the current version of each file,
or whatever matches your policy.
- [ ] Populate `B2_ENDPOINT`, `B2_KEY_ID`, `B2_APP_KEY`, `B2_BUCKET_NAME`
in `deploy/prod.env`. Optionally set `B2_USE_SSL` and `B2_REGION`.
- [ ] Verify uploads round-trip across replicas after the first deploy
(upload a file via client A → fetch via client B in a different session).
### APNS (Apple Push)
- [ ] Create an APNS auth key (`.p8`) in the Apple Developer portal.
- [ ] Save to `deploy/secrets/apns_auth_key.p8` — the script enforces it
contains a real `-----BEGIN PRIVATE KEY-----` block.
- [ ] Fill `APNS_AUTH_KEY_ID`, `APNS_TEAM_ID`, `APNS_TOPIC` (bundle ID) in
`deploy/prod.env`.
- [ ] Decide `APNS_USE_SANDBOX` / `APNS_PRODUCTION` based on build target.
### FCM (Android Push)
- [ ] Create a Firebase project and a legacy server key (the code currently
  uses the legacy key; migrating to the HTTP v1 API is future work).
- [ ] Save to `deploy/secrets/fcm_server_key.txt`.
### SMTP (Email)
- [ ] Provision SMTP credentials (Gmail app password, SES, Postmark, etc.).
- [ ] Fill `EMAIL_HOST`, `EMAIL_PORT`, `EMAIL_HOST_USER`,
`DEFAULT_FROM_EMAIL`, `EMAIL_USE_TLS` in `deploy/prod.env`.
- [ ] Save the password to `deploy/secrets/email_host_password.txt`.
- [ ] Verify SPF, DKIM, DMARC on the sending domain if you care about
deliverability.
### Registry (GHCR / other)
- [ ] Create a personal access token with `write:packages` + `read:packages`.
- [ ] Fill `REGISTRY`, `REGISTRY_NAMESPACE`, `REGISTRY_USERNAME`,
`REGISTRY_TOKEN` in `deploy/registry.env`.
- [ ] Rotate the token on a schedule (quarterly at minimum).
### Apple / Google IAP (optional)
- [ ] Apple: create App Store Connect API key, fill the `APPLE_IAP_*` vars.
- [ ] Google: create a service account with Play Developer API access,
store JSON at a path referenced by `GOOGLE_IAP_SERVICE_ACCOUNT_PATH`.
---
## Recurring Operations
### Secret Rotation
Rotate after any compromise, when a team member leaves, and at least annually:
1. Generate the new value (e.g. `openssl rand -base64 32 > deploy/secrets/secret_key.txt`).
2. `./.deploy_prod` — creates a new versioned Swarm secret and redeploys
services to pick it up.
3. The old secret lingers until `SECRET_KEEP_VERSIONS` bumps it out (see
README "Secret Versioning & Pruning").
4. For external creds (Neon, B2, APNS, etc.) rotate at the provider first,
update the local secret file, then redeploy.
### Backup Drills
- [ ] Quarterly: pull a Neon backup, restore to a scratch project, boot a
staging stack against it, verify login + basic reads.
- [ ] Monthly: spot-check that B2 objects are actually present and the
app key still works.
- [ ] After any schema change: confirm PITR coverage includes the new
columns before relying on it.
### Certificate Management
- TLS is terminated by Cloudflare today, so there are no origin certs to
renew. If you ever move TLS on-origin (Traefik, Caddy), automate renewal
— don't add it to this list and expect it to happen.
### Multi-Arch Builds
`./.deploy_prod` builds for the host arch. If target ≠ host:
- [ ] Enable buildx: `docker buildx create --use`.
- [ ] Install QEMU: `docker run --privileged --rm tonistiigi/binfmt --install all`.
- [ ] Build + push images manually per target platform.
- [ ] Run `SKIP_BUILD=1 ./.deploy_prod` so the script just deploys.
### Node Maintenance / Rolling Upgrades
- [ ] `docker node update --availability drain <node>` before OS upgrades.
- [ ] Reboot, verify, then `docker node update --availability active <node>`.
- [ ] Re-converge with `docker stack deploy -c swarm-stack.prod.yml honeydue`.
---
## Incident Response
### Redis Node Dies
The Redis named volume is per-node and does not follow a rescheduled container. Accept the loss:
1. Let Swarm reschedule Redis on a new node.
2. In-flight Asynq jobs are lost; retry semantics cover most of them.
3. Scheduled cron events fire again on the next tick (hourly for smart
reminders and daily digest; daily for onboarding + cleanup).
4. Cache repopulates on first request.
### Deploy Rolled Back Automatically
`./.deploy_prod` triggers `docker service rollback` on every service if
`DEPLOY_HEALTHCHECK_URL` fails. Diagnose with:
```bash
ssh <manager> docker stack services honeydue
ssh <manager> docker service logs --tail 200 honeydue_api
# Or open an SSH tunnel to Dozzle: ssh -L 9999:127.0.0.1:9999 <manager>
```
### Lost Ability to Deploy
- Registry token revoked → regenerate, update `deploy/registry.env`, re-run.
- Manager host key changed → verify legitimacy, update `~/.ssh/known_hosts`.
- All secrets accidentally pruned → restore the `deploy/secrets/*` files
locally and redeploy; new Swarm secret versions will be created.
---
## Known Gaps (Future Work)
- No dedicated `cmd/migrate` binary — migrations run at API boot (see
README §10). Large schema changes still need manual coordination.
- `asynq.Scheduler` has no leader election; `WORKER_REPLICAS` must stay 1
until we migrate to `asynq.PeriodicTaskManager` (README §9).
- No Prometheus / Grafana / alerting in the stack. `/metrics` is exposed
on the API but nothing scrapes it.
- No automated TLS renewal on-origin — add if you ever move off Cloudflare.
- No staging environment wired to the deploy script — `DEPLOY_TAG=<sha>`
is the closest thing. A proper staging flow is future work.


@@ -3,7 +3,7 @@ version: "3.8"
services:
redis:
image: redis:7-alpine
command: redis-server --appendonly yes --appendfsync everysec
command: redis-server --appendonly yes --appendfsync everysec --maxmemory 200mb --maxmemory-policy allkeys-lru
volumes:
- redis_data:/data
healthcheck:
@@ -18,6 +18,13 @@ services:
delay: 5s
placement:
max_replicas_per_node: 1
resources:
limits:
cpus: "0.50"
memory: 256M
reservations:
cpus: "0.10"
memory: 64M
networks:
- honeydue-network
@@ -67,6 +74,17 @@ services:
STORAGE_MAX_FILE_SIZE: "${STORAGE_MAX_FILE_SIZE}"
STORAGE_ALLOWED_TYPES: "${STORAGE_ALLOWED_TYPES}"
# S3-compatible object storage (Backblaze B2, MinIO). When all B2_* vars
# are set, uploads/media are stored in the bucket and the local volume
# mount becomes a no-op fallback. Required for multi-replica prod —
# without it uploads only exist on one node.
B2_ENDPOINT: "${B2_ENDPOINT}"
B2_KEY_ID: "${B2_KEY_ID}"
B2_APP_KEY: "${B2_APP_KEY}"
B2_BUCKET_NAME: "${B2_BUCKET_NAME}"
B2_USE_SSL: "${B2_USE_SSL}"
B2_REGION: "${B2_REGION}"
FEATURE_PUSH_ENABLED: "${FEATURE_PUSH_ENABLED}"
FEATURE_EMAIL_ENABLED: "${FEATURE_EMAIL_ENABLED}"
FEATURE_WEBHOOKS_ENABLED: "${FEATURE_WEBHOOKS_ENABLED}"
@@ -86,6 +104,7 @@ services:
APPLE_IAP_SANDBOX: "${APPLE_IAP_SANDBOX}"
GOOGLE_IAP_SERVICE_ACCOUNT_PATH: "${GOOGLE_IAP_SERVICE_ACCOUNT_PATH}"
GOOGLE_IAP_PACKAGE_NAME: "${GOOGLE_IAP_PACKAGE_NAME}"
stop_grace_period: 60s
command:
- /bin/sh
- -lc
@@ -128,6 +147,13 @@ services:
parallelism: 1
delay: 5s
order: stop-first
resources:
limits:
cpus: "1.00"
memory: 512M
reservations:
cpus: "0.25"
memory: 128M
networks:
- honeydue-network
@@ -142,10 +168,12 @@ services:
PORT: "3000"
HOSTNAME: "0.0.0.0"
NEXT_PUBLIC_API_URL: "${NEXT_PUBLIC_API_URL}"
stop_grace_period: 60s
healthcheck:
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://127.0.0.1:3000/admin/"]
test: ["CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://127.0.0.1:3000/api/health"]
interval: 30s
timeout: 10s
start_period: 20s
retries: 3
deploy:
replicas: ${ADMIN_REPLICAS}
@@ -160,6 +188,13 @@ services:
parallelism: 1
delay: 5s
order: stop-first
resources:
limits:
cpus: "0.50"
memory: 384M
reservations:
cpus: "0.10"
memory: 128M
networks:
- honeydue-network
@@ -201,6 +236,7 @@ services:
FEATURE_ONBOARDING_EMAILS_ENABLED: "${FEATURE_ONBOARDING_EMAILS_ENABLED}"
FEATURE_PDF_REPORTS_ENABLED: "${FEATURE_PDF_REPORTS_ENABLED}"
FEATURE_WORKER_ENABLED: "${FEATURE_WORKER_ENABLED}"
stop_grace_period: 60s
command:
- /bin/sh
- -lc
@@ -222,6 +258,12 @@ services:
target: fcm_server_key
- source: ${APNS_AUTH_KEY_SECRET}
target: apns_auth_key
healthcheck:
test: ["CMD", "curl", "-f", "http://127.0.0.1:6060/health"]
interval: 30s
timeout: 10s
start_period: 15s
retries: 3
deploy:
replicas: ${WORKER_REPLICAS}
restart_policy:
@@ -235,16 +277,28 @@ services:
parallelism: 1
delay: 5s
order: stop-first
resources:
limits:
cpus: "1.00"
memory: 512M
reservations:
cpus: "0.25"
memory: 128M
networks:
- honeydue-network
dozzle:
# NOTE: Dozzle exposes the full Docker log stream with no built-in auth.
# Bound to manager loopback only — access via SSH tunnel:
# ssh -L ${DOZZLE_PORT}:127.0.0.1:${DOZZLE_PORT} <manager>
# Then browse http://localhost:${DOZZLE_PORT}
image: amir20/dozzle:latest
ports:
- target: 8080
published: ${DOZZLE_PORT}
protocol: tcp
mode: ingress
mode: host
host_ip: 127.0.0.1
environment:
DOZZLE_NO_ANALYTICS: "true"
volumes:
@@ -257,6 +311,13 @@ services:
placement:
constraints:
- node.role == manager
resources:
limits:
cpus: "0.25"
memory: 128M
reservations:
cpus: "0.05"
memory: 32M
networks:
- honeydue-network


@@ -523,39 +523,6 @@ paths:
items:
$ref: '#/components/schemas/TaskTemplateResponse'
/tasks/templates/by-region/:
get:
tags: [Static Data]
operationId: getTaskTemplatesByRegion
summary: Get task templates for a climate region by state or ZIP code
description: Returns templates matching the climate zone for a given US state abbreviation or ZIP code. At least one parameter is required. If both are provided, state takes priority.
parameters:
- name: state
in: query
required: false
schema:
type: string
example: MA
description: US state abbreviation (e.g., MA, FL, TX)
- name: zip
in: query
required: false
schema:
type: string
example: "02101"
description: US ZIP code (resolved to state on the server)
responses:
'200':
description: Regional templates for the climate zone
content:
application/json:
schema:
type: array
items:
$ref: '#/components/schemas/TaskTemplateResponse'
'400':
$ref: '#/components/responses/BadRequest'
/tasks/templates/{id}/:
get:
tags: [Static Data]
@@ -972,6 +939,34 @@ paths:
'403':
$ref: '#/components/responses/Forbidden'
/tasks/bulk/:
post:
tags: [Tasks]
operationId: bulkCreateTasks
summary: Create multiple tasks atomically
description: Inserts 1-50 tasks in a single database transaction. If any entry fails, the entire batch is rolled back. Used primarily by onboarding to create the user's initial task list in one request.
security:
- tokenAuth: []
requestBody:
required: true
content:
application/json:
schema:
$ref: '#/components/schemas/BulkCreateTasksRequest'
responses:
'201':
description: All tasks created
content:
application/json:
schema:
$ref: '#/components/schemas/BulkCreateTasksResponse'
'400':
$ref: '#/components/responses/ValidationError'
'401':
$ref: '#/components/responses/Unauthorized'
'403':
$ref: '#/components/responses/Forbidden'
/tasks/by-residence/{residence_id}/:
get:
tags: [Tasks]
@@ -2704,6 +2699,105 @@ paths:
'404':
$ref: '#/components/responses/NotFound'
/auth/account/:
delete:
tags: [Authentication]
summary: Delete user account
description: Permanently deletes the authenticated user's account and all associated data
security:
- tokenAuth: []
requestBody:
content:
application/json:
schema:
type: object
properties:
password:
type: string
description: Required for email-auth users
confirmation:
type: string
description: Must be "DELETE" for social-auth users
responses:
'200':
description: Account deleted successfully
'400':
$ref: '#/components/responses/BadRequest'
'401':
$ref: '#/components/responses/Unauthorized'
/auth/refresh/:
post:
tags: [Authentication]
summary: Refresh auth token
description: Returns a new token if current token is in the renewal window (60-90 days old)
security:
- tokenAuth: []
responses:
'200':
description: Token refreshed
content:
application/json:
schema:
type: object
properties:
token:
type: string
'401':
$ref: '#/components/responses/Unauthorized'
/health/live:
get:
tags: [Health]
summary: Liveness probe
description: Simple liveness check, always returns 200
responses:
'200':
description: Alive
/tasks/suggestions/:
get:
tags: [Tasks]
summary: Get personalized task template suggestions
description: Returns task templates ranked by relevance to the residence's home profile
security:
- tokenAuth: []
parameters:
- name: residence_id
in: query
required: true
schema:
type: integer
responses:
'200':
description: Suggestions with relevance scores
content:
application/json:
schema:
type: object
properties:
suggestions:
type: array
items:
type: object
properties:
template:
$ref: '#/components/schemas/TaskTemplate'
relevance_score:
type: number
match_reasons:
type: array
items:
type: string
total_count:
type: integer
profile_completeness:
type: number
'401':
$ref: '#/components/responses/Unauthorized'
'403':
$ref: '#/components/responses/Forbidden'
# =============================================================================
# Components
# =============================================================================
@@ -3591,6 +3685,38 @@ components:
type: integer
format: uint
nullable: true
template_id:
type: integer
format: uint
nullable: true
description: TaskTemplate ID this task was spawned from (onboarding suggestion, browse-catalog pick). Omit for custom tasks.
BulkCreateTasksRequest:
type: object
required: [residence_id, tasks]
properties:
residence_id:
type: integer
format: uint
description: Residence that owns every task in the batch; overrides the per-entry residence_id.
tasks:
type: array
minItems: 1
maxItems: 50
items:
$ref: '#/components/schemas/CreateTaskRequest'
BulkCreateTasksResponse:
type: object
properties:
tasks:
type: array
items:
$ref: '#/components/schemas/TaskResponse'
summary:
$ref: '#/components/schemas/TotalSummary'
created_count:
type: integer
UpdateTaskRequest:
type: object
@@ -3728,6 +3854,11 @@ components:
type: integer
format: uint
nullable: true
template_id:
type: integer
format: uint
nullable: true
description: TaskTemplate this task was spawned from; nil for custom user tasks.
completion_count:
type: integer
kanban_column:

docs/server_2026_2_24.md

@@ -0,0 +1,302 @@
# Casera Infrastructure Plan — February 2026
## Architecture Overview
```
┌─────────────┐
│ Cloudflare │
│ (CDN/DNS) │
└──────┬──────┘
│ HTTPS
┌──────┴──────┐
│ Hetzner LB │
│ ($5.99) │
└──────┬──────┘
┌────────────────┼────────────────┐
│ │ │
┌──────┴──────┐ ┌──────┴──────┐ ┌──────┴──────┐
│ CX33 #1 │ │ CX33 #2 │ │ CX33 #3 │
│ (manager) │ │ (manager) │ │ (manager) │
│ │ │ │ │ │
│ api (x2) │ │ api (x2) │ │ api (x1) │
│ admin │ │ worker │ │ worker │
│ redis │ │ dozzle │ │ │
└──────┬──────┘ └──────┬──────┘ └──────┬──────┘
│ │ │
│ Docker Swarm Overlay (IPsec) │
└────────────────┼────────────────┘
┌────────────┼────────────────┐
│ │
┌──────┴──────┐ ┌───────┴──────┐
│ Neon │ │ Backblaze │
│ (Postgres) │ │ B2 │
│ Launch │ │ (media) │
└─────────────┘ └──────────────┘
```
## Swarm Nodes — Hetzner CX33
All 3 nodes are manager+worker (Raft consensus requires 3 managers for fault tolerance — 1 node can go down and the cluster stays operational).
| Spec | Value |
|------|-------|
| Plan | CX33 (Shared Regular Performance) |
| vCPU | 4 |
| RAM | 8 GB |
| Disk | 80 GB SSD |
| Traffic | 20 TB/mo included |
| Price | $6.59/mo per node |
| Region | Pick closest to users (US: Ashburn or Hillsboro, EU: Nuremberg/Falkenstein/Helsinki) |
**Why CX33 over CX23:** 8 GB RAM gives headroom for Redis, multiple API replicas, and the admin panel without pressure. The $2.50/mo difference per node isn't worth optimizing away.
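The manager-count choice above follows Raft quorum arithmetic: N managers need floor(N/2)+1 reachable peers, so they tolerate floor((N-1)/2) failures. A quick illustration of why 3 (not 2 or 4) is the sweet spot:

```go
package main

import "fmt"

// Raft quorum arithmetic for Swarm managers: quorum = floor(N/2)+1,
// tolerated failures = floor((N-1)/2). Note that 2 managers tolerate
// zero failures (worse than 1), and 4 tolerate no more than 3.
func main() {
	for _, n := range []int{1, 2, 3, 4, 5} {
		fmt.Printf("%d managers: quorum %d, tolerates %d failure(s)\n",
			n, n/2+1, (n-1)/2)
	}
}
```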
### Container Distribution
| Container | Replicas | Notes |
|-----------|----------|-------|
| api | 3-6 | Spread across all nodes by Swarm |
| worker | 2-3 | Asynq workers pull jobs from Redis concurrently |
| admin | 1 | Next.js admin panel |
| redis | 1 | Pinned to one node with its volume |
| dozzle | 1 | Pinned to a manager node (needs Docker socket) |
### Scaling Path
- Need more capacity? Add another CX33 with `docker swarm join`. Swarm rebalances automatically.
- Need more API throughput? Bump replicas in the compose file. No infra change.
- Only infrastructure addition needed at scale: the Hetzner Load Balancer ($5.99/mo).
## Load Balancer — Hetzner LB
| Spec | Value |
|------|-------|
| Price | $5.99/mo |
| Purpose | Distribute traffic across Swarm nodes, TLS termination |
| When to add | When you need redundant ingress (not required day 1 if using Cloudflare to proxy to a single node) |
## Database — Neon Postgres (Launch Plan)
| Spec | Value |
|------|-------|
| Plan | Launch (usage-based, no monthly minimum) |
| Compute | $0.106/CU-hr, up to 16 CU (64 GB RAM) |
| Storage | $0.35/GB-month |
| Connections | Up to 10,000 via built-in PgBouncer |
| Typical cost | ~$5-15/mo for light load, ~$20-40/mo at 100k users |
| Free tier | Available for dev/staging (100 CU-hrs/mo, 0.5 GB) |
### Connection Pooling
Neon includes built-in PgBouncer on all plans. Enable by adding `-pooler` to the hostname:
```
# Direct connection
ep-cool-darkness-123456.us-east-2.aws.neon.tech
# Pooled connection (use this in production)
ep-cool-darkness-123456-pooler.us-east-2.aws.neon.tech
```
Runs in transaction mode — compatible with GORM out of the box.
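The hostname rewrite is mechanical — append `-pooler` to the endpoint ID, i.e. the first dot-separated label. A tiny illustrative helper (not part of the codebase):

```go
package main

import (
	"fmt"
	"strings"
)

// pooledHost converts a direct Neon hostname into its pooled (PgBouncer)
// equivalent by suffixing the endpoint ID with "-pooler". Idempotent:
// already-pooled hostnames pass through unchanged.
func pooledHost(direct string) string {
	parts := strings.SplitN(direct, ".", 2)
	if len(parts) != 2 || strings.HasSuffix(parts[0], "-pooler") {
		return direct // already pooled, or not a dotted hostname
	}
	return parts[0] + "-pooler." + parts[1]
}

func main() {
	fmt.Println(pooledHost("ep-cool-darkness-123456.us-east-2.aws.neon.tech"))
	// → ep-cool-darkness-123456-pooler.us-east-2.aws.neon.tech
}
```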
### Configuration
```env
DB_HOST=ep-xxxxx-pooler.us-east-2.aws.neon.tech
DB_PORT=5432
DB_SSLMODE=require
POSTGRES_USER=<from neon dashboard>
POSTGRES_PASSWORD=<from neon dashboard>
POSTGRES_DB=casera
```
```
## Object Storage — Backblaze B2
| Spec | Value |
|------|-------|
| Storage | $6/TB/mo ($0.006/GB) |
| Egress | $0.01/GB (first 3x stored amount is free) |
| Free tier | 10 GB storage always free |
| API calls | Class A free, Class B/C free first 2,500/day |
| Spending cap | Built-in data caps with alerts at 75% and 100% |
### Bucket Setup
| Bucket | Visibility | Key Permissions | Contents |
|--------|------------|-----------------|----------|
| `casera-uploads` | Private | Read/Write (API containers) | User-uploaded photos, documents |
| `casera-certs` | Private | Read-only (API + worker) | APNs push certificates |
Serve files through the API using signed URLs — never expose buckets publicly.
### Why B2 Over Others
- **Spending cap**: only S3-compatible provider with built-in hard caps and alerts. No surprise bills.
- **Cheapest storage**: $6/TB vs Cloudflare R2 at $15/TB vs Tigris at $20/TB.
- **Free egress partner CDNs**: Cloudflare, Fastly, bunny.net — zero egress when behind Cloudflare.
## CDN — Cloudflare (Free Tier)
| Spec | Value |
|------|-------|
| Price | $0 |
| Purpose | DNS, CDN caching, DDoS protection, TLS termination |
| Setup | Point DNS to Cloudflare, proxy traffic to Hetzner LB (or directly to a Swarm node) |
Add this on day 1. No reason not to.
## Logging — Dozzle
| Spec | Value |
|------|-------|
| Price | $0 (open source) |
| Port | 9999 (internal only — do not expose publicly) |
| Features | Real-time log viewer, webhook support for alerts |
Runs as a container in the Swarm. Needs Docker socket access, so it's pinned to a manager node.
For 100k+ users, consider adding Prometheus + Grafana (self-hosted, free) or Betterstack (~$10/mo) for metrics and alerting beyond log viewing.
## Security
### Swarm Node Firewall (Hetzner Cloud Firewall — free)
| Port | Protocol | Source | Purpose |
|------|----------|--------|---------|
| Custom (e.g. 2222) | TCP | Your IP only | SSH |
| 80, 443 | TCP | Anywhere | Public traffic |
| 2377 | TCP | Swarm nodes only | Cluster management |
| 7946 | TCP/UDP | Swarm nodes only | Node discovery |
| 4789 | UDP | Swarm nodes only | Overlay network (VXLAN) |
| Everything else | — | — | Blocked |
Set up once in Hetzner dashboard, apply to all 3 nodes.
### SSH Hardening
```
# /etc/ssh/sshd_config
Port 2222 # Non-default port
PermitRootLogin no # No root SSH
PasswordAuthentication no # Key-only auth
PubkeyAuthentication yes
AllowUsers deploy # Only your deploy user
```
```
### Swarm ↔ Neon (Postgres)
| Layer | Method |
|-------|--------|
| Encryption | TLS enforced by Neon (`DB_SSLMODE=require`) |
| Authentication | Strong password stored as Docker secret |
| Access control | IP allowlist in Neon dashboard — restrict to 3 Swarm node IPs |
### Swarm ↔ B2 (Object Storage)
| Layer | Method |
|-------|--------|
| Encryption | HTTPS always (enforced by B2 API) |
| Authentication | Scoped application keys (not master key) |
| Access control | Per-bucket key permissions (read-only where possible) |
### Swarm Internal
| Layer | Method |
|-------|--------|
| Overlay encryption | `driver_opts: encrypted: "true"` on overlay network (IPsec between nodes) |
| Secrets | Use `docker secret create` for DB password, SECRET_KEY, B2 keys, APNs keys. Mounted at `/run/secrets/`, encrypted in Swarm raft log. |
| Container isolation | Non-root users in all containers (already configured in Dockerfile) |
### Docker Secrets Migration
Current setup uses environment variables for secrets. Migrate to Docker secrets for production:
```bash
# Create secrets
echo "your-db-password" | docker secret create postgres_password -
echo "your-secret-key" | docker secret create secret_key -
echo "your-b2-app-key" | docker secret create b2_app_key -
# Reference in compose file
services:
api:
secrets:
- postgres_password
- secret_key
secrets:
postgres_password:
external: true
secret_key:
external: true
```
Application code reads from `/run/secrets/<name>` instead of env vars.
## Redis (In-Cluster)
Redis stays inside the Swarm — no need to externalize.
| Purpose | Details |
|---------|---------|
| Asynq job queue | Background jobs: push notifications, digests, reminders, onboarding emails |
| Static data cache | Cached lookup tables with ETag support |
| Resource usage | ~20-50 MB RAM, negligible CPU |
At 100k users, Redis handles job queuing for nightly digests (100k enqueue + dequeue operations) without issue. A single Redis instance handles millions of operations per second.
Asynq coordinates multiple worker replicas automatically — each job is dequeued atomically by exactly one worker, no double-processing.
## Performance Estimates
| Metric | Value |
|--------|-------|
| Single CX33 API throughput | ~1,000-2,000 req/s (blended, with Neon latency) |
| 3-node cluster throughput | ~3,000-6,000 req/s |
| Avg requests per user per day | ~50 |
| Estimated user capacity (3 nodes) | ~200k-500k registered users |
| Bottleneck at scale | Neon compute tier, not Go or Swarm |
These are napkin estimates. Load test before launch.
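The napkin math behind the capacity row, written out (the 10x peak-to-average factor is an assumption, not a measured number):

```go
package main

import "fmt"

// Average request rate implied by N registered users at ~50 requests per
// user per day, plus an assumed 10x peak factor. At 500k users the peak
// (~2,900 req/s) still fits inside the 3,000-6,000 req/s cluster estimate.
func main() {
	const reqPerUserPerDay = 50.0
	const peakFactor = 10.0
	for _, users := range []float64{100_000, 500_000} {
		avg := users * reqPerUserPerDay / 86_400 // seconds per day
		fmt.Printf("%.0f users: ~%.0f req/s avg, ~%.0f req/s peak\n",
			users, avg, avg*peakFactor)
	}
}
```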
## Monthly Cost Summary
### Starting Out
| Component | Provider | Cost |
|-----------|----------|------|
| 3x Swarm nodes | Hetzner CX33 | $19.77/mo |
| Postgres | Neon Launch | ~$5-15/mo |
| Object storage | Backblaze B2 | <$1/mo |
| CDN | Cloudflare Free | $0 |
| Logging | Dozzle (self-hosted) | $0 |
| **Total** | | **~$25-35/mo** |
### At Scale (100k users)
| Component | Provider | Cost |
|-----------|----------|------|
| 3x Swarm nodes | Hetzner CX33 | $19.77/mo |
| Load balancer | Hetzner LB | $5.99/mo |
| Postgres | Neon Launch | ~$20-40/mo |
| Object storage | Backblaze B2 | ~$1-3/mo |
| CDN | Cloudflare Free | $0 |
| Monitoring | Betterstack or self-hosted | ~$0-10/mo |
| **Total** | | **~$47-79/mo** |
## TODO
- [ ] Set up 3x Hetzner CX33 instances
- [ ] Initialize Docker Swarm (`docker swarm init` on first node, `docker swarm join` on others)
- [ ] Configure Hetzner Cloud Firewall
- [ ] Harden SSH on all nodes
- [ ] Create Neon project (Launch plan), configure IP allowlist
- [ ] Create Backblaze B2 buckets with scoped application keys
- [ ] Set up Cloudflare DNS proxying
- [ ] Update prod compose file: remove `db` service, add overlay encryption, add Docker secrets
- [ ] Add B2 SDK integration for file uploads (code change)
- [ ] Update config to read from `/run/secrets/` for Docker secrets
- [ ] Set B2 spending cap and alerts
- [ ] Load test the deployed stack
- [ ] Add Hetzner LB when needed

go.mod

@@ -1,6 +1,6 @@
module github.com/treytartt/honeydue-api
go 1.24.0
go 1.25
require (
github.com/go-pdf/fpdf v0.9.0
@@ -10,6 +10,7 @@ require (
github.com/gorilla/websocket v1.5.3
github.com/hibiken/asynq v0.25.1
github.com/labstack/echo/v4 v4.11.4
github.com/minio/minio-go/v7 v7.0.99
github.com/nicksnyder/go-i18n/v2 v2.6.0
github.com/redis/go-redis/v9 v9.17.1
github.com/rs/zerolog v1.34.0
@@ -20,9 +21,9 @@ require (
github.com/stretchr/testify v1.11.1
github.com/stripe/stripe-go/v81 v81.4.0
github.com/wneessen/go-mail v0.7.2
golang.org/x/crypto v0.45.0
golang.org/x/crypto v0.46.0
golang.org/x/oauth2 v0.34.0
golang.org/x/text v0.31.0
golang.org/x/text v0.32.0
golang.org/x/time v0.14.0
google.golang.org/api v0.257.0
gopkg.in/yaml.v3 v3.0.1
@@ -31,6 +32,20 @@ require (
gorm.io/gorm v1.31.1
)
require (
github.com/dustin/go-humanize v1.0.1 // indirect
github.com/go-ini/ini v1.67.0 // indirect
github.com/klauspost/compress v1.18.2 // indirect
github.com/klauspost/cpuid/v2 v2.2.11 // indirect
github.com/klauspost/crc32 v1.3.0 // indirect
github.com/minio/crc64nvme v1.1.1 // indirect
github.com/minio/md5-simd v1.1.2 // indirect
github.com/philhofer/fwd v1.2.0 // indirect
github.com/rs/xid v1.6.0 // indirect
github.com/tinylib/msgp v1.6.1 // indirect
go.yaml.in/yaml/v3 v3.0.4 // indirect
)
require (
cloud.google.com/go/auth v0.17.0 // indirect
cloud.google.com/go/auth/oauth2adapt v0.2.8 // indirect
@@ -85,9 +100,9 @@ require (
go.opentelemetry.io/otel v1.38.0 // indirect
go.opentelemetry.io/otel/metric v1.38.0 // indirect
go.opentelemetry.io/otel/trace v1.38.0 // indirect
golang.org/x/net v0.47.0 // indirect
golang.org/x/sync v0.18.0 // indirect
golang.org/x/sys v0.38.0 // indirect
golang.org/x/net v0.48.0 // indirect
golang.org/x/sync v0.19.0 // indirect
golang.org/x/sys v0.39.0 // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20251124214823-79d6a2a48846 // indirect
google.golang.org/grpc v1.77.0 // indirect
google.golang.org/protobuf v1.36.10 // indirect

go.sum

@@ -20,6 +20,8 @@ github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
@@ -28,6 +30,8 @@ github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S
github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0=
github.com/gabriel-vasile/mimetype v1.4.8 h1:FfZ3gj38NjllZIeJAmMhr+qKL8Wu+nOoI3GqacKw1NM=
github.com/gabriel-vasile/mimetype v1.4.8/go.mod h1:ByKUIKGjh1ODkGM1asKUbQZOLGrPjydw3hYPU2YU9t8=
github.com/go-ini/ini v1.67.0 h1:z6ZrTEZqSWOTyH2FlglNbNgARyHG8oLW9gMELqKr06A=
github.com/go-ini/ini v1.67.0/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
@@ -84,6 +88,13 @@ github.com/jinzhu/inflection v1.0.0 h1:K317FqzuhWc8YvSVlFMCCUb36O/S9MCKRDI7QkRKD
github.com/jinzhu/inflection v1.0.0/go.mod h1:h+uFLlag+Qp1Va5pdKtLDYj+kHp5pxUVkryuEj+Srlc=
github.com/jinzhu/now v1.1.5 h1:/o9tlHleP7gOFmsnYNz3RGnqzefHA47wQpKrrdTIwXQ=
github.com/jinzhu/now v1.1.5/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8=
github.com/klauspost/compress v1.18.2 h1:iiPHWW0YrcFgpBYhsA6D1+fqHssJscY/Tm/y2Uqnapk=
github.com/klauspost/compress v1.18.2/go.mod h1:R0h/fSBs8DE4ENlcrlib3PsXS61voFxhIs2DeRhCvJ4=
github.com/klauspost/cpuid/v2 v2.0.1/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.2.11 h1:0OwqZRYI2rFrjS4kvkDnqJkKHdHaRnCm68/DY4OxRzU=
github.com/klauspost/cpuid/v2 v2.2.11/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
github.com/klauspost/crc32 v1.3.0 h1:sSmTt3gUt81RP655XGZPElI0PelVTZ6YwCRnPSupoFM=
github.com/klauspost/crc32 v1.3.0/go.mod h1:D7kQaZhnkX/Y0tstFGf8VUzv2UofNGqCjnC3zdHB0Hw=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
@@ -104,10 +115,18 @@ github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWE
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-sqlite3 v2.0.3+incompatible h1:gXHsfypPkaMZrKbD5209QV9jbUTJKjyR5WD3HYQSd+U=
github.com/mattn/go-sqlite3 v2.0.3+incompatible/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOqkbpncsNc=
github.com/minio/crc64nvme v1.1.1 h1:8dwx/Pz49suywbO+auHCBpCtlW1OfpcLN7wYgVR6wAI=
github.com/minio/crc64nvme v1.1.1/go.mod h1:eVfm2fAzLlxMdUGc0EEBGSMmPwmXD5XiNRpnu9J3bvg=
github.com/minio/md5-simd v1.1.2 h1:Gdi1DZK69+ZVMoNHRXJyNcxrMA4dSxoYHZSQbirFg34=
github.com/minio/md5-simd v1.1.2/go.mod h1:MzdKDxYpY2BT9XQFocsiZf/NKVtR7nkE4RoEpN+20RM=
github.com/minio/minio-go/v7 v7.0.99 h1:2vH/byrwUkIpFQFOilvTfaUpvAX3fEFhEzO+DR3DlCE=
github.com/minio/minio-go/v7 v7.0.99/go.mod h1:EtGNKtlX20iL2yaYnxEigaIvj0G0GwSDnifnG8ClIdw=
github.com/nicksnyder/go-i18n/v2 v2.6.0 h1:C/m2NNWNiTB6SK4Ao8df5EWm3JETSTIGNXBpMJTxzxQ=
github.com/nicksnyder/go-i18n/v2 v2.6.0/go.mod h1:88sRqr0C6OPyJn0/KRNaEz1uWorjxIKP7rUUcvycecE=
github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4=
github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
github.com/philhofer/fwd v1.2.0 h1:e6DnBTl7vGY+Gz322/ASL4Gyp1FspeMvx1RNDoToZuM=
github.com/philhofer/fwd v1.2.0/go.mod h1:RqIHx9QI14HlwKwm98g9Re5prTQ6LdeRQn+gXJFxsJM=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
@@ -119,6 +138,7 @@ github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/rs/xid v1.6.0 h1:fV591PaemRlL6JfRxGDEPl69wICngIQ3shQtzfy2gxU=
github.com/rs/xid v1.6.0/go.mod h1:7XoLgs4eV+QndskICGsho+ADou8ySMSjJKDIan90Nz0=
github.com/rs/zerolog v1.34.0 h1:k43nTLIwcTVQAncfCw4KZ2VY6ukYoZaBPNOE8txlOeY=
github.com/rs/zerolog v1.34.0/go.mod h1:bJsvje4Z08ROH4Nhs5iH600c3IkWhwp44iRc54W6wYQ=
@@ -154,6 +174,8 @@ github.com/stripe/stripe-go/v81 v81.4.0 h1:AuD9XzdAvl193qUCSaLocf8H+nRopOouXhxqJ
github.com/stripe/stripe-go/v81 v81.4.0/go.mod h1:C/F4jlmnGNacvYtBp/LUHCvVUJEZffFQCobkzwY1WOo=
github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8=
github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU=
github.com/tinylib/msgp v1.6.1 h1:ESRv8eL3u+DNHUoSAAQRE50Hm162zqAnBoGv9PzScPY=
github.com/tinylib/msgp v1.6.1/go.mod h1:RSp0LW9oSxFut3KzESt5Voq4GVWyS+PSulT77roAqEA=
github.com/tklauser/go-sysconf v0.3.12 h1:0QaGUFOdQaIVdPgfITYzaTegZvdCjmYO52cSFAEVmqU=
github.com/tklauser/go-sysconf v0.3.12/go.mod h1:Ho14jnntGE1fpdOqQEEaiKRpvIavV0hSfmBq8nJbHYI=
github.com/tklauser/numcpus v0.6.1 h1:ng9scYS7az0Bk4OZLvrNXNSAO2Pxr1XXRAPyjhIx+Fk=
@@ -182,17 +204,19 @@ go.opentelemetry.io/otel/trace v1.38.0 h1:Fxk5bKrDZJUH+AMyyIXGcFAPah0oRcT+LuNtJr
go.opentelemetry.io/otel/trace v1.38.0/go.mod h1:j1P9ivuFsTceSWe1oY+EeW3sc+Pp42sO++GHkg4wwhs=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
golang.org/x/crypto v0.0.0-20170512130425-ab89591268e0/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.45.0 h1:jMBrvKuj23MTlT0bQEOBcAE0mjg8mK9RXFhRH6nyF3Q=
golang.org/x/crypto v0.45.0/go.mod h1:XTGrrkGJve7CYK7J8PEww4aY7gM3qMCElcJQ8n8JdX4=
golang.org/x/crypto v0.46.0 h1:cKRW/pmt1pKAfetfu+RCEvjvZkA9RimPbh7bhFjGVBU=
golang.org/x/crypto v0.46.0/go.mod h1:Evb/oLKmMraqjZ2iQTwDwvCtJkczlDuTmdJXoZVzqU0=
golang.org/x/net v0.0.0-20210520170846-37e1c6afe023/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220403103023-749bd193bc2b/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY=
golang.org/x/net v0.47.0/go.mod h1:/jNxtkgq5yWUGYkaZGqo27cfGZ1c5Nen03aYrrKpVRU=
golang.org/x/net v0.48.0 h1:zyQRTTrjc33Lhh0fBgT/H3oZq9WuvRR5gPC70xpDiQU=
golang.org/x/net v0.48.0/go.mod h1:+ndRgGjkh8FGtu1w1FGbEC31if4VrNVMuKTgcAAnQRY=
golang.org/x/oauth2 v0.34.0 h1:hqK/t4AKgbqWkdkcAeI8XLmbK+4m4G5YeQRrmiotGlw=
golang.org/x/oauth2 v0.34.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -204,14 +228,14 @@ golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc=
golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk=
golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=
golang.org/x/text v0.32.0 h1:ZD01bjUt1FQ9WJ0ClOL5vxgxOI/sVCNgX1YtKwcY0mU=
golang.org/x/text v0.32.0/go.mod h1:o/rUWzghvpD5TXrTIBuJU77MTaN0ljMWE47kxGJQ7jY=
golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=


@@ -0,0 +1,176 @@
package dto
import (
"testing"
)
// --- GetPage ---
func TestGetPage_Zero_Returns1(t *testing.T) {
p := &PaginationParams{Page: 0}
if got := p.GetPage(); got != 1 {
t.Errorf("GetPage(0) = %d, want 1", got)
}
}
func TestGetPage_Negative_Returns1(t *testing.T) {
p := &PaginationParams{Page: -5}
if got := p.GetPage(); got != 1 {
t.Errorf("GetPage(-5) = %d, want 1", got)
}
}
func TestGetPage_Valid_ReturnsValue(t *testing.T) {
p := &PaginationParams{Page: 3}
if got := p.GetPage(); got != 3 {
t.Errorf("GetPage(3) = %d, want 3", got)
}
}
// --- GetPerPage ---
func TestGetPerPage_Zero_Returns20(t *testing.T) {
p := &PaginationParams{PerPage: 0}
if got := p.GetPerPage(); got != 20 {
t.Errorf("GetPerPage(0) = %d, want 20", got)
}
}
func TestGetPerPage_Negative_Returns20(t *testing.T) {
p := &PaginationParams{PerPage: -1}
if got := p.GetPerPage(); got != 20 {
t.Errorf("GetPerPage(-1) = %d, want 20", got)
}
}
func TestGetPerPage_TooLarge_Returns10000(t *testing.T) {
p := &PaginationParams{PerPage: 20000}
if got := p.GetPerPage(); got != 10000 {
t.Errorf("GetPerPage(20000) = %d, want 10000", got)
}
}
func TestGetPerPage_Valid_ReturnsValue(t *testing.T) {
p := &PaginationParams{PerPage: 50}
if got := p.GetPerPage(); got != 50 {
t.Errorf("GetPerPage(50) = %d, want 50", got)
}
}
// --- GetOffset ---
func TestGetOffset_Page1_Returns0(t *testing.T) {
p := &PaginationParams{Page: 1, PerPage: 20}
if got := p.GetOffset(); got != 0 {
t.Errorf("GetOffset(page=1, perPage=20) = %d, want 0", got)
}
}
func TestGetOffset_Page3_PerPage10_Returns20(t *testing.T) {
p := &PaginationParams{Page: 3, PerPage: 10}
if got := p.GetOffset(); got != 20 {
t.Errorf("GetOffset(page=3, perPage=10) = %d, want 20", got)
}
}
func TestGetOffset_Defaults_Returns0(t *testing.T) {
p := &PaginationParams{}
if got := p.GetOffset(); got != 0 {
t.Errorf("GetOffset(defaults) = %d, want 0", got)
}
}
// --- GetSortDir ---
func TestGetSortDir_Asc(t *testing.T) {
p := &PaginationParams{SortDir: "asc"}
if got := p.GetSortDir(); got != "ASC" {
t.Errorf("GetSortDir('asc') = %q, want 'ASC'", got)
}
}
func TestGetSortDir_Desc(t *testing.T) {
p := &PaginationParams{SortDir: "desc"}
if got := p.GetSortDir(); got != "DESC" {
t.Errorf("GetSortDir('desc') = %q, want 'DESC'", got)
}
}
func TestGetSortDir_Empty_ReturnsDesc(t *testing.T) {
p := &PaginationParams{SortDir: ""}
if got := p.GetSortDir(); got != "DESC" {
t.Errorf("GetSortDir('') = %q, want 'DESC'", got)
}
}
func TestGetSortDir_Invalid_ReturnsDesc(t *testing.T) {
p := &PaginationParams{SortDir: "RANDOM"}
if got := p.GetSortDir(); got != "DESC" {
t.Errorf("GetSortDir('RANDOM') = %q, want 'DESC'", got)
}
}
// --- GetSafeSortBy ---
func TestGetSafeSortBy_Allowed(t *testing.T) {
p := &PaginationParams{SortBy: "name"}
got := p.GetSafeSortBy([]string{"name", "email"}, "id")
if got != "name" {
t.Errorf("GetSafeSortBy('name') = %q, want 'name'", got)
}
}
func TestGetSafeSortBy_NotAllowed_ReturnsDefault(t *testing.T) {
p := &PaginationParams{SortBy: "password"}
got := p.GetSafeSortBy([]string{"name", "email"}, "id")
if got != "id" {
t.Errorf("GetSafeSortBy('password') = %q, want 'id'", got)
}
}
func TestGetSafeSortBy_Empty_ReturnsDefault(t *testing.T) {
p := &PaginationParams{SortBy: ""}
got := p.GetSafeSortBy([]string{"name", "email"}, "id")
if got != "id" {
t.Errorf("GetSafeSortBy('') = %q, want 'id'", got)
}
}
// --- NewPaginatedResponse ---
func TestNewPaginatedResponse_ExactPages(t *testing.T) {
resp := NewPaginatedResponse([]string{"a", "b"}, 40, 1, 20)
if resp.TotalPages != 2 {
t.Errorf("TotalPages = %d, want 2", resp.TotalPages)
}
if resp.Total != 40 {
t.Errorf("Total = %d, want 40", resp.Total)
}
if resp.Page != 1 {
t.Errorf("Page = %d, want 1", resp.Page)
}
if resp.PerPage != 20 {
t.Errorf("PerPage = %d, want 20", resp.PerPage)
}
}
func TestNewPaginatedResponse_PartialLastPage(t *testing.T) {
resp := NewPaginatedResponse(nil, 21, 1, 20)
if resp.TotalPages != 2 {
t.Errorf("TotalPages = %d, want 2", resp.TotalPages)
}
}
func TestNewPaginatedResponse_SinglePage(t *testing.T) {
resp := NewPaginatedResponse(nil, 5, 1, 20)
if resp.TotalPages != 1 {
t.Errorf("TotalPages = %d, want 1", resp.TotalPages)
}
}
func TestNewPaginatedResponse_ZeroTotal(t *testing.T) {
resp := NewPaginatedResponse(nil, 0, 1, 20)
if resp.TotalPages != 0 {
t.Errorf("TotalPages = %d, want 0", resp.TotalPages)
}
}
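The TotalPages expectations above (40/20 → 2, 21/20 → 2, 5/20 → 1, 0 → 0) are consistent with integer ceiling division. A minimal standalone sketch of that arithmetic — the `totalPages` helper name is illustrative, not taken from the package:

```go
package main

import "fmt"

// totalPages mirrors the ceiling-division behaviour the tests above expect:
// a partial last page still counts as a page, and zero total yields zero pages.
func totalPages(total int64, perPage int) int {
	if total == 0 || perPage <= 0 {
		return 0
	}
	return int((total + int64(perPage) - 1) / int64(perPage))
}

func main() {
	for _, total := range []int64{40, 21, 5, 0} {
		fmt.Println(total, totalPages(total, 20))
	}
}
```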


@@ -0,0 +1,109 @@
package apperrors
import (
"errors"
"net/http"
"testing"
"github.com/stretchr/testify/assert"
)
func TestNotFound(t *testing.T) {
err := NotFound("error.task_not_found")
assert.Equal(t, http.StatusNotFound, err.Code)
assert.Equal(t, "error.task_not_found", err.MessageKey)
assert.Empty(t, err.Message)
assert.Nil(t, err.Err)
}
func TestForbidden(t *testing.T) {
err := Forbidden("error.residence_access_denied")
assert.Equal(t, http.StatusForbidden, err.Code)
assert.Equal(t, "error.residence_access_denied", err.MessageKey)
}
func TestBadRequest(t *testing.T) {
err := BadRequest("error.invalid_request_body")
assert.Equal(t, http.StatusBadRequest, err.Code)
assert.Equal(t, "error.invalid_request_body", err.MessageKey)
}
func TestUnauthorized(t *testing.T) {
err := Unauthorized("error.not_authenticated")
assert.Equal(t, http.StatusUnauthorized, err.Code)
assert.Equal(t, "error.not_authenticated", err.MessageKey)
}
func TestConflict(t *testing.T) {
err := Conflict("error.email_taken")
assert.Equal(t, http.StatusConflict, err.Code)
assert.Equal(t, "error.email_taken", err.MessageKey)
}
func TestTooManyRequests(t *testing.T) {
err := TooManyRequests("error.rate_limit_exceeded")
assert.Equal(t, http.StatusTooManyRequests, err.Code)
assert.Equal(t, "error.rate_limit_exceeded", err.MessageKey)
}
func TestInternal(t *testing.T) {
underlying := errors.New("database connection failed")
err := Internal(underlying)
assert.Equal(t, http.StatusInternalServerError, err.Code)
assert.Equal(t, "error.internal", err.MessageKey)
assert.Equal(t, underlying, err.Err)
}
func TestAppError_Error_WithWrappedError(t *testing.T) {
underlying := errors.New("connection refused")
err := Internal(underlying).WithMessage("database error")
assert.Equal(t, "database error: connection refused", err.Error())
}
func TestAppError_Error_WithMessageOnly(t *testing.T) {
err := NotFound("error.task_not_found").WithMessage("Task not found")
assert.Equal(t, "Task not found", err.Error())
}
func TestAppError_Error_MessageKeyFallback(t *testing.T) {
err := NotFound("error.task_not_found")
// No Message set, no Err set — should fall back to MessageKey
assert.Equal(t, "error.task_not_found", err.Error())
}
func TestAppError_Unwrap(t *testing.T) {
underlying := errors.New("wrapped error")
err := Internal(underlying)
assert.Equal(t, underlying, errors.Unwrap(err))
}
func TestAppError_Unwrap_Nil(t *testing.T) {
err := NotFound("error.task_not_found")
assert.Nil(t, errors.Unwrap(err))
}
func TestAppError_WithMessage(t *testing.T) {
err := NotFound("error.task_not_found").WithMessage("custom message")
assert.Equal(t, "custom message", err.Message)
assert.Equal(t, "error.task_not_found", err.MessageKey)
}
func TestAppError_Wrap(t *testing.T) {
underlying := errors.New("some error")
err := BadRequest("error.invalid_request_body").Wrap(underlying)
assert.Equal(t, underlying, err.Err)
assert.Equal(t, http.StatusBadRequest, err.Code)
}
func TestAppError_ImplementsError(t *testing.T) {
var err error = NotFound("error.task_not_found")
assert.NotNil(t, err)
assert.Equal(t, "error.task_not_found", err.Error())
}
func TestAppError_ErrorsAs(t *testing.T) {
var appErr *AppError
err := NotFound("error.task_not_found")
assert.True(t, errors.As(err, &appErr))
assert.Equal(t, http.StatusNotFound, appErr.Code)
}


@@ -134,17 +134,37 @@ type SecurityConfig struct {
PasswordResetExpiry time.Duration
ConfirmationExpiry time.Duration
MaxPasswordResetRate int // per hour
TokenExpiryDays int // Number of days before auth tokens expire (default 90)
TokenRefreshDays int // Token must be at least this many days old before refresh (default 60)
}
// StorageConfig holds file storage settings
// StorageConfig holds file storage settings.
// When S3Endpoint is set, files are stored in S3-compatible storage (B2, MinIO).
// When S3Endpoint is empty, files are stored on the local filesystem using UploadDir.
type StorageConfig struct {
UploadDir string // Directory to store uploaded files
BaseURL string // Public URL prefix for serving files (e.g., "/uploads")
// Local filesystem settings
UploadDir string // Directory to store uploaded files (local mode)
BaseURL string // Public URL prefix for serving files (e.g., "/uploads")
// S3-compatible storage settings (B2, MinIO)
S3Endpoint string // S3 endpoint (e.g., "s3.us-west-004.backblazeb2.com" or "minio:9000")
S3KeyID string // Access key ID
S3AppKey string // Secret access key
S3Bucket string // Bucket name
S3UseSSL bool // Use HTTPS (true for B2, false for in-cluster MinIO)
S3Region string // Region (optional, defaults to "us-east-1")
// Shared settings
MaxFileSize int64 // Max file size in bytes (default 10MB)
AllowedTypes string // Comma-separated MIME types
EncryptionKey string // 64-char hex key for file encryption at rest (optional)
}
// IsS3 returns true if S3-compatible storage is configured
func (c *StorageConfig) IsS3() bool {
return c.S3Endpoint != "" && c.S3KeyID != "" && c.S3AppKey != "" && c.S3Bucket != ""
}
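The storage-mode selection described in the StorageConfig comments can be exercised in isolation; this sketch inlines a trimmed stand-in for the struct (only the fields IsS3 consults), so the types here are illustrative rather than the package's own:

```go
package main

import "fmt"

// StorageConfig stand-in trimmed to the fields IsS3 consults.
type StorageConfig struct {
	S3Endpoint, S3KeyID, S3AppKey, S3Bucket string
}

// IsS3 reports whether all four S3 settings are present; any missing
// value falls back to local-filesystem storage.
func (c *StorageConfig) IsS3() bool {
	return c.S3Endpoint != "" && c.S3KeyID != "" && c.S3AppKey != "" && c.S3Bucket != ""
}

func main() {
	local := StorageConfig{}
	b2 := StorageConfig{
		S3Endpoint: "s3.us-west-004.backblazeb2.com",
		S3KeyID:    "id",
		S3AppKey:   "key",
		S3Bucket:   "honeydue",
	}
	fmt.Println(local.IsS3(), b2.IsS3())
}
```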
// FeatureFlags holds kill switches for major subsystems.
// All default to true (enabled). Set to false via env vars to disable.
type FeatureFlags struct {
@@ -262,10 +282,18 @@ func Load() (*Config, error) {
PasswordResetExpiry: 15 * time.Minute,
ConfirmationExpiry: 24 * time.Hour,
MaxPasswordResetRate: 3,
TokenExpiryDays: viper.GetInt("TOKEN_EXPIRY_DAYS"),
TokenRefreshDays: viper.GetInt("TOKEN_REFRESH_DAYS"),
},
Storage: StorageConfig{
UploadDir: viper.GetString("STORAGE_UPLOAD_DIR"),
BaseURL: viper.GetString("STORAGE_BASE_URL"),
S3Endpoint: viper.GetString("B2_ENDPOINT"),
S3KeyID: viper.GetString("B2_KEY_ID"),
S3AppKey: viper.GetString("B2_APP_KEY"),
S3Bucket: viper.GetString("B2_BUCKET_NAME"),
S3UseSSL: viper.GetString("STORAGE_USE_SSL") == "" || viper.GetBool("STORAGE_USE_SSL"),
S3Region: viper.GetString("B2_REGION"),
MaxFileSize: viper.GetInt64("STORAGE_MAX_FILE_SIZE"),
AllowedTypes: viper.GetString("STORAGE_ALLOWED_TYPES"),
EncryptionKey: viper.GetString("STORAGE_ENCRYPTION_KEY"),
@@ -369,6 +397,10 @@ func setDefaults() {
viper.SetDefault("OVERDUE_REMINDER_HOUR", 15) // 3:00 PM UTC
viper.SetDefault("DAILY_DIGEST_HOUR", 3) // 3:00 AM UTC
// Token expiry defaults
viper.SetDefault("TOKEN_EXPIRY_DAYS", 90) // Tokens expire after 90 days
viper.SetDefault("TOKEN_REFRESH_DAYS", 60) // Tokens can be refreshed after 60 days
// Storage defaults
viper.SetDefault("STORAGE_UPLOAD_DIR", "./uploads")
viper.SetDefault("STORAGE_BASE_URL", "/uploads")


@@ -0,0 +1,324 @@
package config
import (
"sync"
"testing"
"github.com/spf13/viper"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// resetConfigState resets the package-level singleton so each test starts fresh.
func resetConfigState() {
cfg = nil
cfgOnce = sync.Once{}
viper.Reset()
}
func TestLoad_DefaultValues(t *testing.T) {
resetConfigState()
// Provide required SECRET_KEY so validation passes
t.Setenv("SECRET_KEY", "a-strong-secret-key-for-tests")
c, err := Load()
require.NoError(t, err)
// Server defaults
assert.Equal(t, 8000, c.Server.Port)
assert.False(t, c.Server.Debug)
assert.False(t, c.Server.DebugFixedCodes)
assert.Equal(t, "UTC", c.Server.Timezone)
assert.Equal(t, "/app/static", c.Server.StaticDir)
assert.Equal(t, "https://api.myhoneydue.com", c.Server.BaseURL)
// Database defaults
assert.Equal(t, "localhost", c.Database.Host)
assert.Equal(t, 5432, c.Database.Port)
assert.Equal(t, "postgres", c.Database.User)
assert.Equal(t, "honeydue", c.Database.Database)
assert.Equal(t, "disable", c.Database.SSLMode)
assert.Equal(t, 25, c.Database.MaxOpenConns)
assert.Equal(t, 10, c.Database.MaxIdleConns)
// Redis defaults
assert.Equal(t, "redis://localhost:6379/0", c.Redis.URL)
assert.Equal(t, 0, c.Redis.DB)
// Worker defaults
assert.Equal(t, 14, c.Worker.TaskReminderHour)
assert.Equal(t, 15, c.Worker.OverdueReminderHour)
assert.Equal(t, 3, c.Worker.DailyNotifHour)
// Token expiry defaults
assert.Equal(t, 90, c.Security.TokenExpiryDays)
assert.Equal(t, 60, c.Security.TokenRefreshDays)
// Feature flags default to true
assert.True(t, c.Features.PushEnabled)
assert.True(t, c.Features.EmailEnabled)
assert.True(t, c.Features.WebhooksEnabled)
assert.True(t, c.Features.OnboardingEmailsEnabled)
assert.True(t, c.Features.PDFReportsEnabled)
assert.True(t, c.Features.WorkerEnabled)
}
func TestLoad_EnvOverrides(t *testing.T) {
resetConfigState()
t.Setenv("SECRET_KEY", "a-strong-secret-key-for-tests")
t.Setenv("PORT", "9090")
t.Setenv("DEBUG", "true")
t.Setenv("DB_HOST", "db.example.com")
t.Setenv("DB_PORT", "5433")
t.Setenv("TOKEN_EXPIRY_DAYS", "180")
t.Setenv("TOKEN_REFRESH_DAYS", "120")
t.Setenv("FEATURE_PUSH_ENABLED", "false")
c, err := Load()
require.NoError(t, err)
assert.Equal(t, 9090, c.Server.Port)
assert.True(t, c.Server.Debug)
assert.Equal(t, "db.example.com", c.Database.Host)
assert.Equal(t, 5433, c.Database.Port)
assert.Equal(t, 180, c.Security.TokenExpiryDays)
assert.Equal(t, 120, c.Security.TokenRefreshDays)
assert.False(t, c.Features.PushEnabled)
}
func TestLoad_Validation_MissingSecretKey_Production(t *testing.T) {
// Test validate() directly to avoid the sync.Once mutex issue
// that occurs when Load() resets cfgOnce inside cfgOnce.Do()
cfg := &Config{
Server: ServerConfig{Debug: false},
Security: SecurityConfig{SecretKey: ""},
}
err := validate(cfg)
require.Error(t, err)
assert.Contains(t, err.Error(), "SECRET_KEY")
}
func TestLoad_Validation_MissingSecretKey_DebugMode(t *testing.T) {
resetConfigState()
t.Setenv("SECRET_KEY", "")
t.Setenv("DEBUG", "true")
c, err := Load()
require.NoError(t, err)
// In debug mode, a default key is assigned
assert.Equal(t, "change-me-in-production-secret-key-12345", c.Security.SecretKey)
}
func TestLoad_Validation_WeakSecretKey_Production(t *testing.T) {
// Test validate() directly to avoid the sync.Once mutex issue
cfg := &Config{
Server: ServerConfig{Debug: false},
Security: SecurityConfig{SecretKey: "password"},
}
err := validate(cfg)
require.Error(t, err)
assert.Contains(t, err.Error(), "well-known weak value")
}
func TestLoad_Validation_WeakSecretKey_DebugMode(t *testing.T) {
resetConfigState()
t.Setenv("SECRET_KEY", "secret")
t.Setenv("DEBUG", "true")
// In debug mode, weak keys produce a warning but no error
c, err := Load()
require.NoError(t, err)
assert.Equal(t, "secret", c.Security.SecretKey)
}
func TestLoad_Validation_EncryptionKey_Valid(t *testing.T) {
resetConfigState()
t.Setenv("SECRET_KEY", "a-strong-secret-key-for-tests")
// Valid 64-char hex key (32 bytes)
t.Setenv("STORAGE_ENCRYPTION_KEY", "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef")
c, err := Load()
require.NoError(t, err)
assert.Equal(t, "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef", c.Storage.EncryptionKey)
}
func TestLoad_Validation_EncryptionKey_WrongLength(t *testing.T) {
// Test validate() directly to avoid the sync.Once mutex issue
cfg := &Config{
Server: ServerConfig{Debug: false},
Security: SecurityConfig{SecretKey: "a-strong-secret-key-for-tests"},
Storage: StorageConfig{EncryptionKey: "tooshort"},
}
err := validate(cfg)
require.Error(t, err)
assert.Contains(t, err.Error(), "STORAGE_ENCRYPTION_KEY must be exactly 64 hex characters")
}
func TestLoad_Validation_EncryptionKey_InvalidHex(t *testing.T) {
// Test validate() directly to avoid the sync.Once mutex issue
cfg := &Config{
Server: ServerConfig{Debug: false},
Security: SecurityConfig{SecretKey: "a-strong-secret-key-for-tests"},
Storage: StorageConfig{EncryptionKey: "zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz"},
}
err := validate(cfg)
require.Error(t, err)
assert.Contains(t, err.Error(), "invalid hex")
}
func TestLoad_DatabaseURL_Override(t *testing.T) {
resetConfigState()
t.Setenv("SECRET_KEY", "a-strong-secret-key-for-tests")
t.Setenv("DATABASE_URL", "postgres://myuser:mypass@dbhost:5433/mydb?sslmode=require")
c, err := Load()
require.NoError(t, err)
assert.Equal(t, "dbhost", c.Database.Host)
assert.Equal(t, 5433, c.Database.Port)
assert.Equal(t, "myuser", c.Database.User)
assert.Equal(t, "mypass", c.Database.Password)
assert.Equal(t, "mydb", c.Database.Database)
assert.Equal(t, "require", c.Database.SSLMode)
}
func TestDSN(t *testing.T) {
d := DatabaseConfig{
Host: "localhost",
Port: 5432,
User: "testuser",
Password: "Password123",
Database: "testdb",
SSLMode: "disable",
}
dsn := d.DSN()
assert.Contains(t, dsn, "host=localhost")
assert.Contains(t, dsn, "port=5432")
assert.Contains(t, dsn, "user=testuser")
assert.Contains(t, dsn, "password=Password123")
assert.Contains(t, dsn, "dbname=testdb")
assert.Contains(t, dsn, "sslmode=disable")
}
func TestMaskURLCredentials(t *testing.T) {
tests := []struct {
name string
input string
expected string
}{
{
name: "URL with password",
input: "postgres://user:secret@host:5432/db",
expected: "postgres://user:xxxxx@host:5432/db",
},
{
name: "URL without password",
input: "postgres://user@host:5432/db",
expected: "postgres://user@host:5432/db",
},
{
name: "URL without user info",
input: "postgres://host:5432/db",
expected: "postgres://host:5432/db",
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
result := MaskURLCredentials(tc.input)
assert.Equal(t, tc.expected, result)
})
}
}
func TestParseCorsOrigins(t *testing.T) {
tests := []struct {
name string
input string
expected []string
}{
{"empty string", "", nil},
{"single origin", "https://example.com", []string{"https://example.com"}},
{"multiple origins", "https://a.com, https://b.com", []string{"https://a.com", "https://b.com"}},
{"whitespace trimmed", " https://a.com , https://b.com ", []string{"https://a.com", "https://b.com"}},
{"empty parts skipped", "https://a.com,,https://b.com", []string{"https://a.com", "https://b.com"}},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
result := parseCorsOrigins(tc.input)
assert.Equal(t, tc.expected, result)
})
}
}
func TestParseDatabaseURL(t *testing.T) {
tests := []struct {
name string
url string
wantHost string
wantPort int
wantUser string
wantPass string
wantDB string
wantSSL string
expectError bool
}{
{
name: "full URL",
url: "postgres://user:Password123@host:5433/mydb?sslmode=require",
wantHost: "host",
wantPort: 5433,
wantUser: "user",
wantPass: "Password123",
wantDB: "mydb",
wantSSL: "require",
},
{
name: "default port",
url: "postgres://user:pass@host/mydb",
wantHost: "host",
wantPort: 5432,
wantUser: "user",
wantPass: "pass",
wantDB: "mydb",
wantSSL: "",
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
result, err := parseDatabaseURL(tc.url)
if tc.expectError {
require.Error(t, err)
return
}
require.NoError(t, err)
assert.Equal(t, tc.wantHost, result.Host)
assert.Equal(t, tc.wantPort, result.Port)
assert.Equal(t, tc.wantUser, result.User)
assert.Equal(t, tc.wantPass, result.Password)
assert.Equal(t, tc.wantDB, result.Database)
assert.Equal(t, tc.wantSSL, result.SSLMode)
})
}
}
func TestIsWeakSecretKey(t *testing.T) {
assert.True(t, isWeakSecretKey("secret"))
assert.True(t, isWeakSecretKey("Secret")) // case-insensitive
assert.True(t, isWeakSecretKey(" changeme ")) // whitespace trimmed
assert.True(t, isWeakSecretKey("password"))
assert.True(t, isWeakSecretKey("change-me"))
assert.False(t, isWeakSecretKey("a-strong-unique-production-key"))
}
func TestGet_ReturnsNilBeforeLoad(t *testing.T) {
resetConfigState()
assert.Nil(t, Get())
}


@@ -1,6 +1,7 @@
package database
import (
"context"
"fmt"
"time"
@@ -15,6 +16,11 @@ import (
"github.com/treytartt/honeydue-api/internal/models"
)
// migrationAdvisoryLockKey is the pg_advisory_lock key that serializes
// Migrate() across API replicas booting in parallel. Value is arbitrary but
// stable ("hdmg" as bytes = honeydue migration).
const migrationAdvisoryLockKey int64 = 0x68646d67
// zerologGormWriter adapts zerolog for GORM's logger interface
type zerologGormWriter struct{}
@@ -121,6 +127,54 @@ func Paginate(page, pageSize int) func(db *gorm.DB) *gorm.DB {
}
}
// MigrateWithLock runs Migrate() under a Postgres session-level advisory lock
// so that multiple API replicas booting in parallel don't race on AutoMigrate.
// On non-Postgres dialects (sqlite in tests) it falls through to Migrate().
func MigrateWithLock() error {
if db == nil {
return fmt.Errorf("database not initialised")
}
if db.Dialector.Name() != "postgres" {
return Migrate()
}
sqlDB, err := db.DB()
if err != nil {
return fmt.Errorf("get underlying sql.DB: %w", err)
}
// Give ourselves up to 5 min to acquire the lock — long enough for a
// slow migration on a peer replica, short enough to fail fast if Postgres
// is hung.
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
defer cancel()
conn, err := sqlDB.Conn(ctx)
if err != nil {
return fmt.Errorf("acquire dedicated migration connection: %w", err)
}
defer conn.Close()
log.Info().Int64("lock_key", migrationAdvisoryLockKey).Msg("Acquiring migration advisory lock...")
if _, err := conn.ExecContext(ctx, "SELECT pg_advisory_lock($1)", migrationAdvisoryLockKey); err != nil {
return fmt.Errorf("pg_advisory_lock: %w", err)
}
log.Info().Msg("Migration advisory lock acquired")
defer func() {
// Unlock with a fresh context — the outer ctx may have expired.
unlockCtx, unlockCancel := context.WithTimeout(context.Background(), 10*time.Second)
defer unlockCancel()
if _, err := conn.ExecContext(unlockCtx, "SELECT pg_advisory_unlock($1)", migrationAdvisoryLockKey); err != nil {
log.Warn().Err(err).Msg("Failed to release migration advisory lock (session close will also release)")
} else {
log.Info().Msg("Migration advisory lock released")
}
}()
return Migrate()
}
// Migrate runs database migrations for all models
func Migrate() error {
log.Info().Msg("Running database migrations...")


@@ -0,0 +1,103 @@
package database
import (
"fmt"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gorm.io/driver/sqlite"
"gorm.io/gorm"
)
// --- Unit tests for Paginate parameter clamping ---
func TestPaginate_PageZeroDefaultsToOne(t *testing.T) {
scope := Paginate(0, 10)
db := openTestDB(t)
createTestRows(t, db, 5)
var rows []testRow
err := db.Scopes(scope).Find(&rows).Error
require.NoError(t, err)
// page=0 normalised to page=1, pageSize=10 → should get all 5 rows
assert.Len(t, rows, 5)
}
func TestPaginate_PageSizeZeroDefaultsTo100(t *testing.T) {
scope := Paginate(1, 0)
db := openTestDB(t)
createTestRows(t, db, 5)
var rows []testRow
err := db.Scopes(scope).Find(&rows).Error
require.NoError(t, err)
// pageSize=0 normalised to 100, only 5 rows exist → 5 returned
assert.Len(t, rows, 5)
}
func TestPaginate_PageSizeOverMaxCappedAt1000(t *testing.T) {
scope := Paginate(1, 2000)
db := openTestDB(t)
createTestRows(t, db, 5)
var rows []testRow
err := db.Scopes(scope).Find(&rows).Error
require.NoError(t, err)
// pageSize=2000 capped to 1000, only 5 rows → 5 returned
assert.Len(t, rows, 5)
}
func TestPaginate_NormalValues(t *testing.T) {
scope := Paginate(1, 3)
db := openTestDB(t)
createTestRows(t, db, 10)
var rows []testRow
err := db.Scopes(scope).Order("id ASC").Find(&rows).Error
require.NoError(t, err)
assert.Len(t, rows, 3)
assert.Equal(t, "row_1", rows[0].Name)
assert.Equal(t, "row_3", rows[2].Name)
}
func TestPaginate_SQLiteIntegration_Page2Size10(t *testing.T) {
db := openTestDB(t)
createTestRows(t, db, 25)
scope := Paginate(2, 10)
var rows []testRow
err := db.Scopes(scope).Order("id ASC").Find(&rows).Error
require.NoError(t, err)
// Page 2 with size 10 → rows 11..20
assert.Len(t, rows, 10)
assert.Equal(t, "row_11", rows[0].Name)
assert.Equal(t, "row_20", rows[9].Name)
}
// --- helpers ---
type testRow struct {
ID uint `gorm:"primaryKey"`
Name string
}
func openTestDB(t *testing.T) *gorm.DB {
t.Helper()
db, err := gorm.Open(sqlite.Open(":memory:"), &gorm.Config{})
require.NoError(t, err)
require.NoError(t, db.AutoMigrate(&testRow{}))
return db
}
func createTestRows(t *testing.T, db *gorm.DB, n int) {
t.Helper()
for i := 1; i <= n; i++ {
require.NoError(t, db.Create(&testRow{Name: fmt.Sprintf("row_%d", i)}).Error)
}
}


@@ -0,0 +1,47 @@
package database
import (
"testing"
"time"
"github.com/stretchr/testify/assert"
)
func TestClassifyCompletion_CompletedAfterDue(t *testing.T) {
dueDate := time.Date(2025, 6, 1, 0, 0, 0, 0, time.UTC)
completedAt := time.Date(2025, 6, 5, 14, 30, 0, 0, time.UTC) // 4 days after due
result := classifyCompletion(completedAt, dueDate, 30)
assert.Equal(t, "overdue_tasks", result)
}
func TestClassifyCompletion_CompletedOnDueDate(t *testing.T) {
dueDate := time.Date(2025, 6, 15, 0, 0, 0, 0, time.UTC)
completedAt := time.Date(2025, 6, 15, 10, 0, 0, 0, time.UTC) // same day
result := classifyCompletion(completedAt, dueDate, 30)
// Completed on the due date: daysBefore == 0, which is <= threshold → due_soon_tasks
assert.Equal(t, "due_soon_tasks", result)
}
func TestClassifyCompletion_CompletedWithinThreshold(t *testing.T) {
dueDate := time.Date(2025, 7, 1, 0, 0, 0, 0, time.UTC)
completedAt := time.Date(2025, 6, 10, 8, 0, 0, 0, time.UTC) // 21 days before due
result := classifyCompletion(completedAt, dueDate, 30)
// 21 days before due, within 30-day threshold → due_soon_tasks
assert.Equal(t, "due_soon_tasks", result)
}
func TestClassifyCompletion_CompletedBeyondThreshold(t *testing.T) {
dueDate := time.Date(2025, 9, 1, 0, 0, 0, 0, time.UTC)
completedAt := time.Date(2025, 6, 1, 12, 0, 0, 0, time.UTC) // 92 days before due
result := classifyCompletion(completedAt, dueDate, 30)
// 92 days before due, beyond 30-day threshold → upcoming_tasks
assert.Equal(t, "upcoming_tasks", result)
}


@@ -0,0 +1,31 @@
package database
import "sort"
// sortMigrationNames returns a sorted copy of the names slice.
func sortMigrationNames(names []string) []string {
sorted := make([]string, len(names))
copy(sorted, names)
sort.Strings(sorted)
return sorted
}
// buildAppliedSet converts a list of applied migrations to a lookup set.
func buildAppliedSet(applied []DataMigration) map[string]bool {
set := make(map[string]bool, len(applied))
for _, m := range applied {
set[m.Name] = true
}
return set
}
// filterPending returns names not present in the applied set.
func filterPending(names []string, applied map[string]bool) []string {
var pending []string
for _, name := range names {
if !applied[name] {
pending = append(pending, name)
}
}
return pending
}
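These three helpers compose into the pending-migration pipeline the tests below exercise: sort the registered names, index the applied rows, and keep the difference. A self-contained sketch — the `DataMigration` stand-in here is trimmed to just the `Name` field, since the real tracking model lives elsewhere in the package:

```go
package main

import (
	"fmt"
	"sort"
)

// DataMigration stand-in for the package's tracking model (fields trimmed).
type DataMigration struct{ Name string }

func sortMigrationNames(names []string) []string {
	sorted := make([]string, len(names))
	copy(sorted, names)
	sort.Strings(sorted)
	return sorted
}

func buildAppliedSet(applied []DataMigration) map[string]bool {
	set := make(map[string]bool, len(applied))
	for _, m := range applied {
		set[m.Name] = true
	}
	return set
}

func filterPending(names []string, applied map[string]bool) []string {
	var pending []string
	for _, name := range names {
		if !applied[name] {
			pending = append(pending, name)
		}
	}
	return pending
}

func main() {
	registered := []string{"20260414_seed_initial_data", "20250101_first"}
	applied := []DataMigration{{Name: "20250101_first"}}
	pending := filterPending(sortMigrationNames(registered), buildAppliedSet(applied))
	fmt.Println(pending) // [20260414_seed_initial_data]
}
```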


@@ -0,0 +1,82 @@
package database
import (
"testing"
"time"
"github.com/stretchr/testify/assert"
)
// --- sortMigrationNames ---
func TestSortMigrationNames_Alphabetical(t *testing.T) {
input := []string{"charlie", "alpha", "bravo"}
result := sortMigrationNames(input)
assert.Equal(t, []string{"alpha", "bravo", "charlie"}, result)
// Verify original slice is not mutated
assert.Equal(t, []string{"charlie", "alpha", "bravo"}, input)
}
func TestSortMigrationNames_Empty(t *testing.T) {
result := sortMigrationNames([]string{})
assert.Equal(t, []string{}, result)
assert.Len(t, result, 0)
}
// --- buildAppliedSet ---
func TestBuildAppliedSet_Multiple(t *testing.T) {
applied := []DataMigration{
{ID: 1, Name: "20250101_first", AppliedAt: time.Now()},
{ID: 2, Name: "20250201_second", AppliedAt: time.Now()},
{ID: 3, Name: "20250301_third", AppliedAt: time.Now()},
}
set := buildAppliedSet(applied)
assert.Len(t, set, 3)
assert.True(t, set["20250101_first"])
assert.True(t, set["20250201_second"])
assert.True(t, set["20250301_third"])
assert.False(t, set["nonexistent"])
}
func TestBuildAppliedSet_Empty(t *testing.T) {
set := buildAppliedSet([]DataMigration{})
assert.Len(t, set, 0)
}
// --- filterPending ---
func TestFilterPending_SomePending(t *testing.T) {
names := []string{"20250101_first", "20250201_second", "20250301_third"}
applied := map[string]bool{
"20250101_first": true,
}
pending := filterPending(names, applied)
assert.Equal(t, []string{"20250201_second", "20250301_third"}, pending)
}
func TestFilterPending_AllApplied(t *testing.T) {
names := []string{"20250101_first", "20250201_second"}
applied := map[string]bool{
"20250101_first": true,
"20250201_second": true,
}
pending := filterPending(names, applied)
assert.Nil(t, pending)
}
func TestFilterPending_NoneApplied(t *testing.T) {
names := []string{"20250101_first", "20250201_second", "20250301_third"}
applied := map[string]bool{}
pending := filterPending(names, applied)
assert.Equal(t, []string{"20250101_first", "20250201_second", "20250301_third"}, pending)
}


@@ -0,0 +1,129 @@
package database
import (
"fmt"
"os"
"path/filepath"
"strings"
"gorm.io/gorm"
)
// Seed files run on first boot. Order matters: lookups first, then rows
// that depend on them (admin user is independent; task templates reference
// lookup categories).
var initialSeedFiles = []string{
"001_lookups.sql",
"003_admin_user.sql",
"003_task_templates.sql",
}
// SeedInitialDataApplied is set true during startup if the seed migration
// just ran. main.go reads it post-cache-init to invalidate stale Redis
// entries for /api/static_data (24h TTL) so clients see the new lookups.
var SeedInitialDataApplied bool
func init() {
RegisterDataMigration("20260414_seed_initial_data", seedInitialData)
}
// seedInitialData executes the baseline SQL seed files exactly once. Because
// each INSERT uses ON CONFLICT DO UPDATE, rerunning the files is safe if the
// tracking row is ever lost.
func seedInitialData(tx *gorm.DB) error {
sqlDB, err := tx.DB()
if err != nil {
return fmt.Errorf("get underlying sql.DB: %w", err)
}
for _, filename := range initialSeedFiles {
content, err := readSeedFile(filename)
if err != nil {
return fmt.Errorf("read seed %s: %w", filename, err)
}
for i, stmt := range splitSQL(content) {
if _, err := sqlDB.Exec(stmt); err != nil {
preview := stmt
if len(preview) > 120 {
preview = preview[:120] + "..."
}
return fmt.Errorf("seed %s statement %d failed: %w\nstatement: %s", filename, i+1, err, preview)
}
}
}
SeedInitialDataApplied = true
return nil
}
func readSeedFile(filename string) (string, error) {
paths := []string{
filepath.Join("seeds", filename),
filepath.Join("./seeds", filename),
filepath.Join("/app/seeds", filename),
}
var lastErr error
for _, p := range paths {
content, err := os.ReadFile(p)
if err == nil {
return string(content), nil
}
lastErr = err
}
return "", lastErr
}
// splitSQL splits raw SQL into individual statements, tracking single- and
// double-quoted literals (including '' escapes and backslash-escaped quotes)
// so a ';' inside a string is not a terminator, and skipping comment-only fragments.
func splitSQL(sqlContent string) []string {
var out []string
var current strings.Builder
inString := false
stringChar := byte(0)
for i := 0; i < len(sqlContent); i++ {
c := sqlContent[i]
if (c == '\'' || c == '"') && (i == 0 || sqlContent[i-1] != '\\') {
if !inString {
inString = true
stringChar = c
} else if c == stringChar {
if c == '\'' && i+1 < len(sqlContent) && sqlContent[i+1] == '\'' {
current.WriteByte(c)
i++
current.WriteByte(sqlContent[i])
continue
}
inString = false
}
}
if c == ';' && !inString {
current.WriteByte(c)
stmt := strings.TrimSpace(current.String())
if stmt != "" && !isSQLCommentOnly(stmt) {
out = append(out, stmt)
}
current.Reset()
continue
}
current.WriteByte(c)
}
if stmt := strings.TrimSpace(current.String()); stmt != "" && !isSQLCommentOnly(stmt) {
out = append(out, stmt)
}
return out
}
func isSQLCommentOnly(stmt string) bool {
for _, line := range strings.Split(stmt, "\n") {
line = strings.TrimSpace(line)
if line != "" && !strings.HasPrefix(line, "--") {
return false
}
}
return true
}
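To see why the string-tracking matters, here is a simplified splitter covering just the single-quote and '' cases (the full splitSQL above also handles double quotes, backslash escapes, and comment-only fragments). A semicolon inside a literal must not end the statement:

```go
package main

import (
	"fmt"
	"strings"
)

// splitStatements is a reduced sketch of splitSQL: split on ';' while
// treating semicolons inside single-quoted literals (with '' escapes)
// as ordinary text. Comment handling is omitted for brevity.
func splitStatements(sql string) []string {
	var out []string
	var cur strings.Builder
	inStr := false
	for i := 0; i < len(sql); i++ {
		c := sql[i]
		if c == '\'' {
			// A doubled quote inside a literal is an escaped quote, not a close.
			if inStr && i+1 < len(sql) && sql[i+1] == '\'' {
				cur.WriteByte(c)
				i++
				cur.WriteByte(sql[i])
				continue
			}
			inStr = !inStr
		}
		if c == ';' && !inStr {
			cur.WriteByte(c)
			if stmt := strings.TrimSpace(cur.String()); stmt != "" {
				out = append(out, stmt)
			}
			cur.Reset()
			continue
		}
		cur.WriteByte(c)
	}
	if stmt := strings.TrimSpace(cur.String()); stmt != "" {
		out = append(out, stmt)
	}
	return out
}

func main() {
	stmts := splitStatements(`INSERT INTO t (v) VALUES ('a;b'); UPDATE t SET v = 'it''s';`)
	for _, s := range stmts {
		fmt.Println(s)
	}
	// Two statements, not three: the quoted ';' and the '' escape survive intact.
}
```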

View File

@@ -11,7 +11,7 @@ type LoginRequest struct {
type RegisterRequest struct {
Username string `json:"username" validate:"required,min=3,max=150"`
Email string `json:"email" validate:"required,email,max=254"`
Password string `json:"password" validate:"required,min=8"`
Password string `json:"password" validate:"required,min=8,password_complexity"`
FirstName string `json:"first_name" validate:"max=150"`
LastName string `json:"last_name" validate:"max=150"`
}
@@ -35,7 +35,7 @@ type VerifyResetCodeRequest struct {
// ResetPasswordRequest represents the reset password request body
type ResetPasswordRequest struct {
ResetToken string `json:"reset_token" validate:"required"`
NewPassword string `json:"new_password" validate:"required,min=8"`
NewPassword string `json:"new_password" validate:"required,min=8,password_complexity"`
}
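The diff adds a `password_complexity` tag to both password fields, but the rule set behind it is not shown here. A stdlib-only sketch of one plausible policy (at least one uppercase letter, one lowercase letter, and one digit — an assumption, not the API's confirmed rules):

```go
package main

import (
	"fmt"
	"unicode"
)

// meetsComplexity is a hypothetical implementation of the password_complexity
// check: one upper, one lower, one digit. The validator actually registered
// by the API may enforce different or additional rules.
func meetsComplexity(pw string) bool {
	var upper, lower, digit bool
	for _, r := range pw {
		switch {
		case unicode.IsUpper(r):
			upper = true
		case unicode.IsLower(r):
			lower = true
		case unicode.IsDigit(r):
			digit = true
		}
	}
	return upper && lower && digit
}

func main() {
	fmt.Println(meetsComplexity("Password1")) // true
	fmt.Println(meetsComplexity("password"))  // false
}
```

With go-playground/validator (which these struct tags suggest), such a function would be wired up once at startup via `validate.RegisterValidation("password_complexity", ...)` wrapping the check in a `validator.Func`.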
// UpdateProfileRequest represents the profile update request body

View File

@@ -0,0 +1,130 @@
package requests
import (
"encoding/json"
"testing"
"time"
)
func TestFlexibleDate_UnmarshalJSON_DateOnly(t *testing.T) {
var fd FlexibleDate
err := fd.UnmarshalJSON([]byte(`"2025-11-27"`))
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
want := time.Date(2025, 11, 27, 0, 0, 0, 0, time.UTC)
if !fd.Time.Equal(want) {
t.Errorf("got %v, want %v", fd.Time, want)
}
}
func TestFlexibleDate_UnmarshalJSON_RFC3339(t *testing.T) {
var fd FlexibleDate
err := fd.UnmarshalJSON([]byte(`"2025-11-27T15:30:00Z"`))
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
want := time.Date(2025, 11, 27, 15, 30, 0, 0, time.UTC)
if !fd.Time.Equal(want) {
t.Errorf("got %v, want %v", fd.Time, want)
}
}
func TestFlexibleDate_UnmarshalJSON_Null(t *testing.T) {
var fd FlexibleDate
err := fd.UnmarshalJSON([]byte(`null`))
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if !fd.Time.IsZero() {
t.Errorf("expected zero time, got %v", fd.Time)
}
}
func TestFlexibleDate_UnmarshalJSON_EmptyString(t *testing.T) {
var fd FlexibleDate
err := fd.UnmarshalJSON([]byte(`""`))
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if !fd.Time.IsZero() {
t.Errorf("expected zero time, got %v", fd.Time)
}
}
func TestFlexibleDate_UnmarshalJSON_Invalid(t *testing.T) {
var fd FlexibleDate
err := fd.UnmarshalJSON([]byte(`"not-a-date"`))
if err == nil {
t.Fatal("expected error for invalid date, got nil")
}
}
func TestFlexibleDate_MarshalJSON_Valid(t *testing.T) {
fd := FlexibleDate{Time: time.Date(2025, 11, 27, 15, 30, 0, 0, time.UTC)}
data, err := fd.MarshalJSON()
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
var s string
if err := json.Unmarshal(data, &s); err != nil {
t.Fatalf("result is not a JSON string: %v", err)
}
want := "2025-11-27T15:30:00Z"
if s != want {
t.Errorf("got %q, want %q", s, want)
}
}
func TestFlexibleDate_MarshalJSON_Zero(t *testing.T) {
fd := FlexibleDate{}
data, err := fd.MarshalJSON()
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if string(data) != "null" {
t.Errorf("got %s, want null", string(data))
}
}
func TestFlexibleDate_ToTimePtr_Valid(t *testing.T) {
fd := &FlexibleDate{Time: time.Date(2025, 11, 27, 0, 0, 0, 0, time.UTC)}
ptr := fd.ToTimePtr()
if ptr == nil {
t.Fatal("expected non-nil pointer")
}
if !ptr.Equal(fd.Time) {
t.Errorf("got %v, want %v", *ptr, fd.Time)
}
}
func TestFlexibleDate_ToTimePtr_Zero(t *testing.T) {
fd := &FlexibleDate{}
ptr := fd.ToTimePtr()
if ptr != nil {
t.Errorf("expected nil, got %v", *ptr)
}
}
func TestFlexibleDate_ToTimePtr_NilReceiver(t *testing.T) {
var fd *FlexibleDate
ptr := fd.ToTimePtr()
if ptr != nil {
t.Errorf("expected nil for nil receiver, got %v", *ptr)
}
}
func TestFlexibleDate_RoundTrip(t *testing.T) {
original := FlexibleDate{Time: time.Date(2025, 6, 15, 10, 0, 0, 0, time.UTC)}
data, err := original.MarshalJSON()
if err != nil {
t.Fatalf("marshal error: %v", err)
}
var restored FlexibleDate
if err := restored.UnmarshalJSON(data); err != nil {
t.Fatalf("unmarshal error: %v", err)
}
if !original.Time.Equal(restored.Time) {
t.Errorf("round-trip mismatch: original %v, restored %v", original.Time, restored.Time)
}
}
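The tests above fully pin down FlexibleDate's accepted inputs: date-only strings, RFC3339 timestamps, and `null`/`""` (both yielding a zero time). A sketch of an UnmarshalJSON consistent with those assertions — the shipped implementation may differ in details:

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// FlexibleDate wraps time.Time and accepts either "2006-01-02" or RFC3339
// input; null and "" decode to the zero time. Sketch only, matching the tests.
type FlexibleDate struct {
	Time time.Time
}

func (fd *FlexibleDate) UnmarshalJSON(data []byte) error {
	s := string(data)
	if s == "null" || s == `""` {
		fd.Time = time.Time{}
		return nil
	}
	s = strings.Trim(s, `"`)
	// Try the stricter date-only layout first, then full RFC3339.
	for _, layout := range []string{"2006-01-02", time.RFC3339} {
		if t, err := time.Parse(layout, s); err == nil {
			fd.Time = t
			return nil
		}
	}
	return fmt.Errorf("unrecognized date %q", s)
}

func main() {
	var fd FlexibleDate
	_ = fd.UnmarshalJSON([]byte(`"2025-11-27"`))
	fmt.Println(fd.Time) // 2025-11-27 00:00:00 +0000 UTC
}
```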

View File

@@ -25,6 +25,22 @@ type CreateResidenceRequest struct {
PurchaseDate *time.Time `json:"purchase_date"`
PurchasePrice *decimal.Decimal `json:"purchase_price"`
IsPrimary *bool `json:"is_primary"`
// Home Profile
HeatingType *string `json:"heating_type" validate:"omitempty,oneof=gas_furnace electric_furnace heat_pump boiler radiant other"`
CoolingType *string `json:"cooling_type" validate:"omitempty,oneof=central_ac window_ac heat_pump evaporative none other"`
WaterHeaterType *string `json:"water_heater_type" validate:"omitempty,oneof=tank_gas tank_electric tankless_gas tankless_electric heat_pump solar other"`
RoofType *string `json:"roof_type" validate:"omitempty,oneof=asphalt_shingle metal tile slate wood_shake flat other"`
HasPool *bool `json:"has_pool"`
HasSprinklerSystem *bool `json:"has_sprinkler_system"`
HasSeptic *bool `json:"has_septic"`
HasFireplace *bool `json:"has_fireplace"`
HasGarage *bool `json:"has_garage"`
HasBasement *bool `json:"has_basement"`
HasAttic *bool `json:"has_attic"`
ExteriorType *string `json:"exterior_type" validate:"omitempty,oneof=brick vinyl_siding wood_siding stucco stone fiber_cement other"`
FlooringPrimary *string `json:"flooring_primary" validate:"omitempty,oneof=hardwood laminate tile carpet vinyl concrete other"`
LandscapingType *string `json:"landscaping_type" validate:"omitempty,oneof=lawn desert xeriscape garden mixed none other"`
}
// UpdateResidenceRequest represents the request to update a residence
@@ -46,6 +62,22 @@ type UpdateResidenceRequest struct {
PurchaseDate *time.Time `json:"purchase_date"`
PurchasePrice *decimal.Decimal `json:"purchase_price"`
IsPrimary *bool `json:"is_primary"`
// Home Profile
HeatingType *string `json:"heating_type" validate:"omitempty,oneof=gas_furnace electric_furnace heat_pump boiler radiant other"`
CoolingType *string `json:"cooling_type" validate:"omitempty,oneof=central_ac window_ac heat_pump evaporative none other"`
WaterHeaterType *string `json:"water_heater_type" validate:"omitempty,oneof=tank_gas tank_electric tankless_gas tankless_electric heat_pump solar other"`
RoofType *string `json:"roof_type" validate:"omitempty,oneof=asphalt_shingle metal tile slate wood_shake flat other"`
HasPool *bool `json:"has_pool"`
HasSprinklerSystem *bool `json:"has_sprinkler_system"`
HasSeptic *bool `json:"has_septic"`
HasFireplace *bool `json:"has_fireplace"`
HasGarage *bool `json:"has_garage"`
HasBasement *bool `json:"has_basement"`
HasAttic *bool `json:"has_attic"`
ExteriorType *string `json:"exterior_type" validate:"omitempty,oneof=brick vinyl_siding wood_siding stucco stone fiber_cement other"`
FlooringPrimary *string `json:"flooring_primary" validate:"omitempty,oneof=hardwood laminate tile carpet vinyl concrete other"`
LandscapingType *string `json:"landscaping_type" validate:"omitempty,oneof=lawn desert xeriscape garden mixed none other"`
}
// JoinWithCodeRequest represents the request to join a residence via share code

View File

@@ -52,6 +52,18 @@ func (fd *FlexibleDate) ToTimePtr() *time.Time {
return &fd.Time
}
// BulkCreateTasksRequest represents a batch create. Used by onboarding to
// insert 1-N selected tasks atomically in a single transaction so that a
// failure halfway through doesn't leave a partial task list behind.
//
// ResidenceID is validated once at the service layer; individual task
// entries must reference the same residence or be left empty (the service
// overrides each entry's ResidenceID with the top-level value).
type BulkCreateTasksRequest struct {
ResidenceID uint `json:"residence_id" validate:"required"`
Tasks []CreateTaskRequest `json:"tasks" validate:"required,min=1,max=50,dive"`
}
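The override described in the doc comment can be sketched as a small normalization pass the service might run before inserting the batch (helper name and trimmed struct fields are illustrative, not the actual service code):

```go
package main

import "fmt"

// Minimal stand-ins for the request types above.
type CreateTaskRequest struct {
	ResidenceID uint
	Title       string
}

type BulkCreateTasksRequest struct {
	ResidenceID uint
	Tasks       []CreateTaskRequest
}

// normalizeResidence is a hypothetical helper: it stamps the top-level
// ResidenceID onto every entry so a single batch cannot straddle residences.
func normalizeResidence(req *BulkCreateTasksRequest) {
	for i := range req.Tasks {
		req.Tasks[i].ResidenceID = req.ResidenceID
	}
}

func main() {
	req := BulkCreateTasksRequest{
		ResidenceID: 10,
		Tasks: []CreateTaskRequest{
			{Title: "Clean gutters"},                      // ResidenceID left empty
			{ResidenceID: 99, Title: "Flush water heater"}, // mismatched on purpose
		},
	}
	normalizeResidence(&req)
	fmt.Println(req.Tasks[0].ResidenceID, req.Tasks[1].ResidenceID) // 10 10
}
```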
// CreateTaskRequest represents the request to create a task
type CreateTaskRequest struct {
ResidenceID uint `json:"residence_id" validate:"required"`
@@ -66,6 +78,10 @@ type CreateTaskRequest struct {
DueDate *FlexibleDate `json:"due_date"`
EstimatedCost *decimal.Decimal `json:"estimated_cost"`
ContractorID *uint `json:"contractor_id"`
// TemplateID links the created task to the TaskTemplate it was spawned from
// (e.g. onboarding suggestion or catalog pick). Optional — custom tasks
// leave this nil.
TemplateID *uint `json:"template_id"`
}
// UpdateTaskRequest represents the request to update a task

View File

@@ -79,6 +79,12 @@ type ResetPasswordResponse struct {
Message string `json:"message"`
}
// RefreshTokenResponse represents the token refresh response
type RefreshTokenResponse struct {
Token string `json:"token"`
Message string `json:"message"`
}
// MessageResponse represents a simple message response
type MessageResponse struct {
Message string `json:"message"`

View File

@@ -46,6 +46,22 @@ type ResidenceResponse struct {
Description string `json:"description"`
PurchaseDate *time.Time `json:"purchase_date"`
PurchasePrice *decimal.Decimal `json:"purchase_price"`
// Home Profile
HeatingType *string `json:"heating_type"`
CoolingType *string `json:"cooling_type"`
WaterHeaterType *string `json:"water_heater_type"`
RoofType *string `json:"roof_type"`
HasPool bool `json:"has_pool"`
HasSprinklerSystem bool `json:"has_sprinkler_system"`
HasSeptic bool `json:"has_septic"`
HasFireplace bool `json:"has_fireplace"`
HasGarage bool `json:"has_garage"`
HasBasement bool `json:"has_basement"`
HasAttic bool `json:"has_attic"`
ExteriorType *string `json:"exterior_type"`
FlooringPrimary *string `json:"flooring_primary"`
LandscapingType *string `json:"landscaping_type"`
IsPrimary bool `json:"is_primary"`
IsActive bool `json:"is_active"`
OverdueCount int `json:"overdue_count"`
@@ -184,9 +200,23 @@ func NewResidenceResponse(residence *models.Residence) ResidenceResponse {
YearBuilt: residence.YearBuilt,
Description: residence.Description,
PurchaseDate: residence.PurchaseDate,
PurchasePrice: residence.PurchasePrice,
IsPrimary: residence.IsPrimary,
IsActive: residence.IsActive,
PurchasePrice: residence.PurchasePrice,
HeatingType: residence.HeatingType,
CoolingType: residence.CoolingType,
WaterHeaterType: residence.WaterHeaterType,
RoofType: residence.RoofType,
HasPool: residence.HasPool,
HasSprinklerSystem: residence.HasSprinklerSystem,
HasSeptic: residence.HasSeptic,
HasFireplace: residence.HasFireplace,
HasGarage: residence.HasGarage,
HasBasement: residence.HasBasement,
HasAttic: residence.HasAttic,
ExteriorType: residence.ExteriorType,
FlooringPrimary: residence.FlooringPrimary,
LandscapingType: residence.LandscapingType,
IsPrimary: residence.IsPrimary,
IsActive: residence.IsActive,
CreatedAt: residence.CreatedAt,
UpdatedAt: residence.UpdatedAt,
}

View File

@@ -0,0 +1,819 @@
package responses
import (
"fmt"
"testing"
"time"
"github.com/shopspring/decimal"
"github.com/treytartt/honeydue-api/internal/models"
)
// --- helpers ---
func timePtr(t time.Time) *time.Time { return &t }
func uintPtr(v uint) *uint { return &v }
func intPtr(v int) *int { return &v }
func strPtr(v string) *string { return &v }
func float64Ptr(v float64) *float64 { return &v }
var fixedNow = time.Date(2025, 6, 15, 0, 0, 0, 0, time.UTC)
func makeUser() *models.User {
return &models.User{
ID: 1,
Username: "john",
Email: "john@example.com",
FirstName: "John",
LastName: "Doe",
IsActive: true,
DateJoined: fixedNow,
LastLogin: timePtr(fixedNow),
Profile: &models.UserProfile{
BaseModel: models.BaseModel{ID: 10},
UserID: 1,
Verified: true,
Bio: "hello",
},
}
}
func makeUserNoProfile() *models.User {
u := makeUser()
u.Profile = nil
return u
}
// ==================== auth.go ====================
func TestNewUserResponse_AllFields(t *testing.T) {
u := makeUser()
resp := NewUserResponse(u)
if resp.ID != 1 {
t.Errorf("ID = %d, want 1", resp.ID)
}
if resp.Username != "john" {
t.Errorf("Username = %q", resp.Username)
}
if !resp.Verified {
t.Error("Verified should be true when profile is verified")
}
if resp.LastLogin == nil {
t.Error("LastLogin should not be nil")
}
}
func TestNewUserResponse_NilProfile(t *testing.T) {
u := makeUserNoProfile()
resp := NewUserResponse(u)
if resp.Verified {
t.Error("Verified should be false when profile is nil")
}
}
func TestNewUserProfileResponse_Nil(t *testing.T) {
resp := NewUserProfileResponse(nil)
if resp != nil {
t.Error("expected nil for nil profile")
}
}
func TestNewUserProfileResponse_Valid(t *testing.T) {
p := &models.UserProfile{
BaseModel: models.BaseModel{ID: 5},
UserID: 1,
Verified: true,
Bio: "bio",
}
resp := NewUserProfileResponse(p)
if resp == nil {
t.Fatal("expected non-nil")
}
if resp.ID != 5 || resp.UserID != 1 || !resp.Verified || resp.Bio != "bio" {
t.Errorf("unexpected response: %+v", resp)
}
}
func TestNewCurrentUserResponse(t *testing.T) {
u := makeUser()
resp := NewCurrentUserResponse(u, "apple")
if resp.AuthProvider != "apple" {
t.Errorf("AuthProvider = %q, want apple", resp.AuthProvider)
}
if resp.Profile == nil {
t.Error("Profile should not be nil")
}
if resp.ID != 1 {
t.Errorf("ID = %d, want 1", resp.ID)
}
}
func TestNewLoginResponse(t *testing.T) {
u := makeUser()
resp := NewLoginResponse("tok123", u)
if resp.Token != "tok123" {
t.Errorf("Token = %q", resp.Token)
}
if resp.User.ID != 1 {
t.Errorf("User.ID = %d", resp.User.ID)
}
}
func TestNewRegisterResponse(t *testing.T) {
u := makeUser()
resp := NewRegisterResponse("tok456", u)
if resp.Token != "tok456" {
t.Errorf("Token = %q", resp.Token)
}
if resp.Message == "" {
t.Error("Message should not be empty")
}
}
func TestNewAppleSignInResponse(t *testing.T) {
u := makeUser()
resp := NewAppleSignInResponse("atok", u, true)
if !resp.IsNewUser {
t.Error("IsNewUser should be true")
}
if resp.Token != "atok" {
t.Errorf("Token = %q", resp.Token)
}
}
func TestNewGoogleSignInResponse(t *testing.T) {
u := makeUser()
resp := NewGoogleSignInResponse("gtok", u, false)
if resp.IsNewUser {
t.Error("IsNewUser should be false")
}
if resp.Token != "gtok" {
t.Errorf("Token = %q", resp.Token)
}
}
// ==================== task.go ====================
func makeTask() *models.Task {
due := time.Date(2025, 7, 1, 0, 0, 0, 0, time.UTC)
catID := uint(1)
priID := uint(2)
freqID := uint(3)
return &models.Task{
BaseModel: models.BaseModel{ID: 100, CreatedAt: fixedNow, UpdatedAt: fixedNow},
ResidenceID: 10,
CreatedByID: 1,
CreatedBy: *makeUser(),
Title: "Fix roof",
Description: "Repair leak",
CategoryID: &catID,
Category: &models.TaskCategory{BaseModel: models.BaseModel{ID: catID}, Name: "Exterior", Icon: "roof", Color: "#FF0000", DisplayOrder: 1},
PriorityID: &priID,
Priority: &models.TaskPriority{BaseModel: models.BaseModel{ID: priID}, Name: "High", Level: 3, Color: "#FF0000", DisplayOrder: 1},
FrequencyID: &freqID,
Frequency: &models.TaskFrequency{BaseModel: models.BaseModel{ID: freqID}, Name: "Monthly", Days: intPtr(30), DisplayOrder: 1},
DueDate: &due,
}
}
func TestNewTaskResponse_BasicFields(t *testing.T) {
task := makeTask()
resp := NewTaskResponseWithTime(task, 30, fixedNow)
if resp.ID != 100 {
t.Errorf("ID = %d", resp.ID)
}
if resp.Title != "Fix roof" {
t.Errorf("Title = %q", resp.Title)
}
if resp.CreatedBy == nil {
t.Error("CreatedBy should not be nil")
}
if resp.Category == nil {
t.Error("Category should not be nil")
}
if resp.Priority == nil {
t.Error("Priority should not be nil")
}
if resp.Frequency == nil {
t.Error("Frequency should not be nil")
}
if resp.KanbanColumn == "" {
t.Error("KanbanColumn should not be empty")
}
}
func TestNewTaskResponse_NilAssociations(t *testing.T) {
task := &models.Task{
BaseModel: models.BaseModel{ID: 200},
ResidenceID: 10,
CreatedByID: 1,
Title: "Simple task",
}
resp := NewTaskResponseWithTime(task, 30, fixedNow)
if resp.CreatedBy != nil {
t.Error("CreatedBy should be nil when CreatedBy.ID is 0")
}
if resp.Category != nil {
t.Error("Category should be nil")
}
if resp.Priority != nil {
t.Error("Priority should be nil")
}
if resp.Frequency != nil {
t.Error("Frequency should be nil")
}
if resp.AssignedTo != nil {
t.Error("AssignedTo should be nil")
}
}
func TestNewTaskResponse_WithCompletions(t *testing.T) {
task := makeTask()
task.Completions = []models.TaskCompletion{
{BaseModel: models.BaseModel{ID: 1}, TaskID: 100, CompletedAt: fixedNow, CompletedByID: 1},
{BaseModel: models.BaseModel{ID: 2}, TaskID: 100, CompletedAt: fixedNow, CompletedByID: 1},
}
resp := NewTaskResponseWithTime(task, 30, fixedNow)
if resp.CompletionCount != 2 {
t.Errorf("CompletionCount = %d, want 2", resp.CompletionCount)
}
}
func TestNewTaskResponseWithTime_KanbanColumn(t *testing.T) {
task := makeTask()
// due date is July 1, now is June 15 → 16 days away → due_soon (within 30 days)
resp := NewTaskResponseWithTime(task, 30, fixedNow)
if resp.KanbanColumn == "" {
t.Error("KanbanColumn should be set")
}
}
func TestNewTaskListResponse(t *testing.T) {
tasks := []models.Task{
{BaseModel: models.BaseModel{ID: 1}, Title: "A"},
{BaseModel: models.BaseModel{ID: 2}, Title: "B"},
}
results := NewTaskListResponse(tasks)
if len(results) != 2 {
t.Errorf("len = %d, want 2", len(results))
}
}
func TestNewTaskListResponse_Empty(t *testing.T) {
results := NewTaskListResponse([]models.Task{})
if len(results) != 0 {
t.Errorf("len = %d, want 0", len(results))
}
}
func TestNewTaskCompletionResponse_WithImages(t *testing.T) {
c := &models.TaskCompletion{
BaseModel: models.BaseModel{ID: 50},
TaskID: 100,
CompletedByID: 1,
CompletedBy: *makeUser(),
CompletedAt: fixedNow,
Notes: "done",
Images: []models.TaskCompletionImage{
{BaseModel: models.BaseModel{ID: 1}, ImageURL: "http://img1.jpg", Caption: "before"},
{BaseModel: models.BaseModel{ID: 2}, ImageURL: "http://img2.jpg", Caption: "after"},
},
}
resp := NewTaskCompletionResponse(c)
if resp.CompletedBy == nil {
t.Error("CompletedBy should not be nil")
}
if len(resp.Images) != 2 {
t.Errorf("Images len = %d, want 2", len(resp.Images))
}
if resp.Images[0].MediaURL != "/api/media/completion-image/1" {
t.Errorf("MediaURL = %q", resp.Images[0].MediaURL)
}
}
func TestNewTaskCompletionResponse_EmptyImages(t *testing.T) {
c := &models.TaskCompletion{
BaseModel: models.BaseModel{ID: 51},
TaskID: 100,
CompletedByID: 1,
CompletedAt: fixedNow,
}
resp := NewTaskCompletionResponse(c)
if resp.Images == nil {
t.Error("Images should be empty slice, not nil")
}
if len(resp.Images) != 0 {
t.Errorf("Images len = %d, want 0", len(resp.Images))
}
}
func TestNewKanbanBoardResponse(t *testing.T) {
board := &models.KanbanBoard{
Columns: []models.KanbanColumn{
{
Name: "overdue",
DisplayName: "Overdue",
Color: "#FF0000",
Tasks: []models.Task{{BaseModel: models.BaseModel{ID: 1}, Title: "A"}},
Count: 1,
},
},
DaysThreshold: 30,
}
resp := NewKanbanBoardResponse(board, 10, fixedNow)
if len(resp.Columns) != 1 {
t.Fatalf("Columns len = %d", len(resp.Columns))
}
if resp.ResidenceID != "10" {
t.Errorf("ResidenceID = %q, want '10'", resp.ResidenceID)
}
if resp.Columns[0].Count != 1 {
t.Errorf("Count = %d", resp.Columns[0].Count)
}
}
func TestNewKanbanBoardResponseForAll(t *testing.T) {
board := &models.KanbanBoard{
Columns: []models.KanbanColumn{},
DaysThreshold: 30,
}
resp := NewKanbanBoardResponseForAll(board, fixedNow)
if resp.ResidenceID != "all" {
t.Errorf("ResidenceID = %q, want 'all'", resp.ResidenceID)
}
}
func TestDetermineKanbanColumn_Delegates(t *testing.T) {
task := &models.Task{
BaseModel: models.BaseModel{ID: 1},
Title: "test",
}
col := DetermineKanbanColumn(task, 30)
if col == "" {
t.Error("expected non-empty column")
}
}
func TestNewTaskCompletionWithTaskResponse(t *testing.T) {
c := &models.TaskCompletion{
BaseModel: models.BaseModel{ID: 1},
TaskID: 100,
CompletedByID: 1,
CompletedAt: fixedNow,
}
task := makeTask()
resp := NewTaskCompletionWithTaskResponseWithTime(c, task, 30, fixedNow)
if resp.Task == nil {
t.Error("Task should not be nil")
}
if resp.Task.ID != 100 {
t.Errorf("Task.ID = %d", resp.Task.ID)
}
}
func TestNewTaskCompletionWithTaskResponse_NilTask(t *testing.T) {
c := &models.TaskCompletion{
BaseModel: models.BaseModel{ID: 1},
TaskID: 100,
CompletedByID: 1,
CompletedAt: fixedNow,
}
resp := NewTaskCompletionWithTaskResponseWithTime(c, nil, 30, fixedNow)
if resp.Task != nil {
t.Error("Task should be nil")
}
}
func TestNewTaskCompletionListResponse(t *testing.T) {
completions := []models.TaskCompletion{
{BaseModel: models.BaseModel{ID: 1}, TaskID: 100, CompletedAt: fixedNow, CompletedByID: 1},
}
results := NewTaskCompletionListResponse(completions)
if len(results) != 1 {
t.Errorf("len = %d", len(results))
}
}
func TestNewTaskCategoryResponse_Nil(t *testing.T) {
if NewTaskCategoryResponse(nil) != nil {
t.Error("expected nil")
}
}
func TestNewTaskPriorityResponse_Nil(t *testing.T) {
if NewTaskPriorityResponse(nil) != nil {
t.Error("expected nil")
}
}
func TestNewTaskFrequencyResponse_Nil(t *testing.T) {
if NewTaskFrequencyResponse(nil) != nil {
t.Error("expected nil")
}
}
func TestNewTaskUserResponse_Nil(t *testing.T) {
if NewTaskUserResponse(nil) != nil {
t.Error("expected nil")
}
}
// ==================== contractor.go ====================
func makeContractor() *models.Contractor {
resID := uint(10)
return &models.Contractor{
BaseModel: models.BaseModel{ID: 5, CreatedAt: fixedNow, UpdatedAt: fixedNow},
ResidenceID: &resID,
CreatedByID: 1,
CreatedBy: *makeUser(),
Name: "Bob's Plumbing",
Company: "Bob Co",
Phone: "555-1234",
Email: "bob@plumb.com",
Rating: float64Ptr(4.5),
IsFavorite: true,
IsActive: true,
Specialties: []models.ContractorSpecialty{
{BaseModel: models.BaseModel{ID: 1}, Name: "Plumbing", Icon: "wrench", DisplayOrder: 1},
},
Tasks: []models.Task{{BaseModel: models.BaseModel{ID: 1}}, {BaseModel: models.BaseModel{ID: 2}}},
}
}
func TestNewContractorResponse_BasicFields(t *testing.T) {
c := makeContractor()
resp := NewContractorResponse(c)
if resp.ID != 5 {
t.Errorf("ID = %d", resp.ID)
}
if resp.Name != "Bob's Plumbing" {
t.Errorf("Name = %q", resp.Name)
}
if resp.AddedBy != 1 {
t.Errorf("AddedBy = %d, want 1", resp.AddedBy)
}
if resp.CreatedBy == nil {
t.Error("CreatedBy should not be nil")
}
if resp.TaskCount != 2 {
t.Errorf("TaskCount = %d, want 2", resp.TaskCount)
}
}
func TestNewContractorResponse_WithSpecialties(t *testing.T) {
c := makeContractor()
resp := NewContractorResponse(c)
if len(resp.Specialties) != 1 {
t.Fatalf("Specialties len = %d", len(resp.Specialties))
}
if resp.Specialties[0].Name != "Plumbing" {
t.Errorf("Specialty name = %q", resp.Specialties[0].Name)
}
}
func TestNewContractorListResponse(t *testing.T) {
contractors := []models.Contractor{*makeContractor()}
results := NewContractorListResponse(contractors)
if len(results) != 1 {
t.Errorf("len = %d", len(results))
}
}
func TestNewContractorUserResponse_Nil(t *testing.T) {
if NewContractorUserResponse(nil) != nil {
t.Error("expected nil")
}
}
func TestNewContractorSpecialtyResponse(t *testing.T) {
s := &models.ContractorSpecialty{
BaseModel: models.BaseModel{ID: 1},
Name: "Electrical",
Description: "Electrical work",
Icon: "bolt",
DisplayOrder: 2,
}
resp := NewContractorSpecialtyResponse(s)
if resp.Name != "Electrical" || resp.Icon != "bolt" {
t.Errorf("unexpected: %+v", resp)
}
}
// ==================== document.go ====================
func makeDocument() *models.Document {
price := decimal.NewFromFloat(99.99)
return &models.Document{
BaseModel: models.BaseModel{ID: 20, CreatedAt: fixedNow, UpdatedAt: fixedNow},
ResidenceID: 10,
CreatedByID: 1,
CreatedBy: *makeUser(),
Title: "Warranty",
Description: "Roof warranty",
DocumentType: "warranty",
FileName: "warranty.pdf",
FileSize: func() *int64 { v := int64(1024); return &v }(),
MimeType: "application/pdf",
PurchasePrice: &price,
IsActive: true,
Images: []models.DocumentImage{
{BaseModel: models.BaseModel{ID: 1}, ImageURL: "http://img.jpg", Caption: "page 1"},
},
}
}
func TestNewDocumentResponse_MediaURL(t *testing.T) {
d := makeDocument()
resp := NewDocumentResponse(d)
want := fmt.Sprintf("/api/media/document/%d", d.ID)
if resp.MediaURL != want {
t.Errorf("MediaURL = %q, want %q", resp.MediaURL, want)
}
if resp.Residence != resp.ResidenceID {
t.Error("Residence alias should equal ResidenceID")
}
}
func TestNewDocumentResponse_WithImages(t *testing.T) {
d := makeDocument()
resp := NewDocumentResponse(d)
if len(resp.Images) != 1 {
t.Fatalf("Images len = %d", len(resp.Images))
}
if resp.Images[0].MediaURL != "/api/media/document-image/1" {
t.Errorf("Image MediaURL = %q", resp.Images[0].MediaURL)
}
}
func TestNewDocumentResponse_EmptyImageURL(t *testing.T) {
d := makeDocument()
d.Images = []models.DocumentImage{
{BaseModel: models.BaseModel{ID: 5}, ImageURL: "", Caption: "missing"},
}
resp := NewDocumentResponse(d)
if resp.Images[0].Error != "image source URL is missing" {
t.Errorf("Error = %q", resp.Images[0].Error)
}
}
func TestNewDocumentListResponse(t *testing.T) {
docs := []models.Document{*makeDocument()}
results := NewDocumentListResponse(docs)
if len(results) != 1 {
t.Errorf("len = %d", len(results))
}
}
func TestNewDocumentUserResponse_Nil(t *testing.T) {
if NewDocumentUserResponse(nil) != nil {
t.Error("expected nil")
}
}
// ==================== residence.go ====================
func makeResidence() *models.Residence {
propTypeID := uint(1)
return &models.Residence{
BaseModel: models.BaseModel{ID: 10, CreatedAt: fixedNow, UpdatedAt: fixedNow},
OwnerID: 1,
Owner: *makeUser(),
Name: "My House",
PropertyTypeID: &propTypeID,
PropertyType: &models.ResidenceType{BaseModel: models.BaseModel{ID: 1}, Name: "House"},
StreetAddress: "123 Main St",
City: "Springfield",
StateProvince: "IL",
PostalCode: "62701",
Country: "USA",
Bedrooms: intPtr(3),
IsPrimary: true,
IsActive: true,
HasPool: true,
HeatingType: strPtr("central"),
Users: []models.User{
{ID: 1, Username: "john", Email: "john@example.com"},
{ID: 2, Username: "jane", Email: "jane@example.com"},
},
}
}
func TestNewResidenceResponse_AllFields(t *testing.T) {
r := makeResidence()
resp := NewResidenceResponse(r)
if resp.ID != 10 {
t.Errorf("ID = %d", resp.ID)
}
if resp.Name != "My House" {
t.Errorf("Name = %q", resp.Name)
}
if resp.Owner == nil {
t.Error("Owner should not be nil")
}
if resp.PropertyType == nil {
t.Error("PropertyType should not be nil")
}
if !resp.HasPool {
t.Error("HasPool should be true")
}
if resp.HeatingType == nil || *resp.HeatingType != "central" {
t.Error("HeatingType should be 'central'")
}
}
func TestNewResidenceResponse_WithUsers(t *testing.T) {
r := makeResidence()
resp := NewResidenceResponse(r)
if len(resp.Users) != 2 {
t.Errorf("Users len = %d, want 2", len(resp.Users))
}
}
func TestNewResidenceResponse_NoUsers(t *testing.T) {
r := makeResidence()
r.Users = nil
resp := NewResidenceResponse(r)
if resp.Users == nil {
t.Error("Users should be empty slice, not nil")
}
if len(resp.Users) != 0 {
t.Errorf("Users len = %d, want 0", len(resp.Users))
}
}
func TestNewResidenceListResponse(t *testing.T) {
residences := []models.Residence{*makeResidence()}
results := NewResidenceListResponse(residences)
if len(results) != 1 {
t.Errorf("len = %d", len(results))
}
}
func TestNewResidenceUserResponse_Nil(t *testing.T) {
if NewResidenceUserResponse(nil) != nil {
t.Error("expected nil")
}
}
func TestNewResidenceTypeResponse_Nil(t *testing.T) {
if NewResidenceTypeResponse(nil) != nil {
t.Error("expected nil")
}
}
func TestNewShareCodeResponse(t *testing.T) {
sc := &models.ResidenceShareCode{
BaseModel: models.BaseModel{ID: 1, CreatedAt: fixedNow},
Code: "ABC123",
ResidenceID: 10,
CreatedByID: 1,
IsActive: true,
ExpiresAt: timePtr(fixedNow.Add(24 * time.Hour)),
}
resp := NewShareCodeResponse(sc)
if resp.Code != "ABC123" {
t.Errorf("Code = %q", resp.Code)
}
if resp.ResidenceID != 10 {
t.Errorf("ResidenceID = %d", resp.ResidenceID)
}
}
// ==================== task_template.go ====================
func TestParseTags_Empty(t *testing.T) {
result := parseTags("")
if len(result) != 0 {
t.Errorf("len = %d, want 0", len(result))
}
}
func TestParseTags_Multiple(t *testing.T) {
result := parseTags("plumbing,electrical,roofing")
if len(result) != 3 {
t.Errorf("len = %d, want 3", len(result))
}
if result[0] != "plumbing" || result[1] != "electrical" || result[2] != "roofing" {
t.Errorf("unexpected tags: %v", result)
}
}
func TestParseTags_Whitespace(t *testing.T) {
result := parseTags(" plumbing , , electrical ")
if len(result) != 2 {
t.Errorf("len = %d, want 2 (should skip empty after trim)", len(result))
}
if result[0] != "plumbing" || result[1] != "electrical" {
t.Errorf("unexpected tags: %v", result)
}
}
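The parseTags tests pin down the expected behavior: split the stored comma-separated Tags string, trim whitespace, and drop empty entries. A sketch consistent with those assertions (the function under test lives in the responses package):

```go
package main

import (
	"fmt"
	"strings"
)

// parseTags splits a comma-separated tag string, trimming whitespace and
// skipping entries that are empty after the trim.
func parseTags(raw string) []string {
	var tags []string
	for _, part := range strings.Split(raw, ",") {
		if p := strings.TrimSpace(part); p != "" {
			tags = append(tags, p)
		}
	}
	return tags
}

func main() {
	fmt.Println(parseTags(" plumbing , , electrical ")) // [plumbing electrical]
	fmt.Println(len(parseTags("")))                     // 0
}
```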
func makeTemplate(catID *uint, cat *models.TaskCategory) models.TaskTemplate {
return models.TaskTemplate{
BaseModel: models.BaseModel{ID: 1, CreatedAt: fixedNow, UpdatedAt: fixedNow},
Title: "Clean Gutters",
Description: "Remove debris",
CategoryID: catID,
Category: cat,
IconIOS: "leaf",
IconAndroid: "leaf_android",
Tags: "exterior,seasonal",
DisplayOrder: 1,
IsActive: true,
}
}
func TestNewTaskTemplateResponse(t *testing.T) {
catID := uint(1)
cat := &models.TaskCategory{BaseModel: models.BaseModel{ID: 1}, Name: "Exterior"}
tmpl := makeTemplate(&catID, cat)
resp := NewTaskTemplateResponse(&tmpl)
if resp.Title != "Clean Gutters" {
t.Errorf("Title = %q", resp.Title)
}
if len(resp.Tags) != 2 {
t.Errorf("Tags len = %d", len(resp.Tags))
}
if resp.Category == nil {
t.Error("Category should not be nil")
}
}
func TestNewTaskTemplatesGroupedResponse_Grouping(t *testing.T) {
catID := uint(1)
cat := &models.TaskCategory{BaseModel: models.BaseModel{ID: 1}, Name: "Exterior"}
templates := []models.TaskTemplate{
makeTemplate(&catID, cat),
makeTemplate(&catID, cat),
}
resp := NewTaskTemplatesGroupedResponse(templates)
if len(resp.Categories) != 1 {
t.Fatalf("Categories len = %d, want 1", len(resp.Categories))
}
if resp.Categories[0].CategoryName != "Exterior" {
t.Errorf("CategoryName = %q", resp.Categories[0].CategoryName)
}
if resp.Categories[0].Count != 2 {
t.Errorf("Count = %d, want 2", resp.Categories[0].Count)
}
if resp.TotalCount != 2 {
t.Errorf("TotalCount = %d, want 2", resp.TotalCount)
}
}
func TestNewTaskTemplatesGroupedResponse_Uncategorized(t *testing.T) {
tmpl := makeTemplate(nil, nil)
resp := NewTaskTemplatesGroupedResponse([]models.TaskTemplate{tmpl})
if len(resp.Categories) != 1 {
t.Fatalf("Categories len = %d", len(resp.Categories))
}
if resp.Categories[0].CategoryName != "Uncategorized" {
t.Errorf("CategoryName = %q", resp.Categories[0].CategoryName)
}
}
func TestNewTaskTemplateListResponse(t *testing.T) {
templates := []models.TaskTemplate{makeTemplate(nil, nil)}
results := NewTaskTemplateListResponse(templates)
if len(results) != 1 {
t.Errorf("len = %d", len(results))
}
}
// ==================== DetermineKanbanColumnWithTime ====================
func TestDetermineKanbanColumnWithTime(t *testing.T) {
task := makeTask()
col := DetermineKanbanColumnWithTime(task, 30, fixedNow)
if col == "" {
t.Error("expected non-empty column")
}
}
// ==================== NewTaskResponse uses NewTaskResponseWithThreshold ====================
func TestNewTaskResponse_UsesDefault30(t *testing.T) {
task := makeTask()
resp := NewTaskResponse(task)
if resp.ID != 100 {
t.Errorf("ID = %d", resp.ID)
}
// Just verify it doesn't panic and produces a response
}
// ==================== NewTaskCompletionWithTaskResponse UTC variant ====================
func TestNewTaskCompletionWithTaskResponse_UTC(t *testing.T) {
c := &models.TaskCompletion{
BaseModel: models.BaseModel{ID: 1},
TaskID: 100,
CompletedByID: 1,
CompletedAt: fixedNow,
}
task := makeTask()
resp := NewTaskCompletionWithTaskResponse(c, task, 30)
if resp.Task == nil {
t.Error("Task should not be nil")
}
}


@@ -0,0 +1,15 @@
package responses
// TaskSuggestionResponse represents a single task suggestion with relevance scoring
type TaskSuggestionResponse struct {
Template TaskTemplateResponse `json:"template"`
RelevanceScore float64 `json:"relevance_score"`
MatchReasons []string `json:"match_reasons"`
}
// TaskSuggestionsResponse represents the full suggestions response
type TaskSuggestionsResponse struct {
Suggestions []TaskSuggestionResponse `json:"suggestions"`
TotalCount int `json:"total_count"`
ProfileCompleteness float64 `json:"profile_completeness"`
}


@@ -83,9 +83,10 @@ type TaskResponse struct {
Category *TaskCategoryResponse `json:"category,omitempty"`
PriorityID *uint `json:"priority_id"`
Priority *TaskPriorityResponse `json:"priority,omitempty"`
-FrequencyID *uint `json:"frequency_id"`
-Frequency *TaskFrequencyResponse `json:"frequency,omitempty"`
-InProgress bool `json:"in_progress"`
+FrequencyID *uint `json:"frequency_id"`
+Frequency *TaskFrequencyResponse `json:"frequency,omitempty"`
+CustomIntervalDays *int `json:"custom_interval_days"` // For "Custom" frequency, user-specified days
+InProgress bool `json:"in_progress"`
DueDate *time.Time `json:"due_date"`
NextDueDate *time.Time `json:"next_due_date"` // For recurring tasks, updated after each completion
EstimatedCost *decimal.Decimal `json:"estimated_cost"`
@@ -94,12 +95,22 @@ type TaskResponse struct {
IsCancelled bool `json:"is_cancelled"`
IsArchived bool `json:"is_archived"`
ParentTaskID *uint `json:"parent_task_id"`
TemplateID *uint `json:"template_id,omitempty"` // Backlink to the TaskTemplate this task was created from
CompletionCount int `json:"completion_count"`
KanbanColumn string `json:"kanban_column,omitempty"` // Which kanban column this task belongs to
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
}
// BulkCreateTasksResponse is returned by POST /api/tasks/bulk/.
// All entries are created in a single transaction — if any insert fails the
// whole batch is rolled back and no partial state is visible.
type BulkCreateTasksResponse struct {
Tasks []TaskResponse `json:"tasks"`
Summary TotalSummary `json:"summary"`
CreatedCount int `json:"created_count"`
}
// Note: Pagination removed - list endpoints now return arrays directly
// KanbanColumnResponse represents a kanban column
@@ -236,8 +247,9 @@ func newTaskResponseInternal(t *models.Task, daysThreshold int, now time.Time) T
Description: t.Description,
CategoryID: t.CategoryID,
PriorityID: t.PriorityID,
-FrequencyID: t.FrequencyID,
-InProgress: t.InProgress,
+FrequencyID: t.FrequencyID,
+CustomIntervalDays: t.CustomIntervalDays,
+InProgress: t.InProgress,
AssignedToID: t.AssignedToID,
DueDate: t.DueDate,
NextDueDate: t.NextDueDate,
@@ -247,6 +259,7 @@ func newTaskResponseInternal(t *models.Task, daysThreshold int, now time.Time) T
IsCancelled: t.IsCancelled,
IsArchived: t.IsArchived,
ParentTaskID: t.ParentTaskID,
TemplateID: t.TaskTemplateID,
CompletionCount: predicates.GetCompletionCount(t),
KanbanColumn: DetermineKanbanColumnWithTime(t, daysThreshold, now),
CreatedAt: t.CreatedAt,


@@ -21,8 +21,6 @@ type TaskTemplateResponse struct {
Tags []string `json:"tags"`
DisplayOrder int `json:"display_order"`
IsActive bool `json:"is_active"`
-RegionID *uint `json:"region_id,omitempty"`
-RegionName string `json:"region_name,omitempty"`
CreatedAt time.Time `json:"created_at"`
UpdatedAt time.Time `json:"updated_at"`
}
@@ -65,11 +63,6 @@ func NewTaskTemplateResponse(t *models.TaskTemplate) TaskTemplateResponse {
resp.Frequency = NewTaskFrequencyResponse(t.Frequency)
}
-if len(t.Regions) > 0 {
-resp.RegionID = &t.Regions[0].ID
-resp.RegionName = t.Regions[0].Name
-}
return resp
}


@@ -0,0 +1,105 @@
package echohelpers
import (
"net/http"
"net/http/httptest"
"testing"
"github.com/labstack/echo/v4"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestDefaultQuery(t *testing.T) {
tests := []struct {
name string
query string
key string
defaultValue string
expected string
}{
{"returns value when present", "/?status=active", "status", "all", "active"},
{"returns default when absent", "/", "status", "all", "all"},
{"returns default for empty value", "/?status=", "status", "all", "all"},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
e := echo.New()
req := httptest.NewRequest(http.MethodGet, tc.query, nil)
rec := httptest.NewRecorder()
c := e.NewContext(req, rec)
result := DefaultQuery(c, tc.key, tc.defaultValue)
assert.Equal(t, tc.expected, result)
})
}
}
func TestParseUintParam(t *testing.T) {
tests := []struct {
name string
paramValue string
expected uint
expectError bool
}{
{"valid uint", "42", 42, false},
{"zero", "0", 0, false},
{"invalid string", "abc", 0, true},
{"negative", "-1", 0, true},
{"empty", "", 0, true},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
e := echo.New()
req := httptest.NewRequest(http.MethodGet, "/", nil)
rec := httptest.NewRecorder()
c := e.NewContext(req, rec)
c.SetParamNames("id")
c.SetParamValues(tc.paramValue)
result, err := ParseUintParam(c, "id")
if tc.expectError {
require.Error(t, err)
} else {
require.NoError(t, err)
assert.Equal(t, tc.expected, result)
}
})
}
}
func TestParseIntParam(t *testing.T) {
tests := []struct {
name string
paramValue string
expected int
expectError bool
}{
{"valid int", "42", 42, false},
{"zero", "0", 0, false},
{"negative", "-5", -5, false},
{"invalid string", "abc", 0, true},
{"empty", "", 0, true},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
e := echo.New()
req := httptest.NewRequest(http.MethodGet, "/", nil)
rec := httptest.NewRecorder()
c := e.NewContext(req, rec)
c.SetParamNames("id")
c.SetParamValues(tc.paramValue)
result, err := ParseIntParam(c, "id")
if tc.expectError {
require.Error(t, err)
} else {
require.NoError(t, err)
assert.Equal(t, tc.expected, result)
}
})
}
}


@@ -23,6 +23,7 @@ type AuthHandler struct {
appleAuthService *services.AppleAuthService
googleAuthService *services.GoogleAuthService
storageService *services.StorageService
auditService *services.AuditService
}
// NewAuthHandler creates a new auth handler
@@ -49,6 +50,11 @@ func (h *AuthHandler) SetStorageService(storageService *services.StorageService)
h.storageService = storageService
}
// SetAuditService sets the audit service for logging security events
func (h *AuthHandler) SetAuditService(auditService *services.AuditService) {
h.auditService = auditService
}
// Login handles POST /api/auth/login/
func (h *AuthHandler) Login(c echo.Context) error {
var req requests.LoginRequest
@@ -62,9 +68,19 @@ func (h *AuthHandler) Login(c echo.Context) error {
response, err := h.authService.Login(&req)
if err != nil {
log.Debug().Err(err).Str("identifier", req.Username).Msg("Login failed")
if h.auditService != nil {
h.auditService.LogEvent(c, nil, services.AuditEventLoginFailed, map[string]interface{}{
"identifier": req.Username,
})
}
return err
}
if h.auditService != nil {
userID := response.User.ID
h.auditService.LogEvent(c, &userID, services.AuditEventLogin, nil)
}
return c.JSON(http.StatusOK, response)
}
@@ -84,6 +100,14 @@ func (h *AuthHandler) Register(c echo.Context) error {
return err
}
if h.auditService != nil {
userID := response.User.ID
h.auditService.LogEvent(c, &userID, services.AuditEventRegister, map[string]interface{}{
"username": req.Username,
"email": req.Email,
})
}
// Send welcome email with confirmation code (async)
if h.emailService != nil && confirmationCode != "" {
go func() {
@@ -108,6 +132,14 @@ func (h *AuthHandler) Logout(c echo.Context) error {
return apperrors.Unauthorized("error.not_authenticated")
}
// Log audit event before invalidating the token
if h.auditService != nil {
user := middleware.GetAuthUser(c)
if user != nil {
h.auditService.LogEvent(c, &user.ID, services.AuditEventLogout, nil)
}
}
// Invalidate token in database
if err := h.authService.Logout(token); err != nil {
log.Warn().Err(err).Msg("Failed to delete token from database")
@@ -270,6 +302,12 @@ func (h *AuthHandler) ForgotPassword(c echo.Context) error {
}()
}
if h.auditService != nil {
h.auditService.LogEvent(c, nil, services.AuditEventPasswordReset, map[string]interface{}{
"email": req.Email,
})
}
// Always return success to prevent email enumeration
return c.JSON(http.StatusOK, responses.ForgotPasswordResponse{
Message: "Password reset email sent",
@@ -314,6 +352,12 @@ func (h *AuthHandler) ResetPassword(c echo.Context) error {
return err
}
if h.auditService != nil {
h.auditService.LogEvent(c, nil, services.AuditEventPasswordChanged, map[string]interface{}{
"method": "reset_token",
})
}
return c.JSON(http.StatusOK, responses.ResetPasswordResponse{
Message: "Password reset successful",
})
@@ -413,6 +457,34 @@ func (h *AuthHandler) GoogleSignIn(c echo.Context) error {
return c.JSON(http.StatusOK, response)
}
// RefreshToken handles POST /api/auth/refresh/
func (h *AuthHandler) RefreshToken(c echo.Context) error {
user, err := middleware.MustGetAuthUser(c)
if err != nil {
return err
}
token := middleware.GetAuthToken(c)
if token == "" {
return apperrors.Unauthorized("error.not_authenticated")
}
response, err := h.authService.RefreshToken(token, user.ID)
if err != nil {
log.Debug().Err(err).Uint("user_id", user.ID).Msg("Token refresh failed")
return err
}
// If the token was refreshed (new token), invalidate the old one from cache
if response.Token != token && h.cache != nil {
if cacheErr := h.cache.InvalidateAuthToken(c.Request().Context(), token); cacheErr != nil {
log.Warn().Err(cacheErr).Msg("Failed to invalidate old token from cache during refresh")
}
}
return c.JSON(http.StatusOK, response)
}
// DeleteAccount handles DELETE /api/auth/account/
func (h *AuthHandler) DeleteAccount(c echo.Context) error {
user, err := middleware.MustGetAuthUser(c)
@@ -431,6 +503,14 @@ func (h *AuthHandler) DeleteAccount(c echo.Context) error {
return err
}
if h.auditService != nil {
h.auditService.LogEvent(c, &user.ID, services.AuditEventAccountDeleted, map[string]interface{}{
"user_id": user.ID,
"username": user.Username,
"email": user.Email,
})
}
// Delete files from disk (best effort, don't fail the request)
if h.storageService != nil && len(fileURLs) > 0 {
go func() {


@@ -38,7 +38,7 @@ func setupDeleteAccountHandler(t *testing.T) (*AuthHandler, *echo.Echo, *gorm.DB
func TestAuthHandler_DeleteAccount_EmailUser(t *testing.T) {
handler, e, db := setupDeleteAccountHandler(t)
-user := testutil.CreateTestUser(t, db, "deletetest", "delete@test.com", "password123")
+user := testutil.CreateTestUser(t, db, "deletetest", "delete@test.com", "Password123")
// Create profile for the user
profile := &models.UserProfile{UserID: user.ID, Verified: true}
@@ -52,7 +52,7 @@ func TestAuthHandler_DeleteAccount_EmailUser(t *testing.T) {
authGroup.DELETE("/account/", handler.DeleteAccount)
t.Run("successful deletion with correct password", func(t *testing.T) {
-password := "password123"
+password := "Password123"
req := map[string]interface{}{
"password": password,
}
@@ -84,7 +84,7 @@ func TestAuthHandler_DeleteAccount_EmailUser(t *testing.T) {
func TestAuthHandler_DeleteAccount_WrongPassword(t *testing.T) {
handler, e, db := setupDeleteAccountHandler(t)
-user := testutil.CreateTestUser(t, db, "wrongpw", "wrongpw@test.com", "password123")
+user := testutil.CreateTestUser(t, db, "wrongpw", "wrongpw@test.com", "Password123")
authGroup := e.Group("/api/auth")
authGroup.Use(testutil.MockAuthMiddleware(user))
@@ -105,7 +105,7 @@ func TestAuthHandler_DeleteAccount_WrongPassword(t *testing.T) {
func TestAuthHandler_DeleteAccount_MissingPassword(t *testing.T) {
handler, e, db := setupDeleteAccountHandler(t)
-user := testutil.CreateTestUser(t, db, "nopw", "nopw@test.com", "password123")
+user := testutil.CreateTestUser(t, db, "nopw", "nopw@test.com", "Password123")
authGroup := e.Group("/api/auth")
authGroup.Use(testutil.MockAuthMiddleware(user))
@@ -207,7 +207,7 @@ func TestAuthHandler_DeleteAccount_Unauthenticated(t *testing.T) {
t.Run("unauthenticated request returns 401", func(t *testing.T) {
req := map[string]interface{}{
-"password": "password123",
+"password": "Password123",
}
w := testutil.MakeRequest(e, "DELETE", "/api/auth/account/", req, "")


@@ -43,7 +43,7 @@ func TestAuthHandler_Register(t *testing.T) {
req := requests.RegisterRequest{
Username: "newuser",
Email: "new@test.com",
-Password: "password123",
+Password: "Password123",
FirstName: "New",
LastName: "User",
}
@@ -98,7 +98,7 @@ func TestAuthHandler_Register(t *testing.T) {
req := requests.RegisterRequest{
Username: "duplicate",
Email: "unique1@test.com",
-Password: "password123",
+Password: "Password123",
}
w := testutil.MakeRequest(e, "POST", "/api/auth/register/", req, "")
testutil.AssertStatusCode(t, w, http.StatusCreated)
@@ -117,7 +117,7 @@ func TestAuthHandler_Register(t *testing.T) {
req := requests.RegisterRequest{
Username: "user1",
Email: "duplicate@test.com",
-Password: "password123",
+Password: "Password123",
}
w := testutil.MakeRequest(e, "POST", "/api/auth/register/", req, "")
testutil.AssertStatusCode(t, w, http.StatusCreated)
@@ -142,7 +142,7 @@ func TestAuthHandler_Login(t *testing.T) {
registerReq := requests.RegisterRequest{
Username: "logintest",
Email: "login@test.com",
-Password: "password123",
+Password: "Password123",
FirstName: "Test",
LastName: "User",
}
@@ -152,7 +152,7 @@ func TestAuthHandler_Login(t *testing.T) {
t.Run("successful login with username", func(t *testing.T) {
req := requests.LoginRequest{
Username: "logintest",
-Password: "password123",
+Password: "Password123",
}
w := testutil.MakeRequest(e, "POST", "/api/auth/login/", req, "")
@@ -174,7 +174,7 @@ func TestAuthHandler_Login(t *testing.T) {
t.Run("successful login with email", func(t *testing.T) {
req := requests.LoginRequest{
Username: "login@test.com", // Using email as username
-Password: "password123",
+Password: "Password123",
}
w := testutil.MakeRequest(e, "POST", "/api/auth/login/", req, "")
@@ -199,7 +199,7 @@ func TestAuthHandler_Login(t *testing.T) {
t.Run("login with non-existent user", func(t *testing.T) {
req := requests.LoginRequest{
Username: "nonexistent",
-Password: "password123",
+Password: "Password123",
}
w := testutil.MakeRequest(e, "POST", "/api/auth/login/", req, "")
@@ -223,7 +223,7 @@ func TestAuthHandler_CurrentUser(t *testing.T) {
handler, e, userRepo := setupAuthHandler(t)
db := testutil.SetupTestDB(t)
-user := testutil.CreateTestUser(t, db, "metest", "me@test.com", "password123")
+user := testutil.CreateTestUser(t, db, "metest", "me@test.com", "Password123")
user.FirstName = "Test"
user.LastName = "User"
userRepo.Update(user)
@@ -251,7 +251,7 @@ func TestAuthHandler_UpdateProfile(t *testing.T) {
handler, e, userRepo := setupAuthHandler(t)
db := testutil.SetupTestDB(t)
-user := testutil.CreateTestUser(t, db, "updatetest", "update@test.com", "password123")
+user := testutil.CreateTestUser(t, db, "updatetest", "update@test.com", "Password123")
userRepo.Update(user)
authGroup := e.Group("/api/auth")
@@ -289,7 +289,7 @@ func TestAuthHandler_ForgotPassword(t *testing.T) {
registerReq := requests.RegisterRequest{
Username: "forgottest",
Email: "forgot@test.com",
-Password: "password123",
+Password: "Password123",
}
testutil.MakeRequest(e, "POST", "/api/auth/register/", registerReq, "")
@@ -323,7 +323,7 @@ func TestAuthHandler_Logout(t *testing.T) {
handler, e, userRepo := setupAuthHandler(t)
db := testutil.SetupTestDB(t)
-user := testutil.CreateTestUser(t, db, "logouttest", "logout@test.com", "password123")
+user := testutil.CreateTestUser(t, db, "logouttest", "logout@test.com", "Password123")
userRepo.Update(user)
authGroup := e.Group("/api/auth")
@@ -350,7 +350,7 @@ func TestAuthHandler_JSONResponses(t *testing.T) {
req := requests.RegisterRequest{
Username: "jsontest",
Email: "json@test.com",
-Password: "password123",
+Password: "Password123",
FirstName: "JSON",
LastName: "Test",
}


@@ -2,6 +2,7 @@ package handlers
import (
"encoding/json"
"fmt"
"net/http"
"testing"
@@ -180,3 +181,284 @@ func TestContractorHandler_CreateContractor_100Specialties_Returns400(t *testing
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
}
func TestContractorHandler_ListContractors(t *testing.T) {
handler, e, db := setupContractorHandler(t)
testutil.SeedLookupData(t, db)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
residence := testutil.CreateTestResidence(t, db, user.ID, "Test House")
testutil.CreateTestContractor(t, db, residence.ID, user.ID, "Plumber Joe")
testutil.CreateTestContractor(t, db, residence.ID, user.ID, "Electrician Bob")
authGroup := e.Group("/api/contractors")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.GET("/", handler.ListContractors)
t.Run("successful list", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", "/api/contractors/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response []map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
assert.Len(t, response, 2)
})
t.Run("user with no contractors returns empty", func(t *testing.T) {
otherUser := testutil.CreateTestUser(t, db, "other", "other@test.com", "Password123")
e2 := testutil.SetupTestRouter()
authGroup2 := e2.Group("/api/contractors")
authGroup2.Use(testutil.MockAuthMiddleware(otherUser))
authGroup2.GET("/", handler.ListContractors)
w := testutil.MakeRequest(e2, "GET", "/api/contractors/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response []map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
assert.Len(t, response, 0)
})
}
func TestContractorHandler_GetContractor(t *testing.T) {
handler, e, db := setupContractorHandler(t)
testutil.SeedLookupData(t, db)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
residence := testutil.CreateTestResidence(t, db, user.ID, "Test House")
contractor := testutil.CreateTestContractor(t, db, residence.ID, user.ID, "Plumber Joe")
authGroup := e.Group("/api/contractors")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.GET("/:id/", handler.GetContractor)
t.Run("successful get", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", fmt.Sprintf("/api/contractors/%d/", contractor.ID), nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
assert.Equal(t, "Plumber Joe", response["name"])
})
t.Run("not found returns 404", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", "/api/contractors/99999/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusNotFound)
})
t.Run("invalid id returns 400", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", "/api/contractors/invalid/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
}
func TestContractorHandler_UpdateContractor(t *testing.T) {
handler, e, db := setupContractorHandler(t)
testutil.SeedLookupData(t, db)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
residence := testutil.CreateTestResidence(t, db, user.ID, "Test House")
contractor := testutil.CreateTestContractor(t, db, residence.ID, user.ID, "Plumber Joe")
authGroup := e.Group("/api/contractors")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.PUT("/:id/", handler.UpdateContractor)
t.Run("successful update", func(t *testing.T) {
newName := "Plumber Joe Updated"
req := requests.UpdateContractorRequest{
Name: &newName,
}
w := testutil.MakeRequest(e, "PUT", fmt.Sprintf("/api/contractors/%d/", contractor.ID), req, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
assert.Equal(t, "Plumber Joe Updated", response["name"])
})
t.Run("invalid id returns 400", func(t *testing.T) {
newName := "Updated"
req := requests.UpdateContractorRequest{Name: &newName}
w := testutil.MakeRequest(e, "PUT", "/api/contractors/invalid/", req, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
t.Run("not found returns 404", func(t *testing.T) {
newName := "Updated"
req := requests.UpdateContractorRequest{Name: &newName}
w := testutil.MakeRequest(e, "PUT", "/api/contractors/99999/", req, "test-token")
testutil.AssertStatusCode(t, w, http.StatusNotFound)
})
}
func TestContractorHandler_DeleteContractor(t *testing.T) {
handler, e, db := setupContractorHandler(t)
testutil.SeedLookupData(t, db)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
residence := testutil.CreateTestResidence(t, db, user.ID, "Test House")
contractor := testutil.CreateTestContractor(t, db, residence.ID, user.ID, "Plumber Joe")
authGroup := e.Group("/api/contractors")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.DELETE("/:id/", handler.DeleteContractor)
t.Run("successful delete", func(t *testing.T) {
w := testutil.MakeRequest(e, "DELETE", fmt.Sprintf("/api/contractors/%d/", contractor.ID), nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
assert.Contains(t, response, "message")
})
t.Run("invalid id returns 400", func(t *testing.T) {
w := testutil.MakeRequest(e, "DELETE", "/api/contractors/invalid/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
t.Run("not found returns 404", func(t *testing.T) {
w := testutil.MakeRequest(e, "DELETE", "/api/contractors/99999/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusNotFound)
})
}
func TestContractorHandler_ToggleFavorite(t *testing.T) {
handler, e, db := setupContractorHandler(t)
testutil.SeedLookupData(t, db)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
residence := testutil.CreateTestResidence(t, db, user.ID, "Test House")
contractor := testutil.CreateTestContractor(t, db, residence.ID, user.ID, "Plumber Joe")
authGroup := e.Group("/api/contractors")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.POST("/:id/toggle-favorite/", handler.ToggleFavorite)
t.Run("toggle favorite on", func(t *testing.T) {
w := testutil.MakeRequest(e, "POST", fmt.Sprintf("/api/contractors/%d/toggle-favorite/", contractor.ID), nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
assert.Contains(t, response, "is_favorite")
})
t.Run("invalid id returns 400", func(t *testing.T) {
w := testutil.MakeRequest(e, "POST", "/api/contractors/invalid/toggle-favorite/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
t.Run("not found returns 404", func(t *testing.T) {
w := testutil.MakeRequest(e, "POST", "/api/contractors/99999/toggle-favorite/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusNotFound)
})
}
func TestContractorHandler_ListContractorsByResidence(t *testing.T) {
handler, e, db := setupContractorHandler(t)
testutil.SeedLookupData(t, db)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
residence := testutil.CreateTestResidence(t, db, user.ID, "Test House")
testutil.CreateTestContractor(t, db, residence.ID, user.ID, "Plumber Joe")
authGroup := e.Group("/api/contractors")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.GET("/by-residence/:residence_id/", handler.ListContractorsByResidence)
t.Run("successful list by residence", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", fmt.Sprintf("/api/contractors/by-residence/%d/", residence.ID), nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response []map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
assert.Len(t, response, 1)
})
t.Run("invalid residence id returns 400", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", "/api/contractors/by-residence/invalid/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
}
func TestContractorHandler_GetSpecialties(t *testing.T) {
handler, e, db := setupContractorHandler(t)
testutil.SeedLookupData(t, db)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
authGroup := e.Group("/api/contractors")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.GET("/specialties/", handler.GetSpecialties)
t.Run("successful list specialties", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", "/api/contractors/specialties/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response []map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
assert.Greater(t, len(response), 0)
})
}
func TestContractorHandler_GetContractorTasks(t *testing.T) {
handler, e, db := setupContractorHandler(t)
testutil.SeedLookupData(t, db)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
residence := testutil.CreateTestResidence(t, db, user.ID, "Test House")
contractor := testutil.CreateTestContractor(t, db, residence.ID, user.ID, "Plumber Joe")
authGroup := e.Group("/api/contractors")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.GET("/:id/tasks/", handler.GetContractorTasks)
t.Run("successful get tasks", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", fmt.Sprintf("/api/contractors/%d/tasks/", contractor.ID), nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
})
t.Run("invalid id returns 400", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", "/api/contractors/invalid/tasks/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
}
func TestContractorHandler_CreateContractor_WithOptionalFields(t *testing.T) {
handler, e, db := setupContractorHandler(t)
testutil.SeedLookupData(t, db)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
residence := testutil.CreateTestResidence(t, db, user.ID, "Test House")
authGroup := e.Group("/api/contractors")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.POST("/", handler.CreateContractor)
t.Run("creation with all optional fields", func(t *testing.T) {
rating := 4.5
isFavorite := true
req := requests.CreateContractorRequest{
ResidenceID: &residence.ID,
Name: "Full Contractor",
Company: "ABC Plumbing",
Phone: "555-1234",
Email: "contractor@test.com",
Notes: "Great work",
Rating: &rating,
IsFavorite: &isFavorite,
}
w := testutil.MakeRequest(e, "POST", "/api/contractors/", req, "test-token")
testutil.AssertStatusCode(t, w, http.StatusCreated)
var response map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
assert.Equal(t, "Full Contractor", response["name"])
assert.Equal(t, "ABC Plumbing", response["company"])
})
}
