Compare commits


10 Commits

Author SHA1 Message Date
Trey t
ca818e8478 Merge branch 'master' of github.com:akatreyt/MyCribAPI_GO
Some checks failed
Backend CI / Test (push) Has been cancelled
Backend CI / Contract Tests (push) Has been cancelled
Backend CI / Lint (push) Has been cancelled
Backend CI / Secret Scanning (push) Has been cancelled
Backend CI / Build (push) Has been cancelled
2026-04-01 20:45:43 -05:00
Trey T
bec880886b Coverage priorities 1-5: test pure functions, extract interfaces, mock-based handler tests
- Priority 1: Test NewSendEmailTask + NewSendPushTask (5 tests)
- Priority 2: Test customHTTPErrorHandler — all 15+ branches (21 tests)
- Priority 3: Extract Enqueuer interface + payload builders in worker pkg (5 tests)
- Priority 4: Extract ClassifyFile/ComputeRelPath in migrate-encrypt (6 tests)
- Priority 5: Define Handler interfaces, refactor to accept them, mock-based tests (14 tests)
- Fix .gitignore: /worker instead of worker to stop ignoring internal/worker/

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-01 20:30:09 -05:00
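The interface extraction described in Priority 3 can be sketched roughly as follows. This is an illustrative shape only — the `Enqueuer` method set, `mockEnqueuer`, and `notifyUser` names here are hypothetical, not the repo's actual definitions:

```go
package main

import "fmt"

// Enqueuer is a hypothetical narrow interface: handlers depend on it rather
// than on the concrete queue client, so tests can inject a mock.
type Enqueuer interface {
	EnqueueEmail(to, subject, body string) error
}

// mockEnqueuer records calls instead of talking to Redis.
type mockEnqueuer struct {
	sent []string
}

func (m *mockEnqueuer) EnqueueEmail(to, subject, body string) error {
	m.sent = append(m.sent, to)
	return nil
}

// notifyUser is a stand-in handler that accepts the interface.
func notifyUser(q Enqueuer, email string) error {
	return q.EnqueueEmail(email, "Welcome", "Hello!")
}

func main() {
	m := &mockEnqueuer{}
	_ = notifyUser(m, "a@example.com")
	fmt.Println(len(m.sent))
}
```

The real production implementation would wrap the actual queue client behind the same interface, so handler tests never need a running broker.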
Trey t
2e10822e5a Add S3-compatible storage backend (B2, MinIO, AWS S3)
Introduces a StorageBackend interface with local filesystem and S3
implementations. The StorageService delegates raw I/O to the backend
while keeping validation, encryption, and URL generation unchanged.

Backend selection is config-driven: set B2_ENDPOINT + B2_KEY_ID +
B2_APP_KEY + B2_BUCKET_NAME for S3 mode, or STORAGE_UPLOAD_DIR for
local mode. STORAGE_USE_SSL=false for in-cluster MinIO (HTTP).

All existing tests pass unchanged — the local backend preserves
identical behavior to the previous direct-filesystem implementation.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-30 21:31:24 -05:00
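The config-driven backend selection described in this commit can be illustrated with a minimal sketch. The `StorageConfig` struct and its field names below are assumptions for illustration, not the actual config type:

```go
package main

import "fmt"

// StorageConfig is a hypothetical stand-in for the real storage config struct.
type StorageConfig struct {
	B2Endpoint, B2KeyID, B2AppKey, B2BucketName string
	UploadDir                                   string
}

// IsS3 mirrors the rule quoted above: S3 mode requires all four of
// B2_ENDPOINT, B2_KEY_ID, B2_APP_KEY, and B2_BUCKET_NAME to be set.
func (c *StorageConfig) IsS3() bool {
	return c.B2Endpoint != "" && c.B2KeyID != "" && c.B2AppKey != "" && c.B2BucketName != ""
}

func main() {
	local := &StorageConfig{UploadDir: "/data/uploads"}
	s3 := &StorageConfig{B2Endpoint: "s3.example.com", B2KeyID: "k", B2AppKey: "a", B2BucketName: "b"}
	fmt.Println(local.IsS3(), s3.IsS3())
}
```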
Trey t
34553f3bec Add K3s dev deployment setup for single-node VPS
Mirrors the prod deploy-k3s/ setup but runs all services in-cluster
on a single node: PostgreSQL (replaces Neon), MinIO S3-compatible
storage (replaces B2), Redis, API, worker, and admin.

Includes fully automated setup scripts (00-init through 04-verify),
server hardening (SSH, fail2ban, ufw), Let's Encrypt TLS via Traefik,
network policies, RBAC, and security contexts matching prod.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-30 21:30:39 -05:00
Trey T
00fd674b56 Remove dead climate region code from suggestion engine
Suggestion engine now purely uses home profile features (heating,
cooling, pool, etc.) for template matching. Climate region field
and matching block removed — ZIP code is no longer collected.
2026-03-30 11:19:04 -05:00
Trey T
cb7080c460 Smart onboarding: residence home profile + suggestion engine
14 new optional residence fields (heating, cooling, water heater, roof,
pool, sprinkler, septic, fireplace, garage, basement, attic, exterior,
flooring, landscaping) with JSONB conditions on templates.

Suggestion engine scores templates against home profile: string match
+0.25, bool +0.3, property type +0.15, universal base 0.3. Graceful
degradation from minimal to full profile info.

GET /api/tasks/suggestions/?residence_id=X returns ranked templates.
54 template conditions across 44 templates in seed data.
8 suggestion service tests.
2026-03-30 09:02:03 -05:00
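The scoring rules in this commit message can be sketched as follows. The map-based condition/profile types and the `scoreTemplate` name are illustrative assumptions; only the weights (base 0.3, string +0.25, bool +0.3, property type +0.15) come from the commit:

```go
package main

import "fmt"

// scoreTemplate is a hypothetical sketch of the described scoring: every
// template starts from the universal base, and matching profile features
// add their weight. Missing profile keys are neutral (graceful degradation).
func scoreTemplate(conditions, profile map[string]any) float64 {
	score := 0.3 // universal base
	for key, want := range conditions {
		got, ok := profile[key]
		if !ok {
			continue // no info about this feature — neither reward nor penalize
		}
		switch want.(type) {
		case bool:
			if got == want {
				score += 0.3
			}
		case string:
			if got == want {
				if key == "property_type" {
					score += 0.15
				} else {
					score += 0.25
				}
			}
		}
	}
	return score
}

func main() {
	conds := map[string]any{"heating": "gas_furnace", "pool": true}
	profile := map[string]any{"heating": "gas_furnace", "pool": true}
	fmt.Printf("%.2f\n", scoreTemplate(conds, profile))
}
```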
Trey T
4c9a818bd9 Comprehensive TDD test suite for task logic — ~80 new tests
Predicates (20 cases): IsRecurring, IsOneTime, IsDueSoon,
HasCompletions, GetCompletionCount, IsUpcoming edge cases

Task creation (10): NextDueDate initialization, all frequency types,
past dates, all optional fields, access validation

One-time completion (8): NextDueDate→nil, InProgress reset,
notes/cost/rating, double completion, backdated completed_at

Recurring completion (16): Daily/Weekly/BiWeekly/Monthly/Quarterly/
Yearly/Custom frequencies, late/early completion timing, multiple
sequential completions, no-original-DueDate, CompletedFromColumn capture

QuickComplete (5): one-time, recurring, widget notes, 404, 403

State transitions (10): Cancel→Complete, Archive→Complete, InProgress
cycles, recurring full lifecycle, Archive→Unarchive column restore

Kanban column priority (7): verify chain priority order for all columns

Optimistic locking (7): correct/stale version, conflict on complete/
cancel/archive/mark-in-progress, rollback verification

Deletion (5): single/multi/middle completion deletion, NextDueDate
recalculation, InProgress restore behavior documented

Edge cases (9): boundary dates, late/early recurring, nil/zero frequency
days, custom intervals, version conflicts

Handler validation (4): rating bounds, title/description length,
custom interval validation

All 679 tests pass.
2026-03-26 17:36:50 -05:00
Trey T
7f0300cc95 Add custom_interval_days to TaskResponse DTO
Field existed in Task model but was missing from API response.
Aligns Go API contract with KMM mobile model.
2026-03-26 17:06:34 -05:00
Trey T
6df27f203b Add rate limit response headers (X-RateLimit-*, Retry-After)
Custom rate limiter replacing Echo built-in, with per-IP token bucket.
Every response includes X-RateLimit-Limit, Remaining, Reset headers.
429 responses additionally include Retry-After (seconds).
CORS updated to expose rate limit headers to mobile clients.
4 unit tests for header behavior and per-IP isolation.
2026-03-26 14:36:48 -05:00
Trey T
b679f28e55 Production hardening: security, resilience, observability, and compliance
Password complexity: custom validator requiring uppercase, lowercase, digit (min 8 chars)
Token expiry: 90-day token lifetime with refresh endpoint (60-90 day renewal window)
Health check: /api/health/ now pings Postgres + Redis, returns 503 on failure
Audit logging: async audit_log table for auth events (login, register, delete, etc.)
Circuit breaker: APNs/FCM push sends wrapped with 5-failure threshold, 30s recovery
FK indexes: 27 missing foreign key indexes across all tables (migration 017)
CSP header: default-src 'none'; frame-ancestors 'none'
Gzip compression: level 5 with media endpoint skipper
Prometheus metrics: /metrics endpoint using existing monitoring service
External timeouts: 15s push, 30s SMTP, context timeouts on all external calls

Migrations: 016 (token created_at), 017 (FK indexes), 018 (audit_log)
Tests: circuit breaker (15), audit service (8), token refresh (7), health (4),
       middleware expiry (5), validator (new)
2026-03-26 14:05:28 -05:00
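The password complexity rule from this commit (uppercase + lowercase + digit, minimum 8 characters) can be sketched like this; the `validPassword` name is illustrative, not the repo's actual validator:

```go
package main

import (
	"fmt"
	"unicode"
)

// validPassword is a hypothetical sketch of the stated rule: at least
// 8 characters containing an uppercase letter, a lowercase letter, and a digit.
func validPassword(p string) bool {
	if len([]rune(p)) < 8 {
		return false
	}
	var upper, lower, digit bool
	for _, r := range p {
		switch {
		case unicode.IsUpper(r):
			upper = true
		case unicode.IsLower(r):
			lower = true
		case unicode.IsDigit(r):
			digit = true
		}
	}
	return upper && lower && digit
}

func main() {
	fmt.Println(validPassword("S3curePwd"), validPassword("short1A"), validPassword("alllower1"))
}
```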
189 changed files with 32506 additions and 975 deletions


@@ -12,7 +12,9 @@
       "Bash(git add:*)",
       "Bash(docker ps:*)",
       "Bash(git commit:*)",
-      "Bash(git push:*)"
+      "Bash(git push:*)",
+      "Bash(docker info:*)",
+      "Bash(curl:*)"
     ]
   },
   "enableAllProjectMcpServers": true,

.gitignore

@@ -6,7 +6,7 @@
 # Binaries
 bin/
 api
-worker
+/worker
 /admin
 !admin/
 *.exe


@@ -122,19 +122,13 @@ func main() {
 			Msg("Email service not configured - emails will not be sent")
 	}
-	// Initialize storage service for file uploads
+	// Initialize storage service for file uploads (local filesystem or S3-compatible)
 	var storageService *services.StorageService
-	if cfg.Storage.UploadDir != "" {
+	if cfg.Storage.UploadDir != "" || cfg.Storage.IsS3() {
 		storageService, err = services.NewStorageService(&cfg.Storage)
 		if err != nil {
 			log.Warn().Err(err).Msg("Failed to initialize storage service - uploads disabled")
-		} else {
-			log.Info().
-				Str("upload_dir", cfg.Storage.UploadDir).
-				Str("base_url", cfg.Storage.BaseURL).
-				Int64("max_file_size", cfg.Storage.MaxFileSize).
-				Msg("Storage service initialized")
 		}
 		// Initialize file encryption at rest if configured
 		if cfg.Storage.EncryptionKey != "" {
 			encSvc, encErr := services.NewEncryptionService(cfg.Storage.EncryptionKey)


@@ -0,0 +1,61 @@
package main

import (
	"testing"
	"time"
)

func TestClassifyCompletion_CompletedAfterDue_ReturnsOverdue(t *testing.T) {
	due := time.Date(2025, 6, 1, 0, 0, 0, 0, time.UTC)
	completed := time.Date(2025, 6, 5, 14, 0, 0, 0, time.UTC)
	got := classifyCompletion(completed, due, 7)
	if got != "overdue_tasks" {
		t.Errorf("got %q, want overdue_tasks", got)
	}
}

func TestClassifyCompletion_CompletedOnDueDate_ReturnsDueSoon(t *testing.T) {
	due := time.Date(2025, 6, 1, 0, 0, 0, 0, time.UTC)
	completed := time.Date(2025, 6, 1, 10, 0, 0, 0, time.UTC)
	got := classifyCompletion(completed, due, 7)
	if got != "due_soon_tasks" {
		t.Errorf("got %q, want due_soon_tasks", got)
	}
}

func TestClassifyCompletion_CompletedWithinThreshold_ReturnsDueSoon(t *testing.T) {
	due := time.Date(2025, 6, 10, 0, 0, 0, 0, time.UTC)
	completed := time.Date(2025, 6, 5, 0, 0, 0, 0, time.UTC) // 5 days before due, threshold 7
	got := classifyCompletion(completed, due, 7)
	if got != "due_soon_tasks" {
		t.Errorf("got %q, want due_soon_tasks", got)
	}
}

func TestClassifyCompletion_CompletedAtExactThreshold_ReturnsDueSoon(t *testing.T) {
	due := time.Date(2025, 6, 10, 0, 0, 0, 0, time.UTC)
	completed := time.Date(2025, 6, 3, 0, 0, 0, 0, time.UTC) // exactly 7 days before due
	got := classifyCompletion(completed, due, 7)
	if got != "due_soon_tasks" {
		t.Errorf("got %q, want due_soon_tasks", got)
	}
}

func TestClassifyCompletion_CompletedBeyondThreshold_ReturnsUpcoming(t *testing.T) {
	due := time.Date(2025, 6, 30, 0, 0, 0, 0, time.UTC)
	completed := time.Date(2025, 6, 1, 0, 0, 0, 0, time.UTC) // 29 days before due, threshold 7
	got := classifyCompletion(completed, due, 7)
	if got != "upcoming_tasks" {
		t.Errorf("got %q, want upcoming_tasks", got)
	}
}

func TestClassifyCompletion_TimeNormalization_SameDayDifferentTimes(t *testing.T) {
	due := time.Date(2025, 6, 1, 23, 59, 59, 0, time.UTC)
	completed := time.Date(2025, 6, 1, 0, 0, 1, 0, time.UTC) // same day, different times
	got := classifyCompletion(completed, due, 7)
	// Same day → daysBefore == 0 → within threshold → due_soon
	if got != "due_soon_tasks" {
		t.Errorf("got %q, want due_soon_tasks", got)
	}
}


@@ -0,0 +1,50 @@
package main

import (
	"path/filepath"
	"strings"
)

// isEncrypted checks if a file path ends with .enc
func isEncrypted(path string) bool {
	return strings.HasSuffix(path, ".enc")
}

// encryptedPath appends .enc to the file path.
func encryptedPath(path string) string {
	return path + ".enc"
}

// shouldProcessFile returns true if the file should be encrypted.
func shouldProcessFile(isDir bool, path string) bool {
	return !isDir && !isEncrypted(path)
}

// FileAction represents the decision about what to do with a file during encryption migration.
type FileAction int

const (
	ActionSkipDir       FileAction = iota // Directory, skip
	ActionSkipEncrypted                   // Already encrypted, skip
	ActionDryRun                          // Would encrypt (dry run mode)
	ActionEncrypt                         // Should encrypt
)

// ClassifyFile determines what action to take for a file during the walk.
func ClassifyFile(isDir bool, path string, dryRun bool) FileAction {
	if isDir {
		return ActionSkipDir
	}
	if isEncrypted(path) {
		return ActionSkipEncrypted
	}
	if dryRun {
		return ActionDryRun
	}
	return ActionEncrypt
}

// ComputeRelPath computes the relative path from base to path.
func ComputeRelPath(base, path string) (string, error) {
	return filepath.Rel(base, path)
}


@@ -0,0 +1,96 @@
package main

import "testing"

func TestIsEncrypted_EncFile_True(t *testing.T) {
	if !isEncrypted("photo.jpg.enc") {
		t.Error("expected true for .enc file")
	}
}

func TestIsEncrypted_PdfFile_False(t *testing.T) {
	if isEncrypted("doc.pdf") {
		t.Error("expected false for .pdf file")
	}
}

func TestIsEncrypted_DotEncOnly_True(t *testing.T) {
	if !isEncrypted(".enc") {
		t.Error("expected true for '.enc'")
	}
}

func TestEncryptedPath_AppendsDotEnc(t *testing.T) {
	got := encryptedPath("uploads/photo.jpg")
	want := "uploads/photo.jpg.enc"
	if got != want {
		t.Errorf("got %q, want %q", got, want)
	}
}

func TestShouldProcessFile_RegularFile_True(t *testing.T) {
	if !shouldProcessFile(false, "photo.jpg") {
		t.Error("expected true for regular file")
	}
}

func TestShouldProcessFile_Directory_False(t *testing.T) {
	if shouldProcessFile(true, "uploads") {
		t.Error("expected false for directory")
	}
}

func TestShouldProcessFile_AlreadyEncrypted_False(t *testing.T) {
	if shouldProcessFile(false, "photo.jpg.enc") {
		t.Error("expected false for already encrypted file")
	}
}

// --- ClassifyFile ---

func TestClassifyFile_Directory_SkipDir(t *testing.T) {
	if got := ClassifyFile(true, "uploads", false); got != ActionSkipDir {
		t.Errorf("got %d, want ActionSkipDir", got)
	}
}

func TestClassifyFile_EncryptedFile_SkipEncrypted(t *testing.T) {
	if got := ClassifyFile(false, "photo.jpg.enc", false); got != ActionSkipEncrypted {
		t.Errorf("got %d, want ActionSkipEncrypted", got)
	}
}

func TestClassifyFile_DryRun_DryRun(t *testing.T) {
	if got := ClassifyFile(false, "photo.jpg", true); got != ActionDryRun {
		t.Errorf("got %d, want ActionDryRun", got)
	}
}

func TestClassifyFile_Normal_Encrypt(t *testing.T) {
	if got := ClassifyFile(false, "photo.jpg", false); got != ActionEncrypt {
		t.Errorf("got %d, want ActionEncrypt", got)
	}
}

// --- ComputeRelPath ---

func TestComputeRelPath_Valid(t *testing.T) {
	got, err := ComputeRelPath("/uploads", "/uploads/photo.jpg")
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	if got != "photo.jpg" {
		t.Errorf("got %q, want %q", got, "photo.jpg")
	}
}

func TestComputeRelPath_NestedPath(t *testing.T) {
	got, err := ComputeRelPath("/uploads", "/uploads/2024/01/photo.jpg")
	if err != nil {
		t.Fatalf("unexpected error: %v", err)
	}
	want := "2024/01/photo.jpg"
	if got != want {
		t.Errorf("got %q, want %q", got, want)
	}
}


@@ -13,7 +13,6 @@ import (
 	"flag"
 	"os"
 	"path/filepath"
-	"strings"
 	"time"

 	"github.com/rs/zerolog"
@@ -87,13 +86,11 @@ func main() {
 			return nil
 		}
-		// Skip directories
-		if info.IsDir() {
+		action := ClassifyFile(info.IsDir(), path, *dryRun)
+		switch action {
+		case ActionSkipDir:
 			return nil
-		}
-		// Skip files already encrypted
-		if strings.HasSuffix(path, ".enc") {
+		case ActionSkipEncrypted:
 			skipped++
 			return nil
 		}
@@ -101,14 +98,14 @@ func main() {
 		totalFiles++

 		// Compute the relative path from upload dir
-		relPath, err := filepath.Rel(absUploadDir, path)
+		relPath, err := ComputeRelPath(absUploadDir, path)
 		if err != nil {
 			log.Warn().Err(err).Str("path", path).Msg("Failed to compute relative path")
 			errCount++
 			return nil
 		}

-		if *dryRun {
+		if action == ActionDryRun {
 			log.Info().Str("file", relPath).Msg("[DRY RUN] Would encrypt")
 			return nil
 		}

cmd/worker/startup.go

@@ -0,0 +1,24 @@
package main

import "github.com/treytartt/honeydue-api/internal/worker/jobs"

// queuePriorities returns the Asynq queue priority map.
func queuePriorities() map[string]int {
	return map[string]int{
		"critical": 6,
		"default":  3,
		"low":      1,
	}
}

// allJobTypes returns all registered job type strings.
func allJobTypes() []string {
	return []string{
		jobs.TypeSmartReminder,
		jobs.TypeDailyDigest,
		jobs.TypeSendEmail,
		jobs.TypeSendPush,
		jobs.TypeOnboardingEmails,
		jobs.TypeReminderLogCleanup,
	}
}


@@ -0,0 +1,45 @@
package main

import (
	"testing"
)

func TestQueuePriorities_CriticalHighest(t *testing.T) {
	p := queuePriorities()
	if p["critical"] <= p["default"] || p["critical"] <= p["low"] {
		t.Errorf("critical (%d) should be highest", p["critical"])
	}
}

func TestQueuePriorities_ThreeQueues(t *testing.T) {
	p := queuePriorities()
	if len(p) != 3 {
		t.Errorf("len = %d, want 3", len(p))
	}
}

func TestAllJobTypes_Count(t *testing.T) {
	types := allJobTypes()
	if len(types) != 6 {
		t.Errorf("len = %d, want 6", len(types))
	}
}

func TestAllJobTypes_NoDuplicates(t *testing.T) {
	types := allJobTypes()
	seen := make(map[string]bool)
	for _, typ := range types {
		if seen[typ] {
			t.Errorf("duplicate job type: %q", typ)
		}
		seen[typ] = true
	}
}

func TestAllJobTypes_AllNonEmpty(t *testing.T) {
	for _, typ := range allJobTypes() {
		if typ == "" {
			t.Error("found empty job type")
		}
	}
}

deploy-k3s-dev/.gitignore

@@ -0,0 +1,13 @@
# Single config file (contains tokens and credentials)
config.yaml

# Generated files
kubeconfig

# Secret files
secrets/*.txt
secrets/*.p8
secrets/*.pem
secrets/*.key
secrets/*.crt
!secrets/README.md

deploy-k3s-dev/README.md

@@ -0,0 +1,78 @@
# honeyDue — K3s Dev Deployment

Single-node K3s dev environment that replicates the production setup with all services running locally.

**Architecture**: 1-node K3s, in-cluster PostgreSQL + Redis + MinIO (S3-compatible), Let's Encrypt TLS.

**Domains**: `devapi.myhoneydue.com`, `devadmin.myhoneydue.com`

---

## Quick Start

```bash
cd honeyDueAPI-go/deploy-k3s-dev

# 1. Fill in config
cp config.yaml.example config.yaml
# Edit config.yaml — fill in ALL empty values

# 2. Create secret files (see secrets/README.md)
echo "your-postgres-password" > secrets/postgres_password.txt
openssl rand -base64 48 > secrets/secret_key.txt
echo "your-smtp-password" > secrets/email_host_password.txt
echo "your-fcm-key" > secrets/fcm_server_key.txt
openssl rand -base64 24 > secrets/minio_root_password.txt
cp /path/to/AuthKey.p8 secrets/apns_auth_key.p8

# 3. Install K3s → Create secrets → Deploy
./scripts/01-setup-k3s.sh
./scripts/02-setup-secrets.sh
./scripts/03-deploy.sh

# 4. Point DNS at the server IP, then verify
./scripts/04-verify.sh
curl https://devapi.myhoneydue.com/api/health/
```

## Prod vs Dev

| Component | Prod (`deploy-k3s/`) | Dev (`deploy-k3s-dev/`) |
|---|---|---|
| Nodes | 3x CX33 (HA etcd) | 1 node (any VPS) |
| PostgreSQL | Neon (managed) | In-cluster container |
| File storage | Backblaze B2 | MinIO (S3-compatible) |
| Redis | In-cluster | In-cluster (identical) |
| TLS | Cloudflare origin cert | Let's Encrypt (or Cloudflare) |
| Replicas | api=3, worker=2 | All 1 |
| HPA/PDB | Enabled | Not deployed |
| Network policies | Same | Same + postgres/minio rules |
| Security contexts | Same | Same (except postgres) |
| Deploy workflow | Same scripts | Same scripts |
| Docker images | Same | Same |

## TLS Modes

**Let's Encrypt** (default): Traefik auto-provisions certs. Set `tls.letsencrypt_email` in config.yaml.

**Cloudflare**: Same as prod. Set `tls.mode: cloudflare`, add origin cert files to `secrets/`.

## Storage Note

MinIO provides the same S3-compatible API as Backblaze B2. The Go API uses the same env vars (`B2_KEY_ID`, `B2_APP_KEY`, `B2_BUCKET_NAME`, `B2_ENDPOINT`) — it connects to MinIO instead of B2 without code changes.

An additional env var `STORAGE_USE_SSL=false` is set since MinIO runs in-cluster over HTTP. If the Go storage service hardcodes HTTPS, it may need a small change to respect this flag.
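Parsing that flag can be sketched in a few lines; `useSSLFromEnv` is a hypothetical helper, not the actual storage service code:

```go
package main

import (
	"fmt"
	"strings"
)

// useSSLFromEnv treats only an explicit "false" (case-insensitive, trimmed)
// as disabling TLS, so the production default stays HTTPS.
func useSSLFromEnv(v string) bool {
	return strings.ToLower(strings.TrimSpace(v)) != "false"
}

func main() {
	fmt.Println(useSSLFromEnv(""), useSSLFromEnv("false"), useSSLFromEnv("TRUE"))
}
```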
## Monitoring

```bash
stern -n honeydue .                      # All logs
kubectl logs -n honeydue deploy/api -f   # API logs
kubectl top pods -n honeydue             # Resource usage
```

## Rollback

```bash
./scripts/rollback.sh
```


@@ -0,0 +1,103 @@
# config.yaml — single source of truth for honeyDue K3s DEV deployment
# Copy to config.yaml, fill in all empty values, then run scripts in order.
# This file is gitignored — never commit it with real values.

# --- Server ---
server:
  host: ""                   # Server IP or SSH config alias
  user: root                 # SSH user
  ssh_key: ~/.ssh/id_ed25519

# --- Domains ---
domains:
  api: devapi.myhoneydue.com
  admin: devadmin.myhoneydue.com
  base: dev.myhoneydue.com

# --- Container Registry (GHCR) ---
registry:
  server: ghcr.io
  namespace: ""              # GitHub username or org
  username: ""               # GitHub username
  token: ""                  # PAT with read:packages, write:packages

# --- Database (in-cluster PostgreSQL) ---
database:
  name: honeydue_dev
  user: honeydue
  # password goes in secrets/postgres_password.txt
  max_open_conns: 10
  max_idle_conns: 5
  max_lifetime: "600s"

# --- Email (Fastmail) ---
email:
  host: smtp.fastmail.com
  port: 587
  user: ""                   # Fastmail email address
  from: "honeyDue DEV <noreply@myhoneydue.com>"
  use_tls: true

# --- Push Notifications ---
push:
  apns_key_id: ""
  apns_team_id: ""
  apns_topic: com.tt.honeyDue
  apns_production: false
  apns_use_sandbox: true     # Sandbox for dev

# --- Object Storage (in-cluster MinIO — S3-compatible, replaces B2) ---
storage:
  minio_root_user: honeydue  # MinIO access key
  # minio_root_password goes in secrets/minio_root_password.txt
  bucket: honeydue-dev
  max_file_size: 10485760
  allowed_types: "image/jpeg,image/png,image/gif,image/webp,application/pdf"

# --- Worker Schedules (UTC hours) ---
worker:
  task_reminder_hour: 14
  overdue_reminder_hour: 15
  daily_digest_hour: 3

# --- Feature Flags ---
features:
  push_enabled: true
  email_enabled: false       # Disabled for dev by default
  webhooks_enabled: false
  onboarding_emails_enabled: false
  pdf_reports_enabled: true
  worker_enabled: true

# --- Redis ---
redis:
  password: ""               # Set a strong password

# --- Admin Panel ---
admin:
  basic_auth_user: ""        # HTTP basic auth username
  basic_auth_password: ""    # HTTP basic auth password

# --- TLS ---
tls:
  mode: letsencrypt          # "letsencrypt" or "cloudflare"
  letsencrypt_email: ""      # Required if mode=letsencrypt
  # If mode=cloudflare, create secrets/cloudflare-origin.crt and .key

# --- Apple Auth / IAP (optional) ---
apple_auth:
  client_id: ""
  team_id: ""
  iap_key_id: ""
  iap_issuer_id: ""
  iap_bundle_id: ""
  iap_key_path: ""
  iap_sandbox: true

# --- Google Auth / IAP (optional) ---
google_auth:
  client_id: ""
  android_client_id: ""
  ios_client_id: ""
  iap_package_name: ""
  iap_service_account_path: ""


@@ -0,0 +1,94 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: admin
  namespace: honeydue
  labels:
    app.kubernetes.io/name: admin
    app.kubernetes.io/part-of: honeydue
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: admin
  template:
    metadata:
      labels:
        app.kubernetes.io/name: admin
        app.kubernetes.io/part-of: honeydue
    spec:
      serviceAccountName: admin
      imagePullSecrets:
        - name: ghcr-credentials
      securityContext:
        runAsNonRoot: true
        runAsUser: 1001
        runAsGroup: 1001
        fsGroup: 1001
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: admin
          image: IMAGE_PLACEHOLDER # Replaced by 03-deploy.sh
          ports:
            - containerPort: 3000
              protocol: TCP
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop: ["ALL"]
          env:
            - name: PORT
              value: "3000"
            - name: HOSTNAME
              value: "0.0.0.0"
            - name: NEXT_PUBLIC_API_URL
              valueFrom:
                configMapKeyRef:
                  name: honeydue-config
                  key: NEXT_PUBLIC_API_URL
          volumeMounts:
            - name: nextjs-cache
              mountPath: /app/.next/cache
            - name: tmp
              mountPath: /tmp
          resources:
            requests:
              cpu: 50m
              memory: 64Mi
            limits:
              cpu: 500m
              memory: 256Mi
          startupProbe:
            httpGet:
              path: /admin/
              port: 3000
            failureThreshold: 12
            periodSeconds: 5
          readinessProbe:
            httpGet:
              path: /admin/
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 5
          livenessProbe:
            httpGet:
              path: /admin/
              port: 3000
            initialDelaySeconds: 30
            periodSeconds: 30
            timeoutSeconds: 10
      volumes:
        - name: nextjs-cache
          emptyDir:
            sizeLimit: 256Mi
        - name: tmp
          emptyDir:
            sizeLimit: 64Mi


@@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
  name: admin
  namespace: honeydue
  labels:
    app.kubernetes.io/name: admin
    app.kubernetes.io/part-of: honeydue
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: admin
  ports:
    - port: 3000
      targetPort: 3000
      protocol: TCP


@@ -0,0 +1,56 @@
# API Ingress — TLS via Let's Encrypt (default) or Cloudflare origin cert
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: honeydue-api
  namespace: honeydue
  labels:
    app.kubernetes.io/part-of: honeydue
  annotations:
    # TLS_ANNOTATIONS_PLACEHOLDER — replaced by 03-deploy.sh based on tls.mode
    traefik.ingress.kubernetes.io/router.middlewares: honeydue-security-headers@kubernetescrd,honeydue-rate-limit@kubernetescrd
spec:
  tls:
    - hosts:
        - API_DOMAIN_PLACEHOLDER
      secretName: TLS_SECRET_PLACEHOLDER
  rules:
    - host: API_DOMAIN_PLACEHOLDER
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 8000
---
# Admin Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: honeydue-admin
  namespace: honeydue
  labels:
    app.kubernetes.io/part-of: honeydue
  annotations:
    # TLS_ANNOTATIONS_PLACEHOLDER — replaced by 03-deploy.sh based on tls.mode
    traefik.ingress.kubernetes.io/router.middlewares: honeydue-security-headers@kubernetescrd,honeydue-rate-limit@kubernetescrd,honeydue-admin-auth@kubernetescrd
spec:
  tls:
    - hosts:
        - ADMIN_DOMAIN_PLACEHOLDER
      secretName: TLS_SECRET_PLACEHOLDER
  rules:
    - host: ADMIN_DOMAIN_PLACEHOLDER
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: admin
                port:
                  number: 3000


@@ -0,0 +1,45 @@
# Traefik CRD middleware for rate limiting
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: rate-limit
  namespace: honeydue
spec:
  rateLimit:
    average: 100
    burst: 200
    period: 1m
---
# Security headers
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: security-headers
  namespace: honeydue
spec:
  headers:
    frameDeny: true
    contentTypeNosniff: true
    browserXssFilter: true
    referrerPolicy: "strict-origin-when-cross-origin"
    customResponseHeaders:
      X-Content-Type-Options: "nosniff"
      X-Frame-Options: "DENY"
      Strict-Transport-Security: "max-age=31536000; includeSubDomains"
      Content-Security-Policy: "default-src 'self'; frame-ancestors 'none'"
      Permissions-Policy: "camera=(), microphone=(), geolocation=()"
      X-Permitted-Cross-Domain-Policies: "none"
---
# Admin basic auth — additional auth layer for admin panel
# Secret created by 02-setup-secrets.sh from config.yaml credentials
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
  name: admin-auth
  namespace: honeydue
spec:
  basicAuth:
    secret: admin-basic-auth
    realm: "honeyDue Admin"


@@ -0,0 +1,81 @@
# One-shot job to create the default bucket in MinIO.
# Applied by 03-deploy.sh after MinIO is running.
# Re-running is safe — mc mb --ignore-existing is idempotent.
apiVersion: batch/v1
kind: Job
metadata:
  name: minio-create-bucket
  namespace: honeydue
  labels:
    app.kubernetes.io/name: minio
    app.kubernetes.io/part-of: honeydue
spec:
  ttlSecondsAfterFinished: 300
  backoffLimit: 5
  template:
    metadata:
      labels:
        app.kubernetes.io/name: minio-init
        app.kubernetes.io/part-of: honeydue
    spec:
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        runAsGroup: 1000
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: mc
          image: minio/mc:latest
          command:
            - sh
            - -c
            - |
              echo "Waiting for MinIO to be ready..."
              until mc alias set honeydue http://minio.honeydue.svc.cluster.local:9000 "$MINIO_ROOT_USER" "$MINIO_ROOT_PASSWORD" 2>/dev/null; do
                sleep 2
              done
              echo "Creating bucket: $BUCKET_NAME"
              mc mb --ignore-existing "honeydue/$BUCKET_NAME"
              echo "Bucket ready."
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop: ["ALL"]
          env:
            - name: MINIO_ROOT_USER
              valueFrom:
                configMapKeyRef:
                  name: honeydue-config
                  key: MINIO_ROOT_USER
            - name: MINIO_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: honeydue-secrets
                  key: MINIO_ROOT_PASSWORD
            - name: BUCKET_NAME
              valueFrom:
                configMapKeyRef:
                  name: honeydue-config
                  key: B2_BUCKET_NAME
          volumeMounts:
            - name: tmp
              mountPath: /tmp
            - name: mc-config
              mountPath: /.mc
          resources:
            requests:
              cpu: 50m
              memory: 32Mi
            limits:
              cpu: 200m
              memory: 64Mi
      volumes:
        - name: tmp
          emptyDir:
            sizeLimit: 16Mi
        - name: mc-config
          emptyDir:
            sizeLimit: 16Mi
      restartPolicy: OnFailure


@@ -0,0 +1,89 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio
  namespace: honeydue
  labels:
    app.kubernetes.io/name: minio
    app.kubernetes.io/part-of: honeydue
spec:
  replicas: 1
  strategy:
    type: Recreate # ReadWriteOnce PVC — can't attach to two pods
  selector:
    matchLabels:
      app.kubernetes.io/name: minio
  template:
    metadata:
      labels:
        app.kubernetes.io/name: minio
        app.kubernetes.io/part-of: honeydue
    spec:
      serviceAccountName: minio
      securityContext:
        runAsNonRoot: true
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: minio
          image: minio/minio:latest
          args: ["server", "/data", "--console-address", ":9001"]
          ports:
            - name: api
              containerPort: 9000
              protocol: TCP
            - name: console
              containerPort: 9001
              protocol: TCP
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop: ["ALL"]
          env:
            - name: MINIO_ROOT_USER
              valueFrom:
                configMapKeyRef:
                  name: honeydue-config
                  key: MINIO_ROOT_USER
            - name: MINIO_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: honeydue-secrets
                  key: MINIO_ROOT_PASSWORD
          volumeMounts:
            - name: minio-data
              mountPath: /data
            - name: tmp
              mountPath: /tmp
          resources:
            requests:
              cpu: 50m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 512Mi
          readinessProbe:
            httpGet:
              path: /minio/health/ready
              port: 9000
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 5
          livenessProbe:
            httpGet:
              path: /minio/health/live
              port: 9000
            initialDelaySeconds: 15
            periodSeconds: 30
            timeoutSeconds: 5
      volumes:
        - name: minio-data
          persistentVolumeClaim:
            claimName: minio-data
        - name: tmp
          emptyDir:
            sizeLimit: 64Mi


@@ -0,0 +1,15 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio-data
  namespace: honeydue
  labels:
    app.kubernetes.io/name: minio
    app.kubernetes.io/part-of: honeydue
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 10Gi


@@ -0,0 +1,21 @@
apiVersion: v1
kind: Service
metadata:
  name: minio
  namespace: honeydue
  labels:
    app.kubernetes.io/name: minio
    app.kubernetes.io/part-of: honeydue
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: minio
  ports:
    - name: api
      port: 9000
      targetPort: 9000
      protocol: TCP
    - name: console
      port: 9001
      targetPort: 9001
      protocol: TCP


@@ -0,0 +1,6 @@
apiVersion: v1
kind: Namespace
metadata:
  name: honeydue
  labels:
    app.kubernetes.io/part-of: honeydue


@@ -0,0 +1,305 @@
# Network Policies — default-deny with explicit allows
# Same pattern as prod, with added rules for in-cluster postgres and minio.

# --- Default deny all ingress and egress ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: honeydue
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
---
# --- Allow DNS for all pods (required for service discovery) ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns
  namespace: honeydue
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to: []
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
---
# --- API: allow ingress from Traefik (kube-system namespace) ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-to-api
  namespace: honeydue
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: TCP
          port: 8000
---
# --- Admin: allow ingress from Traefik (kube-system namespace) ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-to-admin
  namespace: honeydue
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: admin
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: TCP
          port: 3000
---
# --- Redis: allow ingress ONLY from api + worker pods ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-to-redis
  namespace: honeydue
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: redis
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: api
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: worker
      ports:
        - protocol: TCP
          port: 6379
---
# --- PostgreSQL: allow ingress ONLY from api + worker pods ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-to-postgres
  namespace: honeydue
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: postgres
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: api
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: worker
      ports:
        - protocol: TCP
          port: 5432
---
# --- MinIO: allow ingress from api + worker + minio-init job pods ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-to-minio
  namespace: honeydue
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: minio
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: api
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: worker
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: minio-init
      ports:
        - protocol: TCP
          port: 9000
        - protocol: TCP
          port: 9001
---
# --- API: allow egress to Redis, PostgreSQL, MinIO, external services ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-from-api
  namespace: honeydue
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: api
  policyTypes:
    - Egress
  egress:
    # Redis (in-cluster)
    - to:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: redis
      ports:
        - protocol: TCP
          port: 6379
    # PostgreSQL (in-cluster)
    - to:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: postgres
      ports:
        - protocol: TCP
          port: 5432
    # MinIO (in-cluster)
    - to:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: minio
      ports:
        - protocol: TCP
          port: 9000
    # External services: SMTP (587), HTTPS (443 — APNs, FCM, PostHog)
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 10.0.0.0/8
              - 172.16.0.0/12
              - 192.168.0.0/16
      ports:
        - protocol: TCP
          port: 587
        - protocol: TCP
          port: 443
---
# --- Worker: allow egress to Redis, PostgreSQL, MinIO, external services ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-from-worker
  namespace: honeydue
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/name: worker
  policyTypes:
    - Egress
  egress:
    # Redis (in-cluster)
    - to:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: redis
      ports:
        - protocol: TCP
          port: 6379
    # PostgreSQL (in-cluster)
    - to:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: postgres
      ports:
        - protocol: TCP
          port: 5432
    # MinIO (in-cluster)
    - to:
        - podSelector:
            matchLabels:
              app.kubernetes.io/name: minio
      ports:
        - protocol: TCP
          port: 9000
    # External services: SMTP (587), HTTPS (443 — APNs, FCM)
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except:
              - 10.0.0.0/8
              - 172.16.0.0/12
              - 192.168.0.0/16
      ports:
        - protocol: TCP
          port: 587
        - protocol: TCP
          port: 443
---
# --- Admin: allow egress to API (internal) for SSR ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-from-admin
  namespace: honeydue
spec:
  podSelector:
matchLabels:
app.kubernetes.io/name: admin
policyTypes:
- Egress
egress:
- to:
- podSelector:
matchLabels:
app.kubernetes.io/name: api
ports:
- protocol: TCP
port: 8000
---
# --- MinIO init job: allow egress to MinIO ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-egress-from-minio-init
namespace: honeydue
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: minio-init
policyTypes:
- Egress
egress:
- to:
- podSelector:
matchLabels:
app.kubernetes.io/name: minio
ports:
- protocol: TCP
port: 9000


@@ -0,0 +1,93 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: honeydue
  labels:
    app.kubernetes.io/name: postgres
    app.kubernetes.io/part-of: honeydue
spec:
  replicas: 1
  strategy:
    type: Recreate # ReadWriteOnce PVC — can't attach to two pods
  selector:
    matchLabels:
      app.kubernetes.io/name: postgres
  template:
    metadata:
      labels:
        app.kubernetes.io/name: postgres
        app.kubernetes.io/part-of: honeydue
    spec:
      serviceAccountName: postgres
      # Note: postgres image entrypoint requires root initially to set up
      # permissions, then drops to the postgres user. runAsNonRoot is not set
      # here because of this requirement. This differs from prod which uses
      # managed Neon PostgreSQL (no container to secure).
      securityContext:
        fsGroup: 999
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: postgres
          image: postgres:17-alpine
          ports:
            - containerPort: 5432
              protocol: TCP
          securityContext:
            allowPrivilegeEscalation: false
            capabilities:
              drop: ["ALL"]
          env:
            - name: POSTGRES_DB
              valueFrom:
                configMapKeyRef:
                  name: honeydue-config
                  key: POSTGRES_DB
            - name: POSTGRES_USER
              valueFrom:
                configMapKeyRef:
                  name: honeydue-config
                  key: POSTGRES_USER
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: honeydue-secrets
                  key: POSTGRES_PASSWORD
            - name: PGDATA
              value: /var/lib/postgresql/data/pgdata
          volumeMounts:
            - name: postgres-data
              mountPath: /var/lib/postgresql/data
            - name: run
              mountPath: /var/run/postgresql
            - name: tmp
              mountPath: /tmp
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: "1"
              memory: 1Gi
          readinessProbe:
            exec:
              command: ["pg_isready", "-U", "honeydue"]
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 5
          livenessProbe:
            exec:
              command: ["pg_isready", "-U", "honeydue"]
            initialDelaySeconds: 30
            periodSeconds: 30
            timeoutSeconds: 5
      volumes:
        - name: postgres-data
          persistentVolumeClaim:
            claimName: postgres-data
        - name: run
          emptyDir: {}
        - name: tmp
          emptyDir:
            sizeLimit: 64Mi


@@ -0,0 +1,15 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data
  namespace: honeydue
  labels:
    app.kubernetes.io/name: postgres
    app.kubernetes.io/part-of: honeydue
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 10Gi


@@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: honeydue
  labels:
    app.kubernetes.io/name: postgres
    app.kubernetes.io/part-of: honeydue
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: postgres
  ports:
    - port: 5432
      targetPort: 5432
      protocol: TCP


@@ -0,0 +1,68 @@
# RBAC — Dedicated service accounts with no K8s API access
# Each pod gets its own SA with automountServiceAccountToken: false,
# so a compromised pod cannot query the Kubernetes API.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: api
  namespace: honeydue
  labels:
    app.kubernetes.io/name: api
    app.kubernetes.io/part-of: honeydue
automountServiceAccountToken: false
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: worker
  namespace: honeydue
  labels:
    app.kubernetes.io/name: worker
    app.kubernetes.io/part-of: honeydue
automountServiceAccountToken: false
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: honeydue
  labels:
    app.kubernetes.io/name: admin
    app.kubernetes.io/part-of: honeydue
automountServiceAccountToken: false
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: redis
  namespace: honeydue
  labels:
    app.kubernetes.io/name: redis
    app.kubernetes.io/part-of: honeydue
automountServiceAccountToken: false
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: postgres
  namespace: honeydue
  labels:
    app.kubernetes.io/name: postgres
    app.kubernetes.io/part-of: honeydue
automountServiceAccountToken: false
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: minio
  namespace: honeydue
  labels:
    app.kubernetes.io/name: minio
    app.kubernetes.io/part-of: honeydue
automountServiceAccountToken: false


@@ -0,0 +1,105 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: honeydue
  labels:
    app.kubernetes.io/name: redis
    app.kubernetes.io/part-of: honeydue
spec:
  replicas: 1
  strategy:
    type: Recreate # ReadWriteOnce PVC — can't attach to two pods
  selector:
    matchLabels:
      app.kubernetes.io/name: redis
  template:
    metadata:
      labels:
        app.kubernetes.io/name: redis
        app.kubernetes.io/part-of: honeydue
    spec:
      serviceAccountName: redis
      # No nodeSelector — single node dev cluster
      securityContext:
        runAsNonRoot: true
        runAsUser: 999
        runAsGroup: 999
        fsGroup: 999
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: redis
          image: redis:7-alpine
          command:
            - sh
            - -c
            - |
              ARGS="--appendonly yes --appendfsync everysec --maxmemory 256mb --maxmemory-policy noeviction"
              if [ -n "$REDIS_PASSWORD" ]; then
                ARGS="$ARGS --requirepass $REDIS_PASSWORD"
              fi
              exec redis-server $ARGS
          ports:
            - containerPort: 6379
              protocol: TCP
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            capabilities:
              drop: ["ALL"]
          env:
            - name: REDIS_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: honeydue-secrets
                  key: REDIS_PASSWORD
                  optional: true
          volumeMounts:
            - name: redis-data
              mountPath: /data
            - name: tmp
              mountPath: /tmp
          resources:
            requests:
              cpu: 50m
              memory: 64Mi
            limits:
              cpu: 500m
              memory: 512Mi
          readinessProbe:
            exec:
              command:
                - sh
                - -c
                - |
                  if [ -n "$REDIS_PASSWORD" ]; then
                    redis-cli -a "$REDIS_PASSWORD" ping 2>/dev/null | grep -q PONG
                  else
                    redis-cli ping | grep -q PONG
                  fi
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 5
          livenessProbe:
            exec:
              command:
                - sh
                - -c
                - |
                  if [ -n "$REDIS_PASSWORD" ]; then
                    redis-cli -a "$REDIS_PASSWORD" ping 2>/dev/null | grep -q PONG
                  else
                    redis-cli ping | grep -q PONG
                  fi
            initialDelaySeconds: 15
            periodSeconds: 20
            timeoutSeconds: 5
      volumes:
        - name: redis-data
          persistentVolumeClaim:
            claimName: redis-data
        - name: tmp
          emptyDir:
            medium: Memory
            sizeLimit: 64Mi


@@ -0,0 +1,15 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data
  namespace: honeydue
  labels:
    app.kubernetes.io/name: redis
    app.kubernetes.io/part-of: honeydue
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 5Gi


@@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: honeydue
  labels:
    app.kubernetes.io/name: redis
    app.kubernetes.io/part-of: honeydue
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: redis
  ports:
    - port: 6379
      targetPort: 6379
      protocol: TCP


@@ -0,0 +1,16 @@
# Configure K3s's built-in Traefik with Let's Encrypt ACME.
# Applied by 03-deploy.sh only when tls.mode=letsencrypt.
# The email placeholder is replaced by the deploy script.
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    additionalArguments:
      - "--certificatesresolvers.letsencrypt.acme.email=LETSENCRYPT_EMAIL_PLACEHOLDER"
      - "--certificatesresolvers.letsencrypt.acme.storage=/data/acme.json"
      - "--certificatesresolvers.letsencrypt.acme.tlschallenge=true"
    persistence:
      enabled: true

deploy-k3s-dev/scripts/00-init.sh Executable file

@@ -0,0 +1,235 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
DEPLOY_DIR="$(cd "${SCRIPT_DIR}/.." && pwd)"
SECRETS_DIR="${DEPLOY_DIR}/secrets"
CONFIG_FILE="${DEPLOY_DIR}/config.yaml"
log() { printf '[init] %s\n' "$*"; }
warn() { printf '[init][warn] %s\n' "$*" >&2; }
die() { printf '[init][error] %s\n' "$*" >&2; exit 1; }
# --- Prerequisites ---
command -v openssl >/dev/null 2>&1 || die "Missing: openssl"
command -v python3 >/dev/null 2>&1 || die "Missing: python3"
echo ""
echo "============================================"
echo " honeyDue Dev Server — Initial Setup"
echo "============================================"
echo ""
echo "This script will:"
echo " 1. Generate any missing random secrets"
echo " 2. Ask for anything not already filled in"
echo " 3. Create config.yaml with everything filled in"
echo ""
mkdir -p "${SECRETS_DIR}"
# --- Generate random secrets (skip if already exist) ---
generate_if_missing() {
  local file="$1" label="$2" cmd="$3"
  if [[ -f "${file}" && -s "${file}" ]]; then
    log "  ${label} — already exists, keeping"
  else
    eval "${cmd}" > "${file}"
    log "  ${label} — generated"
  fi
}
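The helper boils down to a "generate once, keep thereafter" idiom: `-s` guards a non-empty file from being clobbered on re-runs. A minimal standalone sketch (using a temp file, not the real secrets dir):

```shell
#!/usr/bin/env sh
f="$(mktemp)"
printf 'existing-secret\n' > "${f}"
# Re-running the generator must not overwrite a non-empty file:
[ -s "${f}" ] || openssl rand -base64 24 > "${f}"
cat "${f}"   # existing-secret — preserved across re-runs
rm -f "${f}"
```

This is what makes 00-init.sh safe to run repeatedly without rotating secrets already in use.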
log "Checking secrets..."
generate_if_missing "${SECRETS_DIR}/secret_key.txt" "secrets/secret_key.txt" "openssl rand -base64 48"
generate_if_missing "${SECRETS_DIR}/postgres_password.txt" "secrets/postgres_password.txt" "openssl rand -base64 24"
generate_if_missing "${SECRETS_DIR}/minio_root_password.txt" "secrets/minio_root_password.txt" "openssl rand -base64 24"
generate_if_missing "${SECRETS_DIR}/email_host_password.txt" "secrets/email_host_password.txt" "echo PLACEHOLDER"
log " secrets/fcm_server_key.txt — skipped (Android not ready)"
generate_if_missing "${SECRETS_DIR}/apns_auth_key.p8" "secrets/apns_auth_key.p8" "echo ''"
REDIS_PW="$(openssl rand -base64 24)"
log " Redis password — generated"
# --- Collect only what's missing ---
ask() {
  local var_name="$1" prompt="$2" default="${3:-}"
  local val
  if [[ -n "${default}" ]]; then
    read -rp "${prompt} [${default}]: " val
    val="${val:-${default}}"
  else
    read -rp "${prompt}: " val
  fi
  # printf -v assigns without the quoting pitfalls of eval'ing user input
  printf -v "${var_name}" '%s' "${val}"
}
echo ""
echo "--- Server ---"
ask SERVER_HOST "Server IP or SSH alias" "honeyDueDevUpdate"
[[ -n "${SERVER_HOST}" ]] || die "Server host is required"
ask SERVER_USER "SSH user" "root"
ask SSH_KEY "SSH key path" "~/.ssh/id_ed25519"
echo ""
echo "--- Container Registry (GHCR) ---"
ask GHCR_USER "GitHub username" "treytartt"
[[ -n "${GHCR_USER}" ]] || die "GitHub username is required"
ask GHCR_TOKEN "GitHub PAT (read:packages, write:packages)"
[[ -n "${GHCR_TOKEN}" ]] || die "GitHub PAT is required"
echo ""
echo "--- TLS ---"
ask LE_EMAIL "Let's Encrypt email" "treytartt@fastmail.com"
echo ""
echo "--- Admin Panel ---"
ask ADMIN_USER "Admin basic auth username" "admin"
ADMIN_PW="$(openssl rand -base64 16)"
# --- Known values from existing Dokku setup ---
EMAIL_USER="treytartt@fastmail.com"
APNS_KEY_ID="9R5Q7ZX874"
APNS_TEAM_ID="V3PF3M6B6U"
log ""
log "Pre-filled from existing dev server:"
log " Email user: ${EMAIL_USER}"
log " APNS Key ID: ${APNS_KEY_ID}"
log " APNS Team ID: ${APNS_TEAM_ID}"
# --- Generate config.yaml ---
log "Generating config.yaml..."
cat > "${CONFIG_FILE}" <<YAML
# config.yaml — auto-generated by 00-init.sh
# This file is gitignored — never commit it with real values.

# --- Server ---
server:
  host: "${SERVER_HOST}"
  user: "${SERVER_USER}"
  ssh_key: "${SSH_KEY}"

# --- Domains ---
domains:
  api: devapi.myhoneydue.com
  admin: devadmin.myhoneydue.com
  base: dev.myhoneydue.com

# --- Container Registry (GHCR) ---
registry:
  server: ghcr.io
  namespace: "${GHCR_USER}"
  username: "${GHCR_USER}"
  token: "${GHCR_TOKEN}"

# --- Database (in-cluster PostgreSQL) ---
database:
  name: honeydue_dev
  user: honeydue
  max_open_conns: 10
  max_idle_conns: 5
  max_lifetime: "600s"

# --- Email (Fastmail) ---
email:
  host: smtp.fastmail.com
  port: 587
  user: "${EMAIL_USER}"
  from: "honeyDue DEV <${EMAIL_USER}>"
  use_tls: true

# --- Push Notifications ---
push:
  apns_key_id: "${APNS_KEY_ID}"
  apns_team_id: "${APNS_TEAM_ID}"
  apns_topic: com.tt.honeyDue
  apns_production: false
  apns_use_sandbox: true

# --- Object Storage (in-cluster MinIO) ---
storage:
  minio_root_user: honeydue
  bucket: honeydue-dev
  max_file_size: 10485760
  allowed_types: "image/jpeg,image/png,image/gif,image/webp,application/pdf"

# --- Worker Schedules (UTC hours) ---
worker:
  task_reminder_hour: 14
  overdue_reminder_hour: 15
  daily_digest_hour: 3

# --- Feature Flags ---
features:
  push_enabled: true
  email_enabled: false
  webhooks_enabled: false
  onboarding_emails_enabled: false
  pdf_reports_enabled: true
  worker_enabled: true

# --- Redis ---
redis:
  password: "${REDIS_PW}"

# --- Admin Panel ---
admin:
  basic_auth_user: "${ADMIN_USER}"
  basic_auth_password: "${ADMIN_PW}"

# --- TLS ---
tls:
  mode: letsencrypt
  letsencrypt_email: "${LE_EMAIL}"

# --- Apple Auth / IAP ---
apple_auth:
  client_id: "com.tt.honeyDue"
  team_id: "${APNS_TEAM_ID}"
  iap_key_id: ""
  iap_issuer_id: ""
  iap_bundle_id: ""
  iap_key_path: ""
  iap_sandbox: true

# --- Google Auth / IAP ---
google_auth:
  client_id: ""
  android_client_id: ""
  ios_client_id: ""
  iap_package_name: ""
  iap_service_account_path: ""
YAML
# --- Summary ---
echo ""
echo "============================================"
echo " Setup Complete"
echo "============================================"
echo ""
echo "Generated:"
echo " config.yaml"
echo " secrets/secret_key.txt"
echo " secrets/postgres_password.txt"
echo " secrets/minio_root_password.txt"
echo " secrets/email_host_password.txt"
echo " secrets/fcm_server_key.txt (skipped — Android not ready)"
echo " secrets/apns_auth_key.p8"
echo ""
echo "Admin panel credentials:"
echo " Username: ${ADMIN_USER}"
echo " Password: ${ADMIN_PW}"
echo " (save these — they won't be shown again)"
echo ""
echo "Next steps:"
echo " ./scripts/01-setup-k3s.sh"
echo " ./scripts/02-setup-secrets.sh"
echo " ./scripts/03-deploy.sh"
echo " ./scripts/04-verify.sh"
echo ""


@@ -0,0 +1,146 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=_config.sh
source "${SCRIPT_DIR}/_config.sh"
log() { printf '[setup] %s\n' "$*"; }
die() { printf '[setup][error] %s\n' "$*" >&2; exit 1; }
# --- Local prerequisites ---
command -v kubectl >/dev/null 2>&1 || die "Missing locally: kubectl (https://kubernetes.io/docs/tasks/tools/)"
# --- Server connection ---
SERVER_HOST="$(cfg_require server.host "Server IP or SSH alias")"
SERVER_USER="$(cfg server.user)"
SERVER_USER="${SERVER_USER:-root}"
SSH_KEY="$(cfg server.ssh_key | sed "s|~|${HOME}|g")"
SSH_OPTS=()
if [[ -n "${SSH_KEY}" && -f "${SSH_KEY}" ]]; then
  SSH_OPTS+=(-i "${SSH_KEY}")
fi
SSH_OPTS+=(-o StrictHostKeyChecking=accept-new)
ssh_cmd() {
  ssh "${SSH_OPTS[@]}" "${SERVER_USER}@${SERVER_HOST}" "$@"
}
log "Testing SSH connection to ${SERVER_USER}@${SERVER_HOST}..."
ssh_cmd "echo 'SSH connection OK'" || die "Cannot SSH into ${SERVER_HOST}"
# --- Server prerequisites ---
log "Setting up server prerequisites..."
ssh_cmd 'bash -s' <<'REMOTE_SETUP'
set -euo pipefail
log() { printf '[setup][remote] %s\n' "$*"; }
# --- System updates ---
log "Updating system packages..."
export DEBIAN_FRONTEND=noninteractive
apt-get update -qq
apt-get upgrade -y -qq
# --- SSH hardening ---
log "Hardening SSH..."
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin prohibit-password/' /etc/ssh/sshd_config
systemctl reload sshd 2>/dev/null || systemctl reload ssh 2>/dev/null || true
# --- fail2ban ---
if ! command -v fail2ban-client >/dev/null 2>&1; then
  log "Installing fail2ban..."
  apt-get install -y -qq fail2ban
  systemctl enable --now fail2ban
else
  log "fail2ban already installed"
fi
# --- Unattended security upgrades ---
if ! dpkg -l | grep -q unattended-upgrades; then
  log "Installing unattended-upgrades..."
  apt-get install -y -qq unattended-upgrades
  dpkg-reconfigure -plow unattended-upgrades
else
  log "unattended-upgrades already installed"
fi
# --- Firewall (ufw) ---
if command -v ufw >/dev/null 2>&1; then
  log "Configuring firewall..."
  ufw default deny incoming
  ufw default allow outgoing
  ufw allow 22/tcp   # SSH
  ufw allow 443/tcp  # HTTPS (Traefik)
  ufw allow 6443/tcp # K3s API
  ufw allow 80/tcp   # HTTP (Let's Encrypt ACME challenge)
  ufw --force enable
else
  log "Installing ufw..."
  apt-get install -y -qq ufw
  ufw default deny incoming
  ufw default allow outgoing
  ufw allow 22/tcp
  ufw allow 443/tcp
  ufw allow 6443/tcp
  ufw allow 80/tcp
  ufw --force enable
fi
log "Server prerequisites complete."
REMOTE_SETUP
# --- Install K3s ---
log "Installing K3s on ${SERVER_HOST}..."
log " This takes about 1-2 minutes."
ssh_cmd "curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC='server --secrets-encryption' sh -"
# --- Wait for K3s to be ready ---
log "Waiting for K3s to be ready..."
ssh_cmd "until kubectl get nodes >/dev/null 2>&1; do sleep 2; done"
# --- Copy kubeconfig ---
KUBECONFIG_PATH="${DEPLOY_DIR}/kubeconfig"
log "Copying kubeconfig..."
ssh_cmd "sudo cat /etc/rancher/k3s/k3s.yaml" > "${KUBECONFIG_PATH}"
# Replace 127.0.0.1 with the server's actual IP/hostname
# If SERVER_HOST is an SSH alias, resolve the actual IP
ACTUAL_HOST="${SERVER_HOST}"
if ! echo "${SERVER_HOST}" | grep -qE '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$'; then
  # Try to resolve from SSH config
  RESOLVED="$(ssh -G "${SERVER_HOST}" 2>/dev/null | awk '/^hostname / {print $2}')"
  if [[ -n "${RESOLVED}" && "${RESOLVED}" != "${SERVER_HOST}" ]]; then
    ACTUAL_HOST="${RESOLVED}"
  fi
fi
sed -i.bak "s|https://127.0.0.1:6443|https://${ACTUAL_HOST}:6443|g" "${KUBECONFIG_PATH}"
rm -f "${KUBECONFIG_PATH}.bak"
chmod 600 "${KUBECONFIG_PATH}"
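The rewrite step above is a plain `sed` substitution on the downloaded kubeconfig. In isolation it behaves like this (203.0.113.7 is a documentation-range IP standing in for the resolved server address):

```shell
#!/usr/bin/env sh
TMP="$(mktemp)"
printf 'server: https://127.0.0.1:6443\n' > "${TMP}"
# Same substitution 01-setup-k3s.sh applies; -i.bak works on both GNU and BSD sed
sed -i.bak "s|https://127.0.0.1:6443|https://203.0.113.7:6443|g" "${TMP}"
cat "${TMP}"   # server: https://203.0.113.7:6443
rm -f "${TMP}" "${TMP}.bak"
```

K3s writes its kubeconfig pointing at the loopback address, so remote `kubectl` only works after this rewrite.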
# --- Verify ---
export KUBECONFIG="${KUBECONFIG_PATH}"
log "Verifying cluster..."
kubectl get nodes
log ""
log "K3s installed successfully on ${SERVER_HOST}."
log "Server hardened: SSH key-only, fail2ban, ufw firewall, unattended-upgrades."
log ""
log "Next steps:"
log " export KUBECONFIG=${KUBECONFIG_PATH}"
log " kubectl get nodes"
log " ./scripts/02-setup-secrets.sh"


@@ -0,0 +1,153 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=_config.sh
source "${SCRIPT_DIR}/_config.sh"
SECRETS_DIR="${DEPLOY_DIR}/secrets"
NAMESPACE="honeydue"
log() { printf '[secrets] %s\n' "$*"; }
warn() { printf '[secrets][warn] %s\n' "$*" >&2; }
die() { printf '[secrets][error] %s\n' "$*" >&2; exit 1; }
# --- Prerequisites ---
command -v kubectl >/dev/null 2>&1 || die "Missing: kubectl"
kubectl get namespace "${NAMESPACE}" >/dev/null 2>&1 || {
  log "Creating namespace ${NAMESPACE}..."
  kubectl apply -f "${DEPLOY_DIR}/manifests/namespace.yaml"
}
# --- Validate secret files ---
require_file() {
  local path="$1" label="$2"
  [[ -f "${path}" ]] || die "Missing: ${path} (${label})"
  [[ -s "${path}" ]] || die "Empty: ${path} (${label})"
}
require_file "${SECRETS_DIR}/postgres_password.txt" "Postgres password"
require_file "${SECRETS_DIR}/secret_key.txt" "SECRET_KEY"
require_file "${SECRETS_DIR}/email_host_password.txt" "SMTP password"
# FCM server key is optional (Android not yet ready)
if [[ -f "${SECRETS_DIR}/fcm_server_key.txt" && -s "${SECRETS_DIR}/fcm_server_key.txt" ]]; then
  FCM_CONTENT="$(tr -d '\r\n' < "${SECRETS_DIR}/fcm_server_key.txt")"
  if [[ "${FCM_CONTENT}" == "PLACEHOLDER" ]]; then
    warn "fcm_server_key.txt is a placeholder — FCM push disabled"
    FCM_CONTENT=""
  fi
else
  warn "fcm_server_key.txt not found — FCM push disabled"
  FCM_CONTENT=""
fi
require_file "${SECRETS_DIR}/apns_auth_key.p8" "APNS private key"
require_file "${SECRETS_DIR}/minio_root_password.txt" "MinIO root password"
# Validate APNS key format
if ! grep -q "BEGIN PRIVATE KEY" "${SECRETS_DIR}/apns_auth_key.p8"; then
  die "APNS key file does not look like a private key: ${SECRETS_DIR}/apns_auth_key.p8"
fi
# Validate secret_key length (minimum 32 chars)
SECRET_KEY_LEN="$(tr -d '\r\n' < "${SECRETS_DIR}/secret_key.txt" | wc -c | tr -d ' ')"
if (( SECRET_KEY_LEN < 32 )); then
  die "secret_key.txt must be at least 32 characters (got ${SECRET_KEY_LEN})."
fi
# Validate MinIO password length (minimum 8 chars)
MINIO_PW_LEN="$(tr -d '\r\n' < "${SECRETS_DIR}/minio_root_password.txt" | wc -c | tr -d ' ')"
if (( MINIO_PW_LEN < 8 )); then
  die "minio_root_password.txt must be at least 8 characters (got ${MINIO_PW_LEN})."
fi
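These length floors are comfortably cleared by the generators in 00-init.sh, since base64 emits 4 output characters per 3 input bytes:

```shell
#!/usr/bin/env sh
# 48 random bytes → 64 base64 chars (secret_key, minimum 32)
openssl rand -base64 48 | tr -d '\r\n' | wc -c | tr -d ' '   # 64
# 24 random bytes → 32 base64 chars (postgres/minio passwords, minimum 8)
openssl rand -base64 24 | tr -d '\r\n' | wc -c | tr -d ' '   # 32
```

The `tr -d '\r\n'` mirrors the validation above: the trailing newline openssl writes is stripped before counting, so the check measures the secret itself.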
# --- Read optional config values ---
REDIS_PASSWORD="$(cfg redis.password 2>/dev/null || true)"
ADMIN_AUTH_USER="$(cfg admin.basic_auth_user 2>/dev/null || true)"
ADMIN_AUTH_PASSWORD="$(cfg admin.basic_auth_password 2>/dev/null || true)"
TLS_MODE="$(cfg tls.mode 2>/dev/null || echo "letsencrypt")"
# --- Create app secrets ---
log "Creating honeydue-secrets..."
SECRET_ARGS=(
  --namespace="${NAMESPACE}"
  --from-literal="POSTGRES_PASSWORD=$(tr -d '\r\n' < "${SECRETS_DIR}/postgres_password.txt")"
  --from-literal="SECRET_KEY=$(tr -d '\r\n' < "${SECRETS_DIR}/secret_key.txt")"
  --from-literal="EMAIL_HOST_PASSWORD=$(tr -d '\r\n' < "${SECRETS_DIR}/email_host_password.txt")"
  --from-literal="FCM_SERVER_KEY=${FCM_CONTENT}"
  --from-literal="MINIO_ROOT_PASSWORD=$(tr -d '\r\n' < "${SECRETS_DIR}/minio_root_password.txt")"
)
if [[ -n "${REDIS_PASSWORD}" ]]; then
  log "  Including REDIS_PASSWORD in secrets"
  SECRET_ARGS+=(--from-literal="REDIS_PASSWORD=${REDIS_PASSWORD}")
fi
kubectl create secret generic honeydue-secrets \
  "${SECRET_ARGS[@]}" \
  --dry-run=client -o yaml | kubectl apply -f -
# --- Create APNS key secret ---
log "Creating honeydue-apns-key..."
kubectl create secret generic honeydue-apns-key \
  --namespace="${NAMESPACE}" \
  --from-file="apns_auth_key.p8=${SECRETS_DIR}/apns_auth_key.p8" \
  --dry-run=client -o yaml | kubectl apply -f -
# --- Create GHCR registry credentials ---
REGISTRY_SERVER="$(cfg registry.server)"
REGISTRY_USER="$(cfg registry.username)"
REGISTRY_TOKEN="$(cfg registry.token)"
if [[ -n "${REGISTRY_SERVER}" && -n "${REGISTRY_USER}" && -n "${REGISTRY_TOKEN}" ]]; then
  log "Creating ghcr-credentials..."
  kubectl create secret docker-registry ghcr-credentials \
    --namespace="${NAMESPACE}" \
    --docker-server="${REGISTRY_SERVER}" \
    --docker-username="${REGISTRY_USER}" \
    --docker-password="${REGISTRY_TOKEN}" \
    --dry-run=client -o yaml | kubectl apply -f -
else
  warn "Registry credentials incomplete in config.yaml — skipping ghcr-credentials."
fi
# --- Create Cloudflare origin cert (only if cloudflare mode) ---
if [[ "${TLS_MODE}" == "cloudflare" ]]; then
  require_file "${SECRETS_DIR}/cloudflare-origin.crt" "Cloudflare origin cert"
  require_file "${SECRETS_DIR}/cloudflare-origin.key" "Cloudflare origin key"
  log "Creating cloudflare-origin-cert..."
  kubectl create secret tls cloudflare-origin-cert \
    --namespace="${NAMESPACE}" \
    --cert="${SECRETS_DIR}/cloudflare-origin.crt" \
    --key="${SECRETS_DIR}/cloudflare-origin.key" \
    --dry-run=client -o yaml | kubectl apply -f -
fi
# --- Create admin basic auth secret ---
if [[ -n "${ADMIN_AUTH_USER}" && -n "${ADMIN_AUTH_PASSWORD}" ]]; then
  command -v htpasswd >/dev/null 2>&1 || die "Missing: htpasswd (install apache2-utils)"
  log "Creating admin-basic-auth secret..."
  HTPASSWD="$(htpasswd -nb "${ADMIN_AUTH_USER}" "${ADMIN_AUTH_PASSWORD}")"
  kubectl create secret generic admin-basic-auth \
    --namespace="${NAMESPACE}" \
    --from-literal=users="${HTPASSWD}" \
    --dry-run=client -o yaml | kubectl apply -f -
else
  warn "admin.basic_auth_user/password not set in config.yaml — skipping admin-basic-auth."
  warn "Admin panel will NOT have basic auth protection."
fi
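`htpasswd -nb` emits a `user:hash` line (apr1-MD5 by default in Apache 2.4), which Traefik's BasicAuth middleware accepts. If `htpasswd` isn't installed, `openssl passwd -apr1` produces the same hash scheme; a sketch with a throwaway password (`s3cret` is illustrative only):

```shell
#!/usr/bin/env sh
# Equivalent of `htpasswd -nb admin s3cret` without apache2-utils:
HASH="$(openssl passwd -apr1 's3cret')"
printf '%s:%s\n' admin "${HASH}"   # admin:$apr1$<salt>$<digest>
```

The resulting line is what goes into the `users` key of the `admin-basic-auth` secret.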
# --- Done ---
log ""
log "All secrets created in namespace '${NAMESPACE}'."
log "Verify: kubectl get secrets -n ${NAMESPACE}"


@@ -0,0 +1,193 @@
#!/usr/bin/env bash
set -euo pipefail
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=_config.sh
source "${SCRIPT_DIR}/_config.sh"
REPO_DIR="$(cd "${DEPLOY_DIR}/.." && pwd)"
NAMESPACE="honeydue"
MANIFESTS="${DEPLOY_DIR}/manifests"
log() { printf '[deploy] %s\n' "$*"; }
warn() { printf '[deploy][warn] %s\n' "$*" >&2; }
die() { printf '[deploy][error] %s\n' "$*" >&2; exit 1; }
# --- Parse arguments ---
SKIP_BUILD=false
DEPLOY_TAG=""
while (( $# > 0 )); do
  case "$1" in
    --skip-build) SKIP_BUILD=true; shift ;;
    --tag)
      [[ -n "${2:-}" ]] || die "--tag requires a value"
      DEPLOY_TAG="$2"; shift 2 ;;
    -h|--help)
      cat <<'EOF'
Usage: ./scripts/03-deploy.sh [OPTIONS]

Options:
  --skip-build   Skip Docker build/push, use existing images
  --tag <tag>    Image tag (default: git short SHA)
  -h, --help     Show this help
EOF
      exit 0 ;;
    *) die "Unknown argument: $1" ;;
  esac
done
# --- Prerequisites ---
command -v kubectl >/dev/null 2>&1 || die "Missing: kubectl"
command -v docker >/dev/null 2>&1 || die "Missing: docker"
if [[ -z "${DEPLOY_TAG}" ]]; then
  DEPLOY_TAG="$(git -C "${REPO_DIR}" rev-parse --short HEAD 2>/dev/null || echo "latest")"
fi
# --- Read config ---
REGISTRY_SERVER="$(cfg_require registry.server "Container registry server")"
REGISTRY_NS="$(cfg_require registry.namespace "Registry namespace")"
REGISTRY_USER="$(cfg_require registry.username "Registry username")"
REGISTRY_TOKEN="$(cfg_require registry.token "Registry token")"
TLS_MODE="$(cfg tls.mode 2>/dev/null || echo "letsencrypt")"
API_DOMAIN="$(cfg_require domains.api "API domain")"
ADMIN_DOMAIN="$(cfg_require domains.admin "Admin domain")"
REGISTRY_PREFIX="${REGISTRY_SERVER%/}/${REGISTRY_NS#/}"
API_IMAGE="${REGISTRY_PREFIX}/honeydue-api:${DEPLOY_TAG}"
WORKER_IMAGE="${REGISTRY_PREFIX}/honeydue-worker:${DEPLOY_TAG}"
ADMIN_IMAGE="${REGISTRY_PREFIX}/honeydue-admin:${DEPLOY_TAG}"
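The `${REGISTRY_SERVER%/}` and `${REGISTRY_NS#/}` expansions guard against stray slashes in config.yaml values, so the image reference never ends up with `//` in it. In isolation (values here are deliberately malformed to show the trimming):

```shell
#!/usr/bin/env sh
REGISTRY_SERVER="ghcr.io/"   # trailing slash on purpose
REGISTRY_NS="/treytartt"     # leading slash on purpose
# %/ strips one trailing slash; #/ strips one leading slash
printf '%s/%s\n' "${REGISTRY_SERVER%/}" "${REGISTRY_NS#/}"   # ghcr.io/treytartt
```

Both expansions are no-ops when the slash is absent, so well-formed config values pass through unchanged.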
# --- Build and push ---
if [[ "${SKIP_BUILD}" == "false" ]]; then
  log "Logging in to ${REGISTRY_SERVER}..."
  printf '%s' "${REGISTRY_TOKEN}" | docker login "${REGISTRY_SERVER}" -u "${REGISTRY_USER}" --password-stdin >/dev/null
  log "Building API image: ${API_IMAGE}"
  docker build --target api -t "${API_IMAGE}" "${REPO_DIR}"
  log "Building Worker image: ${WORKER_IMAGE}"
  docker build --target worker -t "${WORKER_IMAGE}" "${REPO_DIR}"
  log "Building Admin image: ${ADMIN_IMAGE}"
  docker build --target admin -t "${ADMIN_IMAGE}" "${REPO_DIR}"
  log "Pushing images..."
  docker push "${API_IMAGE}"
  docker push "${WORKER_IMAGE}"
  docker push "${ADMIN_IMAGE}"
  # Also tag and push :latest
  docker tag "${API_IMAGE}" "${REGISTRY_PREFIX}/honeydue-api:latest"
  docker tag "${WORKER_IMAGE}" "${REGISTRY_PREFIX}/honeydue-worker:latest"
  docker tag "${ADMIN_IMAGE}" "${REGISTRY_PREFIX}/honeydue-admin:latest"
  docker push "${REGISTRY_PREFIX}/honeydue-api:latest"
  docker push "${REGISTRY_PREFIX}/honeydue-worker:latest"
  docker push "${REGISTRY_PREFIX}/honeydue-admin:latest"
else
  warn "Skipping build. Using images for tag: ${DEPLOY_TAG}"
fi
# --- Generate and apply ConfigMap from config.yaml ---
log "Generating env from config.yaml..."
ENV_FILE="$(mktemp)"
trap 'rm -f "${ENV_FILE}"' EXIT
generate_env > "${ENV_FILE}"
log "Creating ConfigMap..."
kubectl create configmap honeydue-config \
  --namespace="${NAMESPACE}" \
  --from-env-file="${ENV_FILE}" \
  --dry-run=client -o yaml | kubectl apply -f -
# --- Configure TLS ---
if [[ "${TLS_MODE}" == "letsencrypt" ]]; then
  LE_EMAIL="$(cfg_require tls.letsencrypt_email "Let's Encrypt email")"
  log "Configuring Traefik with Let's Encrypt (${LE_EMAIL})..."
  sed "s|LETSENCRYPT_EMAIL_PLACEHOLDER|${LE_EMAIL}|" \
    "${MANIFESTS}/traefik/helmchartconfig.yaml" | kubectl apply -f -
  TLS_SECRET="letsencrypt-cert"
  TLS_ANNOTATION="traefik.ingress.kubernetes.io/router.tls.certresolver: letsencrypt"
elif [[ "${TLS_MODE}" == "cloudflare" ]]; then
  log "Using Cloudflare origin cert for TLS..."
  TLS_SECRET="cloudflare-origin-cert"
  TLS_ANNOTATION=""
else
  die "Unknown tls.mode: ${TLS_MODE} (expected: letsencrypt or cloudflare)"
fi
# --- Apply manifests ---
log "Applying manifests..."
kubectl apply -f "${MANIFESTS}/namespace.yaml"
kubectl apply -f "${MANIFESTS}/rbac.yaml"
kubectl apply -f "${MANIFESTS}/postgres/"
kubectl apply -f "${MANIFESTS}/redis/"
kubectl apply -f "${MANIFESTS}/minio/deployment.yaml"
kubectl apply -f "${MANIFESTS}/minio/pvc.yaml"
kubectl apply -f "${MANIFESTS}/minio/service.yaml"
kubectl apply -f "${MANIFESTS}/ingress/middleware.yaml"
# Apply ingress with domain and TLS substitution
sed -e "s|API_DOMAIN_PLACEHOLDER|${API_DOMAIN}|g" \
    -e "s|ADMIN_DOMAIN_PLACEHOLDER|${ADMIN_DOMAIN}|g" \
    -e "s|TLS_SECRET_PLACEHOLDER|${TLS_SECRET}|g" \
    -e "s|# TLS_ANNOTATIONS_PLACEHOLDER|${TLS_ANNOTATION}|g" \
    "${MANIFESTS}/ingress/ingress.yaml" | kubectl apply -f -
# Apply app deployments with image substitution
sed "s|image: IMAGE_PLACEHOLDER|image: ${API_IMAGE}|" "${MANIFESTS}/api/deployment.yaml" | kubectl apply -f -
kubectl apply -f "${MANIFESTS}/api/service.yaml"
sed "s|image: IMAGE_PLACEHOLDER|image: ${WORKER_IMAGE}|" "${MANIFESTS}/worker/deployment.yaml" | kubectl apply -f -
sed "s|image: IMAGE_PLACEHOLDER|image: ${ADMIN_IMAGE}|" "${MANIFESTS}/admin/deployment.yaml" | kubectl apply -f -
kubectl apply -f "${MANIFESTS}/admin/service.yaml"
# Apply network policies
kubectl apply -f "${MANIFESTS}/network-policies.yaml"
# --- Wait for infrastructure rollouts ---
log "Waiting for infrastructure rollouts..."
kubectl rollout status deployment/postgres -n "${NAMESPACE}" --timeout=120s
kubectl rollout status deployment/redis -n "${NAMESPACE}" --timeout=120s
kubectl rollout status deployment/minio -n "${NAMESPACE}" --timeout=120s
# --- Create MinIO bucket ---
log "Creating MinIO bucket..."
# Delete previous job run if it exists (jobs are immutable)
kubectl delete job minio-create-bucket -n "${NAMESPACE}" 2>/dev/null || true
kubectl apply -f "${MANIFESTS}/minio/create-bucket-job.yaml"
kubectl wait --for=condition=complete job/minio-create-bucket -n "${NAMESPACE}" --timeout=120s
# --- Wait for app rollouts ---
log "Waiting for app rollouts..."
kubectl rollout status deployment/api -n "${NAMESPACE}" --timeout=300s
kubectl rollout status deployment/worker -n "${NAMESPACE}" --timeout=300s
kubectl rollout status deployment/admin -n "${NAMESPACE}" --timeout=300s
# --- Done ---
log ""
log "Deploy completed successfully."
log "Tag: ${DEPLOY_TAG}"
log "TLS: ${TLS_MODE}"
log "Images:"
log " API: ${API_IMAGE}"
log " Worker: ${WORKER_IMAGE}"
log " Admin: ${ADMIN_IMAGE}"
log ""
log "Run ./scripts/04-verify.sh to check cluster health."


@@ -0,0 +1,161 @@
#!/usr/bin/env bash
set -euo pipefail
NAMESPACE="honeydue"
log() { printf '[verify] %s\n' "$*"; }
sep() { printf '\n%s\n' "--- $1 ---"; }
ok() { printf '[verify] ✓ %s\n' "$*"; }
fail() { printf '[verify] ✗ %s\n' "$*"; }
command -v kubectl >/dev/null 2>&1 || { echo "Missing: kubectl" >&2; exit 1; }
sep "Node"
kubectl get nodes -o wide
sep "Pods"
kubectl get pods -n "${NAMESPACE}" -o wide
sep "Services"
kubectl get svc -n "${NAMESPACE}"
sep "Ingress"
kubectl get ingress -n "${NAMESPACE}"
sep "PVCs"
kubectl get pvc -n "${NAMESPACE}"
sep "Secrets (names only)"
kubectl get secrets -n "${NAMESPACE}"
sep "ConfigMap keys"
kubectl get configmap honeydue-config -n "${NAMESPACE}" -o jsonpath='{.data}' 2>/dev/null | python3 -c "
import json, sys
try:
    d = json.load(sys.stdin)
    for k in sorted(d.keys()):
        v = d[k]
        if any(s in k.upper() for s in ['PASSWORD', 'SECRET', 'TOKEN', 'KEY']):
            v = '***REDACTED***'
        print(f' {k}={v}')
except Exception:
    print(' (could not parse)')
" 2>/dev/null || log "ConfigMap not found or not parseable"
sep "Warning Events (last 15 min)"
kubectl get events -n "${NAMESPACE}" --field-selector type=Warning --sort-by='.lastTimestamp' 2>/dev/null | tail -20 || log "No warning events"
sep "Pod Restart Counts"
kubectl get pods -n "${NAMESPACE}" -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{range .status.containerStatuses[*]}{.restartCount}{end}{"\n"}{end}' 2>/dev/null || true
# =============================================================================
# Infrastructure Health
# =============================================================================
sep "PostgreSQL Health"
PG_POD="$(kubectl get pods -n "${NAMESPACE}" -l app.kubernetes.io/name=postgres -o jsonpath='{.items[0].metadata.name}' 2>/dev/null || true)"
if [[ -n "${PG_POD}" ]]; then
kubectl exec -n "${NAMESPACE}" "${PG_POD}" -- pg_isready -U honeydue 2>/dev/null && ok "PostgreSQL is ready" || fail "PostgreSQL is NOT ready"
else
fail "No PostgreSQL pod found"
fi
sep "Redis Health"
REDIS_POD="$(kubectl get pods -n "${NAMESPACE}" -l app.kubernetes.io/name=redis -o jsonpath='{.items[0].metadata.name}' 2>/dev/null || true)"
if [[ -n "${REDIS_POD}" ]]; then
kubectl exec -n "${NAMESPACE}" "${REDIS_POD}" -- sh -c 'if [ -n "$REDIS_PASSWORD" ]; then redis-cli -a "$REDIS_PASSWORD" ping 2>/dev/null; else redis-cli ping; fi' 2>/dev/null | grep -q PONG && ok "Redis is ready" || fail "Redis is NOT ready"
else
fail "No Redis pod found"
fi
sep "MinIO Health"
MINIO_POD="$(kubectl get pods -n "${NAMESPACE}" -l app.kubernetes.io/name=minio -o jsonpath='{.items[0].metadata.name}' 2>/dev/null || true)"
if [[ -n "${MINIO_POD}" ]]; then
kubectl exec -n "${NAMESPACE}" "${MINIO_POD}" -- curl -sf http://localhost:9000/minio/health/ready 2>/dev/null && ok "MinIO is ready" || fail "MinIO is NOT ready"
else
fail "No MinIO pod found"
fi
sep "API Health Check"
API_POD="$(kubectl get pods -n "${NAMESPACE}" -l app.kubernetes.io/name=api -o jsonpath='{.items[0].metadata.name}' 2>/dev/null || true)"
if [[ -n "${API_POD}" ]]; then
kubectl exec -n "${NAMESPACE}" "${API_POD}" -- curl -sf http://localhost:8000/api/health/ 2>/dev/null && ok "API health check passed" || fail "API health check FAILED"
else
fail "No API pod found"
fi
sep "Resource Usage"
kubectl top pods -n "${NAMESPACE}" 2>/dev/null || log "Metrics server not available (install with: kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml)"
# =============================================================================
# Security Verification
# =============================================================================
sep "Security: Network Policies"
NP_COUNT="$(kubectl get networkpolicy -n "${NAMESPACE}" --no-headers 2>/dev/null | wc -l | tr -d ' ')"
if (( NP_COUNT >= 5 )); then
ok "Found ${NP_COUNT} network policies"
kubectl get networkpolicy -n "${NAMESPACE}" --no-headers 2>/dev/null | while read -r line; do
echo " ${line}"
done
else
fail "Expected 5+ network policies, found ${NP_COUNT}"
fi
sep "Security: Service Accounts"
# grep -c exits 1 when it counts zero matches; || true keeps pipefail from aborting the script
SA_COUNT="$(kubectl get sa -n "${NAMESPACE}" --no-headers 2>/dev/null | grep -cv default || true)"
if (( SA_COUNT >= 6 )); then
ok "Found ${SA_COUNT} custom service accounts (api, worker, admin, redis, postgres, minio)"
else
fail "Expected 6 custom service accounts, found ${SA_COUNT}"
fi
kubectl get sa -n "${NAMESPACE}" --no-headers 2>/dev/null | while read -r line; do
echo " ${line}"
done
sep "Security: Pod Security Contexts"
PODS_WITHOUT_SECURITY="$(kubectl get pods -n "${NAMESPACE}" -o json 2>/dev/null | python3 -c "
import json, sys
try:
    data = json.load(sys.stdin)
    issues = []
    for pod in data.get('items', []):
        name = pod['metadata']['name']
        spec = pod['spec']
        sc = spec.get('securityContext', {})
        # Postgres is exempt from runAsNonRoot (entrypoint requirement)
        is_postgres = any('postgres' in c.get('image', '') for c in spec.get('containers', []))
        if not sc.get('runAsNonRoot') and not is_postgres:
            issues.append(f'{name}: missing runAsNonRoot')
        for c in spec.get('containers', []):
            csc = c.get('securityContext', {})
            if csc.get('allowPrivilegeEscalation', True):
                issues.append(f'{name}/{c[\"name\"]}: allowPrivilegeEscalation not false')
    if issues:
        for i in issues:
            print(i)
    else:
        print('OK')
except Exception as e:
    print(f'Error: {e}')
" 2>/dev/null || echo "Error parsing pod specs")"
if [[ "${PODS_WITHOUT_SECURITY}" == "OK" ]]; then
ok "All pods have proper security contexts"
else
fail "Pod security context issues:"
echo "${PODS_WITHOUT_SECURITY}" | while read -r line; do
echo " ${line}"
done
fi
sep "Security: Admin Basic Auth"
ADMIN_AUTH="$(kubectl get secret admin-basic-auth -n "${NAMESPACE}" -o name 2>/dev/null || true)"
if [[ -n "${ADMIN_AUTH}" ]]; then
ok "admin-basic-auth secret exists"
else
fail "admin-basic-auth secret not found — admin panel has no additional auth layer"
fi
echo ""
log "Verification complete."

deploy-k3s-dev/scripts/_config.sh Executable file

@@ -0,0 +1,152 @@
#!/usr/bin/env bash
# Shared config helper — sourced by all deploy scripts.
# Provides cfg() to read values from config.yaml.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
DEPLOY_DIR="$(cd "${SCRIPT_DIR}/.." && pwd)"
CONFIG_FILE="${DEPLOY_DIR}/config.yaml"
if [[ ! -f "${CONFIG_FILE}" ]]; then
if [[ -f "${CONFIG_FILE}.example" ]]; then
echo "[error] config.yaml not found. Run: cp config.yaml.example config.yaml" >&2
else
echo "[error] config.yaml not found." >&2
fi
exit 1
fi
# cfg "dotted.key.path" — reads a value from config.yaml
cfg() {
python3 -c "
import yaml, json, sys
with open(sys.argv[1]) as f:
    c = yaml.safe_load(f)
keys = sys.argv[2].split('.')
v = c
for k in keys:
    if isinstance(v, list):
        v = v[int(k)]
    else:
        v = v[k]
if isinstance(v, bool):
    print(str(v).lower())
elif isinstance(v, (dict, list)):
    print(json.dumps(v))
else:
    print('' if v is None else v)
" "${CONFIG_FILE}" "$1" 2>/dev/null
}
# cfg_require "key" "label" — reads value and dies if empty
cfg_require() {
local val
val="$(cfg "$1")"
if [[ -z "${val}" ]]; then
echo "[error] Missing required config: $1 ($2)" >&2
exit 1
fi
printf '%s' "${val}"
}
# generate_env — writes the flat env file the app expects to stdout
# Points DB at in-cluster PostgreSQL, storage at in-cluster MinIO
generate_env() {
python3 -c "
import yaml
with open('${CONFIG_FILE}') as f:
    c = yaml.safe_load(f)
d = c['domains']
db = c['database']
em = c['email']
ps = c['push']
st = c['storage']
wk = c['worker']
ft = c['features']
aa = c.get('apple_auth', {})
ga = c.get('google_auth', {})
rd = c.get('redis', {})
def b(v):
    return str(v).lower() if isinstance(v, bool) else str(v)
def val(v):
    return '' if v is None else str(v)
lines = [
# API
'DEBUG=true',
f\"ALLOWED_HOSTS={d['api']},{d['base']},localhost\",
f\"CORS_ALLOWED_ORIGINS=https://{d['base']},https://{d['admin']}\",
'TIMEZONE=UTC',
f\"BASE_URL=https://{d['base']}\",
'PORT=8000',
# Admin
f\"NEXT_PUBLIC_API_URL=https://{d['api']}\",
f\"ADMIN_PANEL_URL=https://{d['admin']}\",
# Database (in-cluster PostgreSQL)
'DB_HOST=postgres.honeydue.svc.cluster.local',
'DB_PORT=5432',
f\"POSTGRES_USER={val(db['user'])}\",
f\"POSTGRES_DB={db['name']}\",
'DB_SSLMODE=disable',
f\"DB_MAX_OPEN_CONNS={db['max_open_conns']}\",
f\"DB_MAX_IDLE_CONNS={db['max_idle_conns']}\",
f\"DB_MAX_LIFETIME={db['max_lifetime']}\",
# Redis (in-cluster)
f\"REDIS_URL=redis://{':%s@' % val(rd.get('password')) if rd.get('password') else ''}redis.honeydue.svc.cluster.local:6379/0\",
'REDIS_DB=0',
# Email
f\"EMAIL_HOST={em['host']}\",
f\"EMAIL_PORT={em['port']}\",
f\"EMAIL_USE_TLS={b(em['use_tls'])}\",
f\"EMAIL_HOST_USER={val(em['user'])}\",
f\"DEFAULT_FROM_EMAIL={val(em['from'])}\",
# Push
'APNS_AUTH_KEY_PATH=/secrets/apns/apns_auth_key.p8',
f\"APNS_AUTH_KEY_ID={val(ps['apns_key_id'])}\",
f\"APNS_TEAM_ID={val(ps['apns_team_id'])}\",
f\"APNS_TOPIC={ps['apns_topic']}\",
f\"APNS_USE_SANDBOX={b(ps['apns_use_sandbox'])}\",
f\"APNS_PRODUCTION={b(ps['apns_production'])}\",
# Worker
f\"TASK_REMINDER_HOUR={wk['task_reminder_hour']}\",
f\"OVERDUE_REMINDER_HOUR={wk['overdue_reminder_hour']}\",
f\"DAILY_DIGEST_HOUR={wk['daily_digest_hour']}\",
# Storage (in-cluster MinIO — S3-compatible, same env vars as B2)
f\"B2_KEY_ID={val(st['minio_root_user'])}\",
# B2_APP_KEY injected from secret (MINIO_ROOT_PASSWORD)
f\"B2_BUCKET_NAME={val(st['bucket'])}\",
'B2_ENDPOINT=minio.honeydue.svc.cluster.local:9000',
'STORAGE_USE_SSL=false',
f\"STORAGE_MAX_FILE_SIZE={st['max_file_size']}\",
f\"STORAGE_ALLOWED_TYPES={st['allowed_types']}\",
# MinIO root user (for MinIO deployment + bucket init job)
f\"MINIO_ROOT_USER={val(st['minio_root_user'])}\",
# Features
f\"FEATURE_PUSH_ENABLED={b(ft['push_enabled'])}\",
f\"FEATURE_EMAIL_ENABLED={b(ft['email_enabled'])}\",
f\"FEATURE_WEBHOOKS_ENABLED={b(ft['webhooks_enabled'])}\",
f\"FEATURE_ONBOARDING_EMAILS_ENABLED={b(ft['onboarding_emails_enabled'])}\",
f\"FEATURE_PDF_REPORTS_ENABLED={b(ft['pdf_reports_enabled'])}\",
f\"FEATURE_WORKER_ENABLED={b(ft['worker_enabled'])}\",
# Apple auth/IAP
f\"APPLE_CLIENT_ID={val(aa.get('client_id'))}\",
f\"APPLE_TEAM_ID={val(aa.get('team_id'))}\",
f\"APPLE_IAP_KEY_ID={val(aa.get('iap_key_id'))}\",
f\"APPLE_IAP_ISSUER_ID={val(aa.get('iap_issuer_id'))}\",
f\"APPLE_IAP_BUNDLE_ID={val(aa.get('iap_bundle_id'))}\",
f\"APPLE_IAP_KEY_PATH={val(aa.get('iap_key_path'))}\",
f\"APPLE_IAP_SANDBOX={b(aa.get('iap_sandbox', True))}\",
# Google auth/IAP
f\"GOOGLE_CLIENT_ID={val(ga.get('client_id'))}\",
f\"GOOGLE_ANDROID_CLIENT_ID={val(ga.get('android_client_id'))}\",
f\"GOOGLE_IOS_CLIENT_ID={val(ga.get('ios_client_id'))}\",
f\"GOOGLE_IAP_PACKAGE_NAME={val(ga.get('iap_package_name'))}\",
f\"GOOGLE_IAP_SERVICE_ACCOUNT_PATH={val(ga.get('iap_service_account_path'))}\",
]
print('\n'.join(lines))
"
}


@@ -0,0 +1,61 @@
#!/usr/bin/env bash
set -euo pipefail
NAMESPACE="honeydue"
log() { printf '[rollback] %s\n' "$*"; }
die() { printf '[rollback][error] %s\n' "$*" >&2; exit 1; }
command -v kubectl >/dev/null 2>&1 || die "Missing: kubectl"
DEPLOYMENTS=("api" "worker" "admin")
# --- Show current state ---
echo "=== Current Rollout History ==="
for deploy in "${DEPLOYMENTS[@]}"; do
echo ""
echo "--- ${deploy} ---"
kubectl rollout history deployment/"${deploy}" -n "${NAMESPACE}" 2>/dev/null || echo " (not found)"
done
echo ""
echo "=== Current Images ==="
for deploy in "${DEPLOYMENTS[@]}"; do
IMAGE="$(kubectl get deployment "${deploy}" -n "${NAMESPACE}" -o jsonpath='{.spec.template.spec.containers[0].image}' 2>/dev/null || echo "n/a")"
echo " ${deploy}: ${IMAGE}"
done
# --- Confirm ---
echo ""
read -rp "Roll back all deployments to previous revision? [y/N] " confirm
if [[ "${confirm}" != "y" && "${confirm}" != "Y" ]]; then
log "Aborted."
exit 0
fi
# --- Rollback ---
for deploy in "${DEPLOYMENTS[@]}"; do
log "Rolling back ${deploy}..."
kubectl rollout undo deployment/"${deploy}" -n "${NAMESPACE}" 2>/dev/null || log "Skipping ${deploy} (not found or no previous revision)"
done
# --- Wait ---
log "Waiting for rollouts..."
for deploy in "${DEPLOYMENTS[@]}"; do
kubectl rollout status deployment/"${deploy}" -n "${NAMESPACE}" --timeout=300s 2>/dev/null || true
done
# --- Verify ---
echo ""
echo "=== Post-Rollback Images ==="
for deploy in "${DEPLOYMENTS[@]}"; do
IMAGE="$(kubectl get deployment "${deploy}" -n "${NAMESPACE}" -o jsonpath='{.spec.template.spec.containers[0].image}' 2>/dev/null || echo "n/a")"
echo " ${deploy}: ${IMAGE}"
done
log "Rollback complete. Run ./scripts/04-verify.sh to check health."


@@ -0,0 +1,22 @@
# Secrets Directory
Create these files before running `scripts/02-setup-secrets.sh`:
| File | Purpose |
|------|---------|
| `postgres_password.txt` | In-cluster PostgreSQL password |
| `secret_key.txt` | App signing secret (minimum 32 characters) |
| `email_host_password.txt` | SMTP password (Fastmail app password) |
| `fcm_server_key.txt` | Firebase Cloud Messaging server key (optional — Android not yet ready) |
| `apns_auth_key.p8` | Apple Push Notification private key |
| `minio_root_password.txt` | MinIO root password (minimum 8 characters) |
Optional (only if `tls.mode: cloudflare` in config.yaml):
| File | Purpose |
|------|---------|
| `cloudflare-origin.crt` | Cloudflare origin certificate (PEM) |
| `cloudflare-origin.key` | Cloudflare origin certificate key (PEM) |
All string config (registry token, domains, etc.) goes in `config.yaml` instead.
These files are gitignored and should never be committed.
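The minimum lengths in the table above can be sanity-checked before running the setup script. A minimal sketch (filenames are the ones listed above; `check_secrets` is an illustrative helper, not part of the repo):

```python
from pathlib import Path

# Minimum lengths from the table above; 1 means "just non-empty".
REQUIRED = {
    "postgres_password.txt": 1,
    "secret_key.txt": 32,
    "email_host_password.txt": 1,
    "minio_root_password.txt": 8,
}

def check_secrets(secrets_dir):
    """Return a list of problems; an empty list means the required files look sane."""
    problems = []
    for name, min_len in REQUIRED.items():
        path = Path(secrets_dir) / name
        if not path.is_file():
            problems.append(f"{name}: missing")
        elif len(path.read_text().strip()) < min_len:
            problems.append(f"{name}: shorter than {min_len} characters")
    return problems
```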

deploy-k3s/.gitignore vendored Normal file

@@ -0,0 +1,20 @@
# Single config file (contains tokens and credentials)
config.yaml
# Generated files
kubeconfig
cluster-config.yaml
prod.env
# Secret files
secrets/*.txt
secrets/*.p8
secrets/*.pem
secrets/*.key
secrets/*.crt
!secrets/README.md
# Terraform / Hetzner state
*.tfstate
*.tfstate.backup
.terraform/

deploy-k3s/README.md Normal file

@@ -0,0 +1,391 @@
# honeyDue — K3s Production Deployment
Production Kubernetes deployment for honeyDue on Hetzner Cloud using K3s.
**Architecture**: 3-node HA K3s cluster (CX33), Neon Postgres, Redis (in-cluster), Backblaze B2 (uploads), Cloudflare CDN/TLS.
**Domains**: `api.myhoneydue.com`, `admin.myhoneydue.com`
---
## Quick Start
```bash
cd honeyDueAPI-go/deploy-k3s
# 1. Fill in the single config file
cp config.yaml.example config.yaml
# Edit config.yaml — fill in ALL empty values
# 2. Create secret files
# See secrets/README.md for the full list
echo "your-neon-password" > secrets/postgres_password.txt
openssl rand -base64 48 > secrets/secret_key.txt
echo "your-smtp-password" > secrets/email_host_password.txt
echo "your-fcm-key" > secrets/fcm_server_key.txt
cp /path/to/AuthKey.p8 secrets/apns_auth_key.p8
cp /path/to/origin.pem secrets/cloudflare-origin.crt
cp /path/to/origin-key.pem secrets/cloudflare-origin.key
# 3. Provision → Secrets → Deploy
./scripts/01-provision-cluster.sh
./scripts/02-setup-secrets.sh
./scripts/03-deploy.sh
# 4. Set up Hetzner LB + Cloudflare DNS (see sections below)
# 5. Verify
./scripts/04-verify.sh
curl https://api.myhoneydue.com/api/health/
```
That's it. Everything reads from `config.yaml` + `secrets/`.
---
## Table of Contents
1. [Prerequisites](#1-prerequisites)
2. [Configuration](#2-configuration)
3. [Provision Cluster](#3-provision-cluster)
4. [Create Secrets](#4-create-secrets)
5. [Deploy](#5-deploy)
6. [Configure Load Balancer & DNS](#6-configure-load-balancer--dns)
7. [Verify](#7-verify)
8. [Monitoring & Logs](#8-monitoring--logs)
9. [Scaling](#9-scaling)
10. [Rollback](#10-rollback)
11. [Backup & DR](#11-backup--dr)
12. [Security Checklist](#12-security-checklist)
13. [Troubleshooting](#13-troubleshooting)
---
## 1. Prerequisites
| Tool | Install | Purpose |
|------|---------|---------|
| `hetzner-k3s` | `gem install hetzner-k3s` | Cluster provisioning |
| `kubectl` | https://kubernetes.io/docs/tasks/tools/ | Cluster management |
| `helm` | https://helm.sh/docs/intro/install/ | Optional: Prometheus/Grafana |
| `stern` | `brew install stern` | Multi-pod log tailing |
| `docker` | https://docs.docker.com/get-docker/ | Image building |
| `python3` + PyYAML | Pre-installed on macOS; `pip3 install pyyaml` if PyYAML is missing | Config parsing |
| `htpasswd` | `brew install httpd` or `apt install apache2-utils` | Admin basic auth secret |
Verify:
```bash
hetzner-k3s version && kubectl version --client && docker version && python3 --version
```
## 2. Configuration
There are two things to fill in:
### config.yaml — all string configuration
```bash
cp config.yaml.example config.yaml
```
Open `config.yaml` and fill in every empty `""` value:
| Section | What to fill in |
|---------|----------------|
| `cluster.hcloud_token` | Hetzner API token (Read/Write) — generate at console.hetzner.cloud |
| `registry.*` | GHCR credentials (same as Docker Swarm setup) |
| `database.host`, `database.user` | Neon PostgreSQL connection info |
| `email.user` | Fastmail email address |
| `push.apns_key_id`, `push.apns_team_id` | Apple Push Notification identifiers |
| `storage.b2_*` | Backblaze B2 bucket and credentials |
| `redis.password` | Strong password for Redis authentication (required for production) |
| `admin.basic_auth_user` | HTTP basic auth username for admin panel |
| `admin.basic_auth_password` | HTTP basic auth password for admin panel |
Everything else has sensible defaults. `config.yaml` is gitignored.
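The deploy scripts read these dotted keys through a small `cfg()` helper in `scripts/_config.sh`. Its traversal logic, restated as standalone Python for clarity (a dict literal stands in for the parsed YAML):

```python
import json

def lookup(config, dotted_key):
    """Walk a dotted key path; numeric segments index into lists."""
    value = config
    for part in dotted_key.split("."):
        value = value[int(part)] if isinstance(value, list) else value[part]
    if isinstance(value, bool):
        return str(value).lower()  # YAML bools become "true"/"false" for env files
    if isinstance(value, (dict, list)):
        return json.dumps(value)
    return "" if value is None else str(value)

config = {"cluster": {"hcloud_token": "hcloud-abc"}, "features": {"push_enabled": True}}
print(lookup(config, "cluster.hcloud_token"))   # hcloud-abc
print(lookup(config, "features.push_enabled"))  # true
```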
### secrets/ — file-based secrets
These are key material and credentials kept as files rather than YAML values:
| File | Source |
|------|--------|
| `secrets/postgres_password.txt` | Your Neon database password |
| `secrets/secret_key.txt` | `openssl rand -base64 48` (min 32 chars) |
| `secrets/email_host_password.txt` | Fastmail app password |
| `secrets/fcm_server_key.txt` | Firebase console → Project Settings → Cloud Messaging |
| `secrets/apns_auth_key.p8` | Apple Developer → Keys → APNs key |
| `secrets/cloudflare-origin.crt` | Cloudflare → SSL/TLS → Origin Server → Create Certificate |
| `secrets/cloudflare-origin.key` | (saved with the certificate above) |
## 3. Provision Cluster
```bash
export KUBECONFIG=$(pwd)/kubeconfig
./scripts/01-provision-cluster.sh
```
This script:
1. Reads cluster config from `config.yaml`
2. Generates `cluster-config.yaml` for hetzner-k3s
3. Provisions 3x CX33 nodes with HA etcd (5-10 minutes)
4. Writes node IPs back into `config.yaml`
5. Labels the Redis node
After provisioning:
```bash
kubectl get nodes
```
## 4. Create Secrets
```bash
./scripts/02-setup-secrets.sh
```
This reads `config.yaml` for registry credentials and creates all Kubernetes Secrets from the `secrets/` files:
- `honeydue-secrets` — DB password, app secret, email password, FCM key, Redis password (if configured)
- `honeydue-apns-key` — APNS .p8 key (mounted as volume in pods)
- `ghcr-credentials` — GHCR image pull credentials
- `cloudflare-origin-cert` — TLS certificate for Ingress
- `admin-basic-auth` — htpasswd secret for admin panel basic auth (if configured)
## 5. Deploy
**Full deploy** (build + push + apply):
```bash
./scripts/03-deploy.sh
```
**Deploy pre-built images** (skip build):
```bash
./scripts/03-deploy.sh --skip-build --tag abc1234
```
The script:
1. Reads registry config from `config.yaml`
2. Builds and pushes 3 Docker images to GHCR
3. Generates a Kubernetes ConfigMap from `config.yaml` (converts to flat env vars)
4. Applies all manifests with image tag substitution
5. Waits for all rollouts to complete
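One non-obvious piece of step 3 is the Redis URL: when `redis.password` is set, the flattener emits the `:password@` authority form; otherwise it emits a bare URL. A standalone restatement of that construction (host and port per the in-cluster service):

```python
def redis_url(password=None, host="redis.honeydue.svc.cluster.local", port=6379, db=0):
    """Empty/None password yields a bare URL; otherwise the :password@ form."""
    auth = f":{password}@" if password else ""
    return f"redis://{auth}{host}:{port}/{db}"

print(redis_url())          # redis://redis.honeydue.svc.cluster.local:6379/0
print(redis_url("s3cret"))  # redis://:s3cret@redis.honeydue.svc.cluster.local:6379/0
```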
## 6. Configure Load Balancer & DNS
### Hetzner Load Balancer
1. [Hetzner Console](https://console.hetzner.cloud/) → **Load Balancers → Create**
2. Location: **fsn1**, add all 3 nodes as targets
3. Service: TCP 443 → 443, health check on TCP 443
4. Note the LB IP and update `load_balancer_ip` in `config.yaml`
### Cloudflare DNS
1. [Cloudflare Dashboard](https://dash.cloudflare.com/) → `myhoneydue.com` → **DNS**
| Type | Name | Content | Proxy |
|------|------|---------|-------|
| A | `api` | `<LB_IP>` | Proxied (orange cloud) |
| A | `admin` | `<LB_IP>` | Proxied (orange cloud) |
2. **SSL/TLS → Overview** → Set mode to **Full (Strict)**
3. If you haven't generated the origin cert yet:
**SSL/TLS → Origin Server → Create Certificate**
- Hostnames: `*.myhoneydue.com`, `myhoneydue.com`
- Validity: 15 years
- Save to `secrets/cloudflare-origin.crt` and `secrets/cloudflare-origin.key`
- Re-run `./scripts/02-setup-secrets.sh`
## 7. Verify
```bash
# Automated cluster health check
./scripts/04-verify.sh
# External health check (after DNS propagation)
curl -v https://api.myhoneydue.com/api/health/
```
Expected: `{"status": "ok"}` with HTTP 200.
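The expected response can be captured as a small predicate, e.g. for reuse in external monitoring (a sketch; `healthy` is illustrative, not part of the repo):

```python
import json

def healthy(status_code, body_text):
    """True iff the response matches the expected output: HTTP 200 and {"status": "ok"}."""
    try:
        return status_code == 200 and json.loads(body_text).get("status") == "ok"
    except ValueError:
        return False

print(healthy(200, '{"status": "ok"}'))  # True
print(healthy(502, "Bad Gateway"))       # False
```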
## 8. Monitoring & Logs
### Logs with stern
```bash
stern -n honeydue api # All API pod logs
stern -n honeydue worker # All worker logs
stern -n honeydue . # Everything
stern -n honeydue api | grep ERROR # Filter
```
### kubectl logs
```bash
kubectl logs -n honeydue deployment/api -f
kubectl logs -n honeydue <pod-name> --previous # Crashed container
```
### Resource usage
```bash
kubectl top pods -n honeydue
kubectl top nodes
```
### Optional: Prometheus + Grafana
```bash
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack \
--namespace monitoring --create-namespace \
--set grafana.adminPassword=your-password
# Access Grafana
kubectl port-forward -n monitoring svc/monitoring-grafana 3001:80
# Open http://localhost:3001
```
## 9. Scaling
### Manual
```bash
kubectl scale deployment/api -n honeydue --replicas=5
kubectl scale deployment/worker -n honeydue --replicas=3
```
### HPA (auto-scaling)
API auto-scales 3→6 replicas on CPU > 70% or memory > 80%:
```bash
kubectl get hpa -n honeydue
kubectl describe hpa api -n honeydue
```
### Adding nodes
Edit `config.yaml` to add nodes, then re-run provisioning:
```bash
./scripts/01-provision-cluster.sh
```
## 10. Rollback
```bash
./scripts/rollback.sh
```
Shows rollout history, asks for confirmation, rolls back all deployments to previous revision.
Single deployment rollback:
```bash
kubectl rollout undo deployment/api -n honeydue
```
## 11. Backup & DR
| Component | Strategy | Action Required |
|-----------|----------|-----------------|
| PostgreSQL | Neon PITR (automatic) | None |
| Redis | Reconstructible cache + Asynq queue | None |
| etcd | K3s auto-snapshots (12h, keeps 5) | None |
| B2 Storage | B2 versioning + lifecycle rules | Enable in B2 settings |
| Secrets | Local `secrets/` + `config.yaml` | Keep secure offline backup |
**Disaster recovery**: Re-provision → re-create secrets → re-deploy. Database recovers via Neon PITR.
## 12. Security
See **[SECURITY.md](SECURITY.md)** for the comprehensive hardening guide, incident response playbooks, and full compliance checklist.
### Summary of deployed security controls
| Control | Status | Manifests |
|---------|--------|-----------|
| Pod security contexts (non-root, read-only FS, no caps) | Applied | All `deployment.yaml` |
| Network policies (default-deny + explicit allows) | Applied | `manifests/network-policies.yaml` |
| RBAC (dedicated SAs, no K8s API access) | Applied | `manifests/rbac.yaml` |
| Pod disruption budgets | Applied | `manifests/pod-disruption-budgets.yaml` |
| Redis authentication | Applied (if `redis.password` set) | `redis/deployment.yaml` |
| Cloudflare-only origin lockdown | Applied | `ingress/ingress.yaml` |
| Admin basic auth | Applied (if `admin.*` set) | `ingress/middleware.yaml` |
| Security headers (HSTS, CSP, Permissions-Policy) | Applied | `ingress/middleware.yaml` |
| Secret encryption at rest | K3s config | `--secrets-encryption` |
### Quick checklist
- [ ] Hetzner Firewall: allow only 22, 443, 6443 from your IP
- [ ] SSH: key-only auth (`PasswordAuthentication no`)
- [ ] `redis.password` set in `config.yaml`
- [ ] `admin.basic_auth_user` and `admin.basic_auth_password` set in `config.yaml`
- [ ] `kubeconfig`: `chmod 600 kubeconfig`, never commit
- [ ] `config.yaml`: contains tokens — never commit, keep secure backup
- [ ] Image scanning: `trivy image` or `docker scout cves` before deploy
- [ ] Run `./scripts/04-verify.sh` — includes automated security checks
## 13. Troubleshooting
### ImagePullBackOff
```bash
kubectl describe pod <pod-name> -n honeydue
# Check: image name, GHCR credentials, image exists
```
Fix: verify `registry.*` in config.yaml, re-run `02-setup-secrets.sh`.
### CrashLoopBackOff
```bash
kubectl logs <pod-name> -n honeydue --previous
# Common: missing env vars, DB connection failure, invalid APNS key
```
### Redis connection refused / NOAUTH
```bash
kubectl get pods -n honeydue -l app.kubernetes.io/name=redis
# If redis.password is set, you must authenticate:
kubectl exec -it deploy/redis -n honeydue -- redis-cli -a "$REDIS_PASSWORD" ping
# Without -a: (error) NOAUTH Authentication required.
```
### Health check failures
```bash
kubectl exec -it deploy/api -n honeydue -- curl -v http://localhost:8000/api/health/
kubectl exec -it deploy/api -n honeydue -- env | sort
```
### Pods stuck in Pending
```bash
kubectl describe pod <pod-name> -n honeydue
# For Redis: ensure a node has label honeydue/redis=true
kubectl get nodes --show-labels | grep redis
```
### DNS not resolving
```bash
dig api.myhoneydue.com +short
# Verify LB IP matches what's in config.yaml
```
### Certificate / TLS errors
```bash
kubectl get secret cloudflare-origin-cert -n honeydue
kubectl describe ingress honeydue -n honeydue
curl -vk --resolve api.myhoneydue.com:443:<NODE_IP> https://api.myhoneydue.com/api/health/
```

deploy-k3s/SECURITY.md Normal file

@@ -0,0 +1,813 @@
# honeyDue — Production Security Hardening Guide
Comprehensive security documentation for the honeyDue K3s deployment. Covers every layer from cloud provider to application.
**Last updated**: 2026-03-28
---
## Table of Contents
1. [Threat Model](#1-threat-model)
2. [Hetzner Cloud (Host)](#2-hetzner-cloud-host)
3. [K3s Cluster](#3-k3s-cluster)
4. [Pod Security](#4-pod-security)
5. [Network Segmentation](#5-network-segmentation)
6. [Redis](#6-redis)
7. [PostgreSQL (Neon)](#7-postgresql-neon)
8. [Cloudflare](#8-cloudflare)
9. [Container Images](#9-container-images)
10. [Secrets Management](#10-secrets-management)
11. [B2 Object Storage](#11-b2-object-storage)
12. [Monitoring & Alerting](#12-monitoring--alerting)
13. [Incident Response](#13-incident-response)
14. [Compliance Checklist](#14-compliance-checklist)
---
## 1. Threat Model
### What We're Protecting
| Asset | Impact if Compromised |
|-------|----------------------|
| User credentials (bcrypt hashes) | Account takeover, password reuse attacks |
| Auth tokens | Session hijacking |
| Personal data (email, name, residences) | Privacy violation, regulatory exposure |
| Push notification keys (APNs, FCM) | Spam push to all users, key revocation |
| Cloudflare origin cert | Direct TLS impersonation |
| Database credentials | Full data exfiltration |
| Redis data | Session replay, job queue manipulation |
| B2 storage keys | Document theft or deletion |
### Attack Surface
```
Internet
   ▼
Cloudflare (WAF, DDoS protection, TLS termination)
   ▼ (origin cert, Full Strict)
Hetzner Cloud Firewall (ports 22, 443, 6443)
   ▼
K3s Traefik Ingress (Cloudflare-only IP allowlist)
├──► API pods (Go) ──► Neon PostgreSQL (external, TLS)
│ ──► Redis (internal, authenticated)
│ ──► APNs/FCM (external, TLS)
│ ──► B2 Storage (external, TLS)
│ ──► SMTP (external, TLS)
├──► Admin pods (Next.js) ──► API pods (internal)
└──► Worker pods (Go) ──► same as API
```
### Trust Boundaries
1. **Internet → Cloudflare**: Untrusted. Cloudflare handles DDoS, WAF, TLS.
2. **Cloudflare → Origin**: Semi-trusted. Origin cert validates, IP allowlist enforces.
3. **Ingress → Pods**: Trusted network, but segmented by NetworkPolicy.
4. **Pods → External Services**: Outbound only, TLS required, credentials scoped.
5. **Pods → K8s API**: Denied. Service accounts have no permissions.
---
## 2. Hetzner Cloud (Host)
### Firewall Rules
Only three ports should be open on the Hetzner Cloud Firewall:
| Port | Protocol | Source | Purpose |
|------|----------|--------|---------|
| 22 | TCP | Your IP(s) only | SSH management |
| 443 | TCP | Cloudflare IPs only | HTTPS traffic |
| 6443 | TCP | Your IP(s) only | K3s API (kubectl) |
```bash
# Verify Hetzner firewall rules (Hetzner CLI)
hcloud firewall describe honeydue-fw
```
### SSH Hardening
- **Key-only authentication** — password auth disabled in `/etc/ssh/sshd_config`
- **Root login disabled** — `PermitRootLogin no`
- **fail2ban active** — auto-bans IPs after 5 failed SSH attempts
```bash
# Verify SSH config on each node
ssh user@NODE_IP "grep -E 'PasswordAuthentication|PermitRootLogin' /etc/ssh/sshd_config"
# Expected: PasswordAuthentication no, PermitRootLogin no
# Check fail2ban status
ssh user@NODE_IP "sudo fail2ban-client status sshd"
```
### OS Updates
```bash
# Enable unattended security updates (Ubuntu 24.04)
ssh user@NODE_IP "sudo apt install unattended-upgrades && sudo dpkg-reconfigure -plow unattended-upgrades"
```
---
## 3. K3s Cluster
### Secret Encryption at Rest
K3s is configured with `secrets-encryption: true` in the server config. This encrypts all Secret resources in etcd using AES-CBC.
```bash
# Verify encryption is active
k3s secrets-encrypt status
# Expected: Encryption Status: Enabled
# Rotate encryption keys (do periodically)
k3s secrets-encrypt rotate-keys
k3s secrets-encrypt reencrypt
```
### RBAC
Each workload has a dedicated ServiceAccount with `automountServiceAccountToken: false`:
| ServiceAccount | Used By | K8s API Access |
|---------------|---------|----------------|
| `api` | API deployment | None |
| `worker` | Worker deployment | None |
| `admin` | Admin deployment | None |
| `redis` | Redis deployment | None |
No Roles or RoleBindings are created — pods have zero K8s API access.
```bash
# Verify service accounts exist
kubectl get sa -n honeydue
# Verify no roles are bound
kubectl get rolebindings -n honeydue
kubectl get clusterrolebindings | grep honeydue
# Expected: no results
```
### Pod Disruption Budgets
Prevent node maintenance from taking down all replicas:
| Workload | Replicas | minAvailable |
|----------|----------|-------------|
| API | 3 | 2 |
| Worker | 2 | 1 |
```bash
# Verify PDBs
kubectl get pdb -n honeydue
```
### Audit Logging (Optional Enhancement)
K3s supports audit logging for API server requests:
```yaml
# Add to K3s server config for detailed audit logging
# /etc/rancher/k3s/audit-policy.yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
- level: Metadata
resources:
- group: ""
resources: ["secrets", "configmaps"]
- level: RequestResponse
users: ["system:anonymous"]
- level: None
resources:
- group: ""
resources: ["events"]
```
### WireGuard (Optional Enhancement)
K3s supports WireGuard for encrypting inter-node traffic:
```bash
# Enable WireGuard on K3s (add to server args)
# --flannel-backend=wireguard-native
```
---
## 4. Pod Security
### Security Contexts
Every pod runs with these security restrictions:
**Pod-level:**
```yaml
securityContext:
runAsNonRoot: true
runAsUser: <uid> # 1000 (api/worker), 1001 (admin), 999 (redis)
runAsGroup: <gid>
fsGroup: <gid>
seccompProfile:
type: RuntimeDefault # Linux kernel syscall filtering
```
**Container-level:**
```yaml
securityContext:
allowPrivilegeEscalation: false # Cannot gain more privileges than parent
readOnlyRootFilesystem: true # Filesystem is immutable
capabilities:
drop: ["ALL"] # No Linux capabilities
```
### Writable Directories
With `readOnlyRootFilesystem: true`, writable paths use emptyDir volumes:
| Pod | Path | Purpose | Backing |
|-----|------|---------|---------|
| API | `/tmp` | Temp files | emptyDir (64Mi) |
| Worker | `/tmp` | Temp files | emptyDir (64Mi) |
| Admin | `/app/.next/cache` | Next.js ISR cache | emptyDir (256Mi) |
| Admin | `/tmp` | Temp files | emptyDir (64Mi) |
| Redis | `/data` | Persistence | PVC (5Gi) |
| Redis | `/tmp` | AOF rewrite temp | emptyDir tmpfs (64Mi) |
### User IDs
| Container | UID:GID | Source |
|-----------|---------|--------|
| API | 1000:1000 | Dockerfile `app` user |
| Worker | 1000:1000 | Dockerfile `app` user |
| Admin | 1001:1001 | Dockerfile `nextjs` user |
| Redis | 999:999 | Alpine `redis` user |
```bash
# Verify all pods run as non-root
kubectl get pods -n honeydue -o jsonpath='{range .items[*]}{.metadata.name}{" runAsNonRoot="}{.spec.securityContext.runAsNonRoot}{"\n"}{end}'
```
---
## 5. Network Segmentation
### Default-Deny Policy
All ingress and egress traffic in the `honeydue` namespace is denied by default. Explicit NetworkPolicy rules allow only necessary traffic.
### Allowed Traffic
```
┌─────────────┐
│ Traefik │
│ (kube-system)│
└──────┬──────┘
┌──────────┼──────────┐
│ │ │
▼ ▼ │
┌────────┐ ┌────────┐ │
│ API │ │ Admin │ │
│ :8000 │ │ :3000 │ │
└───┬────┘ └────┬───┘ │
│ │ │
┌───────┤ │ │
│ │ │ │
▼ ▼ ▼ │
┌───────┐ ┌────────┐ ┌────────┐ │
│ Redis │ │External│ │ API │ │
│ :6379 │ │Services│ │(in-clr)│ │
└───────┘ └────────┘ └────────┘ │
▲ │
│ ┌────────┐ │
└───────│ Worker │────────────┘
└────────┘
```
| Policy | From | To | Ports |
|--------|------|----|-------|
| `default-deny-all` | all | all | none |
| `allow-dns` | all pods | kube-dns | 53 UDP/TCP |
| `allow-ingress-to-api` | Traefik (kube-system) | API pods | 8000 |
| `allow-ingress-to-admin` | Traefik (kube-system) | Admin pods | 3000 |
| `allow-ingress-to-redis` | API + Worker pods | Redis | 6379 |
| `allow-egress-from-api` | API pods | Redis, external (443, 5432, 587) | various |
| `allow-egress-from-worker` | Worker pods | Redis, external (443, 5432, 587) | various |
| `allow-egress-from-admin` | Admin pods | API pods (in-cluster) | 8000 |
**Key restrictions:**
- Redis is reachable ONLY from API and Worker pods
- Admin can ONLY reach the API service (no direct DB/Redis access)
- No pod can reach private IP ranges except in-cluster services
- External egress limited to specific ports (443, 5432, 587)
```bash
# Verify network policies
kubectl get networkpolicy -n honeydue
# Test: admin pod should NOT be able to reach Redis
kubectl exec -n honeydue deploy/admin -- nc -zv redis.honeydue.svc.cluster.local 6379
# Expected: timeout/refused
```
---
## 6. Redis
### Authentication
Redis requires a password when `redis.password` is set in `config.yaml`:
- Password passed via `REDIS_PASSWORD` environment variable from `honeydue-secrets`
- Redis starts with `--requirepass $REDIS_PASSWORD`
- Health probes authenticate with `-a $REDIS_PASSWORD`
- Go API connects via `redis://:PASSWORD@redis.honeydue.svc.cluster.local:6379/0`
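The pieces above fit together as follows; a sketch, where generating the password with `openssl` is an assumption (any strong random value works):

```shell
# Generate a strong Redis password and print the URL the Go API would use
REDIS_PASSWORD="$(openssl rand -hex 24)"   # 48 hex characters
echo "redis://:${REDIS_PASSWORD}@redis.honeydue.svc.cluster.local:6379/0"
```

The generated value goes into the secret source consumed by `02-setup-secrets.sh`; the URL form is what the API reads from its environment.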
### Network Isolation
- Redis has **no Ingress** — not exposed outside the cluster
- NetworkPolicy restricts access to API and Worker pods only
- Admin pods cannot reach Redis
### Memory Limits
- `--maxmemory 256mb` — hard cap on Redis memory
- `--maxmemory-policy noeviction` — returns errors rather than silently evicting data
- K8s resource limit: 512Mi (headroom for AOF rewrite)
### Dangerous Command Renaming (Optional Enhancement)
For additional protection, rename dangerous commands in a custom `redis.conf`:
```
rename-command FLUSHDB ""
rename-command FLUSHALL ""
rename-command DEBUG ""
rename-command CONFIG "HONEYDUE_CONFIG_a7f3b"
```
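One way to ship that custom `redis.conf` is a ConfigMap mounted read-only into the Redis pod; a sketch with illustrative names (the existing startup args would still be appended after the config file):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-conf          # illustrative name
  namespace: honeydue
data:
  redis.conf: |
    rename-command FLUSHDB ""
    rename-command FLUSHALL ""
    rename-command DEBUG ""
    rename-command CONFIG "HONEYDUE_CONFIG_a7f3b"
```

Mount it (e.g. at `/etc/redis/redis.conf`) and change the container command to `exec redis-server /etc/redis/redis.conf $ARGS`.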
```bash
# Verify Redis auth is required
kubectl exec -n honeydue deploy/redis -- redis-cli ping
# Expected: (error) NOAUTH Authentication required.
kubectl exec -n honeydue deploy/redis -- redis-cli -a "$REDIS_PASSWORD" ping
# Expected: PONG
```
---
## 7. PostgreSQL (Neon)
### Connection Security
- **SSL required**: `sslmode=require` in connection string
- **Connection limits**: `max_open_conns=25`, `max_idle_conns=10`
- **Scoped credentials**: Database user has access only to `honeydue` database
- **Password rotation**: Change in Neon dashboard, update `secrets/postgres_password.txt`, re-run `02-setup-secrets.sh`
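The connection-string requirements can be sketched as below; host and user are placeholders, and the password would come from `honeydue-secrets` in real use:

```shell
# Assemble a Neon DSN with TLS enforced; sslmode=require rejects plaintext connections
DB_HOST="ep-xxx.us-east-2.aws.neon.tech"   # placeholder: your Neon endpoint
DB_USER="honeydue_app"                     # placeholder
DB_PASS="${POSTGRES_PASSWORD:-CHANGEME}"   # from the K8s secret in real use
DATABASE_URL="postgres://${DB_USER}:${DB_PASS}@${DB_HOST}:5432/honeydue?sslmode=require"
echo "$DATABASE_URL"
```

Pool limits (`max_open_conns`, `max_idle_conns`) are applied by the Go API from config, not encoded in the DSN.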
### Access Control
- Only API and Worker pods have egress to port 5432 (NetworkPolicy enforced)
- Admin pods cannot reach the database directly
- Redis pods have no external egress
```bash
# Verify only API/Worker can reach Neon
kubectl exec -n honeydue deploy/admin -- nc -zv ep-xxx.us-east-2.aws.neon.tech 5432
# Expected: timeout (blocked by network policy)
```
### Query Safety
- GORM uses parameterized queries (SQL injection prevention)
- No raw SQL in handlers — all queries go through repositories
- Decimal fields use `shopspring/decimal` (no floating-point errors)
---
## 8. Cloudflare
### TLS Configuration
- **Mode**: Full (Strict) — Cloudflare validates the origin certificate
- **Origin cert**: Stored as K8s Secret `cloudflare-origin-cert`
- **Minimum TLS**: 1.2 (set in Cloudflare dashboard)
- **HSTS**: Enabled via security headers middleware
### Origin Lockdown
The `cloudflare-only` Traefik middleware restricts all ingress to Cloudflare IP ranges only. Direct requests to the origin IP are rejected with 403.
```bash
# Test: direct request to origin should fail
curl -k https://ORIGIN_IP/api/health/
# Expected: 403 Forbidden
# Test: request through Cloudflare should work
curl https://api.myhoneydue.com/api/health/
# Expected: 200 OK
```
### Cloudflare IP Range Updates
Cloudflare IP ranges change infrequently but should be checked periodically:
```bash
# Compare current ranges with deployed middleware
diff <(curl -s https://www.cloudflare.com/ips-v4; curl -s https://www.cloudflare.com/ips-v6) \
<(kubectl get middleware cloudflare-only -n honeydue -o jsonpath='{.spec.ipAllowList.sourceRange[*]}' | tr ' ' '\n')
```
### WAF & Rate Limiting
- **Cloudflare WAF**: Enable managed rulesets in dashboard (OWASP Core, Cloudflare Specials)
- **Rate limiting**: Traefik middleware (100 req/min, burst 200) + Go API auth rate limiting
- **Bot management**: Enable in Cloudflare dashboard for API routes
### Security Headers
Applied via Traefik middleware to all responses:
| Header | Value |
|--------|-------|
| `Strict-Transport-Security` | `max-age=31536000; includeSubDomains` |
| `X-Frame-Options` | `DENY` |
| `X-Content-Type-Options` | `nosniff` |
| `X-XSS-Protection` | `1; mode=block` |
| `Referrer-Policy` | `strict-origin-when-cross-origin` |
| `Content-Security-Policy` | `default-src 'self'; frame-ancestors 'none'` |
| `Permissions-Policy` | `camera=(), microphone=(), geolocation=()` |
| `X-Permitted-Cross-Domain-Policies` | `none` |
---
## 9. Container Images
### Build Security
- **Multi-stage builds**: Build stage discarded, only runtime artifacts copied
- **Alpine base**: Minimal attack surface (~5MB base)
- **Non-root users**: `app:1000` (Go), `nextjs:1001` (admin)
- **Stripped binaries**: Go binaries built with `-ldflags "-s -w"` (no debug symbols)
- **No shell in final image** (Go containers): Only the binary + CA certs
### Image Scanning (Recommended)
Add image scanning to CI/CD before pushing to GHCR:
```bash
# Trivy scan (run in CI)
trivy image --severity HIGH,CRITICAL --exit-code 1 ghcr.io/NAMESPACE/honeydue-api:latest
# Grype alternative
grype ghcr.io/NAMESPACE/honeydue-api:latest --fail-on high
```
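In GitHub Actions this can sit as a step before the push; a sketch using the official Trivy action (the pinned action version and step name are assumptions, check the current release):

```yaml
- name: Scan image for vulnerabilities
  uses: aquasecurity/trivy-action@0.28.0
  with:
    image-ref: ghcr.io/NAMESPACE/honeydue-api:latest
    severity: HIGH,CRITICAL
    exit-code: "1"   # fail the job on findings
```

Running the scan before `docker push` ensures a vulnerable image never reaches GHCR.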
### Version Pinning
- Redis image: `redis:7-alpine` (pin to specific tag in production, e.g., `redis:7.4.2-alpine`)
- Go base: pinned in Dockerfile
- Node base: pinned in admin Dockerfile
---
## 10. Secrets Management
### At-Rest Encryption
K3s encrypts all Secret resources in etcd with AES-CBC (`--secrets-encryption` flag).
### Secret Inventory
| Secret | Contains | Rotation Procedure |
|--------|----------|--------------------|
| `honeydue-secrets` | DB password, SECRET_KEY, SMTP password, FCM key, Redis password | Update source files + re-run `02-setup-secrets.sh` |
| `honeydue-apns-key` | APNs .p8 private key | Replace file + re-run `02-setup-secrets.sh` |
| `cloudflare-origin-cert` | TLS cert + key | Regenerate in Cloudflare + re-run `02-setup-secrets.sh` |
| `ghcr-credentials` | Registry PAT | Regenerate GitHub PAT + re-run `02-setup-secrets.sh` |
| `admin-basic-auth` | htpasswd hash | Update config.yaml + re-run `02-setup-secrets.sh` |
### Rotation Procedure
```bash
# 1. Update the secret source (file or config.yaml value)
# 2. Re-run the secrets script
./scripts/02-setup-secrets.sh
# 3. Restart affected pods to pick up new secret values
kubectl rollout restart deployment/api deployment/worker -n honeydue
# 4. Verify pods are healthy
kubectl get pods -n honeydue -w
```
### Secret Hygiene
- `secrets/` directory is gitignored — never committed
- `config.yaml` is gitignored — never committed
- Scripts validate secret files exist and aren't empty before creating K8s secrets
- `SECRET_KEY` requires minimum 32 characters
- `04-verify.sh` redacts sensitive values when printing the ConfigMap
---
## 11. B2 Object Storage
### Access Control
- **Scoped application key**: Create a B2 key with access to only the `honeydue` bucket
- **Permissions**: Read + Write only (no `deleteFiles`, no `listAllBucketNames`)
- **Bucket-only**: Key cannot access other buckets in the account
```bash
# Create scoped B2 key (Backblaze CLI)
b2 create-key --bucket BUCKET_NAME honeydue-api readFiles,writeFiles,listFiles
```
### Upload Validation (Go API)
- File size limit: `STORAGE_MAX_FILE_SIZE` (10MB default)
- Allowed MIME types: `STORAGE_ALLOWED_TYPES` (images + PDF only)
- Path traversal protection in upload handler
- Files served via authenticated proxy (`media_handler`) — no direct B2 URLs exposed to clients
### Versioning
B2 buckets keep prior file versions by default (the "Keep all versions" lifecycle setting), which protects against accidental deletion and overwrites. Confirm the bucket's lifecycle rules have not been changed to purge old versions:
```bash
# Inspect the bucket's lifecycle rules (Backblaze CLI)
b2 get-bucket BUCKET_NAME
```
---
## 12. Monitoring & Alerting
### Log Aggregation
K3s logs are available via `kubectl logs`. For persistent log aggregation:
```bash
# View API logs
kubectl logs -n honeydue -l app.kubernetes.io/name=api --tail=100 -f
# View worker logs
kubectl logs -n honeydue -l app.kubernetes.io/name=worker --tail=100 -f
# View all warning events
kubectl get events -n honeydue --field-selector type=Warning --sort-by='.lastTimestamp'
```
**Recommended**: Deploy Loki + Grafana for persistent log search and alerting.
### Health Monitoring
```bash
# Continuous health monitoring
watch -n 10 "kubectl get pods -n honeydue -o wide && echo && kubectl top pods -n honeydue 2>/dev/null"
# Check pod restart counts (indicator of crashes)
kubectl get pods -n honeydue -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{range .status.containerStatuses[*]}{.restartCount}{end}{"\n"}{end}'
```
### Alerting Thresholds
| Metric | Warning | Critical | Check Command |
|--------|---------|----------|---------------|
| Pod restarts | > 3 in 1h | > 10 in 1h | `kubectl get pods` |
| API response time | > 500ms p95 | > 2s p95 | Cloudflare Analytics |
| Memory usage | > 80% limit | > 95% limit | `kubectl top pods` |
| Redis memory | > 200MB | > 250MB | `redis-cli info memory` |
| Disk (PVC) | > 80% | > 95% | `kubectl exec ... df -h` |
| Certificate expiry | < 30 days | < 7 days | Cloudflare dashboard |
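The restart-count threshold can be checked mechanically against the `name<TAB>count` lines produced by the jsonpath query in the Health Monitoring snippet above; a self-contained sketch:

```shell
# Flag pods whose restart count exceeds a threshold (input: "name<TAB>count" lines)
check_restarts() {
  threshold="$1"
  while IFS="$(printf '\t')" read -r name count; do
    [ "${count:-0}" -gt "$threshold" ] && echo "WARN: $name restarted $count times"
  done
}
printf 'api-abc\t1\nworker-xyz\t7\n' | check_restarts 3
# -> WARN: worker-xyz restarted 7 times
```

Pipe the live kubectl output into `check_restarts 3` (warning) or `check_restarts 10` (critical) from a cron job or alerting hook.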
### Audit Trail
- **K8s events**: `kubectl get events -n honeydue` (auto-pruned after 1h)
- **Go API**: zerolog structured logging with credential masking
- **Cloudflare**: Access logs, WAF logs, rate limiting logs in dashboard
- **Hetzner**: SSH auth logs in `/var/log/auth.log`
---
## 13. Incident Response
### Playbook: Compromised API Token
```bash
# 1. Rotate SECRET_KEY to invalidate ALL tokens
echo "$(openssl rand -hex 32)" > secrets/secret_key.txt
./scripts/02-setup-secrets.sh
kubectl rollout restart deployment/api deployment/worker -n honeydue
# 2. All users will need to re-authenticate
```
### Playbook: Compromised Database Credentials
```bash
# 1. Rotate password in Neon dashboard
# 2. Update local secret file
echo "NEW_PASSWORD" > secrets/postgres_password.txt
./scripts/02-setup-secrets.sh
kubectl rollout restart deployment/api deployment/worker -n honeydue
# 3. Monitor for connection errors
kubectl logs -n honeydue -l app.kubernetes.io/name=api --tail=50 -f
```
### Playbook: Compromised Push Notification Keys
```bash
# APNs: Revoke key in Apple Developer Console, generate new .p8
cp new_key.p8 secrets/apns_auth_key.p8
./scripts/02-setup-secrets.sh
kubectl rollout restart deployment/api deployment/worker -n honeydue
# FCM: Rotate server key in Firebase Console
echo "NEW_FCM_KEY" > secrets/fcm_server_key.txt
./scripts/02-setup-secrets.sh
kubectl rollout restart deployment/api deployment/worker -n honeydue
```
### Playbook: Suspicious Pod Behavior
```bash
# 1. Isolate the pod: the trailing '-' removes the selector label, detaching it from its Service
kubectl label pod SUSPICIOUS_POD -n honeydue app.kubernetes.io/name-
# 2. Capture state for investigation
kubectl logs SUSPICIOUS_POD -n honeydue > /tmp/suspicious-logs.txt
kubectl describe pod SUSPICIOUS_POD -n honeydue > /tmp/suspicious-describe.txt
# 3. Delete and let deployment recreate
kubectl delete pod SUSPICIOUS_POD -n honeydue
```
### Communication Plan
1. **Internal**: Document incident timeline in a private channel
2. **Users**: If data breach — notify affected users within 72 hours
3. **Vendors**: Revoke/rotate all potentially compromised credentials
4. **Post-mortem**: Document root cause, timeline, remediation, prevention
---
## 14. Compliance Checklist
Run through this checklist before production launch and periodically thereafter.
### Infrastructure
- [ ] Hetzner firewall allows only ports 22, 443, 6443
- [ ] SSH password auth disabled on all nodes
- [ ] fail2ban active on all nodes
- [ ] OS security updates enabled (unattended-upgrades)
```bash
# Verify
hcloud firewall describe honeydue-fw
ssh user@NODE "grep PasswordAuthentication /etc/ssh/sshd_config"
ssh user@NODE "sudo fail2ban-client status sshd"
```
### K3s Cluster
- [ ] Secret encryption enabled
- [ ] Service accounts created with no API access
- [ ] Pod disruption budgets deployed
- [ ] No default service account used by workloads
```bash
# Verify
k3s secrets-encrypt status
kubectl get sa -n honeydue
kubectl get pdb -n honeydue
kubectl get pods -n honeydue -o jsonpath='{range .items[*]}{.metadata.name}{" sa="}{.spec.serviceAccountName}{"\n"}{end}'
```
### Pod Security
- [ ] All pods: `runAsNonRoot: true`
- [ ] All containers: `allowPrivilegeEscalation: false`
- [ ] All containers: `readOnlyRootFilesystem: true`
- [ ] All containers: `capabilities.drop: ["ALL"]`
- [ ] All pods: `seccompProfile.type: RuntimeDefault`
```bash
# Verify (automated check in 04-verify.sh)
./scripts/04-verify.sh
```
### Network
- [ ] Default-deny NetworkPolicy applied
- [ ] 8+ explicit allow policies deployed
- [ ] Redis only reachable from API + Worker
- [ ] Admin only reaches API service
- [ ] Cloudflare-only middleware applied to all ingress
```bash
# Verify
kubectl get networkpolicy -n honeydue
kubectl get ingress -n honeydue -o yaml | grep cloudflare-only
```
### Authentication & Authorization
- [ ] Redis requires password
- [ ] Admin panel has basic auth layer
- [ ] API uses bcrypt for passwords
- [ ] Auth tokens have expiration
- [ ] Rate limiting on auth endpoints
```bash
# Verify Redis auth
kubectl exec -n honeydue deploy/redis -- redis-cli ping
# Expected: NOAUTH error
# Verify admin auth
kubectl get secret admin-basic-auth -n honeydue
```
### Secrets
- [ ] All secrets stored as K8s Secrets (not ConfigMap)
- [ ] Secrets encrypted at rest (K3s)
- [ ] No secrets in git history
- [ ] SECRET_KEY >= 32 characters
- [ ] Secret rotation documented
```bash
# Verify no secrets in ConfigMap
kubectl get configmap honeydue-config -n honeydue -o yaml | grep -iE 'password|secret|token|key'
# Should show only non-sensitive config keys (EMAIL_HOST, APNS_KEY_ID, etc.)
```
### TLS & Headers
- [ ] Cloudflare Full (Strict) mode enabled
- [ ] Origin cert valid and not expired
- [ ] HSTS header present with includeSubDomains
- [ ] CSP header: `default-src 'self'; frame-ancestors 'none'`
- [ ] Permissions-Policy blocks camera/mic/geo
- [ ] X-Frame-Options: DENY
```bash
# Verify headers (via Cloudflare)
curl -sI https://api.myhoneydue.com/api/health/ | grep -iE 'strict-transport|content-security|permissions-policy|x-frame'
```
### Container Images
- [ ] Multi-stage Dockerfile (no build tools in runtime)
- [ ] Non-root user in all images
- [ ] Alpine base (minimal surface)
- [ ] No secrets baked into images
```bash
# Verify non-root
kubectl get pods -n honeydue -o jsonpath='{range .items[*]}{.metadata.name}{" uid="}{.spec.securityContext.runAsUser}{"\n"}{end}'
```
### External Services
- [ ] PostgreSQL: `sslmode=require`
- [ ] B2: Scoped application key (single bucket)
- [ ] APNs: .p8 key (not .p12 certificate)
- [ ] SMTP: TLS enabled (`use_tls: true`)
---
## Quick Reference Commands
```bash
# Full security verification
./scripts/04-verify.sh
# Rotate all secrets
./scripts/02-setup-secrets.sh && \
kubectl rollout restart deployment/api deployment/worker deployment/admin -n honeydue
# Check for security events
kubectl get events -n honeydue --field-selector type=Warning
# Emergency: scale down everything
kubectl scale deployment --all -n honeydue --replicas=0
# Emergency: restore
kubectl scale deployment api -n honeydue --replicas=3
kubectl scale deployment worker -n honeydue --replicas=2
kubectl scale deployment admin -n honeydue --replicas=1
kubectl scale deployment redis -n honeydue --replicas=1
```

View File

@@ -0,0 +1,118 @@
# config.yaml — single source of truth for honeyDue K3s deployment
# Copy to config.yaml, fill in all empty values, then run scripts in order.
# This file is gitignored — never commit it with real values.
# --- Hetzner Cloud ---
cluster:
hcloud_token: "" # Hetzner API token (Read/Write)
ssh_public_key: ~/.ssh/id_ed25519.pub
ssh_private_key: ~/.ssh/id_ed25519
k3s_version: v1.31.4+k3s1
location: fsn1 # Hetzner datacenter
instance_type: cx33 # 4 vCPU, 16GB RAM
# Filled by 01-provision-cluster.sh, or manually after creating servers
nodes:
- name: honeydue-master1
ip: ""
roles: [master, redis] # 'redis' = pin Redis PVC here
- name: honeydue-master2
ip: ""
roles: [master]
- name: honeydue-master3
ip: ""
roles: [master]
# Hetzner Load Balancer IP (created in console after provisioning)
load_balancer_ip: ""
# --- Domains ---
domains:
api: api.myhoneydue.com
admin: admin.myhoneydue.com
base: myhoneydue.com
# --- Container Registry (GHCR) ---
registry:
server: ghcr.io
namespace: "" # GitHub username or org
username: "" # GitHub username
token: "" # PAT with read:packages, write:packages
# --- Database (Neon PostgreSQL) ---
database:
host: "" # e.g. ep-xxx.us-east-2.aws.neon.tech
port: 5432
user: ""
name: honeydue
sslmode: require
max_open_conns: 25
max_idle_conns: 10
max_lifetime: "600s"
# --- Email (Fastmail) ---
email:
host: smtp.fastmail.com
port: 587
user: "" # Fastmail email address
from: "honeyDue <noreply@myhoneydue.com>"
use_tls: true
# --- Push Notifications ---
push:
apns_key_id: ""
apns_team_id: ""
apns_topic: com.tt.honeyDue
apns_production: true
apns_use_sandbox: false
# --- B2 Object Storage ---
storage:
b2_key_id: ""
b2_app_key: ""
b2_bucket: ""
b2_endpoint: "" # e.g. s3.us-west-004.backblazeb2.com
max_file_size: 10485760
allowed_types: "image/jpeg,image/png,image/gif,image/webp,application/pdf"
# --- Worker Schedules (UTC hours) ---
worker:
task_reminder_hour: 14
overdue_reminder_hour: 15
daily_digest_hour: 3
# --- Feature Flags ---
features:
push_enabled: true
email_enabled: true
webhooks_enabled: true
onboarding_emails_enabled: true
pdf_reports_enabled: true
worker_enabled: true
# --- Redis ---
redis:
password: "" # Set a strong password; leave empty for no auth (NOT recommended for production)
# --- Admin Panel ---
admin:
basic_auth_user: "" # HTTP basic auth username for admin panel
basic_auth_password: "" # HTTP basic auth password for admin panel
# --- Apple Auth / IAP (optional, leave empty if unused) ---
apple_auth:
client_id: ""
team_id: ""
iap_key_id: ""
iap_issuer_id: ""
iap_bundle_id: ""
iap_key_path: ""
iap_sandbox: false
# --- Google Auth / IAP (optional, leave empty if unused) ---
google_auth:
client_id: ""
android_client_id: ""
ios_client_id: ""
iap_package_name: ""
iap_service_account_path: ""

View File

@@ -0,0 +1,94 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: admin
namespace: honeydue
labels:
app.kubernetes.io/name: admin
app.kubernetes.io/part-of: honeydue
spec:
replicas: 1
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 0
maxSurge: 1
selector:
matchLabels:
app.kubernetes.io/name: admin
template:
metadata:
labels:
app.kubernetes.io/name: admin
app.kubernetes.io/part-of: honeydue
spec:
serviceAccountName: admin
imagePullSecrets:
- name: ghcr-credentials
securityContext:
runAsNonRoot: true
runAsUser: 1001
runAsGroup: 1001
fsGroup: 1001
seccompProfile:
type: RuntimeDefault
containers:
- name: admin
image: IMAGE_PLACEHOLDER # Replaced by 03-deploy.sh
ports:
- containerPort: 3000
protocol: TCP
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
capabilities:
drop: ["ALL"]
env:
- name: PORT
value: "3000"
- name: HOSTNAME
value: "0.0.0.0"
- name: NEXT_PUBLIC_API_URL
valueFrom:
configMapKeyRef:
name: honeydue-config
key: NEXT_PUBLIC_API_URL
volumeMounts:
- name: nextjs-cache
mountPath: /app/.next/cache
- name: tmp
mountPath: /tmp
resources:
requests:
cpu: 50m
memory: 64Mi
limits:
cpu: 500m
memory: 256Mi
startupProbe:
httpGet:
path: /admin/
port: 3000
failureThreshold: 12
periodSeconds: 5
readinessProbe:
httpGet:
path: /admin/
port: 3000
initialDelaySeconds: 5
periodSeconds: 10
timeoutSeconds: 5
livenessProbe:
httpGet:
path: /admin/
port: 3000
initialDelaySeconds: 30
periodSeconds: 30
timeoutSeconds: 10
volumes:
- name: nextjs-cache
emptyDir:
sizeLimit: 256Mi
- name: tmp
emptyDir:
sizeLimit: 64Mi

View File

@@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
name: admin
namespace: honeydue
labels:
app.kubernetes.io/name: admin
app.kubernetes.io/part-of: honeydue
spec:
type: ClusterIP
selector:
app.kubernetes.io/name: admin
ports:
- port: 3000
targetPort: 3000
protocol: TCP

View File

@@ -0,0 +1,54 @@
# API Ingress — Cloudflare-only + security headers + rate limiting
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: honeydue-api
namespace: honeydue
labels:
app.kubernetes.io/part-of: honeydue
annotations:
traefik.ingress.kubernetes.io/router.middlewares: honeydue-cloudflare-only@kubernetescrd,honeydue-security-headers@kubernetescrd,honeydue-rate-limit@kubernetescrd
spec:
tls:
- hosts:
- api.myhoneydue.com
secretName: cloudflare-origin-cert
rules:
- host: api.myhoneydue.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: api
port:
number: 8000
---
# Admin Ingress — Cloudflare-only + security headers + rate limiting + basic auth
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: honeydue-admin
namespace: honeydue
labels:
app.kubernetes.io/part-of: honeydue
annotations:
traefik.ingress.kubernetes.io/router.middlewares: honeydue-cloudflare-only@kubernetescrd,honeydue-security-headers@kubernetescrd,honeydue-rate-limit@kubernetescrd,honeydue-admin-auth@kubernetescrd
spec:
tls:
- hosts:
- admin.myhoneydue.com
secretName: cloudflare-origin-cert
rules:
- host: admin.myhoneydue.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: admin
port:
number: 3000

View File

@@ -0,0 +1,82 @@
# Traefik CRD middleware for rate limiting
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: rate-limit
namespace: honeydue
spec:
rateLimit:
average: 100
burst: 200
period: 1m
---
# Security headers
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: security-headers
namespace: honeydue
spec:
headers:
frameDeny: true
contentTypeNosniff: true
browserXssFilter: true
referrerPolicy: "strict-origin-when-cross-origin"
customResponseHeaders:
X-Content-Type-Options: "nosniff"
X-Frame-Options: "DENY"
Strict-Transport-Security: "max-age=31536000; includeSubDomains"
Content-Security-Policy: "default-src 'self'; frame-ancestors 'none'"
Permissions-Policy: "camera=(), microphone=(), geolocation=()"
X-Permitted-Cross-Domain-Policies: "none"
---
# Cloudflare IP allowlist (restrict origin to Cloudflare only)
# https://www.cloudflare.com/ips-v4 and /ips-v6
# Update periodically: curl -s https://www.cloudflare.com/ips-v4 && curl -s https://www.cloudflare.com/ips-v6
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: cloudflare-only
namespace: honeydue
spec:
ipAllowList:
sourceRange:
# Cloudflare IPv4 ranges
- 173.245.48.0/20
- 103.21.244.0/22
- 103.22.200.0/22
- 103.31.4.0/22
- 141.101.64.0/18
- 108.162.192.0/18
- 190.93.240.0/20
- 188.114.96.0/20
- 197.234.240.0/22
- 198.41.128.0/17
- 162.158.0.0/15
- 104.16.0.0/13
- 104.24.0.0/14
- 172.64.0.0/13
- 131.0.72.0/22
# Cloudflare IPv6 ranges
- 2400:cb00::/32
- 2606:4700::/32
- 2803:f800::/32
- 2405:b500::/32
- 2405:8100::/32
- 2a06:98c0::/29
- 2c0f:f248::/32
---
# Admin basic auth — additional auth layer for admin panel
# Secret created by 02-setup-secrets.sh from config.yaml credentials
apiVersion: traefik.io/v1alpha1
kind: Middleware
metadata:
name: admin-auth
namespace: honeydue
spec:
basicAuth:
secret: admin-basic-auth
realm: "honeyDue Admin"

View File

@@ -0,0 +1,6 @@
apiVersion: v1
kind: Namespace
metadata:
name: honeydue
labels:
app.kubernetes.io/part-of: honeydue

View File

@@ -0,0 +1,202 @@
# Network Policies — default-deny with explicit allows
# Apply AFTER namespace and deployments are created.
# Verify: kubectl get networkpolicy -n honeydue
# --- Default deny all ingress and egress ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: default-deny-all
namespace: honeydue
spec:
podSelector: {}
policyTypes:
- Ingress
- Egress
---
# --- Allow DNS for all pods (required for service discovery) ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-dns
namespace: honeydue
spec:
podSelector: {}
policyTypes:
- Egress
egress:
- to: []
ports:
- protocol: UDP
port: 53
- protocol: TCP
port: 53
---
# --- API: allow ingress from Traefik (kube-system namespace) ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-ingress-to-api
namespace: honeydue
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: api
policyTypes:
- Ingress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: kube-system
ports:
- protocol: TCP
port: 8000
---
# --- Admin: allow ingress from Traefik (kube-system namespace) ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-ingress-to-admin
namespace: honeydue
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: admin
policyTypes:
- Ingress
ingress:
- from:
- namespaceSelector:
matchLabels:
kubernetes.io/metadata.name: kube-system
ports:
- protocol: TCP
port: 3000
---
# --- Redis: allow ingress ONLY from api + worker pods ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-ingress-to-redis
namespace: honeydue
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: redis
policyTypes:
- Ingress
ingress:
- from:
- podSelector:
matchLabels:
app.kubernetes.io/name: api
- podSelector:
matchLabels:
app.kubernetes.io/name: worker
ports:
- protocol: TCP
port: 6379
---
# --- API: allow egress to Redis, external services (Neon DB, APNs, FCM, B2, SMTP) ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-egress-from-api
namespace: honeydue
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: api
policyTypes:
- Egress
egress:
# Redis (in-cluster)
- to:
- podSelector:
matchLabels:
app.kubernetes.io/name: redis
ports:
- protocol: TCP
port: 6379
# External services: Neon DB (5432), SMTP (587), HTTPS (443 — APNs, FCM, B2, PostHog)
- to:
- ipBlock:
cidr: 0.0.0.0/0
except:
- 10.0.0.0/8
- 172.16.0.0/12
- 192.168.0.0/16
ports:
- protocol: TCP
port: 5432
- protocol: TCP
port: 587
- protocol: TCP
port: 443
---
# --- Worker: allow egress to Redis, external services ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-egress-from-worker
namespace: honeydue
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: worker
policyTypes:
- Egress
egress:
# Redis (in-cluster)
- to:
- podSelector:
matchLabels:
app.kubernetes.io/name: redis
ports:
- protocol: TCP
port: 6379
# External services: Neon DB (5432), SMTP (587), HTTPS (443 — APNs, FCM, B2)
- to:
- ipBlock:
cidr: 0.0.0.0/0
except:
- 10.0.0.0/8
- 172.16.0.0/12
- 192.168.0.0/16
ports:
- protocol: TCP
port: 5432
- protocol: TCP
port: 587
- protocol: TCP
port: 443
---
# --- Admin: allow egress to API (internal) for SSR ---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
name: allow-egress-from-admin
namespace: honeydue
spec:
podSelector:
matchLabels:
app.kubernetes.io/name: admin
policyTypes:
- Egress
egress:
# API service (in-cluster, for server-side API calls)
- to:
- podSelector:
matchLabels:
app.kubernetes.io/name: api
ports:
- protocol: TCP
port: 8000

View File

@@ -0,0 +1,32 @@
# Pod Disruption Budgets — prevent node maintenance from killing all replicas
# API: at least 2 of 3 replicas must stay up during voluntary disruptions
# Worker: at least 1 of 2 replicas must stay up
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: api-pdb
namespace: honeydue
labels:
app.kubernetes.io/name: api
app.kubernetes.io/part-of: honeydue
spec:
minAvailable: 2
selector:
matchLabels:
app.kubernetes.io/name: api
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
name: worker-pdb
namespace: honeydue
labels:
app.kubernetes.io/name: worker
app.kubernetes.io/part-of: honeydue
spec:
minAvailable: 1
selector:
matchLabels:
app.kubernetes.io/name: worker

View File

@@ -0,0 +1,46 @@
# RBAC — Dedicated service accounts with no K8s API access
# Each pod gets its own SA with automountServiceAccountToken: false,
# so a compromised pod cannot query the Kubernetes API.
apiVersion: v1
kind: ServiceAccount
metadata:
name: api
namespace: honeydue
labels:
app.kubernetes.io/name: api
app.kubernetes.io/part-of: honeydue
automountServiceAccountToken: false
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: worker
namespace: honeydue
labels:
app.kubernetes.io/name: worker
app.kubernetes.io/part-of: honeydue
automountServiceAccountToken: false
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin
namespace: honeydue
labels:
app.kubernetes.io/name: admin
app.kubernetes.io/part-of: honeydue
automountServiceAccountToken: false
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: redis
namespace: honeydue
labels:
app.kubernetes.io/name: redis
app.kubernetes.io/part-of: honeydue
automountServiceAccountToken: false

View File

@@ -0,0 +1,106 @@
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis
namespace: honeydue
labels:
app.kubernetes.io/name: redis
app.kubernetes.io/part-of: honeydue
spec:
replicas: 1
strategy:
type: Recreate # ReadWriteOnce PVC — can't attach to two pods
selector:
matchLabels:
app.kubernetes.io/name: redis
template:
metadata:
labels:
app.kubernetes.io/name: redis
app.kubernetes.io/part-of: honeydue
spec:
serviceAccountName: redis
nodeSelector:
honeydue/redis: "true"
securityContext:
runAsNonRoot: true
runAsUser: 999
runAsGroup: 999
fsGroup: 999
seccompProfile:
type: RuntimeDefault
containers:
- name: redis
image: redis:7-alpine
command:
- sh
- -c
- |
ARGS="--appendonly yes --appendfsync everysec --maxmemory 256mb --maxmemory-policy noeviction"
if [ -n "$REDIS_PASSWORD" ]; then
ARGS="$ARGS --requirepass $REDIS_PASSWORD"
fi
exec redis-server $ARGS
ports:
- containerPort: 6379
protocol: TCP
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
capabilities:
drop: ["ALL"]
env:
- name: REDIS_PASSWORD
valueFrom:
secretKeyRef:
name: honeydue-secrets
key: REDIS_PASSWORD
optional: true
volumeMounts:
- name: redis-data
mountPath: /data
- name: tmp
mountPath: /tmp
resources:
requests:
cpu: 100m
memory: 128Mi
limits:
cpu: 500m
memory: 512Mi
readinessProbe:
exec:
command:
- sh
- -c
- |
if [ -n "$REDIS_PASSWORD" ]; then
redis-cli -a "$REDIS_PASSWORD" ping 2>/dev/null | grep -q PONG
else
redis-cli ping | grep -q PONG
fi
initialDelaySeconds: 5
periodSeconds: 10
timeoutSeconds: 5
livenessProbe:
exec:
command:
- sh
- -c
- |
if [ -n "$REDIS_PASSWORD" ]; then
redis-cli -a "$REDIS_PASSWORD" ping 2>/dev/null | grep -q PONG
else
redis-cli ping | grep -q PONG
fi
initialDelaySeconds: 15
periodSeconds: 20
timeoutSeconds: 5
volumes:
- name: redis-data
persistentVolumeClaim:
claimName: redis-data
- name: tmp
emptyDir:
medium: Memory
sizeLimit: 64Mi

View File

@@ -0,0 +1,15 @@
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data
  namespace: honeydue
  labels:
    app.kubernetes.io/name: redis
    app.kubernetes.io/part-of: honeydue
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 5Gi


@@ -0,0 +1,16 @@
apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: honeydue
  labels:
    app.kubernetes.io/name: redis
    app.kubernetes.io/part-of: honeydue
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: redis
  ports:
    - port: 6379
      targetPort: 6379
      protocol: TCP


@@ -0,0 +1,47 @@
# EXAMPLE ONLY — never commit real values.
# Secrets are created by scripts/02-setup-secrets.sh.
# This file shows the expected structure for reference.
---
apiVersion: v1
kind: Secret
metadata:
  name: honeydue-secrets
  namespace: honeydue
type: Opaque
stringData:
  POSTGRES_PASSWORD: "CHANGEME"
  SECRET_KEY: "CHANGEME_MIN_32_CHARS"
  EMAIL_HOST_PASSWORD: "CHANGEME"
  FCM_SERVER_KEY: "CHANGEME"
---
apiVersion: v1
kind: Secret
metadata:
  name: honeydue-apns-key
  namespace: honeydue
type: Opaque
data:
  apns_auth_key.p8: "" # base64-encoded .p8 file contents
---
apiVersion: v1
kind: Secret
metadata:
  name: ghcr-credentials
  namespace: honeydue
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: "" # base64-encoded Docker config
---
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-origin-cert
  namespace: honeydue
type: kubernetes.io/tls
data:
  tls.crt: "" # base64-encoded origin certificate
  tls.key: "" # base64-encoded origin private key


@@ -0,0 +1,124 @@
#!/usr/bin/env bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=_config.sh
source "${SCRIPT_DIR}/_config.sh"

log() { printf '[provision] %s\n' "$*"; }
die() { printf '[provision][error] %s\n' "$*" >&2; exit 1; }

# --- Prerequisites ---
command -v hetzner-k3s >/dev/null 2>&1 || die "Missing: hetzner-k3s CLI. Install: https://github.com/vitobotta/hetzner-k3s"
command -v kubectl >/dev/null 2>&1 || die "Missing: kubectl"

HCLOUD_TOKEN="$(cfg_require cluster.hcloud_token "Hetzner API token")"
export HCLOUD_TOKEN

# Validate SSH keys
SSH_PUB="$(cfg cluster.ssh_public_key | sed "s|~|${HOME}|g")"
SSH_PRIV="$(cfg cluster.ssh_private_key | sed "s|~|${HOME}|g")"
[[ -f "${SSH_PUB}" ]] || die "SSH public key not found: ${SSH_PUB}"
[[ -f "${SSH_PRIV}" ]] || die "SSH private key not found: ${SSH_PRIV}"

# --- Generate hetzner-k3s cluster config from config.yaml ---
CLUSTER_CONFIG="${DEPLOY_DIR}/cluster-config.yaml"
log "Generating cluster-config.yaml from config.yaml..."
generate_cluster_config > "${CLUSTER_CONFIG}"

# --- Provision ---
INSTANCE_TYPE="$(cfg cluster.instance_type)"
LOCATION="$(cfg cluster.location)"
NODE_COUNT="$(node_count)"
log "Provisioning K3s cluster on Hetzner Cloud..."
log "  Nodes: ${NODE_COUNT}x ${INSTANCE_TYPE} in ${LOCATION}"
log "  This takes about 5-10 minutes."
echo ""
hetzner-k3s create --config "${CLUSTER_CONFIG}"

KUBECONFIG_PATH="${DEPLOY_DIR}/kubeconfig"
if [[ ! -f "${KUBECONFIG_PATH}" ]]; then
  die "Provisioning completed but kubeconfig not found. Check hetzner-k3s output."
fi

# --- Write node IPs back to config.yaml ---
log "Querying node IPs..."
export KUBECONFIG="${KUBECONFIG_PATH}"
python3 -c "
import yaml, subprocess, json

# Get node info from kubectl
result = subprocess.run(
    ['kubectl', 'get', 'nodes', '-o', 'json'],
    capture_output=True, text=True
)
nodes_json = json.loads(result.stdout)

# Build name → IP map
ip_map = {}
for node in nodes_json.get('items', []):
    name = node['metadata']['name']
    for addr in node.get('status', {}).get('addresses', []):
        if addr['type'] == 'ExternalIP':
            ip_map[name] = addr['address']
            break
    else:
        for addr in node.get('status', {}).get('addresses', []):
            if addr['type'] == 'InternalIP':
                ip_map[name] = addr['address']
                break

# Update config.yaml with IPs
with open('${CONFIG_FILE}') as f:
    config = yaml.safe_load(f)
updated = 0
for i, node in enumerate(config.get('nodes', [])):
    for real_name, ip in ip_map.items():
        if node['name'] in real_name or real_name in node['name']:
            config['nodes'][i]['ip'] = ip
            config['nodes'][i]['name'] = real_name
            updated += 1
            break
if updated == 0 and ip_map:
    # Names didn't match — assign by index
    for i, (name, ip) in enumerate(sorted(ip_map.items())):
        if i < len(config['nodes']):
            config['nodes'][i]['name'] = name
            config['nodes'][i]['ip'] = ip
            updated += 1
with open('${CONFIG_FILE}', 'w') as f:
    yaml.dump(config, f, default_flow_style=False, sort_keys=False)
print(f'Updated {updated} node IPs in config.yaml')
for name, ip in sorted(ip_map.items()):
    print(f'  {name}: {ip}')
"

# --- Label Redis node ---
REDIS_NODE="$(nodes_with_role redis | head -1)"
if [[ -n "${REDIS_NODE}" ]]; then
  # Find the actual K8s node name that matches; fall back to the first node
  ACTUAL_NODE="$(kubectl get nodes -o jsonpath='{.items[*].metadata.name}' | tr ' ' '\n' | grep -F "${REDIS_NODE}" | head -1 || true)"
  [[ -n "${ACTUAL_NODE}" ]] || ACTUAL_NODE="$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')"
  log "Labeling node ${ACTUAL_NODE} for Redis..."
  kubectl label node "${ACTUAL_NODE}" honeydue/redis=true --overwrite
fi

log ""
log "Cluster provisioned successfully."
log ""
log "Next steps:"
log "  export KUBECONFIG=${KUBECONFIG_PATH}"
log "  kubectl get nodes"
log "  ./scripts/02-setup-secrets.sh"


@@ -0,0 +1,131 @@
#!/usr/bin/env bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=_config.sh
source "${SCRIPT_DIR}/_config.sh"

SECRETS_DIR="${DEPLOY_DIR}/secrets"
NAMESPACE="honeydue"

log()  { printf '[secrets] %s\n' "$*"; }
warn() { printf '[secrets][warn] %s\n' "$*" >&2; }
die()  { printf '[secrets][error] %s\n' "$*" >&2; exit 1; }

# --- Prerequisites ---
command -v kubectl >/dev/null 2>&1 || die "Missing: kubectl"
kubectl get namespace "${NAMESPACE}" >/dev/null 2>&1 || {
  log "Creating namespace ${NAMESPACE}..."
  kubectl apply -f "${DEPLOY_DIR}/manifests/namespace.yaml"
}

# --- Validate secret files ---
require_file() {
  local path="$1" label="$2"
  [[ -f "${path}" ]] || die "Missing: ${path} (${label})"
  [[ -s "${path}" ]] || die "Empty: ${path} (${label})"
}
require_file "${SECRETS_DIR}/postgres_password.txt" "Postgres password"
require_file "${SECRETS_DIR}/secret_key.txt" "SECRET_KEY"
require_file "${SECRETS_DIR}/email_host_password.txt" "SMTP password"
require_file "${SECRETS_DIR}/fcm_server_key.txt" "FCM server key"
require_file "${SECRETS_DIR}/apns_auth_key.p8" "APNS private key"
require_file "${SECRETS_DIR}/cloudflare-origin.crt" "Cloudflare origin cert"
require_file "${SECRETS_DIR}/cloudflare-origin.key" "Cloudflare origin key"

# Validate APNS key format
if ! grep -q "BEGIN PRIVATE KEY" "${SECRETS_DIR}/apns_auth_key.p8"; then
  die "APNS key file does not look like a private key: ${SECRETS_DIR}/apns_auth_key.p8"
fi

# Validate secret_key length (minimum 32 chars)
SECRET_KEY_LEN="$(tr -d '\r\n' < "${SECRETS_DIR}/secret_key.txt" | wc -c | tr -d ' ')"
if (( SECRET_KEY_LEN < 32 )); then
  die "secret_key.txt must be at least 32 characters (got ${SECRET_KEY_LEN})."
fi

# --- Read optional config values ---
REDIS_PASSWORD="$(cfg redis.password 2>/dev/null || true)"
ADMIN_AUTH_USER="$(cfg admin.basic_auth_user 2>/dev/null || true)"
ADMIN_AUTH_PASSWORD="$(cfg admin.basic_auth_password 2>/dev/null || true)"

# --- Create app secrets ---
log "Creating honeydue-secrets..."
SECRET_ARGS=(
  --namespace="${NAMESPACE}"
  --from-literal="POSTGRES_PASSWORD=$(tr -d '\r\n' < "${SECRETS_DIR}/postgres_password.txt")"
  --from-literal="SECRET_KEY=$(tr -d '\r\n' < "${SECRETS_DIR}/secret_key.txt")"
  --from-literal="EMAIL_HOST_PASSWORD=$(tr -d '\r\n' < "${SECRETS_DIR}/email_host_password.txt")"
  --from-literal="FCM_SERVER_KEY=$(tr -d '\r\n' < "${SECRETS_DIR}/fcm_server_key.txt")"
)
if [[ -n "${REDIS_PASSWORD}" ]]; then
  log "  Including REDIS_PASSWORD in secrets"
  SECRET_ARGS+=(--from-literal="REDIS_PASSWORD=${REDIS_PASSWORD}")
fi
kubectl create secret generic honeydue-secrets \
  "${SECRET_ARGS[@]}" \
  --dry-run=client -o yaml | kubectl apply -f -

# --- Create APNS key secret ---
log "Creating honeydue-apns-key..."
kubectl create secret generic honeydue-apns-key \
  --namespace="${NAMESPACE}" \
  --from-file="apns_auth_key.p8=${SECRETS_DIR}/apns_auth_key.p8" \
  --dry-run=client -o yaml | kubectl apply -f -

# --- Create GHCR registry credentials ---
REGISTRY_SERVER="$(cfg registry.server)"
REGISTRY_USER="$(cfg registry.username)"
REGISTRY_TOKEN="$(cfg registry.token)"
if [[ -n "${REGISTRY_SERVER}" && -n "${REGISTRY_USER}" && -n "${REGISTRY_TOKEN}" ]]; then
  log "Creating ghcr-credentials..."
  kubectl create secret docker-registry ghcr-credentials \
    --namespace="${NAMESPACE}" \
    --docker-server="${REGISTRY_SERVER}" \
    --docker-username="${REGISTRY_USER}" \
    --docker-password="${REGISTRY_TOKEN}" \
    --dry-run=client -o yaml | kubectl apply -f -
else
  warn "Registry credentials incomplete in config.yaml — skipping ghcr-credentials."
fi

# --- Create Cloudflare origin cert ---
log "Creating cloudflare-origin-cert..."
kubectl create secret tls cloudflare-origin-cert \
  --namespace="${NAMESPACE}" \
  --cert="${SECRETS_DIR}/cloudflare-origin.crt" \
  --key="${SECRETS_DIR}/cloudflare-origin.key" \
  --dry-run=client -o yaml | kubectl apply -f -

# --- Create admin basic auth secret ---
if [[ -n "${ADMIN_AUTH_USER}" && -n "${ADMIN_AUTH_PASSWORD}" ]]; then
  command -v htpasswd >/dev/null 2>&1 || die "Missing: htpasswd (install apache2-utils)"
  log "Creating admin-basic-auth secret..."
  HTPASSWD="$(htpasswd -nb "${ADMIN_AUTH_USER}" "${ADMIN_AUTH_PASSWORD}")"
  kubectl create secret generic admin-basic-auth \
    --namespace="${NAMESPACE}" \
    --from-literal=users="${HTPASSWD}" \
    --dry-run=client -o yaml | kubectl apply -f -
else
  warn "admin.basic_auth_user/password not set in config.yaml — skipping admin-basic-auth."
  warn "Admin panel will NOT have basic auth protection."
fi

# --- Done ---
log ""
log "All secrets created in namespace '${NAMESPACE}'."
log "Verify: kubectl get secrets -n ${NAMESPACE}"

deploy-k3s/scripts/03-deploy.sh Executable file

@@ -0,0 +1,143 @@
#!/usr/bin/env bash
set -euo pipefail

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
# shellcheck source=_config.sh
source "${SCRIPT_DIR}/_config.sh"

REPO_DIR="$(cd "${DEPLOY_DIR}/.." && pwd)"
NAMESPACE="honeydue"
MANIFESTS="${DEPLOY_DIR}/manifests"

log()  { printf '[deploy] %s\n' "$*"; }
warn() { printf '[deploy][warn] %s\n' "$*" >&2; }
die()  { printf '[deploy][error] %s\n' "$*" >&2; exit 1; }

# --- Parse arguments ---
SKIP_BUILD=false
DEPLOY_TAG=""
while (( $# > 0 )); do
  case "$1" in
    --skip-build) SKIP_BUILD=true; shift ;;
    --tag)
      [[ -n "${2:-}" ]] || die "--tag requires a value"
      DEPLOY_TAG="$2"; shift 2 ;;
    -h|--help)
      cat <<'EOF'
Usage: ./scripts/03-deploy.sh [OPTIONS]

Options:
  --skip-build  Skip Docker build/push, use existing images
  --tag <tag>   Image tag (default: git short SHA)
  -h, --help    Show this help
EOF
      exit 0 ;;
    *) die "Unknown argument: $1" ;;
  esac
done

# --- Prerequisites ---
command -v kubectl >/dev/null 2>&1 || die "Missing: kubectl"
command -v docker >/dev/null 2>&1 || die "Missing: docker"
if [[ -z "${DEPLOY_TAG}" ]]; then
  DEPLOY_TAG="$(git -C "${REPO_DIR}" rev-parse --short HEAD 2>/dev/null || echo "latest")"
fi

# --- Read registry config ---
REGISTRY_SERVER="$(cfg_require registry.server "Container registry server")"
REGISTRY_NS="$(cfg_require registry.namespace "Registry namespace")"
REGISTRY_USER="$(cfg_require registry.username "Registry username")"
REGISTRY_TOKEN="$(cfg_require registry.token "Registry token")"
REGISTRY_PREFIX="${REGISTRY_SERVER%/}/${REGISTRY_NS#/}"
API_IMAGE="${REGISTRY_PREFIX}/honeydue-api:${DEPLOY_TAG}"
WORKER_IMAGE="${REGISTRY_PREFIX}/honeydue-worker:${DEPLOY_TAG}"
ADMIN_IMAGE="${REGISTRY_PREFIX}/honeydue-admin:${DEPLOY_TAG}"

# --- Build and push ---
if [[ "${SKIP_BUILD}" == "false" ]]; then
  log "Logging in to ${REGISTRY_SERVER}..."
  printf '%s' "${REGISTRY_TOKEN}" | docker login "${REGISTRY_SERVER}" -u "${REGISTRY_USER}" --password-stdin >/dev/null
  log "Building API image: ${API_IMAGE}"
  docker build --target api -t "${API_IMAGE}" "${REPO_DIR}"
  log "Building Worker image: ${WORKER_IMAGE}"
  docker build --target worker -t "${WORKER_IMAGE}" "${REPO_DIR}"
  log "Building Admin image: ${ADMIN_IMAGE}"
  docker build --target admin -t "${ADMIN_IMAGE}" "${REPO_DIR}"
  log "Pushing images..."
  docker push "${API_IMAGE}"
  docker push "${WORKER_IMAGE}"
  docker push "${ADMIN_IMAGE}"
  # Also tag and push :latest
  docker tag "${API_IMAGE}" "${REGISTRY_PREFIX}/honeydue-api:latest"
  docker tag "${WORKER_IMAGE}" "${REGISTRY_PREFIX}/honeydue-worker:latest"
  docker tag "${ADMIN_IMAGE}" "${REGISTRY_PREFIX}/honeydue-admin:latest"
  docker push "${REGISTRY_PREFIX}/honeydue-api:latest"
  docker push "${REGISTRY_PREFIX}/honeydue-worker:latest"
  docker push "${REGISTRY_PREFIX}/honeydue-admin:latest"
else
  warn "Skipping build. Using images for tag: ${DEPLOY_TAG}"
fi

# --- Generate and apply ConfigMap from config.yaml ---
log "Generating env from config.yaml..."
ENV_FILE="$(mktemp)"
trap 'rm -f "${ENV_FILE}"' EXIT
generate_env > "${ENV_FILE}"
log "Creating ConfigMap..."
kubectl create configmap honeydue-config \
  --namespace="${NAMESPACE}" \
  --from-env-file="${ENV_FILE}" \
  --dry-run=client -o yaml | kubectl apply -f -

# --- Apply manifests ---
log "Applying manifests..."
kubectl apply -f "${MANIFESTS}/namespace.yaml"
kubectl apply -f "${MANIFESTS}/redis/"
kubectl apply -f "${MANIFESTS}/ingress/"
# Apply deployments with image substitution
sed "s|image: IMAGE_PLACEHOLDER|image: ${API_IMAGE}|" "${MANIFESTS}/api/deployment.yaml" | kubectl apply -f -
kubectl apply -f "${MANIFESTS}/api/service.yaml"
kubectl apply -f "${MANIFESTS}/api/hpa.yaml"
sed "s|image: IMAGE_PLACEHOLDER|image: ${WORKER_IMAGE}|" "${MANIFESTS}/worker/deployment.yaml" | kubectl apply -f -
sed "s|image: IMAGE_PLACEHOLDER|image: ${ADMIN_IMAGE}|" "${MANIFESTS}/admin/deployment.yaml" | kubectl apply -f -
kubectl apply -f "${MANIFESTS}/admin/service.yaml"

# --- Wait for rollouts ---
log "Waiting for rollouts..."
kubectl rollout status deployment/redis -n "${NAMESPACE}" --timeout=120s
kubectl rollout status deployment/api -n "${NAMESPACE}" --timeout=300s
kubectl rollout status deployment/worker -n "${NAMESPACE}" --timeout=300s
kubectl rollout status deployment/admin -n "${NAMESPACE}" --timeout=300s

# --- Done ---
log ""
log "Deploy completed successfully."
log "Tag: ${DEPLOY_TAG}"
log "Images:"
log "  API:    ${API_IMAGE}"
log "  Worker: ${WORKER_IMAGE}"
log "  Admin:  ${ADMIN_IMAGE}"
log ""
log "Run ./scripts/04-verify.sh to check cluster health."

deploy-k3s/scripts/04-verify.sh Executable file

@@ -0,0 +1,180 @@
#!/usr/bin/env bash
set -euo pipefail

NAMESPACE="honeydue"

log()  { printf '[verify] %s\n' "$*"; }
sep()  { printf '\n%s\n' "--- $1 ---"; }
ok()   { printf '[verify] ✓ %s\n' "$*"; }
fail() { printf '[verify] ✗ %s\n' "$*"; }

command -v kubectl >/dev/null 2>&1 || { echo "Missing: kubectl" >&2; exit 1; }

sep "Nodes"
kubectl get nodes -o wide
sep "Pods"
kubectl get pods -n "${NAMESPACE}" -o wide
sep "Services"
kubectl get svc -n "${NAMESPACE}"
sep "Ingress"
kubectl get ingress -n "${NAMESPACE}"
sep "HPA"
kubectl get hpa -n "${NAMESPACE}"
sep "PVCs"
kubectl get pvc -n "${NAMESPACE}"
sep "Secrets (names only)"
kubectl get secrets -n "${NAMESPACE}"

sep "ConfigMap keys"
kubectl get configmap honeydue-config -n "${NAMESPACE}" -o jsonpath='{.data}' 2>/dev/null | python3 -c "
import json, sys

try:
    d = json.load(sys.stdin)
    for k in sorted(d.keys()):
        v = d[k]
        if any(s in k.upper() for s in ['PASSWORD', 'SECRET', 'TOKEN', 'KEY']):
            v = '***REDACTED***'
        print(f'  {k}={v}')
except:
    print('  (could not parse)')
" 2>/dev/null || log "ConfigMap not found or not parseable"

sep "Warning Events (last 15 min)"
kubectl get events -n "${NAMESPACE}" --field-selector type=Warning --sort-by='.lastTimestamp' 2>/dev/null | tail -20 || log "No warning events"

sep "Pod Restart Counts"
kubectl get pods -n "${NAMESPACE}" -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{range .status.containerStatuses[*]}{.restartCount}{end}{"\n"}{end}' 2>/dev/null || true

sep "In-Cluster Health Check"
API_POD="$(kubectl get pods -n "${NAMESPACE}" -l app.kubernetes.io/name=api -o jsonpath='{.items[0].metadata.name}' 2>/dev/null || true)"
if [[ -n "${API_POD}" ]]; then
  log "Running health check from pod ${API_POD}..."
  kubectl exec -n "${NAMESPACE}" "${API_POD}" -- curl -sf http://localhost:8000/api/health/ 2>/dev/null && log "Health check: OK" || log "Health check: FAILED"
else
  log "No API pod found — skipping in-cluster health check"
fi

sep "Resource Usage"
kubectl top pods -n "${NAMESPACE}" 2>/dev/null || log "Metrics server not available (install with: kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml)"

# =============================================================================
# Security Verification
# =============================================================================
sep "Security: Secret Encryption"
# Check that secrets-encryption is configured on the K3s server
if kubectl get nodes -o jsonpath='{.items[0].metadata.name}' >/dev/null 2>&1; then
  # Verify secrets are stored encrypted by checking the encryption config exists
  if kubectl -n kube-system get cm k3s-config -o yaml 2>/dev/null | grep -q "secrets-encryption"; then
    ok "secrets-encryption found in K3s config"
  else
    # Alternative: check if etcd stores encrypted data
    ENCRYPTED_CHECK="$(kubectl get secret honeydue-secrets -n "${NAMESPACE}" -o jsonpath='{.metadata.name}' 2>/dev/null || true)"
    if [[ -n "${ENCRYPTED_CHECK}" ]]; then
      ok "honeydue-secrets exists (verify encryption with: k3s secrets-encrypt status)"
    else
      fail "Cannot verify secret encryption — run 'k3s secrets-encrypt status' on the server"
    fi
  fi
else
  fail "Cannot reach cluster to verify secret encryption"
fi

sep "Security: Network Policies"
NP_COUNT="$(kubectl get networkpolicy -n "${NAMESPACE}" --no-headers 2>/dev/null | wc -l | tr -d ' ')"
if (( NP_COUNT >= 5 )); then
  ok "Found ${NP_COUNT} network policies"
  kubectl get networkpolicy -n "${NAMESPACE}" --no-headers 2>/dev/null | while read -r line; do
    echo "  ${line}"
  done
else
  fail "Expected 5+ network policies, found ${NP_COUNT}"
fi

sep "Security: Service Accounts"
SA_COUNT="$(kubectl get sa -n "${NAMESPACE}" --no-headers 2>/dev/null | grep -cv default | tr -d ' ')"
if (( SA_COUNT >= 4 )); then
  ok "Found ${SA_COUNT} custom service accounts (api, worker, admin, redis)"
else
  fail "Expected 4 custom service accounts, found ${SA_COUNT}"
fi
kubectl get sa -n "${NAMESPACE}" --no-headers 2>/dev/null | while read -r line; do
  echo "  ${line}"
done

sep "Security: Pod Security Contexts"
PODS_WITHOUT_SECURITY="$(kubectl get pods -n "${NAMESPACE}" -o json 2>/dev/null | python3 -c "
import json, sys

try:
    data = json.load(sys.stdin)
    issues = []
    for pod in data.get('items', []):
        name = pod['metadata']['name']
        spec = pod['spec']
        sc = spec.get('securityContext', {})
        if not sc.get('runAsNonRoot'):
            issues.append(f'{name}: missing runAsNonRoot')
        for c in spec.get('containers', []):
            csc = c.get('securityContext', {})
            if csc.get('allowPrivilegeEscalation', True):
                issues.append(f'{name}/{c[\"name\"]}: allowPrivilegeEscalation not false')
            if not csc.get('readOnlyRootFilesystem'):
                issues.append(f'{name}/{c[\"name\"]}: readOnlyRootFilesystem not true')
    if issues:
        for i in issues:
            print(i)
    else:
        print('OK')
except Exception as e:
    print(f'Error: {e}')
" 2>/dev/null || echo "Error parsing pod specs")"
if [[ "${PODS_WITHOUT_SECURITY}" == "OK" ]]; then
  ok "All pods have proper security contexts"
else
  fail "Pod security context issues:"
  echo "${PODS_WITHOUT_SECURITY}" | while read -r line; do
    echo "  ${line}"
  done
fi

sep "Security: Pod Disruption Budgets"
PDB_COUNT="$(kubectl get pdb -n "${NAMESPACE}" --no-headers 2>/dev/null | wc -l | tr -d ' ')"
if (( PDB_COUNT >= 2 )); then
  ok "Found ${PDB_COUNT} pod disruption budgets"
else
  fail "Expected 2+ PDBs, found ${PDB_COUNT}"
fi
kubectl get pdb -n "${NAMESPACE}" 2>/dev/null || true

sep "Security: Cloudflare-Only Middleware"
CF_MIDDLEWARE="$(kubectl get middleware cloudflare-only -n "${NAMESPACE}" -o name 2>/dev/null || true)"
if [[ -n "${CF_MIDDLEWARE}" ]]; then
  ok "cloudflare-only middleware exists"
  # Check ingress annotations reference it
  INGRESS_ANNOTATIONS="$(kubectl get ingress -n "${NAMESPACE}" -o jsonpath='{.items[*].metadata.annotations.traefik\.ingress\.kubernetes\.io/router\.middlewares}' 2>/dev/null || true)"
  if echo "${INGRESS_ANNOTATIONS}" | grep -q "cloudflare-only"; then
    ok "Ingress references cloudflare-only middleware"
  else
    fail "Ingress does NOT reference cloudflare-only middleware"
  fi
else
  fail "cloudflare-only middleware not found"
fi

sep "Security: Admin Basic Auth"
ADMIN_AUTH="$(kubectl get secret admin-basic-auth -n "${NAMESPACE}" -o name 2>/dev/null || true)"
if [[ -n "${ADMIN_AUTH}" ]]; then
  ok "admin-basic-auth secret exists"
else
  fail "admin-basic-auth secret not found — admin panel has no additional auth layer"
fi

echo ""
log "Verification complete."

deploy-k3s/scripts/_config.sh Executable file

@@ -0,0 +1,214 @@
#!/usr/bin/env bash
# Shared config helper — sourced by all deploy scripts.
# Provides cfg() to read values from config.yaml.
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
DEPLOY_DIR="$(cd "${SCRIPT_DIR}/.." && pwd)"
CONFIG_FILE="${DEPLOY_DIR}/config.yaml"

if [[ ! -f "${CONFIG_FILE}" ]]; then
  if [[ -f "${CONFIG_FILE}.example" ]]; then
    echo "[error] config.yaml not found. Run: cp config.yaml.example config.yaml" >&2
  else
    echo "[error] config.yaml not found." >&2
  fi
  exit 1
fi

# cfg "dotted.key.path" — reads a value from config.yaml
# Examples: cfg database.host, cfg nodes.0.ip, cfg features.push_enabled
cfg() {
  python3 -c "
import yaml, json, sys

with open(sys.argv[1]) as f:
    c = yaml.safe_load(f)
keys = sys.argv[2].split('.')
v = c
for k in keys:
    if isinstance(v, list):
        v = v[int(k)]
    else:
        v = v[k]
if isinstance(v, bool):
    print(str(v).lower())
elif isinstance(v, (dict, list)):
    print(json.dumps(v))
else:
    print('' if v is None else v)
" "${CONFIG_FILE}" "$1" 2>/dev/null
}

# cfg_require "key" "label" — reads value and dies if empty
cfg_require() {
  local val
  val="$(cfg "$1")"
  if [[ -z "${val}" ]]; then
    echo "[error] Missing required config: $1 ($2)" >&2
    exit 1
  fi
  printf '%s' "${val}"
}

# node_count — returns number of nodes
node_count() {
  python3 -c "
import yaml

with open('${CONFIG_FILE}') as f:
    c = yaml.safe_load(f)
print(len(c.get('nodes', [])))
"
}

# nodes_with_role "role" — returns node names with a given role
nodes_with_role() {
  python3 -c "
import yaml

with open('${CONFIG_FILE}') as f:
    c = yaml.safe_load(f)
for n in c.get('nodes', []):
    if '$1' in n.get('roles', []):
        print(n['name'])
"
}

# generate_env — writes the flat env file the app expects to stdout
generate_env() {
  python3 -c "
import yaml

with open('${CONFIG_FILE}') as f:
    c = yaml.safe_load(f)
d = c['domains']
db = c['database']
em = c['email']
ps = c['push']
st = c['storage']
wk = c['worker']
ft = c['features']
aa = c.get('apple_auth', {})
ga = c.get('google_auth', {})
rd = c.get('redis', {})

def b(v):
    return str(v).lower() if isinstance(v, bool) else str(v)

def val(v):
    return '' if v is None else str(v)

lines = [
    # API
    'DEBUG=false',
    f\"ALLOWED_HOSTS={d['api']},{d['base']}\",
    f\"CORS_ALLOWED_ORIGINS=https://{d['base']},https://{d['admin']}\",
    'TIMEZONE=UTC',
    f\"BASE_URL=https://{d['base']}\",
    'PORT=8000',
    # Admin
    f\"NEXT_PUBLIC_API_URL=https://{d['api']}\",
    f\"ADMIN_PANEL_URL=https://{d['admin']}\",
    # Database
    f\"DB_HOST={val(db['host'])}\",
    f\"DB_PORT={db['port']}\",
    f\"POSTGRES_USER={val(db['user'])}\",
    f\"POSTGRES_DB={db['name']}\",
    f\"DB_SSLMODE={db['sslmode']}\",
    f\"DB_MAX_OPEN_CONNS={db['max_open_conns']}\",
    f\"DB_MAX_IDLE_CONNS={db['max_idle_conns']}\",
    f\"DB_MAX_LIFETIME={db['max_lifetime']}\",
    # Redis (K8s internal DNS — password injected if configured)
    f\"REDIS_URL=redis://{':%s@' % val(rd.get('password')) if rd.get('password') else ''}redis.honeydue.svc.cluster.local:6379/0\",
    'REDIS_DB=0',
    # Email
    f\"EMAIL_HOST={em['host']}\",
    f\"EMAIL_PORT={em['port']}\",
    f\"EMAIL_USE_TLS={b(em['use_tls'])}\",
    f\"EMAIL_HOST_USER={val(em['user'])}\",
    f\"DEFAULT_FROM_EMAIL={val(em['from'])}\",
    # Push
    'APNS_AUTH_KEY_PATH=/secrets/apns/apns_auth_key.p8',
    f\"APNS_AUTH_KEY_ID={val(ps['apns_key_id'])}\",
    f\"APNS_TEAM_ID={val(ps['apns_team_id'])}\",
    f\"APNS_TOPIC={ps['apns_topic']}\",
    f\"APNS_USE_SANDBOX={b(ps['apns_use_sandbox'])}\",
    f\"APNS_PRODUCTION={b(ps['apns_production'])}\",
    # Worker
    f\"TASK_REMINDER_HOUR={wk['task_reminder_hour']}\",
    f\"OVERDUE_REMINDER_HOUR={wk['overdue_reminder_hour']}\",
    f\"DAILY_DIGEST_HOUR={wk['daily_digest_hour']}\",
    # B2 Storage
    f\"B2_KEY_ID={val(st['b2_key_id'])}\",
    f\"B2_APP_KEY={val(st['b2_app_key'])}\",
    f\"B2_BUCKET_NAME={val(st['b2_bucket'])}\",
    f\"B2_ENDPOINT={val(st['b2_endpoint'])}\",
    f\"STORAGE_MAX_FILE_SIZE={st['max_file_size']}\",
    f\"STORAGE_ALLOWED_TYPES={st['allowed_types']}\",
    # Features
    f\"FEATURE_PUSH_ENABLED={b(ft['push_enabled'])}\",
    f\"FEATURE_EMAIL_ENABLED={b(ft['email_enabled'])}\",
    f\"FEATURE_WEBHOOKS_ENABLED={b(ft['webhooks_enabled'])}\",
    f\"FEATURE_ONBOARDING_EMAILS_ENABLED={b(ft['onboarding_emails_enabled'])}\",
    f\"FEATURE_PDF_REPORTS_ENABLED={b(ft['pdf_reports_enabled'])}\",
    f\"FEATURE_WORKER_ENABLED={b(ft['worker_enabled'])}\",
    # Apple auth/IAP
    f\"APPLE_CLIENT_ID={val(aa.get('client_id'))}\",
    f\"APPLE_TEAM_ID={val(aa.get('team_id'))}\",
    f\"APPLE_IAP_KEY_ID={val(aa.get('iap_key_id'))}\",
    f\"APPLE_IAP_ISSUER_ID={val(aa.get('iap_issuer_id'))}\",
    f\"APPLE_IAP_BUNDLE_ID={val(aa.get('iap_bundle_id'))}\",
    f\"APPLE_IAP_KEY_PATH={val(aa.get('iap_key_path'))}\",
    f\"APPLE_IAP_SANDBOX={b(aa.get('iap_sandbox', False))}\",
    # Google auth/IAP
    f\"GOOGLE_CLIENT_ID={val(ga.get('client_id'))}\",
    f\"GOOGLE_ANDROID_CLIENT_ID={val(ga.get('android_client_id'))}\",
    f\"GOOGLE_IOS_CLIENT_ID={val(ga.get('ios_client_id'))}\",
    f\"GOOGLE_IAP_PACKAGE_NAME={val(ga.get('iap_package_name'))}\",
    f\"GOOGLE_IAP_SERVICE_ACCOUNT_PATH={val(ga.get('iap_service_account_path'))}\",
]
print('\n'.join(lines))
"
}

# generate_cluster_config — writes hetzner-k3s YAML to stdout
generate_cluster_config() {
  python3 -c "
import yaml

with open('${CONFIG_FILE}') as f:
    c = yaml.safe_load(f)
cl = c['cluster']
config = {
    'cluster_name': 'honeydue',
    'kubeconfig_path': './kubeconfig',
    'k3s_version': cl['k3s_version'],
    'networking': {
        'ssh': {
            'port': 22,
            'use_agent': False,
            'public_key_path': cl['ssh_public_key'],
            'private_key_path': cl['ssh_private_key'],
        },
        'allowed_networks': {
            'ssh': ['0.0.0.0/0'],
            'api': ['0.0.0.0/0'],
        },
    },
    'api_server_hostname': '',
    'schedule_workloads_on_masters': True,
    'masters_pool': {
        'instance_type': cl['instance_type'],
        'instance_count': len(c.get('nodes', [])),
        'location': cl['location'],
        'image': 'ubuntu-24.04',
    },
    'additional_packages': ['open-iscsi'],
    'post_create_commands': ['sudo systemctl enable --now iscsid'],
    'k3s_config_file': 'secrets-encryption: true\n',
}
print(yaml.dump(config, default_flow_style=False, sort_keys=False))
"
}

deploy-k3s/scripts/rollback.sh Executable file

@@ -0,0 +1,61 @@
#!/usr/bin/env bash
set -euo pipefail

NAMESPACE="honeydue"

log() { printf '[rollback] %s\n' "$*"; }
die() { printf '[rollback][error] %s\n' "$*" >&2; exit 1; }

command -v kubectl >/dev/null 2>&1 || die "Missing: kubectl"

DEPLOYMENTS=("api" "worker" "admin")

# --- Show current state ---
echo "=== Current Rollout History ==="
for deploy in "${DEPLOYMENTS[@]}"; do
  echo ""
  echo "--- ${deploy} ---"
  kubectl rollout history deployment/"${deploy}" -n "${NAMESPACE}" 2>/dev/null || echo "  (not found)"
done

echo ""
echo "=== Current Images ==="
for deploy in "${DEPLOYMENTS[@]}"; do
  IMAGE="$(kubectl get deployment "${deploy}" -n "${NAMESPACE}" -o jsonpath='{.spec.template.spec.containers[0].image}' 2>/dev/null || echo "n/a")"
  echo "  ${deploy}: ${IMAGE}"
done

# --- Confirm ---
echo ""
read -rp "Roll back all deployments to previous revision? [y/N] " confirm
if [[ "${confirm}" != "y" && "${confirm}" != "Y" ]]; then
  log "Aborted."
  exit 0
fi

# --- Rollback ---
for deploy in "${DEPLOYMENTS[@]}"; do
  log "Rolling back ${deploy}..."
  kubectl rollout undo deployment/"${deploy}" -n "${NAMESPACE}" 2>/dev/null || log "Skipping ${deploy} (not found or no previous revision)"
done

# --- Wait ---
log "Waiting for rollouts..."
for deploy in "${DEPLOYMENTS[@]}"; do
  kubectl rollout status deployment/"${deploy}" -n "${NAMESPACE}" --timeout=300s 2>/dev/null || true
done

# --- Verify ---
echo ""
echo "=== Post-Rollback Images ==="
for deploy in "${DEPLOYMENTS[@]}"; do
  IMAGE="$(kubectl get deployment "${deploy}" -n "${NAMESPACE}" -o jsonpath='{.spec.template.spec.containers[0].image}' 2>/dev/null || echo "n/a")"
  echo "  ${deploy}: ${IMAGE}"
done

log "Rollback complete. Run ./scripts/04-verify.sh to check health."


@@ -0,0 +1,19 @@
# Secrets Directory

Create these files before running `scripts/02-setup-secrets.sh`:

| File | Purpose |
|------|---------|
| `postgres_password.txt` | Neon PostgreSQL password |
| `secret_key.txt` | App signing secret (minimum 32 characters) |
| `email_host_password.txt` | SMTP password (Fastmail app password) |
| `fcm_server_key.txt` | Firebase Cloud Messaging server key |
| `apns_auth_key.p8` | Apple Push Notification private key |
| `cloudflare-origin.crt` | Cloudflare origin certificate (PEM) |
| `cloudflare-origin.key` | Cloudflare origin certificate key (PEM) |

The first five files use the same format as the Docker Swarm `deploy/secrets/` directory.
The Cloudflare files are new for K3s (TLS termination at the ingress).
All string config (database host, registry token, etc.) goes in `config.yaml` instead.

These files are gitignored and should never be committed.
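One possible way to bootstrap the generated and text-based secrets (a sketch only: the `CHANGEME` values are placeholders for real credentials, and the `.p8` key plus Cloudflare cert/key must be downloaded from Apple and Cloudflare rather than generated):

```shell
set -eu
mkdir -p secrets

# App signing secret: 64 hex characters, well over the 32-char minimum
# enforced by 02-setup-secrets.sh.
openssl rand -hex 32 | tr -d '\n' > secrets/secret_key.txt

# Externally issued credentials — paste the real values in place of CHANGEME.
printf '%s' 'CHANGEME' > secrets/postgres_password.txt
printf '%s' 'CHANGEME' > secrets/email_host_password.txt
printf '%s' 'CHANGEME' > secrets/fcm_server_key.txt

chmod 600 secrets/*.txt
```

Keeping the files newline-free matters because the setup script compares `wc -c` against the 32-character minimum after stripping `\r\n`.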


@@ -2704,6 +2704,105 @@ paths:
        '404':
          $ref: '#/components/responses/NotFound'
  /auth/account/:
    delete:
      tags: [Authentication]
      summary: Delete user account
      description: Permanently deletes the authenticated user's account and all associated data
      security:
        - tokenAuth: []
      requestBody:
        content:
          application/json:
            schema:
              type: object
              properties:
                password:
                  type: string
                  description: Required for email-auth users
                confirmation:
                  type: string
                  description: Must be "DELETE" for social-auth users
      responses:
        '200':
          description: Account deleted successfully
        '400':
          $ref: '#/components/responses/BadRequest'
        '401':
          $ref: '#/components/responses/Unauthorized'
  /auth/refresh/:
    post:
      tags: [Authentication]
      summary: Refresh auth token
      description: Returns a new token if current token is in the renewal window (60-90 days old)
      security:
        - tokenAuth: []
      responses:
        '200':
          description: Token refreshed
          content:
            application/json:
              schema:
                type: object
                properties:
                  token:
                    type: string
        '401':
          $ref: '#/components/responses/Unauthorized'
  /health/live:
    get:
      tags: [Health]
      summary: Liveness probe
      description: Simple liveness check, always returns 200
      responses:
        '200':
          description: Alive
  /tasks/suggestions/:
    get:
      tags: [Tasks]
      summary: Get personalized task template suggestions
      description: Returns task templates ranked by relevance to the residence's home profile
      security:
        - tokenAuth: []
      parameters:
        - name: residence_id
          in: query
          required: true
          schema:
            type: integer
      responses:
        '200':
          description: Suggestions with relevance scores
          content:
            application/json:
              schema:
                type: object
                properties:
                  suggestions:
                    type: array
                    items:
                      type: object
                      properties:
                        template:
                          $ref: '#/components/schemas/TaskTemplate'
                        relevance_score:
                          type: number
                        match_reasons:
                          type: array
                          items:
                            type: string
                  total_count:
                    type: integer
                  profile_completeness:
                    type: number
        '401':
          $ref: '#/components/responses/Unauthorized'
        '403':
          $ref: '#/components/responses/Forbidden'
# =============================================================================
# Components
# =============================================================================
docs/server_2026_2_24.md Normal file

@@ -0,0 +1,302 @@
# Casera Infrastructure Plan — February 2026

## Architecture Overview

```
                 ┌─────────────┐
                 │ Cloudflare  │
                 │  (CDN/DNS)  │
                 └──────┬──────┘
                        │ HTTPS
                 ┌──────┴──────┐
                 │ Hetzner LB  │
                 │   ($5.99)   │
                 └──────┬──────┘
       ┌────────────────┼────────────────┐
       │                │                │
┌──────┴──────┐  ┌──────┴──────┐  ┌──────┴──────┐
│   CX33 #1   │  │   CX33 #2   │  │   CX33 #3   │
│  (manager)  │  │  (manager)  │  │  (manager)  │
│             │  │             │  │             │
│  api (x2)   │  │  api (x2)   │  │  api (x1)   │
│  admin      │  │  worker     │  │  worker     │
│  redis      │  │  dozzle     │  │             │
└──────┬──────┘  └──────┬──────┘  └──────┬──────┘
       │                │                │
       │  Docker Swarm Overlay (IPsec)   │
       └────────────────┼────────────────┘
                ┌───────┴────────┐
                │                │
         ┌──────┴──────┐  ┌──────┴──────┐
         │    Neon     │  │  Backblaze  │
         │ (Postgres)  │  │     B2      │
         │   Launch    │  │   (media)   │
         └─────────────┘  └─────────────┘
```
## Swarm Nodes — Hetzner CX33
All 3 nodes are manager+worker (Raft consensus requires 3 managers for fault tolerance — 1 node can go down and the cluster stays operational).
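The tolerance claim follows from Raft's majority rule: a cluster of n managers needs a quorum of (n/2)+1 and therefore survives (n-1)/2 failures. A quick illustrative sketch (not part of the repo):

```go
package main

import "fmt"

// raftTolerance returns the quorum size and the number of manager
// failures a Swarm cluster of n managers can survive: Raft needs a
// majority of (n/2)+1 managers, so up to (n-1)/2 can fail.
func raftTolerance(n int) (quorum, tolerated int) {
	return n/2 + 1, (n - 1) / 2
}

func main() {
	for _, n := range []int{1, 3, 5} {
		q, t := raftTolerance(n)
		fmt.Printf("%d managers: quorum %d, tolerates %d failure(s)\n", n, q, t)
	}
}
```

Note that 2 managers are no better than 1: quorum is still 2, so a single failure halts the cluster, which is why 3 is the minimum for fault tolerance.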
| Spec | Value |
|------|-------|
| Plan | CX33 (Shared Regular Performance) |
| vCPU | 4 |
| RAM | 8 GB |
| Disk | 80 GB SSD |
| Traffic | 20 TB/mo included |
| Price | $6.59/mo per node |
| Region | Pick closest to users (US: Ashburn or Hillsboro, EU: Nuremberg/Falkenstein/Helsinki) |
**Why CX33 over CX23:** 8 GB RAM gives headroom for Redis, multiple API replicas, and the admin panel without pressure. The $2.50/mo difference per node isn't worth optimizing away.
### Container Distribution
| Container | Replicas | Notes |
|-----------|----------|-------|
| api | 3-6 | Spread across all nodes by Swarm |
| worker | 2-3 | Asynq workers pull jobs from Redis concurrently |
| admin | 1 | Next.js admin panel |
| redis | 1 | Pinned to one node with its volume |
| dozzle | 1 | Pinned to a manager node (needs Docker socket) |
### Scaling Path
- Need more capacity? Add another CX33 with `docker swarm join`. Swarm rebalances automatically.
- Need more API throughput? Bump replicas in the compose file. No infra change.
- Only infrastructure addition needed at scale: the Hetzner Load Balancer ($5.99/mo).
## Load Balancer — Hetzner LB
| Spec | Value |
|------|-------|
| Price | $5.99/mo |
| Purpose | Distribute traffic across Swarm nodes, TLS termination |
| When to add | When you need redundant ingress (not required day 1 if using Cloudflare to proxy to a single node) |
## Database — Neon Postgres (Launch Plan)
| Spec | Value |
|------|-------|
| Plan | Launch (usage-based, no monthly minimum) |
| Compute | $0.106/CU-hr, up to 16 CU (64 GB RAM) |
| Storage | $0.35/GB-month |
| Connections | Up to 10,000 via built-in PgBouncer |
| Typical cost | ~$5-15/mo for light load, ~$20-40/mo at 100k users |
| Free tier | Available for dev/staging (100 CU-hrs/mo, 0.5 GB) |
### Connection Pooling
Neon includes built-in PgBouncer on all plans. Enable by adding `-pooler` to the hostname:
```
# Direct connection
ep-cool-darkness-123456.us-east-2.aws.neon.tech
# Pooled connection (use this in production)
ep-cool-darkness-123456-pooler.us-east-2.aws.neon.tech
```
Runs in transaction mode — compatible with GORM out of the box.
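The host swap can also be done in code when building the DSN. A minimal sketch, assuming the `-pooler` suffix convention shown above (the `pooledHost` helper is hypothetical, not part of the repo):

```go
package main

import (
	"fmt"
	"strings"
)

// pooledHost converts a direct Neon hostname to its PgBouncer-pooled
// variant by appending "-pooler" to the endpoint ID (the first
// dot-separated label). Already-pooled hostnames pass through unchanged.
func pooledHost(direct string) string {
	parts := strings.SplitN(direct, ".", 2)
	if len(parts) != 2 || strings.HasSuffix(parts[0], "-pooler") {
		return direct
	}
	return parts[0] + "-pooler." + parts[1]
}

func main() {
	h := pooledHost("ep-cool-darkness-123456.us-east-2.aws.neon.tech")
	// GORM-style Postgres DSN; user/dbname are placeholders.
	dsn := fmt.Sprintf("host=%s port=5432 user=app dbname=casera sslmode=require", h)
	fmt.Println(dsn)
}
```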
### Configuration
```env
DB_HOST=ep-xxxxx-pooler.us-east-2.aws.neon.tech
DB_PORT=5432
DB_SSLMODE=require
POSTGRES_USER=<from neon dashboard>
POSTGRES_PASSWORD=<from neon dashboard>
POSTGRES_DB=casera
```
## Object Storage — Backblaze B2
| Spec | Value |
|------|-------|
| Storage | $6/TB/mo ($0.006/GB) |
| Egress | $0.01/GB (first 3x stored amount is free) |
| Free tier | 10 GB storage always free |
| API calls | Class A free, Class B/C free first 2,500/day |
| Spending cap | Built-in data caps with alerts at 75% and 100% |
### Bucket Setup
| Bucket | Visibility | Key Permissions | Contents |
|--------|------------|-----------------|----------|
| `casera-uploads` | Private | Read/Write (API containers) | User-uploaded photos, documents |
| `casera-certs` | Private | Read-only (API + worker) | APNs push certificates |
Serve files through the API using signed URLs — never expose buckets publicly.
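One way the API can issue such signed URLs is an HMAC over the object path and an expiry timestamp; the handler that streams the file recomputes the MAC before serving. A minimal sketch under that assumption (a generic pattern, not the S3/B2 presign protocol; for bucket-level presigned URLs use the S3 SDK's presign support):

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"time"
)

// signURL issues a short-lived URL for a private object by signing
// path+expiry with a server-side secret. The serving handler must
// recompute the MAC (use hmac.Equal) and reject expired timestamps.
func signURL(secret []byte, path string, expiresAt int64) string {
	mac := hmac.New(sha256.New, secret)
	fmt.Fprintf(mac, "%s|%d", path, expiresAt)
	sig := hex.EncodeToString(mac.Sum(nil))
	return fmt.Sprintf("%s?expires=%d&sig=%s", path, expiresAt, sig)
}

func main() {
	u := signURL([]byte("server-secret"), "/uploads/photo.jpg",
		time.Now().Add(15*time.Minute).Unix())
	fmt.Println(u)
}
```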
### Why B2 Over Others
- **Spending cap**: only S3-compatible provider with built-in hard caps and alerts. No surprise bills.
- **Cheapest storage**: $6/TB vs Cloudflare R2 at $15/TB vs Tigris at $20/TB.
- **Free egress partner CDNs**: Cloudflare, Fastly, bunny.net — zero egress when behind Cloudflare.
## CDN — Cloudflare (Free Tier)
| Spec | Value |
|------|-------|
| Price | $0 |
| Purpose | DNS, CDN caching, DDoS protection, TLS termination |
| Setup | Point DNS to Cloudflare, proxy traffic to Hetzner LB (or directly to a Swarm node) |
Add this on day 1. No reason not to.
## Logging — Dozzle
| Spec | Value |
|------|-------|
| Price | $0 (open source) |
| Port | 9999 (internal only — do not expose publicly) |
| Features | Real-time log viewer, webhook support for alerts |
Runs as a container in the Swarm. Needs Docker socket access, so it's pinned to a manager node.
For 100k+ users, consider adding Prometheus + Grafana (self-hosted, free) or Betterstack (~$10/mo) for metrics and alerting beyond log viewing.
## Security
### Swarm Node Firewall (Hetzner Cloud Firewall — free)
| Port | Protocol | Source | Purpose |
|------|----------|--------|---------|
| Custom (e.g. 2222) | TCP | Your IP only | SSH |
| 80, 443 | TCP | Anywhere | Public traffic |
| 2377 | TCP | Swarm nodes only | Cluster management |
| 7946 | TCP/UDP | Swarm nodes only | Node discovery |
| 4789 | UDP | Swarm nodes only | Overlay network (VXLAN) |
| Everything else | — | — | Blocked |
Set up once in Hetzner dashboard, apply to all 3 nodes.
### SSH Hardening
```
# /etc/ssh/sshd_config
Port 2222 # Non-default port
PermitRootLogin no # No root SSH
PasswordAuthentication no # Key-only auth
PubkeyAuthentication yes
AllowUsers deploy # Only your deploy user
```
### Swarm ↔ Neon (Postgres)
| Layer | Method |
|-------|--------|
| Encryption | TLS enforced by Neon (`DB_SSLMODE=require`) |
| Authentication | Strong password stored as Docker secret |
| Access control | IP allowlist in Neon dashboard — restrict to 3 Swarm node IPs |
### Swarm ↔ B2 (Object Storage)
| Layer | Method |
|-------|--------|
| Encryption | HTTPS always (enforced by B2 API) |
| Authentication | Scoped application keys (not master key) |
| Access control | Per-bucket key permissions (read-only where possible) |
### Swarm Internal
| Layer | Method |
|-------|--------|
| Overlay encryption | `driver_opts: encrypted: "true"` on overlay network (IPsec between nodes) |
| Secrets | Use `docker secret create` for DB password, SECRET_KEY, B2 keys, APNs keys. Mounted at `/run/secrets/`, encrypted in Swarm raft log. |
| Container isolation | Non-root users in all containers (already configured in Dockerfile) |
### Docker Secrets Migration
Current setup uses environment variables for secrets. Migrate to Docker secrets for production:
```bash
# Create secrets
echo "your-db-password" | docker secret create postgres_password -
echo "your-secret-key" | docker secret create secret_key -
echo "your-b2-app-key" | docker secret create b2_app_key -
```

```yaml
# Reference in compose file
services:
  api:
    secrets:
      - postgres_password
      - secret_key

secrets:
  postgres_password:
    external: true
  secret_key:
    external: true
```
Application code reads from `/run/secrets/<name>` instead of env vars.
## Redis (In-Cluster)
Redis stays inside the Swarm — no need to externalize.
| Purpose | Details |
|---------|---------|
| Asynq job queue | Background jobs: push notifications, digests, reminders, onboarding emails |
| Static data cache | Cached lookup tables with ETag support |
| Resource usage | ~20-50 MB RAM, negligible CPU |
At 100k users, Redis handles job queuing for nightly digests (100k enqueue + dequeue operations) without issue. A single Redis instance handles millions of operations per second.
Asynq coordinates multiple worker replicas automatically — each job is dequeued atomically by exactly one worker, no double-processing.
## Performance Estimates
| Metric | Value |
|--------|-------|
| Single CX33 API throughput | ~1,000-2,000 req/s (blended, with Neon latency) |
| 3-node cluster throughput | ~3,000-6,000 req/s |
| Avg requests per user per day | ~50 |
| Estimated user capacity (3 nodes) | ~200k-500k registered users |
| Bottleneck at scale | Neon compute tier, not Go or Swarm |
These are napkin estimates. Load test before launch.
## Monthly Cost Summary
### Starting Out
| Component | Provider | Cost |
|-----------|----------|------|
| 3x Swarm nodes | Hetzner CX33 | $19.77/mo |
| Postgres | Neon Launch | ~$5-15/mo |
| Object storage | Backblaze B2 | <$1/mo |
| CDN | Cloudflare Free | $0 |
| Logging | Dozzle (self-hosted) | $0 |
| **Total** | | **~$25-35/mo** |
### At Scale (100k users)
| Component | Provider | Cost |
|-----------|----------|------|
| 3x Swarm nodes | Hetzner CX33 | $19.77/mo |
| Load balancer | Hetzner LB | $5.99/mo |
| Postgres | Neon Launch | ~$20-40/mo |
| Object storage | Backblaze B2 | ~$1-3/mo |
| CDN | Cloudflare Free | $0 |
| Monitoring | Betterstack or self-hosted | ~$0-10/mo |
| **Total** | | **~$47-79/mo** |
## TODO
- [ ] Set up 3x Hetzner CX33 instances
- [ ] Initialize Docker Swarm (`docker swarm init` on first node, `docker swarm join` on others)
- [ ] Configure Hetzner Cloud Firewall
- [ ] Harden SSH on all nodes
- [ ] Create Neon project (Launch plan), configure IP allowlist
- [ ] Create Backblaze B2 buckets with scoped application keys
- [ ] Set up Cloudflare DNS proxying
- [ ] Update prod compose file: remove `db` service, add overlay encryption, add Docker secrets
- [ ] Add B2 SDK integration for file uploads (code change)
- [ ] Update config to read from `/run/secrets/` for Docker secrets
- [ ] Set B2 spending cap and alerts
- [ ] Load test the deployed stack
- [ ] Add Hetzner LB when needed

go.mod

@@ -1,6 +1,6 @@
module github.com/treytartt/honeydue-api

go 1.24.0
go 1.25

require (
    github.com/go-pdf/fpdf v0.9.0
@@ -10,6 +10,7 @@ require (
    github.com/gorilla/websocket v1.5.3
    github.com/hibiken/asynq v0.25.1
    github.com/labstack/echo/v4 v4.11.4
    github.com/minio/minio-go/v7 v7.0.99
    github.com/nicksnyder/go-i18n/v2 v2.6.0
    github.com/redis/go-redis/v9 v9.17.1
    github.com/rs/zerolog v1.34.0
@@ -20,9 +21,9 @@ require (
    github.com/stretchr/testify v1.11.1
    github.com/stripe/stripe-go/v81 v81.4.0
    github.com/wneessen/go-mail v0.7.2
    golang.org/x/crypto v0.45.0
    golang.org/x/crypto v0.46.0
    golang.org/x/oauth2 v0.34.0
    golang.org/x/text v0.31.0
    golang.org/x/text v0.32.0
    golang.org/x/time v0.14.0
    google.golang.org/api v0.257.0
    gopkg.in/yaml.v3 v3.0.1
@@ -31,6 +32,20 @@ require (
    gorm.io/gorm v1.31.1
)

require (
    github.com/dustin/go-humanize v1.0.1 // indirect
    github.com/go-ini/ini v1.67.0 // indirect
    github.com/klauspost/compress v1.18.2 // indirect
    github.com/klauspost/cpuid/v2 v2.2.11 // indirect
    github.com/klauspost/crc32 v1.3.0 // indirect
    github.com/minio/crc64nvme v1.1.1 // indirect
    github.com/minio/md5-simd v1.1.2 // indirect
    github.com/philhofer/fwd v1.2.0 // indirect
    github.com/rs/xid v1.6.0 // indirect
    github.com/tinylib/msgp v1.6.1 // indirect
    go.yaml.in/yaml/v3 v3.0.4 // indirect
)

require (
    cloud.google.com/go/auth v0.17.0 // indirect
    cloud.google.com/go/auth/oauth2adapt v0.2.8 // indirect
@@ -85,9 +100,9 @@ require (
    go.opentelemetry.io/otel v1.38.0 // indirect
    go.opentelemetry.io/otel/metric v1.38.0 // indirect
    go.opentelemetry.io/otel/trace v1.38.0 // indirect
    golang.org/x/net v0.47.0 // indirect
    golang.org/x/sync v0.18.0 // indirect
    golang.org/x/sys v0.38.0 // indirect
    golang.org/x/net v0.48.0 // indirect
    golang.org/x/sync v0.19.0 // indirect
    golang.org/x/sys v0.39.0 // indirect
    google.golang.org/genproto/googleapis/rpc v0.0.0-20251124214823-79d6a2a48846 // indirect
    google.golang.org/grpc v1.77.0 // indirect
    google.golang.org/protobuf v1.36.10 // indirect

go.sum

@@ -20,6 +20,8 @@ github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/dustin/go-humanize v1.0.1 h1:GzkhY7T5VNhEkwH0PVJgjz+fX1rhBrR7pRT3mDkpeCY=
github.com/dustin/go-humanize v1.0.1/go.mod h1:Mu1zIs6XwVuF/gI1OepvI0qD18qycQx+mFykh5fBlto=
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
github.com/frankban/quicktest v1.14.6 h1:7Xjx+VpznH+oBnejlPUj8oUpdxnVs4f8XU8WnHkI4W8=
@@ -28,6 +30,8 @@ github.com/fsnotify/fsnotify v1.9.0 h1:2Ml+OJNzbYCTzsxtv8vKSFD9PbJjmhYF14k/jKC7S
github.com/fsnotify/fsnotify v1.9.0/go.mod h1:8jBTzvmWwFyi3Pb8djgCCO5IBqzKJ/Jwo8TRcHyHii0=
github.com/gabriel-vasile/mimetype v1.4.8 h1:FfZ3gj38NjllZIeJAmMhr+qKL8Wu+nOoI3GqacKw1NM=
github.com/gabriel-vasile/mimetype v1.4.8/go.mod h1:ByKUIKGjh1ODkGM1asKUbQZOLGrPjydw3hYPU2YU9t8=
github.com/go-ini/ini v1.67.0 h1:z6ZrTEZqSWOTyH2FlglNbNgARyHG8oLW9gMELqKr06A=
github.com/go-ini/ini v1.67.0/go.mod h1:ByCAeIL28uOIIG0E3PJtZPDL8WnHpFKFOtgjp+3Ies8=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
@@ -84,6 +88,13 @@ github.com/jinzhu/inflection v1.0.0 h1:K317FqzuhWc8YvSVlFMCCUb36O/S9MCKRDI7QkRKD
github.com/jinzhu/inflection v1.0.0/go.mod h1:h+uFLlag+Qp1Va5pdKtLDYj+kHp5pxUVkryuEj+Srlc=
github.com/jinzhu/now v1.1.5 h1:/o9tlHleP7gOFmsnYNz3RGnqzefHA47wQpKrrdTIwXQ=
github.com/jinzhu/now v1.1.5/go.mod h1:d3SSVoowX0Lcu0IBviAWJpolVfI5UJVZZ7cO71lE/z8=
github.com/klauspost/compress v1.18.2 h1:iiPHWW0YrcFgpBYhsA6D1+fqHssJscY/Tm/y2Uqnapk=
github.com/klauspost/compress v1.18.2/go.mod h1:R0h/fSBs8DE4ENlcrlib3PsXS61voFxhIs2DeRhCvJ4=
github.com/klauspost/cpuid/v2 v2.0.1/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.2.11 h1:0OwqZRYI2rFrjS4kvkDnqJkKHdHaRnCm68/DY4OxRzU=
github.com/klauspost/cpuid/v2 v2.2.11/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
github.com/klauspost/crc32 v1.3.0 h1:sSmTt3gUt81RP655XGZPElI0PelVTZ6YwCRnPSupoFM=
github.com/klauspost/crc32 v1.3.0/go.mod h1:D7kQaZhnkX/Y0tstFGf8VUzv2UofNGqCjnC3zdHB0Hw=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
@@ -104,10 +115,18 @@ github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWE
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-sqlite3 v2.0.3+incompatible h1:gXHsfypPkaMZrKbD5209QV9jbUTJKjyR5WD3HYQSd+U=
github.com/mattn/go-sqlite3 v2.0.3+incompatible/go.mod h1:FPy6KqzDD04eiIsT53CuJW3U88zkxoIYsOqkbpncsNc=
github.com/minio/crc64nvme v1.1.1 h1:8dwx/Pz49suywbO+auHCBpCtlW1OfpcLN7wYgVR6wAI=
github.com/minio/crc64nvme v1.1.1/go.mod h1:eVfm2fAzLlxMdUGc0EEBGSMmPwmXD5XiNRpnu9J3bvg=
github.com/minio/md5-simd v1.1.2 h1:Gdi1DZK69+ZVMoNHRXJyNcxrMA4dSxoYHZSQbirFg34=
github.com/minio/md5-simd v1.1.2/go.mod h1:MzdKDxYpY2BT9XQFocsiZf/NKVtR7nkE4RoEpN+20RM=
github.com/minio/minio-go/v7 v7.0.99 h1:2vH/byrwUkIpFQFOilvTfaUpvAX3fEFhEzO+DR3DlCE=
github.com/minio/minio-go/v7 v7.0.99/go.mod h1:EtGNKtlX20iL2yaYnxEigaIvj0G0GwSDnifnG8ClIdw=
github.com/nicksnyder/go-i18n/v2 v2.6.0 h1:C/m2NNWNiTB6SK4Ao8df5EWm3JETSTIGNXBpMJTxzxQ=
github.com/nicksnyder/go-i18n/v2 v2.6.0/go.mod h1:88sRqr0C6OPyJn0/KRNaEz1uWorjxIKP7rUUcvycecE=
github.com/pelletier/go-toml/v2 v2.2.4 h1:mye9XuhQ6gvn5h28+VilKrrPoQVanw5PMw/TB0t5Ec4=
github.com/pelletier/go-toml/v2 v2.2.4/go.mod h1:2gIqNv+qfxSVS7cM2xJQKtLSTLUE9V8t9Stt+h56mCY=
github.com/philhofer/fwd v1.2.0 h1:e6DnBTl7vGY+Gz322/ASL4Gyp1FspeMvx1RNDoToZuM=
github.com/philhofer/fwd v1.2.0/go.mod h1:RqIHx9QI14HlwKwm98g9Re5prTQ6LdeRQn+gXJFxsJM=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
@@ -119,6 +138,7 @@ github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/rs/xid v1.6.0 h1:fV591PaemRlL6JfRxGDEPl69wICngIQ3shQtzfy2gxU=
github.com/rs/xid v1.6.0/go.mod h1:7XoLgs4eV+QndskICGsho+ADou8ySMSjJKDIan90Nz0=
github.com/rs/zerolog v1.34.0 h1:k43nTLIwcTVQAncfCw4KZ2VY6ukYoZaBPNOE8txlOeY=
github.com/rs/zerolog v1.34.0/go.mod h1:bJsvje4Z08ROH4Nhs5iH600c3IkWhwp44iRc54W6wYQ=
@@ -154,6 +174,8 @@ github.com/stripe/stripe-go/v81 v81.4.0 h1:AuD9XzdAvl193qUCSaLocf8H+nRopOouXhxqJ
github.com/stripe/stripe-go/v81 v81.4.0/go.mod h1:C/F4jlmnGNacvYtBp/LUHCvVUJEZffFQCobkzwY1WOo=
github.com/subosito/gotenv v1.6.0 h1:9NlTDc1FTs4qu0DDq7AEtTPNw6SVm7uBMsUCUjABIf8=
github.com/subosito/gotenv v1.6.0/go.mod h1:Dk4QP5c2W3ibzajGcXpNraDfq2IrhjMIvMSWPKKo0FU=
github.com/tinylib/msgp v1.6.1 h1:ESRv8eL3u+DNHUoSAAQRE50Hm162zqAnBoGv9PzScPY=
github.com/tinylib/msgp v1.6.1/go.mod h1:RSp0LW9oSxFut3KzESt5Voq4GVWyS+PSulT77roAqEA=
github.com/tklauser/go-sysconf v0.3.12 h1:0QaGUFOdQaIVdPgfITYzaTegZvdCjmYO52cSFAEVmqU=
github.com/tklauser/go-sysconf v0.3.12/go.mod h1:Ho14jnntGE1fpdOqQEEaiKRpvIavV0hSfmBq8nJbHYI=
github.com/tklauser/numcpus v0.6.1 h1:ng9scYS7az0Bk4OZLvrNXNSAO2Pxr1XXRAPyjhIx+Fk=
@@ -182,17 +204,19 @@ go.opentelemetry.io/otel/trace v1.38.0 h1:Fxk5bKrDZJUH+AMyyIXGcFAPah0oRcT+LuNtJr
go.opentelemetry.io/otel/trace v1.38.0/go.mod h1:j1P9ivuFsTceSWe1oY+EeW3sc+Pp42sO++GHkg4wwhs=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.yaml.in/yaml/v3 v3.0.4 h1:tfq32ie2Jv2UxXFdLJdh3jXuOzWiL1fo0bu/FbuKpbc=
go.yaml.in/yaml/v3 v3.0.4/go.mod h1:DhzuOOF2ATzADvBadXxruRBLzYTpT36CKvDb3+aBEFg=
golang.org/x/crypto v0.0.0-20170512130425-ab89591268e0/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.45.0 h1:jMBrvKuj23MTlT0bQEOBcAE0mjg8mK9RXFhRH6nyF3Q=
golang.org/x/crypto v0.45.0/go.mod h1:XTGrrkGJve7CYK7J8PEww4aY7gM3qMCElcJQ8n8JdX4=
golang.org/x/crypto v0.46.0 h1:cKRW/pmt1pKAfetfu+RCEvjvZkA9RimPbh7bhFjGVBU=
golang.org/x/crypto v0.46.0/go.mod h1:Evb/oLKmMraqjZ2iQTwDwvCtJkczlDuTmdJXoZVzqU0=
golang.org/x/net v0.0.0-20210520170846-37e1c6afe023/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220403103023-749bd193bc2b/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY=
golang.org/x/net v0.47.0/go.mod h1:/jNxtkgq5yWUGYkaZGqo27cfGZ1c5Nen03aYrrKpVRU=
golang.org/x/net v0.48.0 h1:zyQRTTrjc33Lhh0fBgT/H3oZq9WuvRR5gPC70xpDiQU=
golang.org/x/net v0.48.0/go.mod h1:+ndRgGjkh8FGtu1w1FGbEC31if4VrNVMuKTgcAAnQRY=
golang.org/x/oauth2 v0.34.0 h1:hqK/t4AKgbqWkdkcAeI8XLmbK+4m4G5YeQRrmiotGlw=
golang.org/x/oauth2 v0.34.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
golang.org/x/sync v0.18.0 h1:kr88TuHDroi+UVf+0hZnirlk8o8T+4MrK6mr60WkH/I=
golang.org/x/sync v0.18.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4=
golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20190916202348-b4ddaad3f8a3/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201204225414-ed752295db88/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -204,14 +228,14 @@ golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc=
golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/sys v0.39.0 h1:CvCKL8MeisomCi6qNZ+wbb0DN9E5AATixKsvNtMoMFk=
golang.org/x/sys v0.39.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=
golang.org/x/text v0.32.0 h1:ZD01bjUt1FQ9WJ0ClOL5vxgxOI/sVCNgX1YtKwcY0mU=
golang.org/x/text v0.32.0/go.mod h1:o/rUWzghvpD5TXrTIBuJU77MTaN0ljMWE47kxGJQ7jY=
golang.org/x/time v0.14.0 h1:MRx4UaLrDotUKUdCIqzPC48t1Y9hANFKIRpNx+Te8PI=
golang.org/x/time v0.14.0/go.mod h1:eL/Oa2bBBK0TkX57Fyni+NgnyQQN4LitPmob2Hjnqw4=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=


@@ -0,0 +1,176 @@
package dto

import (
    "testing"
)

// --- GetPage ---

func TestGetPage_Zero_Returns1(t *testing.T) {
    p := &PaginationParams{Page: 0}
    if got := p.GetPage(); got != 1 {
        t.Errorf("GetPage(0) = %d, want 1", got)
    }
}

func TestGetPage_Negative_Returns1(t *testing.T) {
    p := &PaginationParams{Page: -5}
    if got := p.GetPage(); got != 1 {
        t.Errorf("GetPage(-5) = %d, want 1", got)
    }
}

func TestGetPage_Valid_ReturnsValue(t *testing.T) {
    p := &PaginationParams{Page: 3}
    if got := p.GetPage(); got != 3 {
        t.Errorf("GetPage(3) = %d, want 3", got)
    }
}

// --- GetPerPage ---

func TestGetPerPage_Zero_Returns20(t *testing.T) {
    p := &PaginationParams{PerPage: 0}
    if got := p.GetPerPage(); got != 20 {
        t.Errorf("GetPerPage(0) = %d, want 20", got)
    }
}

func TestGetPerPage_Negative_Returns20(t *testing.T) {
    p := &PaginationParams{PerPage: -1}
    if got := p.GetPerPage(); got != 20 {
        t.Errorf("GetPerPage(-1) = %d, want 20", got)
    }
}

func TestGetPerPage_TooLarge_Returns10000(t *testing.T) {
    p := &PaginationParams{PerPage: 20000}
    if got := p.GetPerPage(); got != 10000 {
        t.Errorf("GetPerPage(20000) = %d, want 10000", got)
    }
}

func TestGetPerPage_Valid_ReturnsValue(t *testing.T) {
    p := &PaginationParams{PerPage: 50}
    if got := p.GetPerPage(); got != 50 {
        t.Errorf("GetPerPage(50) = %d, want 50", got)
    }
}

// --- GetOffset ---

func TestGetOffset_Page1_Returns0(t *testing.T) {
    p := &PaginationParams{Page: 1, PerPage: 20}
    if got := p.GetOffset(); got != 0 {
        t.Errorf("GetOffset(page=1, perPage=20) = %d, want 0", got)
    }
}

func TestGetOffset_Page3_PerPage10_Returns20(t *testing.T) {
    p := &PaginationParams{Page: 3, PerPage: 10}
    if got := p.GetOffset(); got != 20 {
        t.Errorf("GetOffset(page=3, perPage=10) = %d, want 20", got)
    }
}

func TestGetOffset_Defaults_Returns0(t *testing.T) {
    p := &PaginationParams{}
    if got := p.GetOffset(); got != 0 {
        t.Errorf("GetOffset(defaults) = %d, want 0", got)
    }
}

// --- GetSortDir ---

func TestGetSortDir_Asc(t *testing.T) {
    p := &PaginationParams{SortDir: "asc"}
    if got := p.GetSortDir(); got != "ASC" {
        t.Errorf("GetSortDir('asc') = %q, want 'ASC'", got)
    }
}

func TestGetSortDir_Desc(t *testing.T) {
    p := &PaginationParams{SortDir: "desc"}
    if got := p.GetSortDir(); got != "DESC" {
        t.Errorf("GetSortDir('desc') = %q, want 'DESC'", got)
    }
}

func TestGetSortDir_Empty_ReturnsDesc(t *testing.T) {
    p := &PaginationParams{SortDir: ""}
    if got := p.GetSortDir(); got != "DESC" {
        t.Errorf("GetSortDir('') = %q, want 'DESC'", got)
    }
}

func TestGetSortDir_Invalid_ReturnsDesc(t *testing.T) {
    p := &PaginationParams{SortDir: "RANDOM"}
    if got := p.GetSortDir(); got != "DESC" {
        t.Errorf("GetSortDir('RANDOM') = %q, want 'DESC'", got)
    }
}

// --- GetSafeSortBy ---

func TestGetSafeSortBy_Allowed(t *testing.T) {
    p := &PaginationParams{SortBy: "name"}
    got := p.GetSafeSortBy([]string{"name", "email"}, "id")
    if got != "name" {
        t.Errorf("GetSafeSortBy('name') = %q, want 'name'", got)
    }
}

func TestGetSafeSortBy_NotAllowed_ReturnsDefault(t *testing.T) {
    p := &PaginationParams{SortBy: "password"}
    got := p.GetSafeSortBy([]string{"name", "email"}, "id")
    if got != "id" {
        t.Errorf("GetSafeSortBy('password') = %q, want 'id'", got)
    }
}

func TestGetSafeSortBy_Empty_ReturnsDefault(t *testing.T) {
    p := &PaginationParams{SortBy: ""}
    got := p.GetSafeSortBy([]string{"name", "email"}, "id")
    if got != "id" {
        t.Errorf("GetSafeSortBy('') = %q, want 'id'", got)
    }
}

// --- NewPaginatedResponse ---

func TestNewPaginatedResponse_ExactPages(t *testing.T) {
    resp := NewPaginatedResponse([]string{"a", "b"}, 40, 1, 20)
    if resp.TotalPages != 2 {
        t.Errorf("TotalPages = %d, want 2", resp.TotalPages)
    }
    if resp.Total != 40 {
        t.Errorf("Total = %d, want 40", resp.Total)
    }
    if resp.Page != 1 {
        t.Errorf("Page = %d, want 1", resp.Page)
    }
    if resp.PerPage != 20 {
        t.Errorf("PerPage = %d, want 20", resp.PerPage)
    }
}

func TestNewPaginatedResponse_PartialLastPage(t *testing.T) {
    resp := NewPaginatedResponse(nil, 21, 1, 20)
    if resp.TotalPages != 2 {
        t.Errorf("TotalPages = %d, want 2", resp.TotalPages)
    }
}

func TestNewPaginatedResponse_SinglePage(t *testing.T) {
    resp := NewPaginatedResponse(nil, 5, 1, 20)
    if resp.TotalPages != 1 {
        t.Errorf("TotalPages = %d, want 1", resp.TotalPages)
    }
}

func TestNewPaginatedResponse_ZeroTotal(t *testing.T) {
    resp := NewPaginatedResponse(nil, 0, 1, 20)
    if resp.TotalPages != 0 {
        t.Errorf("TotalPages = %d, want 0", resp.TotalPages)
    }
}


@@ -0,0 +1,109 @@
package apperrors

import (
    "errors"
    "net/http"
    "testing"

    "github.com/stretchr/testify/assert"
)

func TestNotFound(t *testing.T) {
    err := NotFound("error.task_not_found")
    assert.Equal(t, http.StatusNotFound, err.Code)
    assert.Equal(t, "error.task_not_found", err.MessageKey)
    assert.Empty(t, err.Message)
    assert.Nil(t, err.Err)
}

func TestForbidden(t *testing.T) {
    err := Forbidden("error.residence_access_denied")
    assert.Equal(t, http.StatusForbidden, err.Code)
    assert.Equal(t, "error.residence_access_denied", err.MessageKey)
}

func TestBadRequest(t *testing.T) {
    err := BadRequest("error.invalid_request_body")
    assert.Equal(t, http.StatusBadRequest, err.Code)
    assert.Equal(t, "error.invalid_request_body", err.MessageKey)
}

func TestUnauthorized(t *testing.T) {
    err := Unauthorized("error.not_authenticated")
    assert.Equal(t, http.StatusUnauthorized, err.Code)
    assert.Equal(t, "error.not_authenticated", err.MessageKey)
}

func TestConflict(t *testing.T) {
    err := Conflict("error.email_taken")
    assert.Equal(t, http.StatusConflict, err.Code)
    assert.Equal(t, "error.email_taken", err.MessageKey)
}

func TestTooManyRequests(t *testing.T) {
    err := TooManyRequests("error.rate_limit_exceeded")
    assert.Equal(t, http.StatusTooManyRequests, err.Code)
    assert.Equal(t, "error.rate_limit_exceeded", err.MessageKey)
}

func TestInternal(t *testing.T) {
    underlying := errors.New("database connection failed")
    err := Internal(underlying)
    assert.Equal(t, http.StatusInternalServerError, err.Code)
    assert.Equal(t, "error.internal", err.MessageKey)
    assert.Equal(t, underlying, err.Err)
}

func TestAppError_Error_WithWrappedError(t *testing.T) {
    underlying := errors.New("connection refused")
    err := Internal(underlying).WithMessage("database error")
    assert.Equal(t, "database error: connection refused", err.Error())
}

func TestAppError_Error_WithMessageOnly(t *testing.T) {
    err := NotFound("error.task_not_found").WithMessage("Task not found")
    assert.Equal(t, "Task not found", err.Error())
}

func TestAppError_Error_MessageKeyFallback(t *testing.T) {
    err := NotFound("error.task_not_found")
    // No Message set, no Err set — should fall back to MessageKey
    assert.Equal(t, "error.task_not_found", err.Error())
}

func TestAppError_Unwrap(t *testing.T) {
    underlying := errors.New("wrapped error")
    err := Internal(underlying)
    assert.Equal(t, underlying, errors.Unwrap(err))
}

func TestAppError_Unwrap_Nil(t *testing.T) {
    err := NotFound("error.task_not_found")
    assert.Nil(t, errors.Unwrap(err))
}

func TestAppError_WithMessage(t *testing.T) {
    err := NotFound("error.task_not_found").WithMessage("custom message")
    assert.Equal(t, "custom message", err.Message)
    assert.Equal(t, "error.task_not_found", err.MessageKey)
}

func TestAppError_Wrap(t *testing.T) {
    underlying := errors.New("some error")
    err := BadRequest("error.invalid_request_body").Wrap(underlying)
    assert.Equal(t, underlying, err.Err)
    assert.Equal(t, http.StatusBadRequest, err.Code)
}

func TestAppError_ImplementsError(t *testing.T) {
    var err error = NotFound("error.task_not_found")
    assert.NotNil(t, err)
    assert.Equal(t, "error.task_not_found", err.Error())
}

func TestAppError_ErrorsAs(t *testing.T) {
    var appErr *AppError
    err := NotFound("error.task_not_found")
    assert.True(t, errors.As(err, &appErr))
    assert.Equal(t, http.StatusNotFound, appErr.Code)
}


@@ -134,17 +134,37 @@ type SecurityConfig struct {
    PasswordResetExpiry  time.Duration
    ConfirmationExpiry   time.Duration
    MaxPasswordResetRate int // per hour
    TokenExpiryDays      int // Number of days before auth tokens expire (default 90)
    TokenRefreshDays     int // Token must be at least this many days old before refresh (default 60)
}

// StorageConfig holds file storage settings
// StorageConfig holds file storage settings.
// When S3Endpoint is set, files are stored in S3-compatible storage (B2, MinIO).
// When S3Endpoint is empty, files are stored on the local filesystem using UploadDir.
type StorageConfig struct {
    UploadDir string // Directory to store uploaded files
    BaseURL   string // Public URL prefix for serving files (e.g., "/uploads")
    // Local filesystem settings
    UploadDir string // Directory to store uploaded files (local mode)
    BaseURL   string // Public URL prefix for serving files (e.g., "/uploads")

    // S3-compatible storage settings (B2, MinIO)
    S3Endpoint string // S3 endpoint (e.g., "s3.us-west-004.backblazeb2.com" or "minio:9000")
    S3KeyID    string // Access key ID
    S3AppKey   string // Secret access key
    S3Bucket   string // Bucket name
    S3UseSSL   bool   // Use HTTPS (true for B2, false for in-cluster MinIO)
    S3Region   string // Region (optional, defaults to "us-east-1")

    // Shared settings
    MaxFileSize   int64  // Max file size in bytes (default 10MB)
    AllowedTypes  string // Comma-separated MIME types
    EncryptionKey string // 64-char hex key for file encryption at rest (optional)
}

// IsS3 returns true if S3-compatible storage is configured
func (c *StorageConfig) IsS3() bool {
    return c.S3Endpoint != "" && c.S3KeyID != "" && c.S3AppKey != "" && c.S3Bucket != ""
}

// FeatureFlags holds kill switches for major subsystems.
// All default to true (enabled). Set to false via env vars to disable.
type FeatureFlags struct {
@@ -262,10 +282,18 @@ func Load() (*Config, error) {
            PasswordResetExpiry:  15 * time.Minute,
            ConfirmationExpiry:   24 * time.Hour,
            MaxPasswordResetRate: 3,
            TokenExpiryDays:      viper.GetInt("TOKEN_EXPIRY_DAYS"),
            TokenRefreshDays:     viper.GetInt("TOKEN_REFRESH_DAYS"),
        },
        Storage: StorageConfig{
            UploadDir:     viper.GetString("STORAGE_UPLOAD_DIR"),
            BaseURL:       viper.GetString("STORAGE_BASE_URL"),
            S3Endpoint:    viper.GetString("B2_ENDPOINT"),
            S3KeyID:       viper.GetString("B2_KEY_ID"),
            S3AppKey:      viper.GetString("B2_APP_KEY"),
            S3Bucket:      viper.GetString("B2_BUCKET_NAME"),
            S3UseSSL:      viper.GetString("STORAGE_USE_SSL") == "" || viper.GetBool("STORAGE_USE_SSL"),
            S3Region:      viper.GetString("B2_REGION"),
            MaxFileSize:   viper.GetInt64("STORAGE_MAX_FILE_SIZE"),
            AllowedTypes:  viper.GetString("STORAGE_ALLOWED_TYPES"),
            EncryptionKey: viper.GetString("STORAGE_ENCRYPTION_KEY"),
@@ -369,6 +397,10 @@ func setDefaults() {
    viper.SetDefault("OVERDUE_REMINDER_HOUR", 15) // 9:00 AM UTC
    viper.SetDefault("DAILY_DIGEST_HOUR", 3)     // 3:00 AM UTC

    // Token expiry defaults
    viper.SetDefault("TOKEN_EXPIRY_DAYS", 90)  // Tokens expire after 90 days
    viper.SetDefault("TOKEN_REFRESH_DAYS", 60) // Tokens can be refreshed after 60 days

    // Storage defaults
    viper.SetDefault("STORAGE_UPLOAD_DIR", "./uploads")
    viper.SetDefault("STORAGE_BASE_URL", "/uploads")

View File

@@ -0,0 +1,324 @@
package config
import (
"sync"
"testing"
"github.com/spf13/viper"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
// resetConfigState resets the package-level singleton so each test starts fresh.
func resetConfigState() {
cfg = nil
cfgOnce = sync.Once{}
viper.Reset()
}
func TestLoad_DefaultValues(t *testing.T) {
resetConfigState()
// Provide required SECRET_KEY so validation passes
t.Setenv("SECRET_KEY", "a-strong-secret-key-for-tests")
c, err := Load()
require.NoError(t, err)
// Server defaults
assert.Equal(t, 8000, c.Server.Port)
assert.False(t, c.Server.Debug)
assert.False(t, c.Server.DebugFixedCodes)
assert.Equal(t, "UTC", c.Server.Timezone)
assert.Equal(t, "/app/static", c.Server.StaticDir)
assert.Equal(t, "https://api.myhoneydue.com", c.Server.BaseURL)
// Database defaults
assert.Equal(t, "localhost", c.Database.Host)
assert.Equal(t, 5432, c.Database.Port)
assert.Equal(t, "postgres", c.Database.User)
assert.Equal(t, "honeydue", c.Database.Database)
assert.Equal(t, "disable", c.Database.SSLMode)
assert.Equal(t, 25, c.Database.MaxOpenConns)
assert.Equal(t, 10, c.Database.MaxIdleConns)
// Redis defaults
assert.Equal(t, "redis://localhost:6379/0", c.Redis.URL)
assert.Equal(t, 0, c.Redis.DB)
// Worker defaults
assert.Equal(t, 14, c.Worker.TaskReminderHour)
assert.Equal(t, 15, c.Worker.OverdueReminderHour)
assert.Equal(t, 3, c.Worker.DailyNotifHour)
// Token expiry defaults
assert.Equal(t, 90, c.Security.TokenExpiryDays)
assert.Equal(t, 60, c.Security.TokenRefreshDays)
// Feature flags default to true
assert.True(t, c.Features.PushEnabled)
assert.True(t, c.Features.EmailEnabled)
assert.True(t, c.Features.WebhooksEnabled)
assert.True(t, c.Features.OnboardingEmailsEnabled)
assert.True(t, c.Features.PDFReportsEnabled)
assert.True(t, c.Features.WorkerEnabled)
}
func TestLoad_EnvOverrides(t *testing.T) {
resetConfigState()
t.Setenv("SECRET_KEY", "a-strong-secret-key-for-tests")
t.Setenv("PORT", "9090")
t.Setenv("DEBUG", "true")
t.Setenv("DB_HOST", "db.example.com")
t.Setenv("DB_PORT", "5433")
t.Setenv("TOKEN_EXPIRY_DAYS", "180")
t.Setenv("TOKEN_REFRESH_DAYS", "120")
t.Setenv("FEATURE_PUSH_ENABLED", "false")
c, err := Load()
require.NoError(t, err)
assert.Equal(t, 9090, c.Server.Port)
assert.True(t, c.Server.Debug)
assert.Equal(t, "db.example.com", c.Database.Host)
assert.Equal(t, 5433, c.Database.Port)
assert.Equal(t, 180, c.Security.TokenExpiryDays)
assert.Equal(t, 120, c.Security.TokenRefreshDays)
assert.False(t, c.Features.PushEnabled)
}
func TestLoad_Validation_MissingSecretKey_Production(t *testing.T) {
// Test validate() directly to avoid the sync.Once mutex issue
// that occurs when Load() resets cfgOnce inside cfgOnce.Do()
cfg := &Config{
Server: ServerConfig{Debug: false},
Security: SecurityConfig{SecretKey: ""},
}
err := validate(cfg)
require.Error(t, err)
assert.Contains(t, err.Error(), "SECRET_KEY")
}
func TestLoad_Validation_MissingSecretKey_DebugMode(t *testing.T) {
resetConfigState()
t.Setenv("SECRET_KEY", "")
t.Setenv("DEBUG", "true")
c, err := Load()
require.NoError(t, err)
// In debug mode, a default key is assigned
assert.Equal(t, "change-me-in-production-secret-key-12345", c.Security.SecretKey)
}
func TestLoad_Validation_WeakSecretKey_Production(t *testing.T) {
// Test validate() directly to avoid the sync.Once mutex issue
cfg := &Config{
Server: ServerConfig{Debug: false},
Security: SecurityConfig{SecretKey: "password"},
}
err := validate(cfg)
require.Error(t, err)
assert.Contains(t, err.Error(), "well-known weak value")
}
func TestLoad_Validation_WeakSecretKey_DebugMode(t *testing.T) {
resetConfigState()
t.Setenv("SECRET_KEY", "secret")
t.Setenv("DEBUG", "true")
// In debug mode, weak keys produce a warning but no error
c, err := Load()
require.NoError(t, err)
assert.Equal(t, "secret", c.Security.SecretKey)
}
func TestLoad_Validation_EncryptionKey_Valid(t *testing.T) {
resetConfigState()
t.Setenv("SECRET_KEY", "a-strong-secret-key-for-tests")
// Valid 64-char hex key (32 bytes)
t.Setenv("STORAGE_ENCRYPTION_KEY", "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef")
c, err := Load()
require.NoError(t, err)
assert.Equal(t, "0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef", c.Storage.EncryptionKey)
}
func TestLoad_Validation_EncryptionKey_WrongLength(t *testing.T) {
// Test validate() directly to avoid the sync.Once mutex issue
cfg := &Config{
Server: ServerConfig{Debug: false},
Security: SecurityConfig{SecretKey: "a-strong-secret-key-for-tests"},
Storage: StorageConfig{EncryptionKey: "tooshort"},
}
err := validate(cfg)
require.Error(t, err)
assert.Contains(t, err.Error(), "STORAGE_ENCRYPTION_KEY must be exactly 64 hex characters")
}
func TestLoad_Validation_EncryptionKey_InvalidHex(t *testing.T) {
// Test validate() directly to avoid the sync.Once mutex issue
cfg := &Config{
Server: ServerConfig{Debug: false},
Security: SecurityConfig{SecretKey: "a-strong-secret-key-for-tests"},
Storage: StorageConfig{EncryptionKey: "zzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzzz"},
}
err := validate(cfg)
require.Error(t, err)
assert.Contains(t, err.Error(), "invalid hex")
}
func TestLoad_DatabaseURL_Override(t *testing.T) {
resetConfigState()
t.Setenv("SECRET_KEY", "a-strong-secret-key-for-tests")
t.Setenv("DATABASE_URL", "postgres://myuser:mypass@dbhost:5433/mydb?sslmode=require")
c, err := Load()
require.NoError(t, err)
assert.Equal(t, "dbhost", c.Database.Host)
assert.Equal(t, 5433, c.Database.Port)
assert.Equal(t, "myuser", c.Database.User)
assert.Equal(t, "mypass", c.Database.Password)
assert.Equal(t, "mydb", c.Database.Database)
assert.Equal(t, "require", c.Database.SSLMode)
}
func TestDSN(t *testing.T) {
d := DatabaseConfig{
Host: "localhost",
Port: 5432,
User: "testuser",
Password: "Password123",
Database: "testdb",
SSLMode: "disable",
}
dsn := d.DSN()
assert.Contains(t, dsn, "host=localhost")
assert.Contains(t, dsn, "port=5432")
assert.Contains(t, dsn, "user=testuser")
assert.Contains(t, dsn, "password=Password123")
assert.Contains(t, dsn, "dbname=testdb")
assert.Contains(t, dsn, "sslmode=disable")
}
func TestMaskURLCredentials(t *testing.T) {
tests := []struct {
name string
input string
expected string
}{
{
name: "URL with password",
input: "postgres://user:secret@host:5432/db",
expected: "postgres://user:xxxxx@host:5432/db",
},
{
name: "URL without password",
input: "postgres://user@host:5432/db",
expected: "postgres://user@host:5432/db",
},
{
name: "URL without user info",
input: "postgres://host:5432/db",
expected: "postgres://host:5432/db",
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
result := MaskURLCredentials(tc.input)
assert.Equal(t, tc.expected, result)
})
}
}
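The three table cases above fully pin down the masking contract: only the password portion of the userinfo is replaced, and URLs without a password or without userinfo pass through unchanged. A self-contained sketch of one way to satisfy that contract with the standard library (the real implementation may differ):

```go
package main

import (
	"fmt"
	"net/url"
)

// maskURLCredentials replaces only the password in a URL's userinfo
// with "xxxxx"; anything unparseable or password-free is returned as-is.
func maskURLCredentials(raw string) string {
	u, err := url.Parse(raw)
	if err != nil || u.User == nil {
		return raw // no userinfo (or not a URL): nothing to mask
	}
	if _, hasPass := u.User.Password(); !hasPass {
		return raw // user without password stays untouched
	}
	u.User = url.UserPassword(u.User.Username(), "xxxxx")
	return u.String()
}

func main() {
	fmt.Println(maskURLCredentials("postgres://user:secret@host:5432/db"))
	// postgres://user:xxxxx@host:5432/db
}
```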
func TestParseCorsOrigins(t *testing.T) {
tests := []struct {
name string
input string
expected []string
}{
{"empty string", "", nil},
{"single origin", "https://example.com", []string{"https://example.com"}},
{"multiple origins", "https://a.com, https://b.com", []string{"https://a.com", "https://b.com"}},
{"whitespace trimmed", " https://a.com , https://b.com ", []string{"https://a.com", "https://b.com"}},
{"empty parts skipped", "https://a.com,,https://b.com", []string{"https://a.com", "https://b.com"}},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
result := parseCorsOrigins(tc.input)
assert.Equal(t, tc.expected, result)
})
}
}
func TestParseDatabaseURL(t *testing.T) {
tests := []struct {
name string
url string
wantHost string
wantPort int
wantUser string
wantPass string
wantDB string
wantSSL string
expectError bool
}{
{
name: "full URL",
url: "postgres://user:Password123@host:5433/mydb?sslmode=require",
wantHost: "host",
wantPort: 5433,
wantUser: "user",
wantPass: "Password123",
wantDB: "mydb",
wantSSL: "require",
},
{
name: "default port",
url: "postgres://user:pass@host/mydb",
wantHost: "host",
wantPort: 5432,
wantUser: "user",
wantPass: "pass",
wantDB: "mydb",
wantSSL: "",
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
result, err := parseDatabaseURL(tc.url)
if tc.expectError {
require.Error(t, err)
return
}
require.NoError(t, err)
assert.Equal(t, tc.wantHost, result.Host)
assert.Equal(t, tc.wantPort, result.Port)
assert.Equal(t, tc.wantUser, result.User)
assert.Equal(t, tc.wantPass, result.Password)
assert.Equal(t, tc.wantDB, result.Database)
assert.Equal(t, tc.wantSSL, result.SSLMode)
})
}
}
func TestIsWeakSecretKey(t *testing.T) {
assert.True(t, isWeakSecretKey("secret"))
assert.True(t, isWeakSecretKey("Secret")) // case-insensitive
assert.True(t, isWeakSecretKey(" changeme ")) // whitespace trimmed
assert.True(t, isWeakSecretKey("password"))
assert.True(t, isWeakSecretKey("change-me"))
assert.False(t, isWeakSecretKey("a-strong-unique-production-key"))
}
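The assertions above imply a trim-then-lowercase comparison against a denylist of well-known values. A sketch of that check — the real list almost certainly contains more entries than shown here:

```go
package main

import (
	"fmt"
	"strings"
)

// isWeakSecretKeySketch: normalise (trim whitespace, lowercase) and
// look the key up in a denylist of well-known weak values.
func isWeakSecretKeySketch(key string) bool {
	weak := map[string]bool{
		"secret": true, "password": true, "changeme": true, "change-me": true,
	}
	return weak[strings.ToLower(strings.TrimSpace(key))]
}

func main() {
	fmt.Println(isWeakSecretKeySketch(" changeme "))                 // true
	fmt.Println(isWeakSecretKeySketch("a-strong-unique-production-key")) // false
}
```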
func TestGet_ReturnsNilBeforeLoad(t *testing.T) {
resetConfigState()
assert.Nil(t, Get())
}

View File

@@ -0,0 +1,103 @@
package database
import (
"fmt"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"gorm.io/driver/sqlite"
"gorm.io/gorm"
)
// --- Unit tests for Paginate parameter clamping ---
func TestPaginate_PageZeroDefaultsToOne(t *testing.T) {
scope := Paginate(0, 10)
db := openTestDB(t)
createTestRows(t, db, 5)
var rows []testRow
err := db.Scopes(scope).Find(&rows).Error
require.NoError(t, err)
// page=0 normalised to page=1, pageSize=10 → should get all 5 rows
assert.Len(t, rows, 5)
}
func TestPaginate_PageSizeZeroDefaultsTo100(t *testing.T) {
scope := Paginate(1, 0)
db := openTestDB(t)
createTestRows(t, db, 5)
var rows []testRow
err := db.Scopes(scope).Find(&rows).Error
require.NoError(t, err)
// pageSize=0 normalised to 100, only 5 rows exist → 5 returned
assert.Len(t, rows, 5)
}
func TestPaginate_PageSizeOverMaxCappedAt1000(t *testing.T) {
scope := Paginate(1, 2000)
db := openTestDB(t)
createTestRows(t, db, 5)
var rows []testRow
err := db.Scopes(scope).Find(&rows).Error
require.NoError(t, err)
// pageSize=2000 capped to 1000, only 5 rows → 5 returned
assert.Len(t, rows, 5)
}
func TestPaginate_NormalValues(t *testing.T) {
scope := Paginate(1, 3)
db := openTestDB(t)
createTestRows(t, db, 10)
var rows []testRow
err := db.Scopes(scope).Order("id ASC").Find(&rows).Error
require.NoError(t, err)
assert.Len(t, rows, 3)
assert.Equal(t, "row_1", rows[0].Name)
assert.Equal(t, "row_3", rows[2].Name)
}
func TestPaginate_SQLiteIntegration_Page2Size10(t *testing.T) {
db := openTestDB(t)
createTestRows(t, db, 25)
scope := Paginate(2, 10)
var rows []testRow
err := db.Scopes(scope).Order("id ASC").Find(&rows).Error
require.NoError(t, err)
// Page 2 with size 10 → rows 11..20
assert.Len(t, rows, 10)
assert.Equal(t, "row_11", rows[0].Name)
assert.Equal(t, "row_20", rows[9].Name)
}
// --- helpers ---
type testRow struct {
ID uint `gorm:"primaryKey"`
Name string
}
func openTestDB(t *testing.T) *gorm.DB {
t.Helper()
db, err := gorm.Open(sqlite.Open(":memory:"), &gorm.Config{})
require.NoError(t, err)
require.NoError(t, db.AutoMigrate(&testRow{}))
return db
}
func createTestRows(t *testing.T, db *gorm.DB, n int) {
t.Helper()
for i := 1; i <= n; i++ {
require.NoError(t, db.Create(&testRow{Name: fmt.Sprintf("row_%d", i)}).Error)
}
}
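The tests above exercise `Paginate` only through its observable effects, so the clamping rules they imply can be isolated as a pure function: page below 1 becomes 1, pageSize below 1 defaults to 100, and pageSize above 1000 is capped. A sketch of that normalisation (`clampPagination` is a hypothetical extraction; the real scope presumably applies the result as GORM `Offset`/`Limit`):

```go
package main

import "fmt"

// clampPagination normalises pagination parameters the way the tests
// imply: page >= 1, pageSize defaulting to 100 and capped at 1000.
func clampPagination(page, pageSize int) (offset, limit int) {
	if page < 1 {
		page = 1
	}
	switch {
	case pageSize < 1:
		pageSize = 100
	case pageSize > 1000:
		pageSize = 1000
	}
	return (page - 1) * pageSize, pageSize
}

func main() {
	off, lim := clampPagination(2, 10)
	fmt.Println(off, lim) // 10 10 → rows 11..20, matching the SQLite test
}
```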

View File

@@ -0,0 +1,47 @@
package database
import (
"testing"
"time"
"github.com/stretchr/testify/assert"
)
func TestClassifyCompletion_CompletedAfterDue(t *testing.T) {
dueDate := time.Date(2025, 6, 1, 0, 0, 0, 0, time.UTC)
completedAt := time.Date(2025, 6, 5, 14, 30, 0, 0, time.UTC) // 4 days after due
result := classifyCompletion(completedAt, dueDate, 30)
assert.Equal(t, "overdue_tasks", result)
}
func TestClassifyCompletion_CompletedOnDueDate(t *testing.T) {
dueDate := time.Date(2025, 6, 15, 0, 0, 0, 0, time.UTC)
completedAt := time.Date(2025, 6, 15, 10, 0, 0, 0, time.UTC) // same day
result := classifyCompletion(completedAt, dueDate, 30)
// Completed on the due date: daysBefore == 0, which is <= threshold → due_soon_tasks
assert.Equal(t, "due_soon_tasks", result)
}
func TestClassifyCompletion_CompletedWithinThreshold(t *testing.T) {
dueDate := time.Date(2025, 7, 1, 0, 0, 0, 0, time.UTC)
completedAt := time.Date(2025, 6, 10, 8, 0, 0, 0, time.UTC) // 21 days before due
result := classifyCompletion(completedAt, dueDate, 30)
// 21 days before due, within 30-day threshold → due_soon_tasks
assert.Equal(t, "due_soon_tasks", result)
}
func TestClassifyCompletion_CompletedBeyondThreshold(t *testing.T) {
dueDate := time.Date(2025, 9, 1, 0, 0, 0, 0, time.UTC)
completedAt := time.Date(2025, 6, 1, 12, 0, 0, 0, time.UTC) // 92 days before due
result := classifyCompletion(completedAt, dueDate, 30)
// 92 days before due, beyond 30-day threshold → upcoming_tasks
assert.Equal(t, "upcoming_tasks", result)
}

View File

@@ -0,0 +1,31 @@
package database
import "sort"
// sortMigrationNames returns a sorted copy of the names slice.
func sortMigrationNames(names []string) []string {
sorted := make([]string, len(names))
copy(sorted, names)
sort.Strings(sorted)
return sorted
}
// buildAppliedSet converts a list of applied migrations to a lookup set.
func buildAppliedSet(applied []DataMigration) map[string]bool {
set := make(map[string]bool, len(applied))
for _, m := range applied {
set[m.Name] = true
}
return set
}
// filterPending returns names not present in the applied set.
func filterPending(names []string, applied map[string]bool) []string {
var pending []string
for _, name := range names {
if !applied[name] {
pending = append(pending, name)
}
}
return pending
}
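Taken together, the three helpers form the pending-migration pipeline: sort all known names, build a set from what the `data_migrations` table says is applied, then keep the difference. A runnable end-to-end sketch (`DataMigration` is stubbed down to just its `Name` field):

```go
package main

import (
	"fmt"
	"sort"
)

// DataMigration is a minimal stand-in for the model the helpers operate on.
type DataMigration struct{ Name string }

func sortMigrationNames(names []string) []string {
	sorted := make([]string, len(names))
	copy(sorted, names)
	sort.Strings(sorted)
	return sorted
}

func buildAppliedSet(applied []DataMigration) map[string]bool {
	set := make(map[string]bool, len(applied))
	for _, m := range applied {
		set[m.Name] = true
	}
	return set
}

func filterPending(names []string, applied map[string]bool) []string {
	var pending []string
	for _, name := range names {
		if !applied[name] {
			pending = append(pending, name)
		}
	}
	return pending
}

func main() {
	all := []string{"20250301_third", "20250101_first", "20250201_second"}
	applied := buildAppliedSet([]DataMigration{{Name: "20250101_first"}})
	fmt.Println(filterPending(sortMigrationNames(all), applied))
	// [20250201_second 20250301_third]
}
```

Sorting before filtering guarantees pending migrations run in name (i.e. timestamp-prefix) order regardless of how the names were discovered.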

View File

@@ -0,0 +1,82 @@
package database
import (
"testing"
"time"
"github.com/stretchr/testify/assert"
)
// --- sortMigrationNames ---
func TestSortMigrationNames_Alphabetical(t *testing.T) {
input := []string{"charlie", "alpha", "bravo"}
result := sortMigrationNames(input)
assert.Equal(t, []string{"alpha", "bravo", "charlie"}, result)
// Verify original slice is not mutated
assert.Equal(t, []string{"charlie", "alpha", "bravo"}, input)
}
func TestSortMigrationNames_Empty(t *testing.T) {
result := sortMigrationNames([]string{})
assert.Equal(t, []string{}, result)
assert.Len(t, result, 0)
}
// --- buildAppliedSet ---
func TestBuildAppliedSet_Multiple(t *testing.T) {
applied := []DataMigration{
{ID: 1, Name: "20250101_first", AppliedAt: time.Now()},
{ID: 2, Name: "20250201_second", AppliedAt: time.Now()},
{ID: 3, Name: "20250301_third", AppliedAt: time.Now()},
}
set := buildAppliedSet(applied)
assert.Len(t, set, 3)
assert.True(t, set["20250101_first"])
assert.True(t, set["20250201_second"])
assert.True(t, set["20250301_third"])
assert.False(t, set["nonexistent"])
}
func TestBuildAppliedSet_Empty(t *testing.T) {
set := buildAppliedSet([]DataMigration{})
assert.Len(t, set, 0)
}
// --- filterPending ---
func TestFilterPending_SomePending(t *testing.T) {
names := []string{"20250101_first", "20250201_second", "20250301_third"}
applied := map[string]bool{
"20250101_first": true,
}
pending := filterPending(names, applied)
assert.Equal(t, []string{"20250201_second", "20250301_third"}, pending)
}
func TestFilterPending_AllApplied(t *testing.T) {
names := []string{"20250101_first", "20250201_second"}
applied := map[string]bool{
"20250101_first": true,
"20250201_second": true,
}
pending := filterPending(names, applied)
assert.Nil(t, pending)
}
func TestFilterPending_NoneApplied(t *testing.T) {
names := []string{"20250101_first", "20250201_second", "20250301_third"}
applied := map[string]bool{}
pending := filterPending(names, applied)
assert.Equal(t, []string{"20250101_first", "20250201_second", "20250301_third"}, pending)
}

View File

@@ -11,7 +11,7 @@ type LoginRequest struct {
type RegisterRequest struct {
Username string `json:"username" validate:"required,min=3,max=150"`
Email string `json:"email" validate:"required,email,max=254"`
Password string `json:"password" validate:"required,min=8"`
Password string `json:"password" validate:"required,min=8,password_complexity"`
FirstName string `json:"first_name" validate:"max=150"`
LastName string `json:"last_name" validate:"max=150"`
}
@@ -35,7 +35,7 @@ type VerifyResetCodeRequest struct {
// ResetPasswordRequest represents the reset password request body
type ResetPasswordRequest struct {
ResetToken string `json:"reset_token" validate:"required"`
NewPassword string `json:"new_password" validate:"required,min=8"`
NewPassword string `json:"new_password" validate:"required,min=8,password_complexity"`
}
// UpdateProfileRequest represents the profile update request body

View File

@@ -0,0 +1,130 @@
package requests
import (
"encoding/json"
"testing"
"time"
)
func TestFlexibleDate_UnmarshalJSON_DateOnly(t *testing.T) {
var fd FlexibleDate
err := fd.UnmarshalJSON([]byte(`"2025-11-27"`))
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
want := time.Date(2025, 11, 27, 0, 0, 0, 0, time.UTC)
if !fd.Time.Equal(want) {
t.Errorf("got %v, want %v", fd.Time, want)
}
}
func TestFlexibleDate_UnmarshalJSON_RFC3339(t *testing.T) {
var fd FlexibleDate
err := fd.UnmarshalJSON([]byte(`"2025-11-27T15:30:00Z"`))
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
want := time.Date(2025, 11, 27, 15, 30, 0, 0, time.UTC)
if !fd.Time.Equal(want) {
t.Errorf("got %v, want %v", fd.Time, want)
}
}
func TestFlexibleDate_UnmarshalJSON_Null(t *testing.T) {
var fd FlexibleDate
err := fd.UnmarshalJSON([]byte(`null`))
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if !fd.Time.IsZero() {
t.Errorf("expected zero time, got %v", fd.Time)
}
}
func TestFlexibleDate_UnmarshalJSON_EmptyString(t *testing.T) {
var fd FlexibleDate
err := fd.UnmarshalJSON([]byte(`""`))
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if !fd.Time.IsZero() {
t.Errorf("expected zero time, got %v", fd.Time)
}
}
func TestFlexibleDate_UnmarshalJSON_Invalid(t *testing.T) {
var fd FlexibleDate
err := fd.UnmarshalJSON([]byte(`"not-a-date"`))
if err == nil {
t.Fatal("expected error for invalid date, got nil")
}
}
func TestFlexibleDate_MarshalJSON_Valid(t *testing.T) {
fd := FlexibleDate{Time: time.Date(2025, 11, 27, 15, 30, 0, 0, time.UTC)}
data, err := fd.MarshalJSON()
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
var s string
if err := json.Unmarshal(data, &s); err != nil {
t.Fatalf("result is not a JSON string: %v", err)
}
want := "2025-11-27T15:30:00Z"
if s != want {
t.Errorf("got %q, want %q", s, want)
}
}
func TestFlexibleDate_MarshalJSON_Zero(t *testing.T) {
fd := FlexibleDate{}
data, err := fd.MarshalJSON()
if err != nil {
t.Fatalf("unexpected error: %v", err)
}
if string(data) != "null" {
t.Errorf("got %s, want null", string(data))
}
}
func TestFlexibleDate_ToTimePtr_Valid(t *testing.T) {
fd := &FlexibleDate{Time: time.Date(2025, 11, 27, 0, 0, 0, 0, time.UTC)}
ptr := fd.ToTimePtr()
if ptr == nil {
t.Fatal("expected non-nil pointer")
}
if !ptr.Equal(fd.Time) {
t.Errorf("got %v, want %v", *ptr, fd.Time)
}
}
func TestFlexibleDate_ToTimePtr_Zero(t *testing.T) {
fd := &FlexibleDate{}
ptr := fd.ToTimePtr()
if ptr != nil {
t.Errorf("expected nil, got %v", *ptr)
}
}
func TestFlexibleDate_ToTimePtr_NilReceiver(t *testing.T) {
var fd *FlexibleDate
ptr := fd.ToTimePtr()
if ptr != nil {
t.Errorf("expected nil for nil receiver, got %v", *ptr)
}
}
func TestFlexibleDate_RoundTrip(t *testing.T) {
original := FlexibleDate{Time: time.Date(2025, 6, 15, 10, 0, 0, 0, time.UTC)}
data, err := original.MarshalJSON()
if err != nil {
t.Fatalf("marshal error: %v", err)
}
var restored FlexibleDate
if err := restored.UnmarshalJSON(data); err != nil {
t.Fatalf("unmarshal error: %v", err)
}
if !original.Time.Equal(restored.Time) {
t.Errorf("round-trip mismatch: original %v, restored %v", original.Time, restored.Time)
}
}

View File

@@ -25,6 +25,22 @@ type CreateResidenceRequest struct {
PurchaseDate *time.Time `json:"purchase_date"`
PurchasePrice *decimal.Decimal `json:"purchase_price"`
IsPrimary *bool `json:"is_primary"`
// Home Profile
HeatingType *string `json:"heating_type" validate:"omitempty,oneof=gas_furnace electric_furnace heat_pump boiler radiant other"`
CoolingType *string `json:"cooling_type" validate:"omitempty,oneof=central_ac window_ac heat_pump evaporative none other"`
WaterHeaterType *string `json:"water_heater_type" validate:"omitempty,oneof=tank_gas tank_electric tankless_gas tankless_electric heat_pump solar other"`
RoofType *string `json:"roof_type" validate:"omitempty,oneof=asphalt_shingle metal tile slate wood_shake flat other"`
HasPool *bool `json:"has_pool"`
HasSprinklerSystem *bool `json:"has_sprinkler_system"`
HasSeptic *bool `json:"has_septic"`
HasFireplace *bool `json:"has_fireplace"`
HasGarage *bool `json:"has_garage"`
HasBasement *bool `json:"has_basement"`
HasAttic *bool `json:"has_attic"`
ExteriorType *string `json:"exterior_type" validate:"omitempty,oneof=brick vinyl_siding wood_siding stucco stone fiber_cement other"`
FlooringPrimary *string `json:"flooring_primary" validate:"omitempty,oneof=hardwood laminate tile carpet vinyl concrete other"`
LandscapingType *string `json:"landscaping_type" validate:"omitempty,oneof=lawn desert xeriscape garden mixed none other"`
}
// UpdateResidenceRequest represents the request to update a residence
@@ -46,6 +62,22 @@ type UpdateResidenceRequest struct {
PurchaseDate *time.Time `json:"purchase_date"`
PurchasePrice *decimal.Decimal `json:"purchase_price"`
IsPrimary *bool `json:"is_primary"`
// Home Profile
HeatingType *string `json:"heating_type" validate:"omitempty,oneof=gas_furnace electric_furnace heat_pump boiler radiant other"`
CoolingType *string `json:"cooling_type" validate:"omitempty,oneof=central_ac window_ac heat_pump evaporative none other"`
WaterHeaterType *string `json:"water_heater_type" validate:"omitempty,oneof=tank_gas tank_electric tankless_gas tankless_electric heat_pump solar other"`
RoofType *string `json:"roof_type" validate:"omitempty,oneof=asphalt_shingle metal tile slate wood_shake flat other"`
HasPool *bool `json:"has_pool"`
HasSprinklerSystem *bool `json:"has_sprinkler_system"`
HasSeptic *bool `json:"has_septic"`
HasFireplace *bool `json:"has_fireplace"`
HasGarage *bool `json:"has_garage"`
HasBasement *bool `json:"has_basement"`
HasAttic *bool `json:"has_attic"`
ExteriorType *string `json:"exterior_type" validate:"omitempty,oneof=brick vinyl_siding wood_siding stucco stone fiber_cement other"`
FlooringPrimary *string `json:"flooring_primary" validate:"omitempty,oneof=hardwood laminate tile carpet vinyl concrete other"`
LandscapingType *string `json:"landscaping_type" validate:"omitempty,oneof=lawn desert xeriscape garden mixed none other"`
}
// JoinWithCodeRequest represents the request to join a residence via share code

View File

@@ -79,6 +79,12 @@ type ResetPasswordResponse struct {
Message string `json:"message"`
}
// RefreshTokenResponse represents the token refresh response
type RefreshTokenResponse struct {
Token string `json:"token"`
Message string `json:"message"`
}
// MessageResponse represents a simple message response
type MessageResponse struct {
Message string `json:"message"`

View File

@@ -46,6 +46,22 @@ type ResidenceResponse struct {
Description string `json:"description"`
PurchaseDate *time.Time `json:"purchase_date"`
PurchasePrice *decimal.Decimal `json:"purchase_price"`
// Home Profile
HeatingType *string `json:"heating_type"`
CoolingType *string `json:"cooling_type"`
WaterHeaterType *string `json:"water_heater_type"`
RoofType *string `json:"roof_type"`
HasPool bool `json:"has_pool"`
HasSprinklerSystem bool `json:"has_sprinkler_system"`
HasSeptic bool `json:"has_septic"`
HasFireplace bool `json:"has_fireplace"`
HasGarage bool `json:"has_garage"`
HasBasement bool `json:"has_basement"`
HasAttic bool `json:"has_attic"`
ExteriorType *string `json:"exterior_type"`
FlooringPrimary *string `json:"flooring_primary"`
LandscapingType *string `json:"landscaping_type"`
IsPrimary bool `json:"is_primary"`
IsActive bool `json:"is_active"`
OverdueCount int `json:"overdue_count"`
@@ -184,9 +200,23 @@ func NewResidenceResponse(residence *models.Residence) ResidenceResponse {
YearBuilt: residence.YearBuilt,
Description: residence.Description,
PurchaseDate: residence.PurchaseDate,
PurchasePrice: residence.PurchasePrice,
IsPrimary: residence.IsPrimary,
IsActive: residence.IsActive,
PurchasePrice: residence.PurchasePrice,
HeatingType: residence.HeatingType,
CoolingType: residence.CoolingType,
WaterHeaterType: residence.WaterHeaterType,
RoofType: residence.RoofType,
HasPool: residence.HasPool,
HasSprinklerSystem: residence.HasSprinklerSystem,
HasSeptic: residence.HasSeptic,
HasFireplace: residence.HasFireplace,
HasGarage: residence.HasGarage,
HasBasement: residence.HasBasement,
HasAttic: residence.HasAttic,
ExteriorType: residence.ExteriorType,
FlooringPrimary: residence.FlooringPrimary,
LandscapingType: residence.LandscapingType,
IsPrimary: residence.IsPrimary,
IsActive: residence.IsActive,
CreatedAt: residence.CreatedAt,
UpdatedAt: residence.UpdatedAt,
}

View File

@@ -0,0 +1,833 @@
package responses
import (
"fmt"
"testing"
"time"
"github.com/shopspring/decimal"
"github.com/treytartt/honeydue-api/internal/models"
)
// --- helpers ---
func timePtr(t time.Time) *time.Time { return &t }
func uintPtr(v uint) *uint { return &v }
func intPtr(v int) *int { return &v }
func strPtr(v string) *string { return &v }
func float64Ptr(v float64) *float64 { return &v }
var fixedNow = time.Date(2025, 6, 15, 0, 0, 0, 0, time.UTC)
func makeUser() *models.User {
return &models.User{
ID: 1,
Username: "john",
Email: "john@example.com",
FirstName: "John",
LastName: "Doe",
IsActive: true,
DateJoined: fixedNow,
LastLogin: timePtr(fixedNow),
Profile: &models.UserProfile{
BaseModel: models.BaseModel{ID: 10},
UserID: 1,
Verified: true,
Bio: "hello",
},
}
}
func makeUserNoProfile() *models.User {
u := makeUser()
u.Profile = nil
return u
}
// ==================== auth.go ====================
func TestNewUserResponse_AllFields(t *testing.T) {
u := makeUser()
resp := NewUserResponse(u)
if resp.ID != 1 {
t.Errorf("ID = %d, want 1", resp.ID)
}
if resp.Username != "john" {
t.Errorf("Username = %q", resp.Username)
}
if !resp.Verified {
t.Error("Verified should be true when profile is verified")
}
if resp.LastLogin == nil {
t.Error("LastLogin should not be nil")
}
}
func TestNewUserResponse_NilProfile(t *testing.T) {
u := makeUserNoProfile()
resp := NewUserResponse(u)
if resp.Verified {
t.Error("Verified should be false when profile is nil")
}
}
func TestNewUserProfileResponse_Nil(t *testing.T) {
resp := NewUserProfileResponse(nil)
if resp != nil {
t.Error("expected nil for nil profile")
}
}
func TestNewUserProfileResponse_Valid(t *testing.T) {
p := &models.UserProfile{
BaseModel: models.BaseModel{ID: 5},
UserID: 1,
Verified: true,
Bio: "bio",
}
resp := NewUserProfileResponse(p)
if resp == nil {
t.Fatal("expected non-nil")
}
if resp.ID != 5 || resp.UserID != 1 || !resp.Verified || resp.Bio != "bio" {
t.Errorf("unexpected response: %+v", resp)
}
}
func TestNewCurrentUserResponse(t *testing.T) {
u := makeUser()
resp := NewCurrentUserResponse(u, "apple")
if resp.AuthProvider != "apple" {
t.Errorf("AuthProvider = %q, want apple", resp.AuthProvider)
}
if resp.Profile == nil {
t.Error("Profile should not be nil")
}
if resp.ID != 1 {
t.Errorf("ID = %d, want 1", resp.ID)
}
}
func TestNewLoginResponse(t *testing.T) {
u := makeUser()
resp := NewLoginResponse("tok123", u)
if resp.Token != "tok123" {
t.Errorf("Token = %q", resp.Token)
}
if resp.User.ID != 1 {
t.Errorf("User.ID = %d", resp.User.ID)
}
}
func TestNewRegisterResponse(t *testing.T) {
u := makeUser()
resp := NewRegisterResponse("tok456", u)
if resp.Token != "tok456" {
t.Errorf("Token = %q", resp.Token)
}
if resp.Message == "" {
t.Error("Message should not be empty")
}
}
func TestNewAppleSignInResponse(t *testing.T) {
u := makeUser()
resp := NewAppleSignInResponse("atok", u, true)
if !resp.IsNewUser {
t.Error("IsNewUser should be true")
}
if resp.Token != "atok" {
t.Errorf("Token = %q", resp.Token)
}
}
func TestNewGoogleSignInResponse(t *testing.T) {
u := makeUser()
resp := NewGoogleSignInResponse("gtok", u, false)
if resp.IsNewUser {
t.Error("IsNewUser should be false")
}
if resp.Token != "gtok" {
t.Errorf("Token = %q", resp.Token)
}
}
// ==================== task.go ====================
func makeTask() *models.Task {
due := time.Date(2025, 7, 1, 0, 0, 0, 0, time.UTC)
catID := uint(1)
priID := uint(2)
freqID := uint(3)
return &models.Task{
BaseModel: models.BaseModel{ID: 100, CreatedAt: fixedNow, UpdatedAt: fixedNow},
ResidenceID: 10,
CreatedByID: 1,
CreatedBy: *makeUser(),
Title: "Fix roof",
Description: "Repair leak",
CategoryID: &catID,
Category: &models.TaskCategory{BaseModel: models.BaseModel{ID: catID}, Name: "Exterior", Icon: "roof", Color: "#FF0000", DisplayOrder: 1},
PriorityID: &priID,
Priority: &models.TaskPriority{BaseModel: models.BaseModel{ID: priID}, Name: "High", Level: 3, Color: "#FF0000", DisplayOrder: 1},
FrequencyID: &freqID,
Frequency: &models.TaskFrequency{BaseModel: models.BaseModel{ID: freqID}, Name: "Monthly", Days: intPtr(30), DisplayOrder: 1},
DueDate: &due,
}
}
func TestNewTaskResponse_BasicFields(t *testing.T) {
task := makeTask()
resp := NewTaskResponseWithTime(task, 30, fixedNow)
if resp.ID != 100 {
t.Errorf("ID = %d", resp.ID)
}
if resp.Title != "Fix roof" {
t.Errorf("Title = %q", resp.Title)
}
if resp.CreatedBy == nil {
t.Error("CreatedBy should not be nil")
}
if resp.Category == nil {
t.Error("Category should not be nil")
}
if resp.Priority == nil {
t.Error("Priority should not be nil")
}
if resp.Frequency == nil {
t.Error("Frequency should not be nil")
}
if resp.KanbanColumn == "" {
t.Error("KanbanColumn should not be empty")
}
}
func TestNewTaskResponse_NilAssociations(t *testing.T) {
task := &models.Task{
BaseModel: models.BaseModel{ID: 200},
ResidenceID: 10,
CreatedByID: 1,
Title: "Simple task",
}
resp := NewTaskResponseWithTime(task, 30, fixedNow)
if resp.CreatedBy != nil {
t.Error("CreatedBy should be nil when CreatedBy.ID is 0")
}
if resp.Category != nil {
t.Error("Category should be nil")
}
if resp.Priority != nil {
t.Error("Priority should be nil")
}
if resp.Frequency != nil {
t.Error("Frequency should be nil")
}
if resp.AssignedTo != nil {
t.Error("AssignedTo should be nil")
}
}
func TestNewTaskResponse_WithCompletions(t *testing.T) {
task := makeTask()
task.Completions = []models.TaskCompletion{
{BaseModel: models.BaseModel{ID: 1}, TaskID: 100, CompletedAt: fixedNow, CompletedByID: 1},
{BaseModel: models.BaseModel{ID: 2}, TaskID: 100, CompletedAt: fixedNow, CompletedByID: 1},
}
resp := NewTaskResponseWithTime(task, 30, fixedNow)
if resp.CompletionCount != 2 {
t.Errorf("CompletionCount = %d, want 2", resp.CompletionCount)
}
}
func TestNewTaskResponseWithTime_KanbanColumn(t *testing.T) {
task := makeTask()
// due date is July 1, now is June 15 → 16 days away → due_soon (within 30 days)
resp := NewTaskResponseWithTime(task, 30, fixedNow)
if resp.KanbanColumn == "" {
t.Error("KanbanColumn should be set")
}
}
func TestNewTaskListResponse(t *testing.T) {
tasks := []models.Task{
{BaseModel: models.BaseModel{ID: 1}, Title: "A"},
{BaseModel: models.BaseModel{ID: 2}, Title: "B"},
}
results := NewTaskListResponse(tasks)
if len(results) != 2 {
t.Errorf("len = %d, want 2", len(results))
}
}
func TestNewTaskListResponse_Empty(t *testing.T) {
results := NewTaskListResponse([]models.Task{})
if len(results) != 0 {
t.Errorf("len = %d, want 0", len(results))
}
}
func TestNewTaskCompletionResponse_WithImages(t *testing.T) {
c := &models.TaskCompletion{
BaseModel: models.BaseModel{ID: 50},
TaskID: 100,
CompletedByID: 1,
CompletedBy: *makeUser(),
CompletedAt: fixedNow,
Notes: "done",
Images: []models.TaskCompletionImage{
{BaseModel: models.BaseModel{ID: 1}, ImageURL: "http://img1.jpg", Caption: "before"},
{BaseModel: models.BaseModel{ID: 2}, ImageURL: "http://img2.jpg", Caption: "after"},
},
}
resp := NewTaskCompletionResponse(c)
if resp.CompletedBy == nil {
t.Error("CompletedBy should not be nil")
}
if len(resp.Images) != 2 {
t.Errorf("Images len = %d, want 2", len(resp.Images))
}
if resp.Images[0].MediaURL != "/api/media/completion-image/1" {
t.Errorf("MediaURL = %q", resp.Images[0].MediaURL)
}
}
func TestNewTaskCompletionResponse_EmptyImages(t *testing.T) {
c := &models.TaskCompletion{
BaseModel: models.BaseModel{ID: 51},
TaskID: 100,
CompletedByID: 1,
CompletedAt: fixedNow,
}
resp := NewTaskCompletionResponse(c)
if resp.Images == nil {
t.Error("Images should be empty slice, not nil")
}
if len(resp.Images) != 0 {
t.Errorf("Images len = %d, want 0", len(resp.Images))
}
}
func TestNewKanbanBoardResponse(t *testing.T) {
board := &models.KanbanBoard{
Columns: []models.KanbanColumn{
{
Name: "overdue",
DisplayName: "Overdue",
Color: "#FF0000",
Tasks: []models.Task{{BaseModel: models.BaseModel{ID: 1}, Title: "A"}},
Count: 1,
},
},
DaysThreshold: 30,
}
resp := NewKanbanBoardResponse(board, 10, fixedNow)
if len(resp.Columns) != 1 {
t.Fatalf("Columns len = %d", len(resp.Columns))
}
if resp.ResidenceID != "10" {
t.Errorf("ResidenceID = %q, want '10'", resp.ResidenceID)
}
if resp.Columns[0].Count != 1 {
t.Errorf("Count = %d", resp.Columns[0].Count)
}
}
func TestNewKanbanBoardResponseForAll(t *testing.T) {
board := &models.KanbanBoard{
Columns: []models.KanbanColumn{},
DaysThreshold: 30,
}
resp := NewKanbanBoardResponseForAll(board, fixedNow)
if resp.ResidenceID != "all" {
t.Errorf("ResidenceID = %q, want 'all'", resp.ResidenceID)
}
}
func TestDetermineKanbanColumn_Delegates(t *testing.T) {
task := &models.Task{
BaseModel: models.BaseModel{ID: 1},
Title: "test",
}
col := DetermineKanbanColumn(task, 30)
if col == "" {
t.Error("expected non-empty column")
}
}
func TestNewTaskCompletionWithTaskResponse(t *testing.T) {
c := &models.TaskCompletion{
BaseModel: models.BaseModel{ID: 1},
TaskID: 100,
CompletedByID: 1,
CompletedAt: fixedNow,
}
task := makeTask()
resp := NewTaskCompletionWithTaskResponseWithTime(c, task, 30, fixedNow)
if resp.Task == nil {
t.Error("Task should not be nil")
}
if resp.Task.ID != 100 {
t.Errorf("Task.ID = %d", resp.Task.ID)
}
}
func TestNewTaskCompletionWithTaskResponse_NilTask(t *testing.T) {
c := &models.TaskCompletion{
BaseModel: models.BaseModel{ID: 1},
TaskID: 100,
CompletedByID: 1,
CompletedAt: fixedNow,
}
resp := NewTaskCompletionWithTaskResponseWithTime(c, nil, 30, fixedNow)
if resp.Task != nil {
t.Error("Task should be nil")
}
}
func TestNewTaskCompletionListResponse(t *testing.T) {
completions := []models.TaskCompletion{
{BaseModel: models.BaseModel{ID: 1}, TaskID: 100, CompletedAt: fixedNow, CompletedByID: 1},
}
results := NewTaskCompletionListResponse(completions)
if len(results) != 1 {
t.Errorf("len = %d", len(results))
}
}
func TestNewTaskCategoryResponse_Nil(t *testing.T) {
if NewTaskCategoryResponse(nil) != nil {
t.Error("expected nil")
}
}
func TestNewTaskPriorityResponse_Nil(t *testing.T) {
if NewTaskPriorityResponse(nil) != nil {
t.Error("expected nil")
}
}
func TestNewTaskFrequencyResponse_Nil(t *testing.T) {
if NewTaskFrequencyResponse(nil) != nil {
t.Error("expected nil")
}
}
func TestNewTaskUserResponse_Nil(t *testing.T) {
if NewTaskUserResponse(nil) != nil {
t.Error("expected nil")
}
}
// ==================== contractor.go ====================
func makeContractor() *models.Contractor {
resID := uint(10)
return &models.Contractor{
BaseModel: models.BaseModel{ID: 5, CreatedAt: fixedNow, UpdatedAt: fixedNow},
ResidenceID: &resID,
CreatedByID: 1,
CreatedBy: *makeUser(),
Name: "Bob's Plumbing",
Company: "Bob Co",
Phone: "555-1234",
Email: "bob@plumb.com",
Rating: float64Ptr(4.5),
IsFavorite: true,
IsActive: true,
Specialties: []models.ContractorSpecialty{
{BaseModel: models.BaseModel{ID: 1}, Name: "Plumbing", Icon: "wrench", DisplayOrder: 1},
},
Tasks: []models.Task{{BaseModel: models.BaseModel{ID: 1}}, {BaseModel: models.BaseModel{ID: 2}}},
}
}
func TestNewContractorResponse_BasicFields(t *testing.T) {
c := makeContractor()
resp := NewContractorResponse(c)
if resp.ID != 5 {
t.Errorf("ID = %d", resp.ID)
}
if resp.Name != "Bob's Plumbing" {
t.Errorf("Name = %q", resp.Name)
}
if resp.AddedBy != 1 {
t.Errorf("AddedBy = %d, want 1", resp.AddedBy)
}
if resp.CreatedBy == nil {
t.Error("CreatedBy should not be nil")
}
if resp.TaskCount != 2 {
t.Errorf("TaskCount = %d, want 2", resp.TaskCount)
}
}
func TestNewContractorResponse_WithSpecialties(t *testing.T) {
c := makeContractor()
resp := NewContractorResponse(c)
if len(resp.Specialties) != 1 {
t.Fatalf("Specialties len = %d", len(resp.Specialties))
}
if resp.Specialties[0].Name != "Plumbing" {
t.Errorf("Specialty name = %q", resp.Specialties[0].Name)
}
}
func TestNewContractorListResponse(t *testing.T) {
contractors := []models.Contractor{*makeContractor()}
results := NewContractorListResponse(contractors)
if len(results) != 1 {
t.Errorf("len = %d", len(results))
}
}
func TestNewContractorUserResponse_Nil(t *testing.T) {
if NewContractorUserResponse(nil) != nil {
t.Error("expected nil")
}
}
func TestNewContractorSpecialtyResponse(t *testing.T) {
s := &models.ContractorSpecialty{
BaseModel: models.BaseModel{ID: 1},
Name: "Electrical",
Description: "Electrical work",
Icon: "bolt",
DisplayOrder: 2,
}
resp := NewContractorSpecialtyResponse(s)
if resp.Name != "Electrical" || resp.Icon != "bolt" {
t.Errorf("unexpected: %+v", resp)
}
}
// ==================== document.go ====================
func makeDocument() *models.Document {
price := decimal.NewFromFloat(99.99)
return &models.Document{
BaseModel: models.BaseModel{ID: 20, CreatedAt: fixedNow, UpdatedAt: fixedNow},
ResidenceID: 10,
CreatedByID: 1,
CreatedBy: *makeUser(),
Title: "Warranty",
Description: "Roof warranty",
DocumentType: "warranty",
FileName: "warranty.pdf",
FileSize: func() *int64 { v := int64(1024); return &v }(),
MimeType: "application/pdf",
PurchasePrice: &price,
IsActive: true,
Images: []models.DocumentImage{
{BaseModel: models.BaseModel{ID: 1}, ImageURL: "http://img.jpg", Caption: "page 1"},
},
}
}
func TestNewDocumentResponse_MediaURL(t *testing.T) {
d := makeDocument()
resp := NewDocumentResponse(d)
want := fmt.Sprintf("/api/media/document/%d", d.ID)
if resp.MediaURL != want {
t.Errorf("MediaURL = %q, want %q", resp.MediaURL, want)
}
if resp.Residence != resp.ResidenceID {
t.Error("Residence alias should equal ResidenceID")
}
}
func TestNewDocumentResponse_WithImages(t *testing.T) {
d := makeDocument()
resp := NewDocumentResponse(d)
if len(resp.Images) != 1 {
t.Fatalf("Images len = %d", len(resp.Images))
}
if resp.Images[0].MediaURL != "/api/media/document-image/1" {
t.Errorf("Image MediaURL = %q", resp.Images[0].MediaURL)
}
}
func TestNewDocumentResponse_EmptyImageURL(t *testing.T) {
d := makeDocument()
d.Images = []models.DocumentImage{
{BaseModel: models.BaseModel{ID: 5}, ImageURL: "", Caption: "missing"},
}
resp := NewDocumentResponse(d)
if resp.Images[0].Error != "image source URL is missing" {
t.Errorf("Error = %q", resp.Images[0].Error)
}
}
func TestNewDocumentListResponse(t *testing.T) {
docs := []models.Document{*makeDocument()}
results := NewDocumentListResponse(docs)
if len(results) != 1 {
t.Errorf("len = %d", len(results))
}
}
func TestNewDocumentUserResponse_Nil(t *testing.T) {
if NewDocumentUserResponse(nil) != nil {
t.Error("expected nil")
}
}
// ==================== residence.go ====================
func makeResidence() *models.Residence {
propTypeID := uint(1)
return &models.Residence{
BaseModel: models.BaseModel{ID: 10, CreatedAt: fixedNow, UpdatedAt: fixedNow},
OwnerID: 1,
Owner: *makeUser(),
Name: "My House",
PropertyTypeID: &propTypeID,
PropertyType: &models.ResidenceType{BaseModel: models.BaseModel{ID: 1}, Name: "House"},
StreetAddress: "123 Main St",
City: "Springfield",
StateProvince: "IL",
PostalCode: "62701",
Country: "USA",
Bedrooms: intPtr(3),
IsPrimary: true,
IsActive: true,
HasPool: true,
HeatingType: strPtr("central"),
Users: []models.User{
{ID: 1, Username: "john", Email: "john@example.com"},
{ID: 2, Username: "jane", Email: "jane@example.com"},
},
}
}
func TestNewResidenceResponse_AllFields(t *testing.T) {
r := makeResidence()
resp := NewResidenceResponse(r)
if resp.ID != 10 {
t.Errorf("ID = %d", resp.ID)
}
if resp.Name != "My House" {
t.Errorf("Name = %q", resp.Name)
}
if resp.Owner == nil {
t.Error("Owner should not be nil")
}
if resp.PropertyType == nil {
t.Error("PropertyType should not be nil")
}
if !resp.HasPool {
t.Error("HasPool should be true")
}
if resp.HeatingType == nil || *resp.HeatingType != "central" {
t.Error("HeatingType should be 'central'")
}
}
func TestNewResidenceResponse_WithUsers(t *testing.T) {
r := makeResidence()
resp := NewResidenceResponse(r)
if len(resp.Users) != 2 {
t.Errorf("Users len = %d, want 2", len(resp.Users))
}
}
func TestNewResidenceResponse_NoUsers(t *testing.T) {
r := makeResidence()
r.Users = nil
resp := NewResidenceResponse(r)
if resp.Users == nil {
t.Error("Users should be empty slice, not nil")
}
if len(resp.Users) != 0 {
t.Errorf("Users len = %d, want 0", len(resp.Users))
}
}
func TestNewResidenceListResponse(t *testing.T) {
residences := []models.Residence{*makeResidence()}
results := NewResidenceListResponse(residences)
if len(results) != 1 {
t.Errorf("len = %d", len(results))
}
}
func TestNewResidenceUserResponse_Nil(t *testing.T) {
if NewResidenceUserResponse(nil) != nil {
t.Error("expected nil")
}
}
func TestNewResidenceTypeResponse_Nil(t *testing.T) {
if NewResidenceTypeResponse(nil) != nil {
t.Error("expected nil")
}
}
func TestNewShareCodeResponse(t *testing.T) {
sc := &models.ResidenceShareCode{
BaseModel: models.BaseModel{ID: 1, CreatedAt: fixedNow},
Code: "ABC123",
ResidenceID: 10,
CreatedByID: 1,
IsActive: true,
ExpiresAt: timePtr(fixedNow.Add(24 * time.Hour)),
}
resp := NewShareCodeResponse(sc)
if resp.Code != "ABC123" {
t.Errorf("Code = %q", resp.Code)
}
if resp.ResidenceID != 10 {
t.Errorf("ResidenceID = %d", resp.ResidenceID)
}
}
// ==================== task_template.go ====================
func TestParseTags_Empty(t *testing.T) {
result := parseTags("")
if len(result) != 0 {
t.Errorf("len = %d, want 0", len(result))
}
}
func TestParseTags_Multiple(t *testing.T) {
result := parseTags("plumbing,electrical,roofing")
if len(result) != 3 {
t.Errorf("len = %d, want 3", len(result))
}
if result[0] != "plumbing" || result[1] != "electrical" || result[2] != "roofing" {
t.Errorf("unexpected tags: %v", result)
}
}
func TestParseTags_Whitespace(t *testing.T) {
result := parseTags(" plumbing , , electrical ")
if len(result) != 2 {
t.Errorf("len = %d, want 2 (should skip empty after trim)", len(result))
}
if result[0] != "plumbing" || result[1] != "electrical" {
t.Errorf("unexpected tags: %v", result)
}
}
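The parseTags behavior these tests pin down — split on commas, trim whitespace, drop entries left empty after the trim — can be sketched as follows. A stdlib-only sketch of the expected behavior, not necessarily the repository's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// parseTags splits a comma-separated tag string, trimming whitespace and
// dropping entries that are empty after the trim. Sketch matching the
// test expectations above; the real implementation may differ.
func parseTags(s string) []string {
	parts := strings.Split(s, ",")
	tags := make([]string, 0, len(parts))
	for _, p := range parts {
		if t := strings.TrimSpace(p); t != "" {
			tags = append(tags, t)
		}
	}
	return tags
}

func main() {
	fmt.Println(parseTags(" plumbing , , electrical ")) // [plumbing electrical]
}
```

Returning an empty (non-nil) slice for empty input is what makes TestParseTags_Empty pass: strings.Split("") yields [""], which the trim-and-skip loop discards.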
func makeTemplate(catID *uint, cat *models.TaskCategory) models.TaskTemplate {
return models.TaskTemplate{
BaseModel: models.BaseModel{ID: 1, CreatedAt: fixedNow, UpdatedAt: fixedNow},
Title: "Clean Gutters",
Description: "Remove debris",
CategoryID: catID,
Category: cat,
IconIOS: "leaf",
IconAndroid: "leaf_android",
Tags: "exterior,seasonal",
DisplayOrder: 1,
IsActive: true,
}
}
func TestNewTaskTemplateResponse(t *testing.T) {
catID := uint(1)
cat := &models.TaskCategory{BaseModel: models.BaseModel{ID: 1}, Name: "Exterior"}
tmpl := makeTemplate(&catID, cat)
resp := NewTaskTemplateResponse(&tmpl)
if resp.Title != "Clean Gutters" {
t.Errorf("Title = %q", resp.Title)
}
if len(resp.Tags) != 2 {
t.Errorf("Tags len = %d", len(resp.Tags))
}
if resp.Category == nil {
t.Error("Category should not be nil")
}
}
func TestNewTaskTemplateResponse_WithRegion(t *testing.T) {
tmpl := makeTemplate(nil, nil)
tmpl.Regions = []models.ClimateRegion{
{BaseModel: models.BaseModel{ID: 5}, Name: "Southeast"},
}
resp := NewTaskTemplateResponse(&tmpl)
if resp.RegionID == nil || *resp.RegionID != 5 {
t.Error("RegionID should be 5")
}
if resp.RegionName != "Southeast" {
t.Errorf("RegionName = %q", resp.RegionName)
}
}
func TestNewTaskTemplatesGroupedResponse_Grouping(t *testing.T) {
catID := uint(1)
cat := &models.TaskCategory{BaseModel: models.BaseModel{ID: 1}, Name: "Exterior"}
templates := []models.TaskTemplate{
makeTemplate(&catID, cat),
makeTemplate(&catID, cat),
}
resp := NewTaskTemplatesGroupedResponse(templates)
if len(resp.Categories) != 1 {
t.Fatalf("Categories len = %d, want 1", len(resp.Categories))
}
if resp.Categories[0].CategoryName != "Exterior" {
t.Errorf("CategoryName = %q", resp.Categories[0].CategoryName)
}
if resp.Categories[0].Count != 2 {
t.Errorf("Count = %d, want 2", resp.Categories[0].Count)
}
if resp.TotalCount != 2 {
t.Errorf("TotalCount = %d, want 2", resp.TotalCount)
}
}
func TestNewTaskTemplatesGroupedResponse_Uncategorized(t *testing.T) {
tmpl := makeTemplate(nil, nil)
resp := NewTaskTemplatesGroupedResponse([]models.TaskTemplate{tmpl})
if len(resp.Categories) != 1 {
t.Fatalf("Categories len = %d", len(resp.Categories))
}
if resp.Categories[0].CategoryName != "Uncategorized" {
t.Errorf("CategoryName = %q", resp.Categories[0].CategoryName)
}
}
func TestNewTaskTemplateListResponse(t *testing.T) {
templates := []models.TaskTemplate{makeTemplate(nil, nil)}
results := NewTaskTemplateListResponse(templates)
if len(results) != 1 {
t.Errorf("len = %d", len(results))
}
}
// ==================== DetermineKanbanColumnWithTime ====================
func TestDetermineKanbanColumnWithTime(t *testing.T) {
task := makeTask()
col := DetermineKanbanColumnWithTime(task, 30, fixedNow)
if col == "" {
t.Error("expected non-empty column")
}
}
// ==================== NewTaskResponse uses NewTaskResponseWithThreshold ====================
func TestNewTaskResponse_UsesDefault30(t *testing.T) {
task := makeTask()
resp := NewTaskResponse(task)
if resp.ID != 100 {
t.Errorf("ID = %d", resp.ID)
}
// Just verify it doesn't panic and produces a response
}
// ==================== NewTaskCompletionWithTaskResponse UTC variant ====================
func TestNewTaskCompletionWithTaskResponse_UTC(t *testing.T) {
c := &models.TaskCompletion{
BaseModel: models.BaseModel{ID: 1},
TaskID: 100,
CompletedByID: 1,
CompletedAt: fixedNow,
}
task := makeTask()
resp := NewTaskCompletionWithTaskResponse(c, task, 30)
if resp.Task == nil {
t.Error("Task should not be nil")
}
}

View File

@@ -0,0 +1,15 @@
package responses
// TaskSuggestionResponse represents a single task suggestion with relevance scoring
type TaskSuggestionResponse struct {
Template TaskTemplateResponse `json:"template"`
RelevanceScore float64 `json:"relevance_score"`
MatchReasons []string `json:"match_reasons"`
}
// TaskSuggestionsResponse represents the full suggestions response
type TaskSuggestionsResponse struct {
Suggestions []TaskSuggestionResponse `json:"suggestions"`
TotalCount int `json:"total_count"`
ProfileCompleteness float64 `json:"profile_completeness"`
}

View File

@@ -83,9 +83,10 @@ type TaskResponse struct {
Category *TaskCategoryResponse `json:"category,omitempty"`
PriorityID *uint `json:"priority_id"`
Priority *TaskPriorityResponse `json:"priority,omitempty"`
- FrequencyID *uint `json:"frequency_id"`
- Frequency *TaskFrequencyResponse `json:"frequency,omitempty"`
- InProgress bool `json:"in_progress"`
+ FrequencyID *uint `json:"frequency_id"`
+ Frequency *TaskFrequencyResponse `json:"frequency,omitempty"`
+ CustomIntervalDays *int `json:"custom_interval_days"` // For "Custom" frequency, user-specified days
+ InProgress bool `json:"in_progress"`
DueDate *time.Time `json:"due_date"`
NextDueDate *time.Time `json:"next_due_date"` // For recurring tasks, updated after each completion
EstimatedCost *decimal.Decimal `json:"estimated_cost"`
@@ -236,8 +237,9 @@ func newTaskResponseInternal(t *models.Task, daysThreshold int, now time.Time) T
Description: t.Description,
CategoryID: t.CategoryID,
PriorityID: t.PriorityID,
- FrequencyID: t.FrequencyID,
- InProgress: t.InProgress,
+ FrequencyID: t.FrequencyID,
+ CustomIntervalDays: t.CustomIntervalDays,
+ InProgress: t.InProgress,
AssignedToID: t.AssignedToID,
DueDate: t.DueDate,
NextDueDate: t.NextDueDate,

View File

@@ -0,0 +1,105 @@
package echohelpers
import (
"net/http"
"net/http/httptest"
"testing"
"github.com/labstack/echo/v4"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestDefaultQuery(t *testing.T) {
tests := []struct {
name string
query string
key string
defaultValue string
expected string
}{
{"returns value when present", "/?status=active", "status", "all", "active"},
{"returns default when absent", "/", "status", "all", "all"},
{"returns default for empty value", "/?status=", "status", "all", "all"},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
e := echo.New()
req := httptest.NewRequest(http.MethodGet, tc.query, nil)
rec := httptest.NewRecorder()
c := e.NewContext(req, rec)
result := DefaultQuery(c, tc.key, tc.defaultValue)
assert.Equal(t, tc.expected, result)
})
}
}
func TestParseUintParam(t *testing.T) {
tests := []struct {
name string
paramValue string
expected uint
expectError bool
}{
{"valid uint", "42", 42, false},
{"zero", "0", 0, false},
{"invalid string", "abc", 0, true},
{"negative", "-1", 0, true},
{"empty", "", 0, true},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
e := echo.New()
req := httptest.NewRequest(http.MethodGet, "/", nil)
rec := httptest.NewRecorder()
c := e.NewContext(req, rec)
c.SetParamNames("id")
c.SetParamValues(tc.paramValue)
result, err := ParseUintParam(c, "id")
if tc.expectError {
require.Error(t, err)
} else {
require.NoError(t, err)
assert.Equal(t, tc.expected, result)
}
})
}
}
func TestParseIntParam(t *testing.T) {
tests := []struct {
name string
paramValue string
expected int
expectError bool
}{
{"valid int", "42", 42, false},
{"zero", "0", 0, false},
{"negative", "-5", -5, false},
{"invalid string", "abc", 0, true},
{"empty", "", 0, true},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
e := echo.New()
req := httptest.NewRequest(http.MethodGet, "/", nil)
rec := httptest.NewRecorder()
c := e.NewContext(req, rec)
c.SetParamNames("id")
c.SetParamValues(tc.paramValue)
result, err := ParseIntParam(c, "id")
if tc.expectError {
require.Error(t, err)
} else {
require.NoError(t, err)
assert.Equal(t, tc.expected, result)
}
})
}
}

View File

@@ -23,6 +23,7 @@ type AuthHandler struct {
appleAuthService *services.AppleAuthService
googleAuthService *services.GoogleAuthService
storageService *services.StorageService
auditService *services.AuditService
}
// NewAuthHandler creates a new auth handler
@@ -49,6 +50,11 @@ func (h *AuthHandler) SetStorageService(storageService *services.StorageService)
h.storageService = storageService
}
// SetAuditService sets the audit service for logging security events
func (h *AuthHandler) SetAuditService(auditService *services.AuditService) {
h.auditService = auditService
}
// Login handles POST /api/auth/login/
func (h *AuthHandler) Login(c echo.Context) error {
var req requests.LoginRequest
@@ -62,9 +68,19 @@ func (h *AuthHandler) Login(c echo.Context) error {
response, err := h.authService.Login(&req)
if err != nil {
log.Debug().Err(err).Str("identifier", req.Username).Msg("Login failed")
if h.auditService != nil {
h.auditService.LogEvent(c, nil, services.AuditEventLoginFailed, map[string]interface{}{
"identifier": req.Username,
})
}
return err
}
if h.auditService != nil {
userID := response.User.ID
h.auditService.LogEvent(c, &userID, services.AuditEventLogin, nil)
}
return c.JSON(http.StatusOK, response)
}
@@ -84,6 +100,14 @@ func (h *AuthHandler) Register(c echo.Context) error {
return err
}
if h.auditService != nil {
userID := response.User.ID
h.auditService.LogEvent(c, &userID, services.AuditEventRegister, map[string]interface{}{
"username": req.Username,
"email": req.Email,
})
}
// Send welcome email with confirmation code (async)
if h.emailService != nil && confirmationCode != "" {
go func() {
@@ -108,6 +132,14 @@ func (h *AuthHandler) Logout(c echo.Context) error {
return apperrors.Unauthorized("error.not_authenticated")
}
// Log audit event before invalidating the token
if h.auditService != nil {
user := middleware.GetAuthUser(c)
if user != nil {
h.auditService.LogEvent(c, &user.ID, services.AuditEventLogout, nil)
}
}
// Invalidate token in database
if err := h.authService.Logout(token); err != nil {
log.Warn().Err(err).Msg("Failed to delete token from database")
@@ -270,6 +302,12 @@ func (h *AuthHandler) ForgotPassword(c echo.Context) error {
}()
}
if h.auditService != nil {
h.auditService.LogEvent(c, nil, services.AuditEventPasswordReset, map[string]interface{}{
"email": req.Email,
})
}
// Always return success to prevent email enumeration
return c.JSON(http.StatusOK, responses.ForgotPasswordResponse{
Message: "Password reset email sent",
@@ -314,6 +352,12 @@ func (h *AuthHandler) ResetPassword(c echo.Context) error {
return err
}
if h.auditService != nil {
h.auditService.LogEvent(c, nil, services.AuditEventPasswordChanged, map[string]interface{}{
"method": "reset_token",
})
}
return c.JSON(http.StatusOK, responses.ResetPasswordResponse{
Message: "Password reset successful",
})
@@ -413,6 +457,34 @@ func (h *AuthHandler) GoogleSignIn(c echo.Context) error {
return c.JSON(http.StatusOK, response)
}
// RefreshToken handles POST /api/auth/refresh/
func (h *AuthHandler) RefreshToken(c echo.Context) error {
user, err := middleware.MustGetAuthUser(c)
if err != nil {
return err
}
token := middleware.GetAuthToken(c)
if token == "" {
return apperrors.Unauthorized("error.not_authenticated")
}
response, err := h.authService.RefreshToken(token, user.ID)
if err != nil {
log.Debug().Err(err).Uint("user_id", user.ID).Msg("Token refresh failed")
return err
}
// If the token was refreshed (new token), invalidate the old one from cache
if response.Token != token && h.cache != nil {
if cacheErr := h.cache.InvalidateAuthToken(c.Request().Context(), token); cacheErr != nil {
log.Warn().Err(cacheErr).Msg("Failed to invalidate old token from cache during refresh")
}
}
return c.JSON(http.StatusOK, response)
}
// DeleteAccount handles DELETE /api/auth/account/
func (h *AuthHandler) DeleteAccount(c echo.Context) error {
user, err := middleware.MustGetAuthUser(c)
@@ -431,6 +503,14 @@ func (h *AuthHandler) DeleteAccount(c echo.Context) error {
return err
}
if h.auditService != nil {
h.auditService.LogEvent(c, &user.ID, services.AuditEventAccountDeleted, map[string]interface{}{
"user_id": user.ID,
"username": user.Username,
"email": user.Email,
})
}
// Delete files from disk (best effort, don't fail the request)
if h.storageService != nil && len(fileURLs) > 0 {
go func() {
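The audit calls added above all follow one pattern: the service is optional, injected after construction via SetAuditService, and every call site is nil-guarded so the handler (and its existing tests) keep working without it. A minimal standalone sketch of that pattern, with hypothetical stand-in types rather than the real services.AuditService:

```go
package main

import "fmt"

// auditService stands in for services.AuditService (hypothetical); it
// records event names so the behavior is observable.
type auditService struct{ events []string }

func (a *auditService) LogEvent(userID *uint, event string) {
	a.events = append(a.events, event)
}

// handler keeps the audit service as an optional field, wired in via a
// setter after construction, mirroring SetAuditService in the diff.
type handler struct {
	audit *auditService
}

func (h *handler) SetAuditService(a *auditService) { h.audit = a }

// login nil-guards the audit call, so the handler runs with or without
// the service configured.
func (h *handler) login(userID uint) {
	if h.audit != nil {
		h.audit.LogEvent(&userID, "login")
	}
}

func main() {
	h := &handler{}
	h.login(1) // nil service: call skipped, no panic
	a := &auditService{}
	h.SetAuditService(a)
	h.login(2)
	fmt.Println(len(a.events)) // 1
}
```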

View File

@@ -38,7 +38,7 @@ func setupDeleteAccountHandler(t *testing.T) (*AuthHandler, *echo.Echo, *gorm.DB
func TestAuthHandler_DeleteAccount_EmailUser(t *testing.T) {
handler, e, db := setupDeleteAccountHandler(t)
- user := testutil.CreateTestUser(t, db, "deletetest", "delete@test.com", "password123")
+ user := testutil.CreateTestUser(t, db, "deletetest", "delete@test.com", "Password123")
// Create profile for the user
profile := &models.UserProfile{UserID: user.ID, Verified: true}
@@ -52,7 +52,7 @@ func TestAuthHandler_DeleteAccount_EmailUser(t *testing.T) {
authGroup.DELETE("/account/", handler.DeleteAccount)
t.Run("successful deletion with correct password", func(t *testing.T) {
- password := "password123"
+ password := "Password123"
req := map[string]interface{}{
"password": password,
}
@@ -84,7 +84,7 @@ func TestAuthHandler_DeleteAccount_EmailUser(t *testing.T) {
func TestAuthHandler_DeleteAccount_WrongPassword(t *testing.T) {
handler, e, db := setupDeleteAccountHandler(t)
- user := testutil.CreateTestUser(t, db, "wrongpw", "wrongpw@test.com", "password123")
+ user := testutil.CreateTestUser(t, db, "wrongpw", "wrongpw@test.com", "Password123")
authGroup := e.Group("/api/auth")
authGroup.Use(testutil.MockAuthMiddleware(user))
@@ -105,7 +105,7 @@ func TestAuthHandler_DeleteAccount_WrongPassword(t *testing.T) {
func TestAuthHandler_DeleteAccount_MissingPassword(t *testing.T) {
handler, e, db := setupDeleteAccountHandler(t)
- user := testutil.CreateTestUser(t, db, "nopw", "nopw@test.com", "password123")
+ user := testutil.CreateTestUser(t, db, "nopw", "nopw@test.com", "Password123")
authGroup := e.Group("/api/auth")
authGroup.Use(testutil.MockAuthMiddleware(user))
@@ -207,7 +207,7 @@ func TestAuthHandler_DeleteAccount_Unauthenticated(t *testing.T) {
t.Run("unauthenticated request returns 401", func(t *testing.T) {
req := map[string]interface{}{
- "password": "password123",
+ "password": "Password123",
}
w := testutil.MakeRequest(e, "DELETE", "/api/auth/account/", req, "")

View File

@@ -43,7 +43,7 @@ func TestAuthHandler_Register(t *testing.T) {
req := requests.RegisterRequest{
Username: "newuser",
Email: "new@test.com",
- Password: "password123",
+ Password: "Password123",
FirstName: "New",
LastName: "User",
}
@@ -98,7 +98,7 @@ func TestAuthHandler_Register(t *testing.T) {
req := requests.RegisterRequest{
Username: "duplicate",
Email: "unique1@test.com",
- Password: "password123",
+ Password: "Password123",
}
w := testutil.MakeRequest(e, "POST", "/api/auth/register/", req, "")
testutil.AssertStatusCode(t, w, http.StatusCreated)
@@ -117,7 +117,7 @@ func TestAuthHandler_Register(t *testing.T) {
req := requests.RegisterRequest{
Username: "user1",
Email: "duplicate@test.com",
- Password: "password123",
+ Password: "Password123",
}
w := testutil.MakeRequest(e, "POST", "/api/auth/register/", req, "")
testutil.AssertStatusCode(t, w, http.StatusCreated)
@@ -142,7 +142,7 @@ func TestAuthHandler_Login(t *testing.T) {
registerReq := requests.RegisterRequest{
Username: "logintest",
Email: "login@test.com",
- Password: "password123",
+ Password: "Password123",
FirstName: "Test",
LastName: "User",
}
@@ -152,7 +152,7 @@ func TestAuthHandler_Login(t *testing.T) {
t.Run("successful login with username", func(t *testing.T) {
req := requests.LoginRequest{
Username: "logintest",
- Password: "password123",
+ Password: "Password123",
}
w := testutil.MakeRequest(e, "POST", "/api/auth/login/", req, "")
@@ -174,7 +174,7 @@ func TestAuthHandler_Login(t *testing.T) {
t.Run("successful login with email", func(t *testing.T) {
req := requests.LoginRequest{
Username: "login@test.com", // Using email as username
- Password: "password123",
+ Password: "Password123",
}
w := testutil.MakeRequest(e, "POST", "/api/auth/login/", req, "")
@@ -199,7 +199,7 @@ func TestAuthHandler_Login(t *testing.T) {
t.Run("login with non-existent user", func(t *testing.T) {
req := requests.LoginRequest{
Username: "nonexistent",
- Password: "password123",
+ Password: "Password123",
}
w := testutil.MakeRequest(e, "POST", "/api/auth/login/", req, "")
@@ -223,7 +223,7 @@ func TestAuthHandler_CurrentUser(t *testing.T) {
handler, e, userRepo := setupAuthHandler(t)
db := testutil.SetupTestDB(t)
- user := testutil.CreateTestUser(t, db, "metest", "me@test.com", "password123")
+ user := testutil.CreateTestUser(t, db, "metest", "me@test.com", "Password123")
user.FirstName = "Test"
user.LastName = "User"
userRepo.Update(user)
@@ -251,7 +251,7 @@ func TestAuthHandler_UpdateProfile(t *testing.T) {
handler, e, userRepo := setupAuthHandler(t)
db := testutil.SetupTestDB(t)
- user := testutil.CreateTestUser(t, db, "updatetest", "update@test.com", "password123")
+ user := testutil.CreateTestUser(t, db, "updatetest", "update@test.com", "Password123")
userRepo.Update(user)
authGroup := e.Group("/api/auth")
@@ -289,7 +289,7 @@ func TestAuthHandler_ForgotPassword(t *testing.T) {
registerReq := requests.RegisterRequest{
Username: "forgottest",
Email: "forgot@test.com",
- Password: "password123",
+ Password: "Password123",
}
testutil.MakeRequest(e, "POST", "/api/auth/register/", registerReq, "")
@@ -323,7 +323,7 @@ func TestAuthHandler_Logout(t *testing.T) {
handler, e, userRepo := setupAuthHandler(t)
db := testutil.SetupTestDB(t)
- user := testutil.CreateTestUser(t, db, "logouttest", "logout@test.com", "password123")
+ user := testutil.CreateTestUser(t, db, "logouttest", "logout@test.com", "Password123")
userRepo.Update(user)
authGroup := e.Group("/api/auth")
@@ -350,7 +350,7 @@ func TestAuthHandler_JSONResponses(t *testing.T) {
req := requests.RegisterRequest{
Username: "jsontest",
Email: "json@test.com",
- Password: "password123",
+ Password: "Password123",
FirstName: "JSON",
LastName: "Test",
}

View File

@@ -2,6 +2,7 @@ package handlers
import (
"encoding/json"
"fmt"
"net/http"
"testing"
@@ -180,3 +181,284 @@ func TestContractorHandler_CreateContractor_100Specialties_Returns400(t *testing
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
}
func TestContractorHandler_ListContractors(t *testing.T) {
handler, e, db := setupContractorHandler(t)
testutil.SeedLookupData(t, db)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
residence := testutil.CreateTestResidence(t, db, user.ID, "Test House")
testutil.CreateTestContractor(t, db, residence.ID, user.ID, "Plumber Joe")
testutil.CreateTestContractor(t, db, residence.ID, user.ID, "Electrician Bob")
authGroup := e.Group("/api/contractors")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.GET("/", handler.ListContractors)
t.Run("successful list", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", "/api/contractors/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response []map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
assert.Len(t, response, 2)
})
t.Run("user with no contractors returns empty", func(t *testing.T) {
otherUser := testutil.CreateTestUser(t, db, "other", "other@test.com", "Password123")
e2 := testutil.SetupTestRouter()
authGroup2 := e2.Group("/api/contractors")
authGroup2.Use(testutil.MockAuthMiddleware(otherUser))
authGroup2.GET("/", handler.ListContractors)
w := testutil.MakeRequest(e2, "GET", "/api/contractors/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response []map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
assert.Len(t, response, 0)
})
}
func TestContractorHandler_GetContractor(t *testing.T) {
handler, e, db := setupContractorHandler(t)
testutil.SeedLookupData(t, db)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
residence := testutil.CreateTestResidence(t, db, user.ID, "Test House")
contractor := testutil.CreateTestContractor(t, db, residence.ID, user.ID, "Plumber Joe")
authGroup := e.Group("/api/contractors")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.GET("/:id/", handler.GetContractor)
t.Run("successful get", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", fmt.Sprintf("/api/contractors/%d/", contractor.ID), nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
assert.Equal(t, "Plumber Joe", response["name"])
})
t.Run("not found returns 404", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", "/api/contractors/99999/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusNotFound)
})
t.Run("invalid id returns 400", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", "/api/contractors/invalid/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
}
func TestContractorHandler_UpdateContractor(t *testing.T) {
handler, e, db := setupContractorHandler(t)
testutil.SeedLookupData(t, db)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
residence := testutil.CreateTestResidence(t, db, user.ID, "Test House")
contractor := testutil.CreateTestContractor(t, db, residence.ID, user.ID, "Plumber Joe")
authGroup := e.Group("/api/contractors")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.PUT("/:id/", handler.UpdateContractor)
t.Run("successful update", func(t *testing.T) {
newName := "Plumber Joe Updated"
req := requests.UpdateContractorRequest{
Name: &newName,
}
w := testutil.MakeRequest(e, "PUT", fmt.Sprintf("/api/contractors/%d/", contractor.ID), req, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
assert.Equal(t, "Plumber Joe Updated", response["name"])
})
t.Run("invalid id returns 400", func(t *testing.T) {
newName := "Updated"
req := requests.UpdateContractorRequest{Name: &newName}
w := testutil.MakeRequest(e, "PUT", "/api/contractors/invalid/", req, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
t.Run("not found returns 404", func(t *testing.T) {
newName := "Updated"
req := requests.UpdateContractorRequest{Name: &newName}
w := testutil.MakeRequest(e, "PUT", "/api/contractors/99999/", req, "test-token")
testutil.AssertStatusCode(t, w, http.StatusNotFound)
})
}
func TestContractorHandler_DeleteContractor(t *testing.T) {
handler, e, db := setupContractorHandler(t)
testutil.SeedLookupData(t, db)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
residence := testutil.CreateTestResidence(t, db, user.ID, "Test House")
contractor := testutil.CreateTestContractor(t, db, residence.ID, user.ID, "Plumber Joe")
authGroup := e.Group("/api/contractors")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.DELETE("/:id/", handler.DeleteContractor)
t.Run("successful delete", func(t *testing.T) {
w := testutil.MakeRequest(e, "DELETE", fmt.Sprintf("/api/contractors/%d/", contractor.ID), nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
assert.Contains(t, response, "message")
})
t.Run("invalid id returns 400", func(t *testing.T) {
w := testutil.MakeRequest(e, "DELETE", "/api/contractors/invalid/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
t.Run("not found returns 404", func(t *testing.T) {
w := testutil.MakeRequest(e, "DELETE", "/api/contractors/99999/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusNotFound)
})
}
func TestContractorHandler_ToggleFavorite(t *testing.T) {
handler, e, db := setupContractorHandler(t)
testutil.SeedLookupData(t, db)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
residence := testutil.CreateTestResidence(t, db, user.ID, "Test House")
contractor := testutil.CreateTestContractor(t, db, residence.ID, user.ID, "Plumber Joe")
authGroup := e.Group("/api/contractors")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.POST("/:id/toggle-favorite/", handler.ToggleFavorite)
t.Run("toggle favorite on", func(t *testing.T) {
w := testutil.MakeRequest(e, "POST", fmt.Sprintf("/api/contractors/%d/toggle-favorite/", contractor.ID), nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
assert.Contains(t, response, "is_favorite")
})
t.Run("invalid id returns 400", func(t *testing.T) {
w := testutil.MakeRequest(e, "POST", "/api/contractors/invalid/toggle-favorite/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
t.Run("not found returns 404", func(t *testing.T) {
w := testutil.MakeRequest(e, "POST", "/api/contractors/99999/toggle-favorite/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusNotFound)
})
}
func TestContractorHandler_ListContractorsByResidence(t *testing.T) {
handler, e, db := setupContractorHandler(t)
testutil.SeedLookupData(t, db)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
residence := testutil.CreateTestResidence(t, db, user.ID, "Test House")
testutil.CreateTestContractor(t, db, residence.ID, user.ID, "Plumber Joe")
authGroup := e.Group("/api/contractors")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.GET("/by-residence/:residence_id/", handler.ListContractorsByResidence)
t.Run("successful list by residence", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", fmt.Sprintf("/api/contractors/by-residence/%d/", residence.ID), nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response []map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
assert.Len(t, response, 1)
})
t.Run("invalid residence id returns 400", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", "/api/contractors/by-residence/invalid/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
}
func TestContractorHandler_GetSpecialties(t *testing.T) {
handler, e, db := setupContractorHandler(t)
testutil.SeedLookupData(t, db)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
authGroup := e.Group("/api/contractors")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.GET("/specialties/", handler.GetSpecialties)
t.Run("successful list specialties", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", "/api/contractors/specialties/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response []map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
assert.Greater(t, len(response), 0)
})
}
func TestContractorHandler_GetContractorTasks(t *testing.T) {
handler, e, db := setupContractorHandler(t)
testutil.SeedLookupData(t, db)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
residence := testutil.CreateTestResidence(t, db, user.ID, "Test House")
contractor := testutil.CreateTestContractor(t, db, residence.ID, user.ID, "Plumber Joe")
authGroup := e.Group("/api/contractors")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.GET("/:id/tasks/", handler.GetContractorTasks)
t.Run("successful get tasks", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", fmt.Sprintf("/api/contractors/%d/tasks/", contractor.ID), nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
})
t.Run("invalid id returns 400", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", "/api/contractors/invalid/tasks/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
}
func TestContractorHandler_CreateContractor_WithOptionalFields(t *testing.T) {
handler, e, db := setupContractorHandler(t)
testutil.SeedLookupData(t, db)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
residence := testutil.CreateTestResidence(t, db, user.ID, "Test House")
authGroup := e.Group("/api/contractors")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.POST("/", handler.CreateContractor)
t.Run("creation with all optional fields", func(t *testing.T) {
rating := 4.5
isFavorite := true
req := requests.CreateContractorRequest{
ResidenceID: &residence.ID,
Name: "Full Contractor",
Company: "ABC Plumbing",
Phone: "555-1234",
Email: "contractor@test.com",
Notes: "Great work",
Rating: &rating,
IsFavorite: &isFavorite,
}
w := testutil.MakeRequest(e, "POST", "/api/contractors/", req, "test-token")
testutil.AssertStatusCode(t, w, http.StatusCreated)
var response map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
assert.Equal(t, "Full Contractor", response["name"])
assert.Equal(t, "ABC Plumbing", response["company"])
})
}



@@ -224,3 +224,235 @@ func TestDocumentHandler_DeleteDocument(t *testing.T) {
testutil.AssertStatusCode(t, w, http.StatusNotFound)
})
}
func TestDocumentHandler_UpdateDocument(t *testing.T) {
handler, e, db := setupDocumentHandler(t)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
residence := testutil.CreateTestResidence(t, db, user.ID, "Test House")
doc := testutil.CreateTestDocument(t, db, residence.ID, user.ID, "Original Title")
authGroup := e.Group("/api/documents")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.PUT("/:id/", handler.UpdateDocument)
t.Run("successful update", func(t *testing.T) {
newTitle := "Updated Title"
req := map[string]interface{}{
"title": newTitle,
}
w := testutil.MakeRequest(e, "PUT", fmt.Sprintf("/api/documents/%d/", doc.ID), req, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
assert.Equal(t, "Updated Title", response["title"])
})
t.Run("invalid id returns 400", func(t *testing.T) {
req := map[string]interface{}{"title": "Updated"}
w := testutil.MakeRequest(e, "PUT", "/api/documents/invalid/", req, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
t.Run("not found returns 404", func(t *testing.T) {
req := map[string]interface{}{"title": "Updated"}
w := testutil.MakeRequest(e, "PUT", "/api/documents/99999/", req, "test-token")
testutil.AssertStatusCode(t, w, http.StatusNotFound)
})
t.Run("access denied for other user", func(t *testing.T) {
otherUser := testutil.CreateTestUser(t, db, "other", "other@test.com", "Password123")
e2 := testutil.SetupTestRouter()
otherGroup := e2.Group("/api/documents")
otherGroup.Use(testutil.MockAuthMiddleware(otherUser))
otherGroup.PUT("/:id/", handler.UpdateDocument)
req := map[string]interface{}{"title": "Hacked"}
w := testutil.MakeRequest(e2, "PUT", fmt.Sprintf("/api/documents/%d/", doc.ID), req, "test-token")
testutil.AssertStatusCode(t, w, http.StatusForbidden)
})
}
func TestDocumentHandler_ListDocuments_Filters(t *testing.T) {
handler, e, db := setupDocumentHandler(t)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
residence := testutil.CreateTestResidence(t, db, user.ID, "Test House")
testutil.CreateTestDocument(t, db, residence.ID, user.ID, "Active Doc")
authGroup := e.Group("/api/documents")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.GET("/", handler.ListDocuments)
t.Run("filter by residence", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", fmt.Sprintf("/api/documents/?residence=%d", residence.ID), nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response []map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
assert.Len(t, response, 1)
})
t.Run("filter by search", func(t *testing.T) {
t.Skip("ILIKE is not supported in SQLite; search filter requires PostgreSQL")
})
t.Run("expiring_soon out of range returns 400", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", "/api/documents/?expiring_soon=5000", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
}
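The "expiring_soon out of range returns 400" subtest implies the handler bounds the look-ahead window before querying. A minimal sketch of that validation, assuming an upper bound of 365 days (the actual bound is not visible in this diff):

```go
package main

import (
	"fmt"
	"strconv"
)

// validateExpiringSoon sketches why ?expiring_soon=5000 yields a 400:
// the raw query value must parse and fall inside a sane window. The
// 1..365 range here is an assumption for illustration.
func validateExpiringSoon(raw string) (int, bool) {
	days, err := strconv.Atoi(raw)
	if err != nil || days < 1 || days > 365 {
		return 0, false
	}
	return days, true
}

func main() {
	_, ok := validateExpiringSoon("5000")
	fmt.Println(ok) // false
	d, ok := validateExpiringSoon("30")
	fmt.Println(d, ok) // 30 true
}
```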
func TestDocumentHandler_ListWarranties(t *testing.T) {
handler, e, db := setupDocumentHandler(t)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
residence := testutil.CreateTestResidence(t, db, user.ID, "Test House")
// Create a warranty document
doc := testutil.CreateTestDocument(t, db, residence.ID, user.ID, "Warranty Doc")
require.NoError(t, db.Model(doc).Update("document_type", "warranty").Error)
authGroup := e.Group("/api/documents")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.GET("/warranties/", handler.ListWarranties)
t.Run("successful list warranties", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", "/api/documents/warranties/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
})
}
func TestDocumentHandler_ActivateDeactivateDocument(t *testing.T) {
handler, e, db := setupDocumentHandler(t)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
residence := testutil.CreateTestResidence(t, db, user.ID, "Test House")
doc := testutil.CreateTestDocument(t, db, residence.ID, user.ID, "Toggle Doc")
authGroup := e.Group("/api/documents")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.POST("/:id/deactivate/", handler.DeactivateDocument)
authGroup.POST("/:id/activate/", handler.ActivateDocument)
t.Run("deactivate document", func(t *testing.T) {
w := testutil.MakeRequest(e, "POST", fmt.Sprintf("/api/documents/%d/deactivate/", doc.ID), nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
assert.Equal(t, false, response["is_active"])
})
t.Run("activate document", func(t *testing.T) {
w := testutil.MakeRequest(e, "POST", fmt.Sprintf("/api/documents/%d/activate/", doc.ID), nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
assert.Equal(t, true, response["is_active"])
})
t.Run("activate invalid id returns 400", func(t *testing.T) {
w := testutil.MakeRequest(e, "POST", "/api/documents/invalid/activate/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
t.Run("deactivate invalid id returns 400", func(t *testing.T) {
w := testutil.MakeRequest(e, "POST", "/api/documents/invalid/deactivate/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
t.Run("activate not found returns 404", func(t *testing.T) {
w := testutil.MakeRequest(e, "POST", "/api/documents/99999/activate/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusNotFound)
})
t.Run("deactivate not found returns 404", func(t *testing.T) {
w := testutil.MakeRequest(e, "POST", "/api/documents/99999/deactivate/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusNotFound)
})
}
func TestDocumentHandler_CreateDocument_ValidationErrors(t *testing.T) {
handler, e, db := setupDocumentHandler(t)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
residence := testutil.CreateTestResidence(t, db, user.ID, "Test House")
authGroup := e.Group("/api/documents")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.POST("/", handler.CreateDocument)
t.Run("missing title returns 400", func(t *testing.T) {
body := map[string]interface{}{
"residence_id": residence.ID,
}
w := testutil.MakeRequest(e, "POST", "/api/documents/", body, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
t.Run("missing residence_id returns 400", func(t *testing.T) {
body := map[string]interface{}{
"title": "Test Doc",
}
w := testutil.MakeRequest(e, "POST", "/api/documents/", body, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
t.Run("invalid document_type returns 400", func(t *testing.T) {
body := map[string]interface{}{
"title": "Test Doc",
"residence_id": residence.ID,
"document_type": "invalid_type",
}
w := testutil.MakeRequest(e, "POST", "/api/documents/", body, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
}
func TestDocumentHandler_GetDocument_InvalidID(t *testing.T) {
handler, e, db := setupDocumentHandler(t)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
authGroup := e.Group("/api/documents")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.GET("/:id/", handler.GetDocument)
t.Run("invalid id returns 400", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", "/api/documents/invalid/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
}
func TestDocumentHandler_DeleteDocument_InvalidID(t *testing.T) {
handler, e, db := setupDocumentHandler(t)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
authGroup := e.Group("/api/documents")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.DELETE("/:id/", handler.DeleteDocument)
t.Run("invalid id returns 400", func(t *testing.T) {
w := testutil.MakeRequest(e, "DELETE", "/api/documents/invalid/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
}
func TestDocumentHandler_DeleteDocument_AccessDenied(t *testing.T) {
handler, e, db := setupDocumentHandler(t)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
otherUser := testutil.CreateTestUser(t, db, "other", "other@test.com", "Password123")
residence := testutil.CreateTestResidence(t, db, user.ID, "Test House")
doc := testutil.CreateTestDocument(t, db, residence.ID, user.ID, "Test Doc")
authGroup := e.Group("/api/documents")
authGroup.Use(testutil.MockAuthMiddleware(otherUser))
authGroup.DELETE("/:id/", handler.DeleteDocument)
t.Run("access denied for other user", func(t *testing.T) {
w := testutil.MakeRequest(e, "DELETE", fmt.Sprintf("/api/documents/%d/", doc.ID), nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusForbidden)
})
}
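The invalid-id, not-found, and access-denied subtests across these handlers exercise a consistent status ladder. A hedged sketch of that ladder, with a hypothetical `lookup` function standing in for the repository layer (the real handlers return apperrors that the custom error handler maps to these statuses):

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
)

// Sentinel errors standing in for the repository/apperrors layer.
var errNotFound = errors.New("not found")
var errForbidden = errors.New("forbidden")

// statusFor sketches the order the tests assert: a non-numeric id is a
// 400 before any lookup happens, a missing row is a 404, and a row
// owned by another user is a 403.
func statusFor(rawID string, lookup func(id int64) error) int {
	id, err := strconv.ParseInt(rawID, 10, 64)
	if err != nil {
		return 400
	}
	switch lookup(id) {
	case errNotFound:
		return 404
	case errForbidden:
		return 403
	case nil:
		return 200
	}
	return 500
}

func main() {
	owned := func(id int64) error {
		if id != 1 {
			return errNotFound
		}
		return nil
	}
	fmt.Println(statusFor("invalid", owned)) // 400
	fmt.Println(statusFor("99999", owned))   // 404
	fmt.Println(statusFor("1", owned))       // 200
}
```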

File diff suppressed because it is too large.


@@ -86,3 +86,323 @@ func TestNotificationHandler_ListNotifications_LimitCappedAt200(t *testing.T) {
assert.Equal(t, 50, count, "response should use default limit of 50")
})
}
func TestNotificationHandler_ListNotifications_Pagination(t *testing.T) {
handler, e, db := setupNotificationHandler(t)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
createTestNotifications(t, db, user.ID, 20)
authGroup := e.Group("/api/notifications")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.GET("/", handler.ListNotifications)
t.Run("offset skips notifications", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", "/api/notifications/?limit=5&offset=15", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
count := int(response["count"].(float64))
assert.Equal(t, 5, count, "should return remaining 5 after offset 15")
})
t.Run("response has results array", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", "/api/notifications/?limit=3", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
assert.Contains(t, response, "results")
assert.Contains(t, response, "count")
results := response["results"].([]interface{})
assert.Len(t, results, 3)
})
t.Run("negative limit ignored", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", "/api/notifications/?limit=-5", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
// Negative limit should default to 50 (since -5 > 0 is false)
count := int(response["count"].(float64))
assert.Equal(t, 20, count, "should return all 20 with default limit of 50")
})
}
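Together with the LimitCappedAt200 test above, these pagination subtests pin down the limit behavior: missing, non-numeric, or non-positive values fall back to 50, and anything above 200 is capped. A minimal sketch of that clamping (the handler's actual implementation is not shown in this diff):

```go
package main

import (
	"fmt"
	"strconv"
)

// clampLimit mirrors what the tests assert: default 50, reject
// non-positive values back to the default, cap at 200.
func clampLimit(raw string) int {
	limit := 50 // default when absent or invalid
	if n, err := strconv.Atoi(raw); err == nil && n > 0 {
		limit = n
	}
	if limit > 200 {
		limit = 200
	}
	return limit
}

func main() {
	fmt.Println(clampLimit(""))     // 50
	fmt.Println(clampLimit("-5"))   // 50
	fmt.Println(clampLimit("3"))    // 3
	fmt.Println(clampLimit("5000")) // 200
}
```

Note how "-5" parses cleanly but fails the `n > 0` guard, which is exactly why the "negative limit ignored" subtest expects the default of 50.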
func TestNotificationHandler_GetUnreadCount(t *testing.T) {
handler, e, db := setupNotificationHandler(t)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
// Create some unread notifications
createTestNotifications(t, db, user.ID, 5)
authGroup := e.Group("/api/notifications")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.GET("/unread-count/", handler.GetUnreadCount)
t.Run("successful unread count", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", "/api/notifications/unread-count/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
assert.Contains(t, response, "unread_count")
unreadCount := int(response["unread_count"].(float64))
assert.Equal(t, 5, unreadCount)
})
t.Run("user with no notifications returns zero", func(t *testing.T) {
otherUser := testutil.CreateTestUser(t, db, "other", "other@test.com", "Password123")
e2 := testutil.SetupTestRouter()
authGroup2 := e2.Group("/api/notifications")
authGroup2.Use(testutil.MockAuthMiddleware(otherUser))
authGroup2.GET("/unread-count/", handler.GetUnreadCount)
w := testutil.MakeRequest(e2, "GET", "/api/notifications/unread-count/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
assert.Equal(t, float64(0), response["unread_count"])
})
}
func TestNotificationHandler_MarkAsRead(t *testing.T) {
handler, e, db := setupNotificationHandler(t)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
// Create a notification
notif := &models.Notification{
UserID: user.ID,
NotificationType: models.NotificationTaskDueSoon,
Title: "Test Notification",
Body: "Test Body",
}
require.NoError(t, db.Create(notif).Error)
authGroup := e.Group("/api/notifications")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.POST("/:id/read/", handler.MarkAsRead)
t.Run("successful mark as read", func(t *testing.T) {
w := testutil.MakeRequest(e, "POST", fmt.Sprintf("/api/notifications/%d/read/", notif.ID), nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
assert.Contains(t, response, "message")
})
t.Run("invalid id returns 400", func(t *testing.T) {
w := testutil.MakeRequest(e, "POST", "/api/notifications/invalid/read/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
t.Run("not found returns 404", func(t *testing.T) {
w := testutil.MakeRequest(e, "POST", "/api/notifications/99999/read/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusNotFound)
})
}
func TestNotificationHandler_MarkAllAsRead(t *testing.T) {
handler, e, db := setupNotificationHandler(t)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
createTestNotifications(t, db, user.ID, 5)
authGroup := e.Group("/api/notifications")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.POST("/mark-all-read/", handler.MarkAllAsRead)
authGroup.GET("/unread-count/", handler.GetUnreadCount)
t.Run("successful mark all as read", func(t *testing.T) {
w := testutil.MakeRequest(e, "POST", "/api/notifications/mark-all-read/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
assert.Contains(t, response, "message")
})
t.Run("unread count is zero after mark all", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", "/api/notifications/unread-count/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
assert.Equal(t, float64(0), response["unread_count"])
})
}
func TestNotificationHandler_GetPreferences(t *testing.T) {
handler, e, db := setupNotificationHandler(t)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
authGroup := e.Group("/api/notifications")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.GET("/preferences/", handler.GetPreferences)
t.Run("successful get preferences", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", "/api/notifications/preferences/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
// Default preferences should have standard fields
assert.Contains(t, response, "task_due_soon")
assert.Contains(t, response, "task_overdue")
assert.Contains(t, response, "task_completed")
})
}
func TestNotificationHandler_UpdatePreferences(t *testing.T) {
handler, e, db := setupNotificationHandler(t)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
authGroup := e.Group("/api/notifications")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.PUT("/preferences/", handler.UpdatePreferences)
t.Run("successful update preferences", func(t *testing.T) {
req := map[string]interface{}{
"task_due_soon": false,
"task_overdue": true,
}
w := testutil.MakeRequest(e, "PUT", "/api/notifications/preferences/", req, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
assert.Equal(t, false, response["task_due_soon"])
assert.Equal(t, true, response["task_overdue"])
})
}
func TestNotificationHandler_RegisterDevice(t *testing.T) {
handler, e, db := setupNotificationHandler(t)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
authGroup := e.Group("/api/notifications")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.POST("/devices/", handler.RegisterDevice)
t.Run("successful device registration", func(t *testing.T) {
req := map[string]interface{}{
"name": "iPhone 15",
"device_id": "test-device-id-123",
"registration_id": "test-registration-id-abc",
"platform": "ios",
}
w := testutil.MakeRequest(e, "POST", "/api/notifications/devices/", req, "test-token")
testutil.AssertStatusCode(t, w, http.StatusCreated)
})
t.Run("missing required fields returns 400", func(t *testing.T) {
req := map[string]interface{}{
"name": "iPhone 15",
// Missing device_id, registration_id, platform
}
w := testutil.MakeRequest(e, "POST", "/api/notifications/devices/", req, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
t.Run("invalid platform returns 400", func(t *testing.T) {
req := map[string]interface{}{
"device_id": "test-device-id-456",
"registration_id": "test-registration-id-def",
"platform": "windows", // invalid
}
w := testutil.MakeRequest(e, "POST", "/api/notifications/devices/", req, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
}
func TestNotificationHandler_ListDevices(t *testing.T) {
handler, e, db := setupNotificationHandler(t)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
authGroup := e.Group("/api/notifications")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.GET("/devices/", handler.ListDevices)
t.Run("successful list devices", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", "/api/notifications/devices/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
})
}
func TestNotificationHandler_UnregisterDevice(t *testing.T) {
handler, e, db := setupNotificationHandler(t)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
authGroup := e.Group("/api/notifications")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.POST("/devices/unregister/", handler.UnregisterDevice)
t.Run("missing registration_id returns 400", func(t *testing.T) {
req := map[string]interface{}{
"platform": "ios",
}
w := testutil.MakeRequest(e, "POST", "/api/notifications/devices/unregister/", req, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
t.Run("missing platform returns 400", func(t *testing.T) {
req := map[string]interface{}{
"registration_id": "test-id",
}
w := testutil.MakeRequest(e, "POST", "/api/notifications/devices/unregister/", req, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
t.Run("invalid platform returns 400", func(t *testing.T) {
req := map[string]interface{}{
"registration_id": "test-id",
"platform": "windows",
}
w := testutil.MakeRequest(e, "POST", "/api/notifications/devices/unregister/", req, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
}
func TestNotificationHandler_DeleteDevice(t *testing.T) {
handler, e, db := setupNotificationHandler(t)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
authGroup := e.Group("/api/notifications")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.DELETE("/devices/:id/", handler.DeleteDevice)
t.Run("missing platform query param returns 400", func(t *testing.T) {
w := testutil.MakeRequest(e, "DELETE", "/api/notifications/devices/1/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
t.Run("invalid platform returns 400", func(t *testing.T) {
w := testutil.MakeRequest(e, "DELETE", "/api/notifications/devices/1/?platform=windows", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
t.Run("invalid id returns 400", func(t *testing.T) {
w := testutil.MakeRequest(e, "DELETE", "/api/notifications/devices/invalid/?platform=ios", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
}
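The device tests repeatedly reject `"windows"` and accept `"ios"`, implying a small platform whitelist shared by register, unregister, and delete. A sketch of that check, assuming `"ios"` and `"android"` are the accepted values (only `"ios"` is confirmed by the tests):

```go
package main

import "fmt"

// isValidPlatform sketches the whitelist the device endpoints appear
// to enforce before touching the database.
func isValidPlatform(p string) bool {
	switch p {
	case "ios", "android":
		return true
	}
	return false
}

func main() {
	fmt.Println(isValidPlatform("ios"))     // true
	fmt.Println(isValidPlatform("windows")) // false
}
```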


@@ -567,3 +567,164 @@ func TestResidenceHandler_CreateResidence_NegativeBedrooms_Returns400(t *testing
testutil.AssertStatusCode(t, w, http.StatusCreated)
})
}
func TestResidenceHandler_GetMyResidences(t *testing.T) {
handler, e, db := setupResidenceHandler(t)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
testutil.CreateTestResidence(t, db, user.ID, "House 1")
testutil.CreateTestResidence(t, db, user.ID, "House 2")
authGroup := e.Group("/api/residences")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.GET("/my-residences/", handler.GetMyResidences)
t.Run("successful my residences", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", "/api/residences/my-residences/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
// GetMyResidences returns MyResidencesResponse: {"residences": [...]}
var response map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
residences := response["residences"].([]interface{})
assert.Len(t, residences, 2)
})
t.Run("user with no residences returns empty", func(t *testing.T) {
noResUser := testutil.CreateTestUser(t, db, "nores", "nores@test.com", "Password123")
e2 := testutil.SetupTestRouter()
authGroup2 := e2.Group("/api/residences")
authGroup2.Use(testutil.MockAuthMiddleware(noResUser))
authGroup2.GET("/my-residences/", handler.GetMyResidences)
w := testutil.MakeRequest(e2, "GET", "/api/residences/my-residences/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
// GetMyResidences returns MyResidencesResponse: {"residences": [...] or null}
var response map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
// A nil "residences" value also means the user has none.
if response["residences"] != nil {
residences := response["residences"].([]interface{})
assert.Len(t, residences, 0)
}
}
func TestResidenceHandler_GetSummary(t *testing.T) {
handler, e, db := setupResidenceHandler(t)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
testutil.CreateTestResidence(t, db, user.ID, "House 1")
authGroup := e.Group("/api/residences")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.GET("/summary/", handler.GetSummary)
t.Run("successful summary", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", "/api/residences/summary/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
assert.Contains(t, response, "total_residences")
assert.Contains(t, response, "total_tasks")
})
}
func TestResidenceHandler_UpdateResidence_InvalidID(t *testing.T) {
handler, e, db := setupResidenceHandler(t)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
authGroup := e.Group("/api/residences")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.PUT("/:id/", handler.UpdateResidence)
t.Run("invalid id returns 400", func(t *testing.T) {
newName := "Updated"
req := requests.UpdateResidenceRequest{Name: &newName}
w := testutil.MakeRequest(e, "PUT", "/api/residences/invalid/", req, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
t.Run("non-existent id returns 403", func(t *testing.T) {
newName := "Updated"
req := requests.UpdateResidenceRequest{Name: &newName}
w := testutil.MakeRequest(e, "PUT", "/api/residences/9999/", req, "test-token")
testutil.AssertStatusCode(t, w, http.StatusForbidden)
})
}
func TestResidenceHandler_DeleteResidence_InvalidID(t *testing.T) {
handler, e, db := setupResidenceHandler(t)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
authGroup := e.Group("/api/residences")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.DELETE("/:id/", handler.DeleteResidence)
t.Run("invalid id returns 400", func(t *testing.T) {
w := testutil.MakeRequest(e, "DELETE", "/api/residences/invalid/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
t.Run("non-existent id returns 403", func(t *testing.T) {
w := testutil.MakeRequest(e, "DELETE", "/api/residences/9999/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusForbidden)
})
}
func TestResidenceHandler_GetShareCode(t *testing.T) {
handler, e, db := setupResidenceHandler(t)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
residence := testutil.CreateTestResidence(t, db, user.ID, "Share Code Test")
authGroup := e.Group("/api/residences")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.GET("/:id/share-code/", handler.GetShareCode)
t.Run("no share code returns null", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", fmt.Sprintf("/api/residences/%d/share-code/", residence.ID), nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
assert.Nil(t, response["share_code"])
})
t.Run("invalid id returns 400", func(t *testing.T) {
w := testutil.MakeRequest(e, "GET", "/api/residences/invalid/share-code/", nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
}
func TestResidenceHandler_GenerateSharePackage(t *testing.T) {
handler, e, db := setupResidenceHandler(t)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "Password123")
residence := testutil.CreateTestResidence(t, db, user.ID, "Package Test")
authGroup := e.Group("/api/residences")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.POST("/:id/generate-share-package/", handler.GenerateSharePackage)
t.Run("generate share package", func(t *testing.T) {
w := testutil.MakeRequest(e, "POST", fmt.Sprintf("/api/residences/%d/generate-share-package/", residence.ID), nil, "test-token")
testutil.AssertStatusCode(t, w, http.StatusOK)
var response map[string]interface{}
err := json.Unmarshal(w.Body.Bytes(), &response)
require.NoError(t, err)
assert.Contains(t, response, "share_code")
})
}


@@ -0,0 +1,50 @@
package handlers
import (
"net/http"
"strconv"
"github.com/labstack/echo/v4"
"github.com/treytartt/honeydue-api/internal/apperrors"
"github.com/treytartt/honeydue-api/internal/middleware"
"github.com/treytartt/honeydue-api/internal/services"
)
// SuggestionHandler handles task suggestion endpoints
type SuggestionHandler struct {
suggestionService *services.SuggestionService
}
// NewSuggestionHandler creates a new suggestion handler
func NewSuggestionHandler(suggestionService *services.SuggestionService) *SuggestionHandler {
return &SuggestionHandler{
suggestionService: suggestionService,
}
}
// GetSuggestions handles GET /api/tasks/suggestions/?residence_id=X
// Returns task template suggestions scored against the residence's home profile
func (h *SuggestionHandler) GetSuggestions(c echo.Context) error {
user, err := middleware.MustGetAuthUser(c)
if err != nil {
return err
}
residenceIDStr := c.QueryParam("residence_id")
if residenceIDStr == "" {
return apperrors.BadRequest("error.residence_id_required")
}
residenceID, err := strconv.ParseUint(residenceIDStr, 10, 32)
if err != nil {
return apperrors.BadRequest("error.invalid_id")
}
resp, err := h.suggestionService.GetSuggestions(uint(residenceID), user.ID)
if err != nil {
return err
}
return c.JSON(http.StatusOK, resp)
}


@@ -840,3 +840,159 @@ func TestTaskHandler_JSONResponses(t *testing.T) {
assert.IsType(t, []interface{}{}, response["columns"])
})
}
// =============================================================================
// Part 3: Handler-Level Edge Cases (TDD)
// =============================================================================
func TestTaskHandler_CreateCompletion_RatingValidation(t *testing.T) {
handler, e, db := setupTaskHandler(t)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "password")
residence := testutil.CreateTestResidence(t, db, user.ID, "Test House")
authGroup := e.Group("/api/task-completions")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.POST("/", handler.CreateCompletion)
tests := []struct {
name string
rating int
wantStatus int
}{
{"rating_0_rejected", 0, http.StatusBadRequest},
{"rating_negative1_rejected", -1, http.StatusBadRequest},
{"rating_1_accepted", 1, http.StatusCreated},
{"rating_5_accepted", 5, http.StatusCreated},
{"rating_6_rejected", 6, http.StatusBadRequest},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
// Create a fresh task for each accepted rating (otherwise it's completed)
task := testutil.CreateTestTask(t, db, residence.ID, user.ID, "Rate Me "+tt.name)
completedAt := time.Now().UTC()
rating := tt.rating
req := requests.CreateTaskCompletionRequest{
TaskID: task.ID,
CompletedAt: &completedAt,
Rating: &rating,
}
w := testutil.MakeRequest(e, "POST", "/api/task-completions/", req, "test-token")
testutil.AssertStatusCode(t, w, tt.wantStatus)
})
}
}
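The accepted range in the table above (1 through 5 inclusive) corresponds to a `min=1,max=5` style validation tag on the request struct. As a standalone sketch of the same boundary logic (a hypothetical helper for illustration, not part of the handler code):

```go
package main

import "fmt"

// validRating mirrors the 1-5 inclusive range exercised by the test table.
// Hypothetical helper; the real check lives in the request validation tags.
func validRating(r int) bool {
	return r >= 1 && r <= 5
}

func main() {
	for _, r := range []int{0, -1, 1, 5, 6} {
		fmt.Println(r, validRating(r))
	}
}
```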
func TestTaskHandler_CreateTask_TitleBoundary(t *testing.T) {
handler, e, db := setupTaskHandler(t)
testutil.SeedLookupData(t, db)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "password")
residence := testutil.CreateTestResidence(t, db, user.ID, "Test House")
authGroup := e.Group("/api/tasks")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.POST("/", handler.CreateTask)
t.Run("title_exactly_200_chars_accepted", func(t *testing.T) {
title200 := ""
for i := 0; i < 200; i++ {
title200 += "A"
}
req := requests.CreateTaskRequest{
ResidenceID: residence.ID,
Title: title200,
}
w := testutil.MakeRequest(e, "POST", "/api/tasks/", req, "test-token")
testutil.AssertStatusCode(t, w, http.StatusCreated)
})
t.Run("title_201_chars_rejected", func(t *testing.T) {
title201 := ""
for i := 0; i < 201; i++ {
title201 += "A"
}
req := requests.CreateTaskRequest{
ResidenceID: residence.ID,
Title: title201,
}
w := testutil.MakeRequest(e, "POST", "/api/tasks/", req, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
}
func TestTaskHandler_CreateTask_DescriptionBoundary(t *testing.T) {
handler, e, db := setupTaskHandler(t)
testutil.SeedLookupData(t, db)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "password")
residence := testutil.CreateTestResidence(t, db, user.ID, "Test House")
authGroup := e.Group("/api/tasks")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.POST("/", handler.CreateTask)
t.Run("description_exactly_10000_chars_accepted", func(t *testing.T) {
desc10000 := ""
for i := 0; i < 10000; i++ {
desc10000 += "B"
}
req := requests.CreateTaskRequest{
ResidenceID: residence.ID,
Title: "Long Desc Task",
Description: desc10000,
}
w := testutil.MakeRequest(e, "POST", "/api/tasks/", req, "test-token")
testutil.AssertStatusCode(t, w, http.StatusCreated)
})
}
func TestTaskHandler_CreateTask_CustomIntervalDaysValidation(t *testing.T) {
handler, e, db := setupTaskHandler(t)
testutil.SeedLookupData(t, db)
user := testutil.CreateTestUser(t, db, "owner", "owner@test.com", "password")
residence := testutil.CreateTestResidence(t, db, user.ID, "Test House")
authGroup := e.Group("/api/tasks")
authGroup.Use(testutil.MockAuthMiddleware(user))
authGroup.POST("/", handler.CreateTask)
t.Run("custom_interval_days_0_rejected", func(t *testing.T) {
// Validation tag: min=1, so 0 should be rejected
interval := 0
req := map[string]interface{}{
"residence_id": residence.ID,
"title": "Custom Interval Zero",
"custom_interval_days": interval,
}
w := testutil.MakeRequest(e, "POST", "/api/tasks/", req, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
t.Run("custom_interval_days_negative1_rejected", func(t *testing.T) {
interval := -1
req := map[string]interface{}{
"residence_id": residence.ID,
"title": "Custom Interval Negative",
"custom_interval_days": interval,
}
w := testutil.MakeRequest(e, "POST", "/api/tasks/", req, "test-token")
testutil.AssertStatusCode(t, w, http.StatusBadRequest)
})
t.Run("custom_interval_days_1_accepted", func(t *testing.T) {
interval := 1
req := requests.CreateTaskRequest{
ResidenceID: residence.ID,
Title: "Custom Interval One",
CustomIntervalDays: &interval,
}
w := testutil.MakeRequest(e, "POST", "/api/tasks/", req, "test-token")
testutil.AssertStatusCode(t, w, http.StatusCreated)
})
}

internal/i18n/i18n_test.go (new file)

@@ -0,0 +1,211 @@
package i18n
import (
"net/http"
"net/http/httptest"
"testing"
"github.com/labstack/echo/v4"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestInit(t *testing.T) {
err := Init()
require.NoError(t, err)
assert.NotNil(t, Bundle)
}
func TestTSimple_EnglishKnownKey(t *testing.T) {
require.NoError(t, Init())
localizer := NewLocalizer("en")
msg := TSimple(localizer, "error.task_not_found")
assert.Equal(t, "Task not found", msg)
}
func TestTSimple_SpanishKnownKey(t *testing.T) {
require.NoError(t, Init())
localizer := NewLocalizer("es")
msg := TSimple(localizer, "error.invalid_credentials")
assert.Equal(t, "Credenciales no validas", msg)
}
func TestT_WithTemplateData(t *testing.T) {
require.NoError(t, Init())
localizer := NewLocalizer("en")
msg := T(localizer, "message.tasks_report_sent", map[string]interface{}{
"Email": "test@example.com",
})
assert.Contains(t, msg, "test@example.com")
}
func TestTSimple_UnknownKeyReturnsKey(t *testing.T) {
require.NoError(t, Init())
localizer := NewLocalizer("en")
key := "error.nonexistent_key_that_does_not_exist"
msg := TSimple(localizer, key)
assert.Equal(t, key, msg)
}
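The unknown-key behavior asserted above (return the key itself rather than fail) can be sketched with a plain map lookup. This is illustrative only; the real `TSimple` delegates to the i18n bundle's localizer:

```go
package main

import "fmt"

// tSimple returns the translation for key, or the key itself when no
// translation exists, mirroring the fallback asserted in the test above.
// Hypothetical sketch, not the package's actual implementation.
func tSimple(translations map[string]string, key string) string {
	if msg, ok := translations[key]; ok {
		return msg
	}
	return key
}

func main() {
	msgs := map[string]string{"error.task_not_found": "Task not found"}
	fmt.Println(tSimple(msgs, "error.task_not_found")) // Task not found
	fmt.Println(tSimple(msgs, "error.nonexistent_key")) // error.nonexistent_key
}
```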
func TestTSimple_FallbackToEnglish(t *testing.T) {
require.NoError(t, Init())
// Use a language that may not have all translations — fallback to English
localizer := NewLocalizer("xx", "en")
msg := TSimple(localizer, "error.task_not_found")
assert.Equal(t, "Task not found", msg)
}
func TestT_NilLocalizer_UsesDefault(t *testing.T) {
require.NoError(t, Init())
msg := T(nil, "error.task_not_found", nil)
assert.Equal(t, "Task not found", msg)
}
func TestNewLocalizer(t *testing.T) {
require.NoError(t, Init())
localizer := NewLocalizer("en")
assert.NotNil(t, localizer)
}
func TestParseAcceptLanguage(t *testing.T) {
tests := []struct {
name string
header string
expected []string
}{
{"empty returns default", "", []string{"en"}},
{"english", "en-US,en;q=0.9", []string{"en", "en"}},
{"spanish first", "es,en;q=0.5", []string{"es", "en"}},
{"unsupported returns default", "xx-YY", []string{"en"}},
{"french", "fr-FR,fr;q=0.9,en;q=0.5", []string{"fr", "fr", "en"}},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
result := parseAcceptLanguage(tc.header)
assert.Equal(t, tc.expected, result)
})
}
}
func TestMatchLocale(t *testing.T) {
tests := []struct {
name string
langs []string
expected string
}{
{"finds supported", []string{"es", "en"}, "es"},
{"first match wins", []string{"fr", "de"}, "fr"},
{"unsupported returns default", []string{"xx"}, "en"},
{"empty returns default", []string{}, "en"},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
result := matchLocale(tc.langs)
assert.Equal(t, tc.expected, result)
})
}
}
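A minimal implementation consistent with both test tables above can be sketched as follows. This assumes the supported set {en, es, fr} from the `SupportedLanguages` test; the package's real functions may differ in detail:

```go
package main

import (
	"fmt"
	"strings"
)

var supported = map[string]bool{"en": true, "es": true, "fr": true}

// parseAcceptLanguage keeps the base tag of each supported entry, in header
// order, and falls back to "en" when the header is empty or nothing matches.
func parseAcceptLanguage(header string) []string {
	var langs []string
	for _, part := range strings.Split(header, ",") {
		tag := strings.TrimSpace(strings.SplitN(part, ";", 2)[0]) // drop ;q=...
		base := strings.SplitN(tag, "-", 2)[0]                    // en-US -> en
		if supported[base] {
			langs = append(langs, base)
		}
	}
	if len(langs) == 0 {
		return []string{"en"}
	}
	return langs
}

// matchLocale returns the first supported language, defaulting to "en".
func matchLocale(langs []string) string {
	for _, l := range langs {
		if supported[l] {
			return l
		}
	}
	return "en"
}

func main() {
	fmt.Println(parseAcceptLanguage("en-US,en;q=0.9")) // [en en]
	fmt.Println(parseAcceptLanguage("xx-YY"))          // [en]
	fmt.Println(matchLocale([]string{"fr", "de"}))     // fr
}
```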
func TestMiddleware_SetsLocalizerAndLocale(t *testing.T) {
require.NoError(t, Init())
e := echo.New()
req := httptest.NewRequest(http.MethodGet, "/", nil)
req.Header.Set("Accept-Language", "es,en;q=0.5")
rec := httptest.NewRecorder()
c := e.NewContext(req, rec)
handler := Middleware()(func(c echo.Context) error {
// Verify localizer is set
localizer := GetLocalizer(c)
assert.NotNil(t, localizer)
// Verify locale is set
locale := GetLocale(c)
assert.Equal(t, "es", locale)
return nil
})
err := handler(c)
assert.NoError(t, err)
}
func TestGetLocalizer_NoContextValue_ReturnsDefault(t *testing.T) {
require.NoError(t, Init())
e := echo.New()
req := httptest.NewRequest(http.MethodGet, "/", nil)
rec := httptest.NewRecorder()
c := e.NewContext(req, rec)
localizer := GetLocalizer(c)
assert.NotNil(t, localizer)
}
func TestGetLocale_NoContextValue_ReturnsDefault(t *testing.T) {
e := echo.New()
req := httptest.NewRequest(http.MethodGet, "/", nil)
rec := httptest.NewRecorder()
c := e.NewContext(req, rec)
locale := GetLocale(c)
assert.Equal(t, "en", locale)
}
func TestLocalizedMessage(t *testing.T) {
require.NoError(t, Init())
e := echo.New()
req := httptest.NewRequest(http.MethodGet, "/", nil)
req.Header.Set("Accept-Language", "en")
rec := httptest.NewRecorder()
c := e.NewContext(req, rec)
// Set up localizer through middleware
handler := Middleware()(func(c echo.Context) error {
msg := LocalizedMessage(c, "error.task_not_found")
assert.Equal(t, "Task not found", msg)
return nil
})
err := handler(c)
assert.NoError(t, err)
}
func TestLocalizedMessageWithData(t *testing.T) {
require.NoError(t, Init())
e := echo.New()
req := httptest.NewRequest(http.MethodGet, "/", nil)
req.Header.Set("Accept-Language", "en")
rec := httptest.NewRecorder()
c := e.NewContext(req, rec)
handler := Middleware()(func(c echo.Context) error {
msg := LocalizedMessageWithData(c, "message.tasks_report_sent", map[string]interface{}{
"Email": "user@example.com",
})
assert.Contains(t, msg, "user@example.com")
return nil
})
err := handler(c)
assert.NoError(t, err)
}
func TestSupportedLanguages(t *testing.T) {
assert.Contains(t, SupportedLanguages, "en")
assert.Contains(t, SupportedLanguages, "es")
assert.Contains(t, SupportedLanguages, "fr")
assert.Equal(t, "en", DefaultLanguage)
}


@@ -6,8 +6,11 @@
"error.email_taken": "Email already registered",
"error.email_already_taken": "Email already taken",
"error.registration_failed": "Registration failed",
"error.password_complexity": "Password must be at least 8 characters with at least one uppercase letter, one lowercase letter, and one digit",
"error.not_authenticated": "Not authenticated",
"error.invalid_token": "Invalid token",
"error.token_expired": "Your session has expired. Please log in again.",
"error.token_refresh_not_needed": "Token is still valid.",
"error.failed_to_get_user": "Failed to get user",
"error.failed_to_update_profile": "Failed to update profile",
"error.invalid_verification_code": "Invalid verification code",


@@ -17,10 +17,10 @@ func TestIntegration_ContractorSharingFlow(t *testing.T) {
// ========== Setup Users ==========
// Create user A
-userAToken := app.registerAndLogin(t, "userA", "userA@test.com", "password123")
+userAToken := app.registerAndLogin(t, "userA", "userA@test.com", "Password123")
// Create user B
-userBToken := app.registerAndLogin(t, "userB", "userB@test.com", "password123")
+userBToken := app.registerAndLogin(t, "userB", "userB@test.com", "Password123")
// ========== User A creates residence C ==========
residenceBody := map[string]interface{}{
@@ -180,8 +180,8 @@ func TestIntegration_ContractorAccessWithoutResidenceShare(t *testing.T) {
app := setupContractorTest(t)
// Create two users
-userAToken := app.registerAndLogin(t, "userA", "userA@test.com", "password123")
-userBToken := app.registerAndLogin(t, "userB", "userB@test.com", "password123")
+userAToken := app.registerAndLogin(t, "userA", "userA@test.com", "Password123")
+userBToken := app.registerAndLogin(t, "userB", "userB@test.com", "Password123")
// User A creates a residence
residenceBody := map[string]interface{}{
@@ -228,9 +228,9 @@ func TestIntegration_ContractorUpdateAndDeleteAccess(t *testing.T) {
app := setupContractorTest(t)
// Create users
-userAToken := app.registerAndLogin(t, "userA", "userA@test.com", "password123")
-userBToken := app.registerAndLogin(t, "userB", "userB@test.com", "password123")
-userCToken := app.registerAndLogin(t, "userC", "userC@test.com", "password123")
+userAToken := app.registerAndLogin(t, "userA", "userA@test.com", "Password123")
+userBToken := app.registerAndLogin(t, "userB", "userB@test.com", "Password123")
+userCToken := app.registerAndLogin(t, "userC", "userC@test.com", "Password123")
// User A creates residence and shares with User B (not User C)
residenceBody := map[string]interface{}{"name": "Shared Residence"}


@@ -379,7 +379,7 @@ func TestIntegration_DuplicateRegistration(t *testing.T) {
registerBody := map[string]string{
"username": "testuser",
"email": "test@example.com",
-"password": "password123",
+"password": "Password123",
}
w := app.makeAuthenticatedRequest(t, "POST", "/api/auth/register", registerBody, "")
assert.Equal(t, http.StatusCreated, w.Code)
@@ -388,7 +388,7 @@ func TestIntegration_DuplicateRegistration(t *testing.T) {
registerBody2 := map[string]string{
"username": "testuser",
"email": "different@example.com",
-"password": "password123",
+"password": "Password123",
}
w = app.makeAuthenticatedRequest(t, "POST", "/api/auth/register", registerBody2, "")
assert.Equal(t, http.StatusConflict, w.Code)
@@ -397,7 +397,7 @@ func TestIntegration_DuplicateRegistration(t *testing.T) {
registerBody3 := map[string]string{
"username": "differentuser",
"email": "test@example.com",
-"password": "password123",
+"password": "Password123",
}
w = app.makeAuthenticatedRequest(t, "POST", "/api/auth/register", registerBody3, "")
assert.Equal(t, http.StatusConflict, w.Code)
@@ -407,7 +407,7 @@ func TestIntegration_DuplicateRegistration(t *testing.T) {
func TestIntegration_ResidenceFlow(t *testing.T) {
app := setupIntegrationTest(t)
-token := app.registerAndLogin(t, "owner", "owner@test.com", "password123")
+token := app.registerAndLogin(t, "owner", "owner@test.com", "Password123")
// 1. Create a residence
createBody := map[string]interface{}{
@@ -475,8 +475,8 @@ func TestIntegration_ResidenceSharingFlow(t *testing.T) {
app := setupIntegrationTest(t)
// Create owner and another user
-ownerToken := app.registerAndLogin(t, "owner", "owner@test.com", "password123")
-userToken := app.registerAndLogin(t, "shareduser", "shared@test.com", "password123")
+ownerToken := app.registerAndLogin(t, "owner", "owner@test.com", "Password123")
+userToken := app.registerAndLogin(t, "shareduser", "shared@test.com", "Password123")
// Create residence as owner
createBody := map[string]interface{}{
@@ -531,7 +531,7 @@ func TestIntegration_ResidenceSharingFlow(t *testing.T) {
func TestIntegration_TaskFlow(t *testing.T) {
app := setupIntegrationTest(t)
-token := app.registerAndLogin(t, "owner", "owner@test.com", "password123")
+token := app.registerAndLogin(t, "owner", "owner@test.com", "Password123")
// Create residence first
residenceBody := map[string]interface{}{"name": "Task House"}
@@ -633,7 +633,7 @@ func TestIntegration_TaskFlow(t *testing.T) {
func TestIntegration_TasksByResidenceKanban(t *testing.T) {
app := setupIntegrationTest(t)
-token := app.registerAndLogin(t, "owner", "owner@test.com", "password123")
+token := app.registerAndLogin(t, "owner", "owner@test.com", "Password123")
// Use explicit timezone to test full timezone-aware path
testTimezone := "America/Los_Angeles"
@@ -682,7 +682,7 @@ func TestIntegration_TasksByResidenceKanban(t *testing.T) {
func TestIntegration_LookupEndpoints(t *testing.T) {
app := setupIntegrationTest(t)
-token := app.registerAndLogin(t, "user", "user@test.com", "password123")
+token := app.registerAndLogin(t, "user", "user@test.com", "Password123")
tests := []struct {
name string
@@ -721,8 +721,8 @@ func TestIntegration_CrossUserAccessDenied(t *testing.T) {
app := setupIntegrationTest(t)
// Create two users with their own residences
-user1Token := app.registerAndLogin(t, "user1", "user1@test.com", "password123")
-user2Token := app.registerAndLogin(t, "user2", "user2@test.com", "password123")
+user1Token := app.registerAndLogin(t, "user1", "user1@test.com", "Password123")
+user2Token := app.registerAndLogin(t, "user2", "user2@test.com", "Password123")
// User1 creates a residence
residenceBody := map[string]interface{}{"name": "User1's House"}
@@ -777,7 +777,7 @@ func TestIntegration_CrossUserAccessDenied(t *testing.T) {
func TestIntegration_ResponseStructure(t *testing.T) {
app := setupIntegrationTest(t)
-token := app.registerAndLogin(t, "user", "user@test.com", "password123")
+token := app.registerAndLogin(t, "user", "user@test.com", "Password123")
// Create residence
residenceBody := map[string]interface{}{
@@ -1704,7 +1704,7 @@ func setupContractorTest(t *testing.T) *TestApp {
// - Verify task moves between kanban columns appropriately
func TestIntegration_RecurringTaskLifecycle(t *testing.T) {
app := setupIntegrationTest(t)
-token := app.registerAndLogin(t, "recurring_user", "recurring@test.com", "password123")
+token := app.registerAndLogin(t, "recurring_user", "recurring@test.com", "Password123")
// Create residence
residenceBody := map[string]interface{}{"name": "Recurring Task House"}
@@ -1904,9 +1904,9 @@ func TestIntegration_MultiUserSharing(t *testing.T) {
t.Log("Phase 1: Create 3 users")
-tokenA := app.registerAndLogin(t, "user_a", "usera@test.com", "password123")
-tokenB := app.registerAndLogin(t, "user_b", "userb@test.com", "password123")
-tokenC := app.registerAndLogin(t, "user_c", "userc@test.com", "password123")
+tokenA := app.registerAndLogin(t, "user_a", "usera@test.com", "Password123")
+tokenB := app.registerAndLogin(t, "user_b", "userb@test.com", "Password123")
+tokenC := app.registerAndLogin(t, "user_c", "userc@test.com", "Password123")
t.Log("✓ Created users A, B, and C")
@@ -2098,7 +2098,7 @@ func TestIntegration_MultiUserSharing(t *testing.T) {
// - Verify kanban column changes with each transition
func TestIntegration_TaskStateTransitions(t *testing.T) {
app := setupIntegrationTest(t)
-token := app.registerAndLogin(t, "state_user", "state@test.com", "password123")
+token := app.registerAndLogin(t, "state_user", "state@test.com", "Password123")
// Create residence
residenceBody := map[string]interface{}{"name": "State Transition House"}
@@ -2274,7 +2274,7 @@ func TestIntegration_TaskStateTransitions(t *testing.T) {
// we're testing the full timezone-aware path, not just UTC defaults.
func TestIntegration_DateBoundaryEdgeCases(t *testing.T) {
app := setupIntegrationTest(t)
-token := app.registerAndLogin(t, "boundary_user", "boundary@test.com", "password123")
+token := app.registerAndLogin(t, "boundary_user", "boundary@test.com", "Password123")
// Create residence
residenceBody := map[string]interface{}{"name": "Boundary Test House"}
@@ -2435,7 +2435,7 @@ func TestIntegration_DateBoundaryEdgeCases(t *testing.T) {
// - One where it's already "tomorrow" → task is overdue (due date was "yesterday")
func TestIntegration_TimezoneDivergence(t *testing.T) {
app := setupIntegrationTest(t)
-token := app.registerAndLogin(t, "tz_user", "tz@test.com", "password123")
+token := app.registerAndLogin(t, "tz_user", "tz@test.com", "Password123")
// Create residence
residenceBody := map[string]interface{}{"name": "Timezone Test House"}
@@ -2584,7 +2584,7 @@ func findTaskColumn(kanbanResp map[string]interface{}, taskID uint) string {
// - Verify cascading effects
func TestIntegration_CascadeOperations(t *testing.T) {
app := setupIntegrationTest(t)
-token := app.registerAndLogin(t, "cascade_user", "cascade@test.com", "password123")
+token := app.registerAndLogin(t, "cascade_user", "cascade@test.com", "Password123")
t.Log("Phase 1: Create residence")
@@ -2721,8 +2721,8 @@ func TestIntegration_MultiUserOperations(t *testing.T) {
t.Log("Phase 1: Setup users and shared residence")
-tokenA := app.registerAndLogin(t, "multiuser_a", "multiusera@test.com", "password123")
-tokenB := app.registerAndLogin(t, "multiuser_b", "multiuserb@test.com", "password123")
+tokenA := app.registerAndLogin(t, "multiuser_a", "multiusera@test.com", "Password123")
+tokenB := app.registerAndLogin(t, "multiuser_b", "multiuserb@test.com", "Password123")
// User A creates residence
residenceBody := map[string]interface{}{"name": "Multi-User Test House"}


@@ -265,8 +265,8 @@ func TestE2E_SQLInjection_AdminSort_Blocked(t *testing.T) {
adminUserHandler := adminhandlers.NewAdminUserHandler(db)
// Create a couple of test users to have data to sort
-testutil.CreateTestUser(t, db, "alice", "alice@test.com", "password123")
-testutil.CreateTestUser(t, db, "bob", "bob@test.com", "password123")
+testutil.CreateTestUser(t, db, "alice", "alice@test.com", "Password123")
+testutil.CreateTestUser(t, db, "bob", "bob@test.com", "Password123")
// Set up a minimal Echo instance with the admin handler
e := echo.New()
@@ -322,7 +322,7 @@ func TestE2E_SQLInjection_AdminSort_Blocked(t *testing.T) {
// garbage receipt data does NOT upgrade the user to Pro tier.
func TestE2E_IAP_InvalidReceipt_NoPro(t *testing.T) {
app := setupSecurityTest(t)
-token, userID := app.registerAndLoginSec(t, "iapuser", "iap@test.com", "password123")
+token, userID := app.registerAndLoginSec(t, "iapuser", "iap@test.com", "Password123")
// Create initial subscription (free tier)
sub := &models.UserSubscription{UserID: userID, Tier: models.TierFree}
@@ -352,7 +352,7 @@ func TestE2E_IAP_InvalidReceipt_NoPro(t *testing.T) {
// updates both the completion record and the task's NextDueDate together (P1-5/P1-6).
func TestE2E_CompletionTransaction_Atomic(t *testing.T) {
app := setupSecurityTest(t)
-token, _ := app.registerAndLoginSec(t, "atomicuser", "atomic@test.com", "password123")
+token, _ := app.registerAndLoginSec(t, "atomicuser", "atomic@test.com", "Password123")
// Create a residence
residenceBody := map[string]interface{}{"name": "Atomic Test House"}
@@ -423,7 +423,7 @@ func TestE2E_CompletionTransaction_Atomic(t *testing.T) {
// on a recurring task recalculates NextDueDate back to the correct value (P1-7).
func TestE2E_DeleteCompletion_RecalculatesNextDueDate(t *testing.T) {
app := setupSecurityTest(t)
-token, _ := app.registerAndLoginSec(t, "recuruser", "recur@test.com", "password123")
+token, _ := app.registerAndLoginSec(t, "recuruser", "recur@test.com", "Password123")
// Create a residence
residenceBody := map[string]interface{}{"name": "Recurring Test House"}
@@ -510,7 +510,7 @@ func TestE2E_DeleteCompletion_RecalculatesNextDueDate(t *testing.T) {
// configured property limit.
func TestE2E_TierLimits_Enforced(t *testing.T) {
app := setupSecurityTest(t)
-token, userID := app.registerAndLoginSec(t, "tieruser", "tier@test.com", "password123")
+token, userID := app.registerAndLoginSec(t, "tieruser", "tier@test.com", "Password123")
// Enable global limitations
app.DB.Where("1=1").Delete(&models.SubscriptionSettings{})
@@ -602,7 +602,7 @@ func TestE2E_AuthAssertion_NoPanics(t *testing.T) {
// caps the limit parameter to 200 even if the client requests more.
func TestE2E_NotificationLimit_Capped(t *testing.T) {
app := setupSecurityTest(t)
-token, userID := app.registerAndLoginSec(t, "notifuser", "notif@test.com", "password123")
+token, userID := app.registerAndLoginSec(t, "notifuser", "notif@test.com", "Password123")
// Create 210 notifications directly in the database
for i := 0; i < 210; i++ {


@@ -164,7 +164,7 @@ func TestIntegration_IsFreeBypassesLimitations(t *testing.T) {
app := setupSubscriptionTest(t)
// Register and login a user
-token, userID := app.registerAndLogin(t, "freeuser", "free@test.com", "password123")
+token, userID := app.registerAndLogin(t, "freeuser", "free@test.com", "Password123")
// Enable global limitations - first delete any existing, then create with enabled
app.DB.Where("1=1").Delete(&models.SubscriptionSettings{})
@@ -215,7 +215,7 @@ func TestIntegration_IsFreeBypassesCheckLimit(t *testing.T) {
app := setupSubscriptionTest(t)
// Register and login a user
-_, userID := app.registerAndLogin(t, "limituser", "limit@test.com", "password123")
+_, userID := app.registerAndLogin(t, "limituser", "limit@test.com", "Password123")
// Enable global limitations
settings := &models.SubscriptionSettings{EnableLimitations: true}
@@ -282,7 +282,7 @@ func TestIntegration_IsFreeIndependentOfTier(t *testing.T) {
app := setupSubscriptionTest(t)
// Register and login a user
-token, userID := app.registerAndLogin(t, "tieruser", "tier@test.com", "password123")
+token, userID := app.registerAndLogin(t, "tieruser", "tier@test.com", "Password123")
// Enable global limitations
settings := &models.SubscriptionSettings{EnableLimitations: true}
@@ -340,7 +340,7 @@ func TestIntegration_IsFreeWhenGlobalLimitationsDisabled(t *testing.T) {
app := setupSubscriptionTest(t)
// Register and login a user
-token, userID := app.registerAndLogin(t, "globaluser", "global@test.com", "password123")
+token, userID := app.registerAndLogin(t, "globaluser", "global@test.com", "Password123")
// Disable global limitations
settings := &models.SubscriptionSettings{EnableLimitations: false}


@@ -0,0 +1,163 @@
package middleware
import (
"net/http"
"net/http/httptest"
"testing"
"github.com/labstack/echo/v4"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/treytartt/honeydue-api/internal/config"
"github.com/treytartt/honeydue-api/internal/models"
)
func TestAdminAuth_NoHeader_Returns401(t *testing.T) {
cfg := &config.Config{
Security: config.SecurityConfig{SecretKey: "test-secret"},
}
mw := AdminAuthMiddleware(cfg, nil)
handler := mw(func(c echo.Context) error {
return c.String(http.StatusOK, "ok")
})
e := echo.New()
req := httptest.NewRequest(http.MethodGet, "/admin/test", nil)
rec := httptest.NewRecorder()
c := e.NewContext(req, rec)
err := handler(c)
require.NoError(t, err)
assert.Equal(t, http.StatusUnauthorized, rec.Code)
assert.Contains(t, rec.Body.String(), "Authorization required")
}
func TestAdminAuth_InvalidToken_Returns401(t *testing.T) {
cfg := &config.Config{
Security: config.SecurityConfig{SecretKey: "test-secret"},
}
mw := AdminAuthMiddleware(cfg, nil)
handler := mw(func(c echo.Context) error {
return c.String(http.StatusOK, "ok")
})
e := echo.New()
req := httptest.NewRequest(http.MethodGet, "/admin/test", nil)
req.Header.Set("Authorization", "Bearer invalid-jwt-token")
rec := httptest.NewRecorder()
c := e.NewContext(req, rec)
err := handler(c)
require.NoError(t, err)
assert.Equal(t, http.StatusUnauthorized, rec.Code)
assert.Contains(t, rec.Body.String(), "Invalid token")
}
func TestAdminAuth_TokenSchemeOnly_Returns401(t *testing.T) {
cfg := &config.Config{
Security: config.SecurityConfig{SecretKey: "test-secret"},
}
mw := AdminAuthMiddleware(cfg, nil)
handler := mw(func(c echo.Context) error {
return c.String(http.StatusOK, "ok")
})
e := echo.New()
req := httptest.NewRequest(http.MethodGet, "/admin/test", nil)
// "Token" scheme is not supported for admin auth, only "Bearer"
req.Header.Set("Authorization", "Token some-token")
rec := httptest.NewRecorder()
c := e.NewContext(req, rec)
err := handler(c)
require.NoError(t, err)
assert.Equal(t, http.StatusUnauthorized, rec.Code)
}
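The Bearer-only behavior exercised above can be sketched as a small header check. This is a hypothetical helper for illustration; the middleware's actual parsing may differ:

```go
package main

import (
	"fmt"
	"strings"
)

// bearerToken extracts the token from an Authorization header, accepting
// only the "Bearer" scheme, as the tests above require. Empty headers and
// other schemes (e.g. "Token some-token") are rejected.
func bearerToken(header string) (string, bool) {
	const prefix = "Bearer "
	if !strings.HasPrefix(header, prefix) {
		return "", false
	}
	token := strings.TrimSpace(header[len(prefix):])
	return token, token != ""
}

func main() {
	fmt.Println(bearerToken("Bearer abc123")) // abc123 true
	fmt.Println(bearerToken("Token some-token"))
	fmt.Println(bearerToken(""))
}
```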
func TestRequireSuperAdmin_NoAdmin_Returns401(t *testing.T) {
mw := RequireSuperAdmin()
handler := mw(func(c echo.Context) error {
return c.String(http.StatusOK, "ok")
})
e := echo.New()
req := httptest.NewRequest(http.MethodGet, "/admin/test", nil)
rec := httptest.NewRecorder()
c := e.NewContext(req, rec)
// No admin in context
err := handler(c)
require.NoError(t, err)
assert.Equal(t, http.StatusUnauthorized, rec.Code)
}
func TestRequireSuperAdmin_WrongType_Returns401(t *testing.T) {
mw := RequireSuperAdmin()
handler := mw(func(c echo.Context) error {
return c.String(http.StatusOK, "ok")
})
e := echo.New()
req := httptest.NewRequest(http.MethodGet, "/admin/test", nil)
rec := httptest.NewRecorder()
c := e.NewContext(req, rec)
// Wrong type in context
c.Set(AdminUserKey, "not-an-admin")
err := handler(c)
require.NoError(t, err)
assert.Equal(t, http.StatusUnauthorized, rec.Code)
}
func TestRequireSuperAdmin_NonSuperAdmin_Returns403(t *testing.T) {
mw := RequireSuperAdmin()
handler := mw(func(c echo.Context) error {
return c.String(http.StatusOK, "ok")
})
e := echo.New()
req := httptest.NewRequest(http.MethodGet, "/admin/test", nil)
rec := httptest.NewRecorder()
c := e.NewContext(req, rec)
// Regular admin (not super admin)
admin := &models.AdminUser{
Email: "admin@test.com",
IsActive: true,
Role: models.AdminRoleAdmin,
}
c.Set(AdminUserKey, admin)
err := handler(c)
require.NoError(t, err)
assert.Equal(t, http.StatusForbidden, rec.Code)
assert.Contains(t, rec.Body.String(), "Super admin privileges required")
}
func TestRequireSuperAdmin_SuperAdmin_Passes(t *testing.T) {
mw := RequireSuperAdmin()
handler := mw(func(c echo.Context) error {
return c.String(http.StatusOK, "ok")
})
e := echo.New()
req := httptest.NewRequest(http.MethodGet, "/admin/test", nil)
rec := httptest.NewRecorder()
c := e.NewContext(req, rec)
admin := &models.AdminUser{
Email: "superadmin@test.com",
IsActive: true,
Role: models.AdminRoleSuperAdmin,
}
c.Set(AdminUserKey, admin)
err := handler(c)
require.NoError(t, err)
assert.Equal(t, http.StatusOK, rec.Code)
}


@@ -12,6 +12,7 @@ import (
 	"gorm.io/gorm"

 	"github.com/treytartt/honeydue-api/internal/apperrors"
+	"github.com/treytartt/honeydue-api/internal/config"
 	"github.com/treytartt/honeydue-api/internal/models"
 	"github.com/treytartt/honeydue-api/internal/services"
 )
@@ -28,24 +29,56 @@ const (
 	// UserCacheTTL is how long full user records are cached in memory to
 	// avoid hitting the database on every authenticated request.
 	UserCacheTTL = 30 * time.Second
+
+	// DefaultTokenExpiryDays is the default number of days before a token expires.
+	DefaultTokenExpiryDays = 90
 )

 // AuthMiddleware provides token authentication middleware
 type AuthMiddleware struct {
-	db        *gorm.DB
-	cache     *services.CacheService
-	userCache *UserCache
+	db              *gorm.DB
+	cache           *services.CacheService
+	userCache       *UserCache
+	tokenExpiryDays int
 }

 // NewAuthMiddleware creates a new auth middleware instance
 func NewAuthMiddleware(db *gorm.DB, cache *services.CacheService) *AuthMiddleware {
 	return &AuthMiddleware{
-		db:        db,
-		cache:     cache,
-		userCache: NewUserCache(UserCacheTTL),
+		db:              db,
+		cache:           cache,
+		userCache:       NewUserCache(UserCacheTTL),
+		tokenExpiryDays: DefaultTokenExpiryDays,
 	}
 }
+
+// NewAuthMiddlewareWithConfig creates a new auth middleware instance with configuration
+func NewAuthMiddlewareWithConfig(db *gorm.DB, cache *services.CacheService, cfg *config.Config) *AuthMiddleware {
+	expiryDays := DefaultTokenExpiryDays
+	if cfg != nil && cfg.Security.TokenExpiryDays > 0 {
+		expiryDays = cfg.Security.TokenExpiryDays
+	}
+	return &AuthMiddleware{
+		db:              db,
+		cache:           cache,
+		userCache:       NewUserCache(UserCacheTTL),
+		tokenExpiryDays: expiryDays,
+	}
+}
+
+// TokenExpiryDuration returns the token expiry duration.
+func (m *AuthMiddleware) TokenExpiryDuration() time.Duration {
+	return time.Duration(m.tokenExpiryDays) * 24 * time.Hour
+}
+
+// isTokenExpired checks if a token's created timestamp indicates expiry.
+func (m *AuthMiddleware) isTokenExpired(created time.Time) bool {
+	if created.IsZero() {
+		return false // Legacy tokens without created time are not expired
+	}
+	return time.Since(created) > m.TokenExpiryDuration()
+}

 // TokenAuth returns an Echo middleware that validates token authentication
 func (m *AuthMiddleware) TokenAuth() echo.MiddlewareFunc {
 	return func(next echo.HandlerFunc) echo.HandlerFunc {
@@ -56,7 +89,7 @@ func (m *AuthMiddleware) TokenAuth() echo.MiddlewareFunc {
 			return apperrors.Unauthorized("error.not_authenticated")
 		}

-		// Try to get user from cache first
+		// Try to get user from cache first (includes expiry check)
 		user, err := m.getUserFromCache(c.Request().Context(), token)
 		if err == nil && user != nil {
 			// Cache hit - set user in context and continue
@@ -65,16 +98,27 @@ func (m *AuthMiddleware) TokenAuth() echo.MiddlewareFunc {
 			return next(c)
 		}

+		// Check if the cache indicated token expiry
+		if err != nil && err.Error() == "token expired" {
+			return apperrors.Unauthorized("error.token_expired")
+		}
+
 		// Cache miss - look up token in database
-		user, err = m.getUserFromDatabase(token)
+		user, authToken, err := m.getUserFromDatabaseWithToken(token)
 		if err != nil {
 			log.Debug().Err(err).Str("token", truncateToken(token)).Msg("Token authentication failed")
 			return apperrors.Unauthorized("error.invalid_token")
 		}

-		// Cache the user ID for future requests
-		if cacheErr := m.cacheUserID(c.Request().Context(), token, user.ID); cacheErr != nil {
-			log.Warn().Err(cacheErr).Msg("Failed to cache user ID")
+		// Check token expiry
+		if m.isTokenExpired(authToken.Created) {
+			log.Debug().Str("token", truncateToken(token)).Time("created", authToken.Created).Msg("Token expired")
+			return apperrors.Unauthorized("error.token_expired")
+		}
+
+		// Cache the user ID and token creation time for future requests
+		if cacheErr := m.cacheTokenInfo(c.Request().Context(), token, user.ID, authToken.Created); cacheErr != nil {
+			log.Warn().Err(cacheErr).Msg("Failed to cache token info")
 		}

 		// Set user in context
@@ -104,9 +148,9 @@ func (m *AuthMiddleware) OptionalTokenAuth() echo.MiddlewareFunc {
 		}

 		// Try database
-		user, err = m.getUserFromDatabase(token)
-		if err == nil {
-			m.cacheUserID(c.Request().Context(), token, user.ID)
+		user, authToken, err := m.getUserFromDatabaseWithToken(token)
+		if err == nil && !m.isTokenExpired(authToken.Created) {
+			m.cacheTokenInfo(c.Request().Context(), token, user.ID, authToken.Created)
 			c.Set(AuthUserKey, user)
 			c.Set(AuthTokenKey, token)
 		}
@@ -145,12 +189,13 @@ func extractToken(c echo.Context) (string, error) {
 // getUserFromCache tries to get user from Redis cache, then from the
 // in-memory user cache, before falling back to the database.
+// Returns a "token expired" error if the cached creation time indicates expiry.
 func (m *AuthMiddleware) getUserFromCache(ctx context.Context, token string) (*models.User, error) {
 	if m.cache == nil {
 		return nil, fmt.Errorf("cache not available")
 	}

-	userID, err := m.cache.GetCachedAuthToken(ctx, token)
+	userID, createdUnix, err := m.cache.GetCachedAuthTokenWithCreated(ctx, token)
 	if err != nil {
 		if err == redis.Nil {
 			return nil, fmt.Errorf("token not in cache")
@@ -158,6 +203,15 @@ func (m *AuthMiddleware) getUserFromCache(ctx context.Context, token string) (*m
 		return nil, err
 	}

+	// Check token expiry from cached creation time
+	if createdUnix > 0 {
+		created := time.Unix(createdUnix, 0)
+		if m.isTokenExpired(created) {
+			m.cache.InvalidateAuthToken(ctx, token)
+			return nil, fmt.Errorf("token expired")
+		}
+	}
+
 	// Try in-memory user cache first to avoid a DB round-trip
 	if cached := m.userCache.Get(userID); cached != nil {
 		if !cached.IsActive {
@@ -187,22 +241,38 @@ func (m *AuthMiddleware) getUserFromCache(ctx context.Context, token string) (*m
 	return &user, nil
 }

-// getUserFromDatabase looks up the token in the database and caches the
-// resulting user record in memory.
-func (m *AuthMiddleware) getUserFromDatabase(token string) (*models.User, error) {
+// getUserFromDatabaseWithToken looks up the token in the database and returns
+// both the user and the auth token record (for expiry checking).
+func (m *AuthMiddleware) getUserFromDatabaseWithToken(token string) (*models.User, *models.AuthToken, error) {
 	var authToken models.AuthToken
 	if err := m.db.Preload("User").Where("key = ?", token).First(&authToken).Error; err != nil {
-		return nil, fmt.Errorf("token not found")
+		return nil, nil, fmt.Errorf("token not found")
 	}

 	// Check if user is active
 	if !authToken.User.IsActive {
-		return nil, fmt.Errorf("user is inactive")
+		return nil, nil, fmt.Errorf("user is inactive")
 	}

 	// Store in in-memory cache for subsequent requests
 	m.userCache.Set(&authToken.User)

-	return &authToken.User, nil
+	return &authToken.User, &authToken, nil
+}
+
+// getUserFromDatabase looks up the token in the database and caches the
+// resulting user record in memory.
+// Deprecated: Use getUserFromDatabaseWithToken for new code paths that need expiry checking.
+func (m *AuthMiddleware) getUserFromDatabase(token string) (*models.User, error) {
+	user, _, err := m.getUserFromDatabaseWithToken(token)
+	return user, err
+}
+
+// cacheTokenInfo caches the user ID and token creation time for a token
+func (m *AuthMiddleware) cacheTokenInfo(ctx context.Context, token string, userID uint, created time.Time) error {
+	if m.cache == nil {
+		return nil
+	}
+	return m.cache.CacheAuthTokenWithCreated(ctx, token, userID, created.Unix())
+}

 // cacheUserID caches the user ID for a token
// cacheUserID caches the user ID for a token
