honeyDueAPI/internal/monitoring/writer.go
Trey t 42a5533a56 Fix 113 hardening issues across entire Go backend
Security:
- Replace all binding: tags with validate: + c.Validate() in admin handlers
- Add rate limiting to auth endpoints (login, register, password reset)
- Add security headers (HSTS, XSS protection, nosniff, frame options)
- Wire Google Pub/Sub token verification into webhook handler
- Replace ParseUnverified with proper OIDC/JWKS key verification
- Verify inner Apple JWS signatures in webhook handler
- Add io.LimitReader (1MB) to all webhook body reads
- Add ownership verification to file deletion
- Move hardcoded admin credentials to env vars
- Add uniqueIndex to User.Email
- Hide ConfirmationCode from JSON serialization
- Mask confirmation codes in admin responses
- Use http.DetectContentType for upload validation
- Fix path traversal in storage service
- Replace os.Getenv with Viper in stripe service
- Sanitize Redis URLs before logging
- Separate DEBUG_FIXED_CODES from DEBUG flag
- Reject weak SECRET_KEY in production
- Add host check on /_next/* proxy routes
- Use explicit localhost CORS origins in debug mode
- Replace err.Error() with generic messages in all admin error responses

Critical fixes:
- Rewrite FCM to HTTP v1 API with OAuth 2.0 service account auth
- Fix user_customuser -> auth_user table names in raw SQL
- Fix dashboard verified query to use UserProfile model
- Add escapeLikeWildcards() to prevent SQL wildcard injection
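The `escapeLikeWildcards()` fix presumably escapes the LIKE metacharacters before user input reaches a pattern. A plausible sketch of such a helper (not the repo's exact implementation) using a single-pass `strings.Replacer`:

```go
package main

import (
	"fmt"
	"strings"
)

// escapeLikeWildcards escapes SQL LIKE metacharacters so user-supplied
// search terms match literally instead of acting as wildcards. The query
// must use backslash-escape semantics (LIKE ? ESCAPE '\', which is the
// Postgres default). NewReplacer works in a single pass, so the escaped
// backslash is not re-escaped by the later rules.
func escapeLikeWildcards(s string) string {
	r := strings.NewReplacer(
		`\`, `\\`, // escape the escape character itself
		`%`, `\%`, // any-sequence wildcard
		`_`, `\_`, // single-character wildcard
	)
	return r.Replace(s)
}

func main() {
	fmt.Println(escapeLikeWildcards("100%_done"))
}
```

Without this, a search term like `%` alone matches every row, which is the wildcard-injection issue the commit names.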

Bug fixes:
- Add bounds checks for days/expiring_soon query params (1-3650)
- Add receipt_data/transaction_id empty-check to RestoreSubscription
- Change Active bool -> *bool in device handler
- Check all unchecked GORM/FindByIDWithProfile errors
- Add validation for notification hour fields (0-23)
- Add max=10000 validation on task description updates
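The bounds check on the `days`/`expiring_soon` query params might look like the following; the function name and default-value handling are assumptions for illustration:

```go
package main

import (
	"fmt"
	"strconv"
)

// parseDaysParam parses a "days" query parameter, rejecting values
// outside 1..3650 (about ten years) and falling back to a default
// when the parameter is absent.
func parseDaysParam(raw string, def int) (int, error) {
	if raw == "" {
		return def, nil
	}
	n, err := strconv.Atoi(raw)
	if err != nil {
		return 0, fmt.Errorf("days must be an integer: %w", err)
	}
	if n < 1 || n > 3650 {
		return 0, fmt.Errorf("days must be between 1 and 3650, got %d", n)
	}
	return n, nil
}

func main() {
	d, _ := parseDaysParam("30", 7)
	fmt.Println(d)
	_, err := parseDaysParam("99999", 7)
	fmt.Println(err != nil)
}
```

The `Active bool -> *bool` change in the same list is the complementary fix: a pointer distinguishes "field omitted" from "explicitly false" in a JSON payload, just as the empty-string check here distinguishes "absent" from "zero".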

Transactions & data integrity:
- Wrap registration flow in transaction
- Wrap QuickComplete in transaction
- Move image creation inside completion transaction
- Wrap SetSpecialties in transaction
- Wrap GetOrCreateToken in transaction
- Wrap completion+image deletion in transaction

Performance:
- Batch completion summaries (2 queries vs 2N)
- Reuse single http.Client in IAP validation
- Cache dashboard counts (30s TTL)
- Batch COUNT queries in admin user list
- Add Limit(500) to document queries
- Add reminder_stage+due_date filters to reminder queries
- Parse AllowedTypes once at init
- In-memory user cache in auth middleware (30s TTL)
- Timezone change detection cache
- Optimize P95 with per-endpoint sorted buffers
- Replace crypto/md5 with hash/fnv for ETags
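Swapping `crypto/md5` for `hash/fnv` is safe for ETags because they only need cheap change detection, not collision resistance. A hedged sketch of what the FNV-based version could look like (`etagFor` is an illustrative name):

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// etagFor computes a weak ETag from a response body using FNV-1a,
// which is far cheaper than MD5 and sufficient for cache validation.
func etagFor(body []byte) string {
	h := fnv.New64a()
	h.Write(body) // hash.Hash.Write never returns an error
	return fmt.Sprintf(`W/"%016x"`, h.Sum64())
}

func main() {
	fmt.Println(etagFor([]byte(`{"status":"ok"}`)))
}
```

The `W/` prefix marks it a weak validator, which is the honest label for a non-cryptographic content hash.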

Code quality:
- Add sync.Once to all monitoring Stop()/Close() methods
- Replace 8 fmt.Printf with zerolog in auth service
- Log previously discarded errors
- Standardize delete response shapes
- Route hardcoded English through i18n
- Remove FileURL from DocumentResponse (keep MediaURL only)
- Thread user timezone through kanban board responses
- Initialize empty slices to prevent null JSON
- Extract shared field map for task Update/UpdateTx
- Delete unused SoftDeleteModel, min(), formatCron, legacy handlers

Worker & jobs:
- Wire Asynq email infrastructure into worker
- Register HandleReminderLogCleanup with daily 3AM cron
- Use per-user timezone in HandleSmartReminder
- Replace direct DB queries with repository calls
- Delete legacy reminder handlers (~200 lines)
- Delete unused task type constants

Dependencies:
- Replace archived jung-kurt/gofpdf with go-pdf/fpdf
- Replace unmaintained gomail.v2 with wneessen/go-mail
- Add TODO for Echo jwt v3 transitive dep removal

Test infrastructure:
- Fix MakeRequest/SeedLookupData error handling
- Replace os.Exit(0) with t.Skip() in scope/consistency tests
- Add 11 new FCM v1 tests

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-18 23:14:13 -05:00


package monitoring

import (
	"encoding/json"
	"sync"
	"sync/atomic"
	"time"

	"github.com/google/uuid"
)

const (
	// writerChannelSize is the buffer size for the async log write channel.
	// Entries beyond this limit are dropped to prevent unbounded memory growth.
	writerChannelSize = 256
)

// RedisLogWriter implements io.Writer to capture zerolog output to Redis.
// It uses a single background goroutine with a buffered channel instead of
// spawning a new goroutine per log line, preventing unbounded goroutine growth.
type RedisLogWriter struct {
	buffer    *LogBuffer
	process   string
	enabled   atomic.Bool
	ch        chan LogEntry
	done      chan struct{}
	closeOnce sync.Once
}

// NewRedisLogWriter creates a new writer that captures logs to Redis.
// It starts a single background goroutine that drains the buffered channel.
func NewRedisLogWriter(buffer *LogBuffer, process string) *RedisLogWriter {
	w := &RedisLogWriter{
		buffer:  buffer,
		process: process,
		ch:      make(chan LogEntry, writerChannelSize),
		done:    make(chan struct{}),
	}
	w.enabled.Store(true) // enabled by default

	// Single background goroutine drains the channel
	go w.drainLoop()
	return w
}

// drainLoop reads entries from the buffered channel and pushes them to Redis.
// It runs in a single goroutine for the lifetime of the writer.
func (w *RedisLogWriter) drainLoop() {
	defer close(w.done)
	for entry := range w.ch {
		_ = w.buffer.Push(entry) // Ignore errors to avoid blocking log output
	}
}

// Close shuts down the background goroutine. It should be called during
// graceful shutdown, after the logger has stopped writing, to ensure all
// buffered entries are flushed. It is safe to call multiple times, but
// Write must not be called after Close: sending on the closed channel
// would panic.
func (w *RedisLogWriter) Close() {
	w.closeOnce.Do(func() {
		close(w.ch)
		<-w.done // Wait for drain to finish
	})
}

// SetEnabled enables or disables log capture to Redis.
func (w *RedisLogWriter) SetEnabled(enabled bool) {
	w.enabled.Store(enabled)
}

// IsEnabled returns whether log capture is enabled.
func (w *RedisLogWriter) IsEnabled() bool {
	return w.enabled.Load()
}

// Write implements the io.Writer interface.
// It parses zerolog JSON output and sends it to the buffered channel for
// async Redis writes. If the channel is full, the entry is dropped to
// avoid blocking the caller (back-pressure shedding).
func (w *RedisLogWriter) Write(p []byte) (n int, err error) {
	// Skip if monitoring is disabled
	if !w.enabled.Load() {
		return len(p), nil
	}

	// Parse zerolog JSON output
	var raw map[string]any
	if err := json.Unmarshal(p, &raw); err != nil {
		// Not valid JSON, skip (could be console writer output)
		return len(p), nil
	}

	// Build log entry
	entry := LogEntry{
		ID:        uuid.NewString(),
		Timestamp: time.Now().UTC(),
		Process:   w.process,
		Fields:    make(map[string]any),
	}

	// Extract standard zerolog fields
	if lvl, ok := raw["level"].(string); ok {
		entry.Level = lvl
	}
	if msg, ok := raw["message"].(string); ok {
		entry.Message = msg
	}
	if caller, ok := raw["caller"].(string); ok {
		entry.Caller = caller
	}

	// Extract timestamp if present (zerolog may include it)
	if ts, ok := raw["time"].(string); ok {
		if parsed, err := time.Parse(time.RFC3339, ts); err == nil {
			entry.Timestamp = parsed
		}
	}

	// Copy additional fields (excluding standard ones)
	for k, v := range raw {
		switch k {
		case "level", "message", "caller", "time":
			// Skip standard fields
		default:
			entry.Fields[k] = v
		}
	}

	// Non-blocking send: drop entries if channel is full rather than
	// spawning unbounded goroutines or blocking the logger
	select {
	case w.ch <- entry:
		// Sent successfully
	default:
		// Channel full: drop this entry to avoid back-pressure on the logger
	}
	return len(p), nil
}
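The non-blocking send at the end of Write can be exercised in isolation. This standalone sketch reproduces the same select/default shedding with a deliberately tiny buffer so the drop path is easy to observe:

```go
package main

import "fmt"

// trySend mirrors the writer's back-pressure shedding: attempt a
// buffered-channel send, and report a drop instead of blocking when
// the consumer has fallen behind.
func trySend(ch chan string, entry string) bool {
	select {
	case ch <- entry:
		return true
	default:
		return false // channel full: drop, as RedisLogWriter.Write does
	}
}

func main() {
	ch := make(chan string, 2) // tiny buffer instead of writerChannelSize=256
	sent, dropped := 0, 0
	for i := 0; i < 5; i++ {
		if trySend(ch, fmt.Sprintf("entry-%d", i)) {
			sent++
		} else {
			dropped++
		}
	}
	fmt.Println(sent, dropped)
}
```

With no consumer draining the channel, only the first two sends fit the buffer and the remaining three are shed, which is exactly the trade the writer makes: losing log lines under load beats blocking every request that logs.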