Optimize AI generation speed and add richer insight data
Speed optimizations:
- Add session.prewarm() in InsightsViewModel and ReportsViewModel init
  for 40% faster first-token latency
- Cap maximumResponseTokens on all 8 AI respond() calls (100-600 per use case)
- Add prompt brevity constraints ("1-2 sentences", "2 sentences")
- Reduce report batch concurrency from 4 to 2 to prevent device contention
- Pre-fetch health data once and share across all 3 insight periods
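A minimal sketch of the prewarm + token-cap pattern described above, assuming Apple's FoundationModels API (LanguageModelSession, GenerationOptions); the view-model shape, instructions text, and the 150-token cap are illustrative, not taken from this repo:

```swift
import FoundationModels

@MainActor
final class InsightsViewModel {
    private let session: LanguageModelSession

    init() {
        session = LanguageModelSession(instructions: "You summarize mood-tracking data.")
        // Warm the on-device model at init so the first respond() call
        // does not pay model-load cost (the first-token latency win).
        session.prewarm()
    }

    func shortInsight(from summary: String) async throws -> String {
        // Cap the response length: a tight maximumResponseTokens value both
        // bounds generation time and reinforces the "1-2 sentences" prompt
        // constraint, since the model cannot run long even if it tries.
        let options = GenerationOptions(maximumResponseTokens: 150)
        let response = try await session.respond(
            to: "In 1-2 sentences, summarize: \(summary)",
            options: options
        )
        return response.content
    }
}
```

Pairing the prompt-side brevity constraint with a hard token cap is the belt-and-suspenders approach the commit takes across all eight respond() call sites.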
Richer insight data in MoodDataSummarizer:
- Tag-mood correlations: overall frequency + good day vs bad day tag breakdown
- Weather-mood correlations: avg mood by condition and temperature range
- Absence pattern detection: logging gap count with pre/post-gap mood averages
- Entry source breakdown: % of entries from App, Widget, Watch, Siri, etc.
- Update insight prompt to leverage tags, weather, and gap data when available
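A sketch of how the tag-mood correlation in MoodDataSummarizer could be computed, in plain Swift. The types and names (EntrySample, TagStats, the 1-5 mood scale, the above/below-average definition of good and bad days) are assumptions for illustration, not the repo's actual implementation:

```swift
// One logged entry: a mood score (higher = better) plus free-form tags.
struct EntrySample {
    let mood: Double          // e.g. a 1...5 scale
    let tags: [String]
}

// Per-tag summary: overall frequency plus a good-day vs bad-day split.
struct TagStats {
    let tag: String
    let count: Int            // total appearances
    let goodDayCount: Int     // appearances on above-average-mood entries
    let badDayCount: Int      // appearances on below-average-mood entries
}

func tagMoodCorrelations(_ entries: [EntrySample]) -> [TagStats] {
    guard !entries.isEmpty else { return [] }
    let avg = entries.map(\.mood).reduce(0, +) / Double(entries.count)
    var counts: [String: (total: Int, good: Int, bad: Int)] = [:]
    for entry in entries {
        for tag in entry.tags {
            var c = counts[tag] ?? (0, 0, 0)
            c.total += 1
            // Entries exactly at the average count toward neither bucket.
            if entry.mood > avg { c.good += 1 }
            else if entry.mood < avg { c.bad += 1 }
            counts[tag] = c
        }
    }
    return counts
        .map { TagStats(tag: $0.key, count: $0.value.total,
                        goodDayCount: $0.value.good, badDayCount: $0.value.bad) }
        .sorted { $0.count > $1.count }
}
```

Summarizing to compact per-tag counts like this keeps the prompt small, which matters once the insight prompt also carries weather and gap data.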
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@@ -47,7 +47,7 @@ class FoundationModelsDigestService {
         let session = LanguageModelSession(instructions: systemInstructions)
         let prompt = buildPrompt(entries: validEntries, weekStart: weekStart, weekEnd: now)

-        let response = try await session.respond(to: prompt, generating: AIWeeklyDigestResponse.self)
+        let response = try await session.respond(to: prompt, generating: AIWeeklyDigestResponse.self, options: GenerationOptions(maximumResponseTokens: 300))

         let digest = WeeklyDigest(
             headline: response.content.headline,
@@ -150,6 +150,7 @@ class FoundationModelsDigestService {
         Current streak: \(summary.currentLoggingStreak) days

         Write a warm, personalized weekly digest.
+        Keep summary to 2 sentences. Keep highlight and intention to 1 sentence each.
         """
     }
 }