# Guided Reflection — CBT-Aligned Adaptive Questioning Plan

## Context

The current guided reflection flow (`GuidedReflection.swift`, `GuidedReflectionView.swift`) asks 3-4 static questions based on mood category (positive → Behavioral Activation, neutral → ACT Defusion, negative → CBT Thought Record). Questions do not reference prior answers, do not adapt to cognitive distortions, and skip the evidence-examination step that is the actual mechanism of change in CBT.

This plan makes the reflection **more CBT-aligned, not less**, by introducing adaptive sequencing — which is the defining characteristic of Socratic questioning and guided discovery in CBT. Every phase is additive and ships independently.

---

## Phase 1 — Template Substitution + Intensity + Translation Fix

No AI dependency. Works offline. Fully localizable.

### 1a. Reference previous answers in question text

Update `GuidedReflection.questions(for:)` to return **question templates** with placeholders, then resolve them at render time using the user's prior answers.

**Files:**

- `Shared/Models/GuidedReflection.swift` — add `QuestionTemplate` struct with `template: String` and `placeholderRef: Int?` (index of question whose answer to inject)
- `Shared/Views/GuidedReflectionView.swift` — resolve templates against `draft.steps` when building the `Step.question` text at navigation time (not init time, so Q3 shows Q2's answer even if the user goes back and edits)
- `Reflect/Localizable.xcstrings` — add new localized keys for templated questions using the standard `%@` format specifier so each locale controls where the quoted answer appears grammatically

**Example — negative path:**

```
Q1 template: "What happened today that affected your mood?"
Q2 template: "What thought kept coming back about it?"
Q3 template: "If a friend told you they had the thought '%@', what would you tell them?" [inject Q2 answer]
Q4 template: "Looking at '%@' again — what's a more balanced way to see it?" [inject Q2 answer]
```

**Answer truncation:** if the referenced answer is > 60 characters, truncate to the first sentence or 60 chars + "…". Keep a helper `GuidedReflection.truncatedForInjection(_:)` in the model.

**Edge cases:**

- If the referenced answer is empty (user skipped back), fall back to the current static question text.
- If the user edits an earlier answer, the later question text updates on the next navigation to it.

### 1b. Add emotion intensity rating (pre and post)

CBT measures emotional intensity before and after the thought work. This is the single most CBT-faithful addition.

**Files:**

- `Shared/Models/GuidedReflection.swift` — add `preIntensity: Int?` (0-10) and `postIntensity: Int?` (0-10) to the `GuidedReflection` struct. Update `CodingKeys` and `isComplete` logic.
- `Shared/Views/GuidedReflectionView.swift` — render an intensity slider before Q1 (pre) and after the last question (post). Use a 0-10 discrete scale with localized labels ("barely", "intense").
- `Shared/Services/FoundationModelsReflectionService.swift` — include `preIntensity` and `postIntensity` in the prompt so AI feedback can reference the shift ("you moved from an 8 to a 5").

### 1c. Fix stale localized question strings

The German, Spanish, French, Japanese, Korean, and Portuguese-BR translations in `Localizable.xcstrings` for `guided_reflection_{negative,neutral,positive}_q{1..4}` translate **older** English question text. Example: German Q1 is "Was belastet dich heute?" ("What's weighing on you?") but the English is "What happened today that affected your mood?".

**File:** `Reflect/Localizable.xcstrings`

Retranslate all existing guided reflection question keys to match the current English text. Flag state as `translated` only after review.

### 1d. Specificity probe on Q1

If the Q1 answer is < 25 characters or exactly matches a vague-phrase list (e.g., "idk", "everything", "nothing", "same as always"), surface a soft follow-up bubble below the editor: "Can you remember a specific moment?
What happened just before you noticed the feeling?" Non-blocking — the user can ignore it.

**Files:**

- `Shared/Views/GuidedReflectionView.swift` — add `needsSpecificityProbe(for:)` helper and a conditional hint view below the Q1 editor
- `Reflect/Localizable.xcstrings` — add `guided_reflection_specificity_probe` key

---

## Phase 2 — Rule-Based Distortion Detection (Negative Path)

No AI dependency. Adds the most impactful CBT mechanism: matching the reframe to the specific cognitive distortion.

### 2a. Distortion detection

Classify the Q2 answer into a cognitive distortion type using localized keyword rules.

**New file:** `Shared/Services/CognitiveDistortionDetector.swift`

```swift
enum CognitiveDistortion: String, Codable {
    case overgeneralization  // "always", "never", "everyone", "no one"
    case shouldStatement     // "should", "must", "have to"
    case labeling            // "I am [negative trait]"
    case personalization     // "my fault", "because of me"
    case catastrophizing     // "will never", "ruined", "can't recover"
    case mindReading         // "thinks I'm", "hates me", "judging me"
    case unknown
}

@MainActor
enum CognitiveDistortionDetector {
    static func detect(in text: String, locale: Locale = .current) -> CognitiveDistortion
}
```

Per-locale keyword lists live in localized strings (`distortion_overgeneralization_keywords` = comma-separated list). This stays localizable and avoids hardcoding English-only logic.

### 2b. Distortion-specific Q3 reframe prompt

Update the negative-path Q3 question resolution to switch on the detected distortion:

| Distortion | Q3 prompt (localized key) |
|---|---|
| overgeneralization | "Can you think of one counter-example to '%@'?" |
| shouldStatement | "Where did the rule 'I should …' come from? Is it still serving you?" |
| labeling | "Is '%@' something you *are*, or something you *did*?" |
| personalization | "What other factors, besides you, contributed to this?" |
| catastrophizing | "What's the worst case? What's the most likely case?" |
| mindReading | "What evidence do you have for that interpretation? What else could it mean?" |
| unknown | Current static Q3 (fallback) |

**Files:**

- `Shared/Models/GuidedReflection.swift` — add `detectedDistortion: CognitiveDistortion?` to persist the classification on the response
- `Shared/Views/GuidedReflectionView.swift` — call the detector when transitioning from Q2 → Q3, pick the template, render
- `Reflect/Localizable.xcstrings` — add 6 new localized question templates

### 2c. Add an evidence-examination step (negative path only)

Currently the negative path skips the core CBT Thought Record mechanism: examining evidence for and against the thought. Insert a new step between the current Q3 and Q4.

New flow for negative (5 questions instead of 4):

1. Situation (Q1)
2. Automatic thought (Q2)
3. Perspective check (Q3 — distortion-specific from 2b)
4. **Evidence examination (new Q4)**: "What evidence supports this thought, and what challenges it?"
5. Balanced reframe (Q5, formerly Q4)

**Files:**

- `Shared/Models/GuidedReflection.swift` — bump `MoodCategory.negative.questionCount` to 5, update `stepLabels`, update `questions(for:)`
- `Reflect/Localizable.xcstrings` — add `guided_reflection_negative_q_evidence` key (localized to all 7 languages)
- Migration: existing saved reflections with 4 responses remain `isComplete` — use version-tolerant decoding (already Codable, but verify no crash on old JSON)

---

## Phase 3 — AI-Enhanced Final Question (Premium, iOS 26+)

Use Foundation Models to generate a personalized final reframe question based on the entire reflection so far. Falls back to the Phase 2 rule-based prompt if AI is unavailable.

### 3a. Adaptive final-question service

**New file:** `Shared/Services/FoundationModelsReflectionPrompterService.swift`

```swift
@available(iOS 26, *)
@MainActor
class FoundationModelsReflectionPrompterService {
    func generateFinalQuestion(
        moodCategory: MoodCategory,
        priorResponses: [GuidedReflection.Response],
        detectedDistortion: CognitiveDistortion?
    ) async throws -> String
}
```

System instructions enforce:

- One question only, under 25 words
- Must reference at least one specific phrase from a prior answer
- Must follow CBT principles (Socratic, non-leading, non-interpretive)
- Must map to the active therapeutic framework (Thought Record / ACT / BA)

Use `LanguageModelSession` with a constrained `Generable` output schema (just `{ question: String }`).

### 3b. Integration

- Gate behind `IAPManager.shared.shouldShowPaywall == false && iOS 26 && Apple Intelligence available`
- On transition to the final step, kick off generation with a 1.5 s timeout. If it times out or errors, fall back to the Phase 2 deterministic question.
- Show a brief "generating your question…" shimmer on the step card during generation — but pre-populate with the fallback text so the user can start reading/typing immediately if they want.
- Persist whichever question text was actually shown on `GuidedReflection.Response.question` so the AI feedback stage sees what the user actually saw.

### 3c. Update `FoundationModelsReflectionService`

Enhance the existing feedback service to reference:

- The intensity shift (pre → post)
- Which cognitive distortion was detected (if any)
- The fact that the final question was AI-adapted to them

---

## Files Modified / Created

### Modified

- `Shared/Models/GuidedReflection.swift` — templates, intensity, distortion, evidence step
- `Shared/Views/GuidedReflectionView.swift` — resolve templates, intensity sliders, specificity probe, distortion routing, AI prompt integration
- `Shared/Services/FoundationModelsReflectionService.swift` — consume intensity shift + distortion in the feedback prompt
- `Reflect/Localizable.xcstrings` — retranslate existing keys + add ~15 new ones

### New

- `Shared/Services/CognitiveDistortionDetector.swift` (Phase 2)
- `Shared/Services/FoundationModelsReflectionPrompterService.swift` (Phase 3)

### Tests

- `ReflectTests/GuidedReflectionTemplatingTests.swift` — template resolution, answer truncation, edge cases (empty/edited prior answer)
- `ReflectTests/CognitiveDistortionDetectorTests.swift` — per-distortion detection with English fixtures (extend to other locales when translations land)
- `ReflectTests/GuidedReflectionMigrationTests.swift` — decode old 4-question JSON without crashing, handle missing intensity fields

---

## Verification

### Phase 1

1. Log a negative mood, start a reflection
2. Answer Q1 with a specific event ("My boss criticized my presentation")
3. Answer Q2 with a thought ("I'm not cut out for this job")
4. Navigate to Q3 — verify the question text quotes the Q2 answer
5. Go back to Q2, change the answer, navigate forward — verify the Q3 text updates
6. Verify the pre-intensity slider appears before Q1 and the post-intensity slider appears after the last question
7. Change the device language to German — verify all question templates render grammatically correct German with quoted answers
8. Answer Q1 with "idk" — verify the specificity probe appears
9. Answer Q1 with a full sentence — verify no probe

### Phase 2

1. Answer Q2 with "I always mess everything up" — verify Q3 shows the overgeneralization-specific prompt ("Can you think of one counter-example to...")
2. Answer Q2 with "I should have done better" — verify the shouldStatement prompt
3. Answer Q2 with "I'm such a failure" — verify the labeling prompt
4. Answer Q2 with a neutral thought (no distortion keywords) — verify fallback to the static Q3
5. Verify the negative path now has 5 steps (progress shows 1/5)
6. Load an existing saved negative reflection with 4 responses — verify it still opens without crashing and shows as complete

### Phase 3

1. On an iOS 26 device with Apple Intelligence and an active subscription: complete Q1-Q4, navigate to Q5 — verify the AI-generated question references specific wording from earlier answers
2. Turn off Apple Intelligence — verify fallback to the Phase 2 deterministic question (no delay, no error banner)
3. On a pre-iOS 26 device, or for a non-subscribed user — verify the Phase 2 prompt renders immediately (no AI path attempted)
4. Verify the AI feedback at the end of the reflection references the intensity shift and (if detected) the cognitive distortion

### Cross-phase

- Run `xcodebuild test -only-testing:"ReflectTests"` — all new tests pass
- Manually run through all 3 mood categories (positive / neutral / negative) on English + 1 non-English locale
- Verify existing saved reflections from before this change still decode and display correctly

---

## Out of Scope

- Restructuring the positive (BA) or neutral (ACT) paths beyond Phase 1 templating. Those frameworks don't use distortion detection or evidence examination — their mechanisms are activity scheduling and values clarification, which work fine with static questions + templating.
- Changing chip suggestions. The current chip library is solid and orthogonal to this work.
- Personality-pack variants of the distortion prompts.
Phase 2 ships with the "Default" voice only; other packs can be layered later using the same infrastructure.
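
---

For reference, the Phase 1a truncation rule (first sentence if it fits, otherwise 60 characters plus "…") could be sketched as below. This is a minimal standalone version written as a free function for illustration; the plan puts it on the model as `GuidedReflection.truncatedForInjection(_:)`, and the exact sentence-boundary handling is an assumption:

```swift
import Foundation

/// Sketch of the planned answer-truncation helper: prefer the first
/// sentence when it fits within 60 characters; otherwise hard-cut at 60
/// characters and append an ellipsis.
func truncatedForInjection(_ answer: String) -> String {
    let trimmed = answer.trimmingCharacters(in: .whitespacesAndNewlines)
    guard trimmed.count > 60 else { return trimmed }
    // First sentence, if its terminator falls within the 60-char budget.
    if let end = trimmed.firstIndex(where: { ".!?".contains($0) }),
       trimmed.distance(from: trimmed.startIndex, to: end) < 60 {
        return String(trimmed[...end]) // keep the sentence terminator
    }
    return String(trimmed.prefix(60)) + "…"
}
```

Answers at or under 60 characters pass through unchanged, which keeps the injected quote readable inside the `%@` templates.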
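
---

The keyword matching in 2a could be sketched as below. The inline keyword lists are English-only stand-ins for illustration — per the plan, real lists come from localized strings such as `distortion_overgeneralization_keywords` — and the first-match-wins ordering (more specific patterns first) is an assumption, not a decided behavior:

```swift
import Foundation

// Sketch of the rule-based detector from 2a, with hardcoded English
// keywords standing in for the localized lists described in the plan.
enum CognitiveDistortion: String, Codable {
    case overgeneralization, shouldStatement, labeling
    case personalization, catastrophizing, mindReading, unknown
}

enum CognitiveDistortionDetector {
    // Assumed ordering: more specific phrases are checked before broad
    // single words, so "will never" wins over bare "never".
    private static let rules: [(CognitiveDistortion, [String])] = [
        (.catastrophizing, ["will never", "ruined", "can't recover"]),
        (.mindReading, ["thinks i'm", "hates me", "judging me"]),
        (.personalization, ["my fault", "because of me"]),
        (.labeling, ["i am a", "i'm such a", "i'm a"]),
        (.shouldStatement, ["should", "must", "have to"]),
        (.overgeneralization, ["always", "never", "everyone", "no one"]),
    ]

    static func detect(in text: String) -> CognitiveDistortion {
        let lowered = text.lowercased()
        for (distortion, keywords) in rules
        where keywords.contains(where: { lowered.contains($0) }) {
            return distortion
        }
        return .unknown
    }
}
```

Under this ordering, the Phase 2 verification fixtures resolve as expected: "I always mess everything up" → overgeneralization, "I should have done better" → shouldStatement, "I'm such a failure" → labeling.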
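
---

The 3b timeout-with-fallback behavior can be sketched as a task-group race. The `generate` closure stands in for the Foundation Models call and `fallback` for the Phase 2 deterministic question; all names here are illustrative, not the final API:

```swift
import Foundation

/// Sketch: race AI question generation against a 1.5 s deadline.
/// Returns the generated question if it arrives in time, otherwise the
/// deterministic Phase 2 fallback (also used if generation throws).
func finalQuestion(
    generate: @escaping @Sendable () async throws -> String,
    fallback: String
) async -> String {
    await withTaskGroup(of: String?.self, returning: String.self) { group in
        group.addTask { try? await generate() }
        group.addTask {
            try? await Task.sleep(nanoseconds: 1_500_000_000)
            return nil // timeout sentinel
        }
        // Whichever child finishes first wins; cancel the other.
        let first = await group.next() ?? nil
        group.cancelAll()
        return first ?? fallback
    }
}
```

Because the fallback is known synchronously, the view can pre-populate the step card with it (as 3b describes) and swap in the generated question only if it wins the race.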