Compare commits

17 Commits: 5ba76a947b...main
| Author | SHA1 | Date |
|---|---|---|
|  | 98badc98ad |  |
|  | 4093b5a7f3 |  |
|  | 5d3accb2c0 |  |
|  | 3d8cbccc4e |  |
|  | cc6ec70ed9 |  |
|  | d99d88e73c |  |
|  | 8e1c9b6bf1 |  |
|  | d9ddaa4902 |  |
|  | cdf1e05c4c |  |
|  | 455df18dad |  |
|  | 3c5600f562 |  |
|  | 5f90a01314 |  |
|  | cd491bd695 |  |
|  | df96a9e540 |  |
|  | c73762ab9f |  |
|  | f809bc2a1d |  |
|  | 63dfc5e41a |  |
.gitignore (vendored, 13 lines changed)

@@ -40,3 +40,16 @@ scrape/
 *.webm
 *.mp4
 *.mkv
+
+# Third-party textbook sources (not redistributable)
+*.pdf
+*.epub
+epub_extract/
+
+# Textbook extraction artifacts — regenerate locally via run_pipeline.sh.
+# Scripts are committed; their generated outputs are not.
+Conjuga/Scripts/textbook/*.json
+Conjuga/Scripts/textbook/review.html
+# Note: the app-bundle copies (Conjuga/Conjuga/textbook_{data,vocab}.json)
+# ARE committed so `xcodebuild` works on a fresh clone without first running
+# the pipeline. They're regenerated from the scripts when content changes.
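The comments added to .gitignore above describe a convention: pipeline outputs under Conjuga/Scripts/textbook/ stay untracked, while the app-bundle JSON copies are committed. A minimal sketch of that refresh step, runnable in a scratch directory; the script name run_pipeline.sh and the app-bundle paths come from the diff, but the exact JSON filenames and contents under Scripts/textbook/ are assumptions:

```shell
# Hypothetical sketch of the "regenerate, then refresh the committed copies"
# flow from the .gitignore comments. Builds a scratch layout so it runs
# anywhere; in the real repo, run_pipeline.sh would produce the JSON.
set -eu
root=demo_repo
mkdir -p "$root/Conjuga/Scripts/textbook" "$root/Conjuga/Conjuga"

# Stand-in for run_pipeline.sh output (filenames assumed from the
# app-bundle names textbook_{data,vocab}.json in the diff).
printf '{"chapters": []}' > "$root/Conjuga/Scripts/textbook/textbook_data.json"
printf '{"words": []}'    > "$root/Conjuga/Scripts/textbook/textbook_vocab.json"

# Copy the ignored outputs into the app bundle, where they ARE committed
# so xcodebuild works on a fresh clone.
for f in textbook_data.json textbook_vocab.json; do
  cp "$root/Conjuga/Scripts/textbook/$f" "$root/Conjuga/Conjuga/$f"
done
```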
CLAUDE.md (new file, 5 lines)

@@ -0,0 +1,5 @@
+# Project rules
+
+## Git
+
+- **Never run `git commit` or `git push` without an explicit request from the user in the current turn.** File edits are fine; committing and pushing are not. Wait to be told.
project.pbxproj

@@ -8,10 +8,15 @@
 
 /* Begin PBXBuildFile section */
 00BEC0BDBB49198022D9852E /* WordOfDayWidget.swift in Sources */ = {isa = PBXBuildFile; fileRef = 8E9BCDBB9BC24F5C8117767E /* WordOfDayWidget.swift */; };
+04C74D9E0ED84BF785A8331C /* ClozeView.swift in Sources */ = {isa = PBXBuildFile; fileRef = D232CDA43CC9218D748BA121 /* ClozeView.swift */; };
 0A89DCC82BE11605CB866DEF /* TenseInfo.swift in Sources */ = {isa = PBXBuildFile; fileRef = 3BC3247457109FC6BF00D85B /* TenseInfo.swift */; };
+0AD63CAED7C568590A16E879 /* StoryQuizView.swift in Sources */ = {isa = PBXBuildFile; fileRef = A7CDC5F2660A3009A3ADF048 /* StoryQuizView.swift */; };
+0D0D3B5CC128D1A1D1252282 /* PronunciationService.swift in Sources */ = {isa = PBXBuildFile; fileRef = 4AE3D1D8723D5C41D3501774 /* PronunciationService.swift */; };
 13F29AD5745FB532709FA28A /* OnboardingView.swift in Sources */ = {isa = PBXBuildFile; fileRef = E972AA745F44586EF0B1B0C8 /* OnboardingView.swift */; };
+14242FD1F500D296D41E927C /* FeatureReferenceView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 1C3E36BDC2540AF2A67AEEB1 /* FeatureReferenceView.swift */; };
 1A230C01A045F0C095BFBD35 /* PracticeView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 1EA0FA4F9149B9D8E197ADE9 /* PracticeView.swift */; };
 1C2636790E70B6BC7FFCC904 /* DailyLog.swift in Sources */ = {isa = PBXBuildFile; fileRef = 0313D24F96E6A0039C34341F /* DailyLog.swift */; };
+20B71403A8D305C29C73ADA2 /* StemChangeConjugationView.swift in Sources */ = {isa = PBXBuildFile; fileRef = F92BCE1A6720E47FCD26BADC /* StemChangeConjugationView.swift */; };
 218E982FC4267949F82AABAD /* SharedModels in Frameworks */ = {isa = PBXBuildFile; productRef = 4A4D7B02884EBA9ACD93F0FD /* SharedModels */; };
 261E582449BED6EF41881B04 /* AdaptiveContainer.swift in Sources */ = {isa = PBXBuildFile; fileRef = 3B16FF4C52457CD8CD703532 /* AdaptiveContainer.swift */; };
 27BA7FA9356467846A07697D /* TypingView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 10C16AA6022E4742898745CE /* TypingView.swift */; };
@@ -20,6 +25,7 @@
 2C7ABAB4D88E3E3B0EAD1EF7 /* PracticeHeaderView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 5BF946245110C92F087D81E8 /* PracticeHeaderView.swift */; };
 33E885EB38C3BB0CB058871A /* HandwritingView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 1F842EB5E566C74658D918BB /* HandwritingView.swift */; };
 352A5BAA6E406AA5850653A4 /* PracticeSessionService.swift in Sources */ = {isa = PBXBuildFile; fileRef = 842DB48F8570C39CDCFF2F57 /* PracticeSessionService.swift */; };
+354631F309E625046A3A436B /* TextbookExerciseView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 854EA2A8D6CF203958BA3C24 /* TextbookExerciseView.swift */; };
 35A0F6E7124D989312721F7D /* DashboardView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 18AC3C548BDB9EF8701BE64C /* DashboardView.swift */; };
 36F92EBAEB0E5F2B010401EF /* StreakCalendarView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 30EF2362D9FFF9B07A45CE6D /* StreakCalendarView.swift */; };
 377C4AA000CE9A0D8CC43DA9 /* GrammarNote.swift in Sources */ = {isa = PBXBuildFile; fileRef = 4D389CA5B5C4E7A12CAEA5BC /* GrammarNote.swift */; };
@@ -27,32 +33,50 @@
 3F4F0C07BE61512CBFBBB203 /* HandwritingCanvas.swift in Sources */ = {isa = PBXBuildFile; fileRef = 80D974250C396589656B8443 /* HandwritingCanvas.swift */; };
 4005E258FDF03C8B3A0D53BD /* VocabFlashcardView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 2931634BEB33B93429CE254F /* VocabFlashcardView.swift */; };
 46943ACFABF329DE1CBFC471 /* TensePill.swift in Sources */ = {isa = PBXBuildFile; fileRef = 102F0E136CDFF8CED710210F /* TensePill.swift */; };
+48967E05C65E32F7082716CD /* AnswerChecker.swift in Sources */ = {isa = PBXBuildFile; fileRef = B3EFFA19D0AB2528A868E8ED /* AnswerChecker.swift */; };
 4C3484403FD96E37DA4BEA66 /* NewWordIntent.swift in Sources */ = {isa = PBXBuildFile; fileRef = 72CB5F95DF256DF7CD73269D /* NewWordIntent.swift */; };
+4C577CF6B137D0A32759A169 /* VerbExampleGenerator.swift in Sources */ = {isa = PBXBuildFile; fileRef = 02EB3F9305349775E0EB28B9 /* VerbExampleGenerator.swift */; };
 50E0095A23E119D1AB561232 /* VerbDetailView.swift in Sources */ = {isa = PBXBuildFile; fileRef = E1DBE662F89F02A0282F5BEE /* VerbDetailView.swift */; };
 519E68D2DF4C80AB96058C0D /* LyricsConfirmationView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 3EA01795655C444795577A22 /* LyricsConfirmationView.swift */; };
 51D072AF30F4B12CD3E8F918 /* SRSEngine.swift in Sources */ = {isa = PBXBuildFile; fileRef = 5C0E6EAFC0D24928BA956FA5 /* SRSEngine.swift */; };
 53A0AC57EAC44B676C997374 /* QuizType.swift in Sources */ = {isa = PBXBuildFile; fileRef = 626873572466403C0288090D /* QuizType.swift */; };
 5A3246026E68AB6483126D0B /* WeekProgressWidget.swift in Sources */ = {isa = PBXBuildFile; fileRef = 1980E8E439EB76ED7330A90D /* WeekProgressWidget.swift */; };
 5EA915FFA906C5C2938FCADA /* ConjugaWidgetBundle.swift in Sources */ = {isa = PBXBuildFile; fileRef = E325FE0E484DE75009672D02 /* ConjugaWidgetBundle.swift */; };
+5EE41911F3D17224CAB359ED /* StudyTimerService.swift in Sources */ = {isa = PBXBuildFile; fileRef = 4EC8C4E931AD7A1D87C490BB /* StudyTimerService.swift */; };
 60E86BABE2735E2052B99DF3 /* SettingsView.swift in Sources */ = {isa = PBXBuildFile; fileRef = BCCC95A95581458E068E0484 /* SettingsView.swift */; };
+61328552866DE185B15011A9 /* StoryLibraryView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 15AC27B1E3D332709657F20B /* StoryLibraryView.swift */; };
 615D3128ED6E84EF59BB5AA3 /* LyricsReaderView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 58394296923991E56BAC2B02 /* LyricsReaderView.swift */; };
 6BB4B0A655E6CB6F82D81B5A /* WeekTestView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 5E7EF4161C73AAC67B3A0004 /* WeekTestView.swift */; };
-968D626462B0ADEC8D7D56AA /* CheckpointExamView.swift in Sources */ = {isa = PBXBuildFile; fileRef = EA1F177F7ABF5D2E4E5466CD /* CheckpointExamView.swift */; };
 6D4A29280FDD99B8E18AF264 /* WidgetDataReader.swift in Sources */ = {isa = PBXBuildFile; fileRef = 2889F2F81673AFF3A58A07A8 /* WidgetDataReader.swift */; };
 6ED2AC2CAA54688161D4B920 /* SyncStatusMonitor.swift in Sources */ = {isa = PBXBuildFile; fileRef = 18CCD69C14D1B0CFBD03C92F /* SyncStatusMonitor.swift */; };
 728702D9AA7A8BDABBA62513 /* ReviewStore.swift in Sources */ = {isa = PBXBuildFile; fileRef = CBCF6FCFA6B00151C2371E77 /* ReviewStore.swift */; };
 760628EFE1CF191CE2FC07DC /* GuideView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 8C935ECDF8A5D8D6FA541E20 /* GuideView.swift */; };
+78FE99C5D511737B6877EDD5 /* VocabReviewView.swift in Sources */ = {isa = PBXBuildFile; fileRef = E8D95887B18216FCA71643D6 /* VocabReviewView.swift */; };
 7A13757EA40E81E55640D0FC /* LyricsSearchView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 70960F0FD7509310B3F61C48 /* LyricsSearchView.swift */; };
+81E4DB9F64F3FF3AB8BCB03A /* TextbookChapterListView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 496D851D2D95BEA283C9FD45 /* TextbookChapterListView.swift */; };
 81FA7EBCF18F0AAE0BF385C3 /* VerbListView.swift in Sources */ = {isa = PBXBuildFile; fileRef = A63061BBC8998DF33E3DCA2B /* VerbListView.swift */; };
 82F6079BE3F31AC3FB2D1013 /* MultipleChoiceView.swift in Sources */ = {isa = PBXBuildFile; fileRef = DA3A33983B2F2078C9EA1A3D /* MultipleChoiceView.swift */; };
 84CCBAE22A9E0DA27AE28723 /* DeckStudyView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 631DC0A942DD57C81DECE083 /* DeckStudyView.swift */; };
+8B516215E0842189DEA0DBB1 /* GrammarExercise.swift in Sources */ = {isa = PBXBuildFile; fileRef = F0A3099BE24A56F9B1F179E0 /* GrammarExercise.swift */; };
+8BD4B5A2DDDD4BE4B4A94962 /* ChatView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 79576893566932D2BE207528 /* ChatView.swift */; };
 8C43F09F52EA9B537EA27E43 /* CourseReviewStore.swift in Sources */ = {isa = PBXBuildFile; fileRef = DAF7CA1E6F9979CB2C699FDC /* CourseReviewStore.swift */; };
+8DC1CB93333F94C5297D33BF /* GrammarExerciseView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 07F24ED76059609D6857EC97 /* GrammarExerciseView.swift */; };
+90B76E34F195223580F7CCCF /* DictionaryService.swift in Sources */ = {isa = PBXBuildFile; fileRef = 76BE2A08EC694FF784ED5575 /* DictionaryService.swift */; };
 943A94A8C71919F3EFC0E8FA /* UserProgress.swift in Sources */ = {isa = PBXBuildFile; fileRef = E536AD1180FE10576EAC884A /* UserProgress.swift */; };
+97A2088134FC6CB41C507182 /* reflexive_verbs.json in Resources */ = {isa = PBXBuildFile; fileRef = 3644B5ED77F29A65877D926A /* reflexive_verbs.json */; };
 97EFCF6724CE59DC4F0274FD /* AchievementService.swift in Sources */ = {isa = PBXBuildFile; fileRef = 1C42EA0EBD4CB1E10A82BA25 /* AchievementService.swift */; };
 9D9FD3853C5C969C62AE9999 /* StartupCoordinator.swift in Sources */ = {isa = PBXBuildFile; fileRef = A4B95B276C054DBFE508C4D1 /* StartupCoordinator.swift */; };
+9F0ACDC1F4ACB1E0D331283D /* CheckpointExamView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 34C67DD1A1CB9B8B5A2BDCED /* CheckpointExamView.swift */; };
+A4DA8CF1957A4FE161830AB2 /* ReflexiveVerbStore.swift in Sources */ = {isa = PBXBuildFile; fileRef = 940826D9ED5C18D2C4E7B2C7 /* ReflexiveVerbStore.swift */; };
+A7DF435F99E66E067F2B33E1 /* ListeningView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 20D1904DF07E0A6816134CF3 /* ListeningView.swift */; };
 A9959AE6C87B4AD21554E401 /* FullTableView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 711CB7539EF5887F6F7B8B82 /* FullTableView.swift */; };
 AAC6F85A1C3B6C1186E1656A /* TenseEndingTable.swift in Sources */ = {isa = PBXBuildFile; fileRef = 69D98E1564C6538056D81200 /* TenseEndingTable.swift */; };
+ABBE5080E254D1D3E3465E40 /* ConversationService.swift in Sources */ = {isa = PBXBuildFile; fileRef = 3A96C065B8787DEC6818E497 /* ConversationService.swift */; };
+ACE9D8B3116757B5D6F0F766 /* StoryGenerator.swift in Sources */ = {isa = PBXBuildFile; fileRef = 713F23A9C2935408B136C7C7 /* StoryGenerator.swift */; };
+B10A324C06F0957DDE2233F8 /* TextbookChapterView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 39908548430FDF01D76201FB /* TextbookChapterView.swift */; };
 B4603AA6EFB134794AA39BF4 /* LyricsLibraryView.swift in Sources */ = {isa = PBXBuildFile; fileRef = FC2B1F646394D7C03493F1BF /* LyricsLibraryView.swift */; };
+B48C0015BE53279B0631C2D7 /* ChatLibraryView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 648436F8326CF95777E2FA58 /* ChatLibraryView.swift */; };
+BA3DE2DA319AA3B572C19E11 /* VerbExampleCache.swift in Sources */ = {isa = PBXBuildFile; fileRef = EBEEC9CC9A8C502AF5F42914 /* VerbExampleCache.swift */; };
 BB48230C3B26EA6E84D2D823 /* DailyProgressRing.swift in Sources */ = {isa = PBXBuildFile; fileRef = 180F9D59828C36B44A5E384F /* DailyProgressRing.swift */; };
 BF0832865857EFDA1D1CDEAD /* SharedModels in Frameworks */ = {isa = PBXBuildFile; productRef = BCCBABD74CADDB118179D8E9 /* SharedModels */; };
 C0BAEF49A6270D8F64CF13D6 /* PracticeViewModel.swift in Sources */ = {isa = PBXBuildFile; fileRef = C359C051FB157EF447561405 /* PracticeViewModel.swift */; };
@@ -61,6 +85,7 @@
 C3851F960C1162239DC2F935 /* CourseQuizView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 143D06606AE10DCA30A140C2 /* CourseQuizView.swift */; };
 C8C3880535008764B7117049 /* DataLoader.swift in Sources */ = {isa = PBXBuildFile; fileRef = DADCA82DDD34DF36D59BB283 /* DataLoader.swift */; };
 CAC69045B74249F121643E88 /* AnswerReviewView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 83A8C1A048627C8DEB83C12D /* AnswerReviewView.swift */; };
+CC886125F8ECE72D1AAD4861 /* StoryReaderView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 6A48474D969CEF5F573DF09B /* StoryReaderView.swift */; };
 CF9E48ADF0501FB79F3DDB7B /* conjuga_data.json in Resources */ = {isa = PBXBuildFile; fileRef = 8C2D88FF9A3B0590B22C7837 /* conjuga_data.json */; };
 D3FFE73A5AD27F1759F50727 /* SpeechService.swift in Sources */ = {isa = PBXBuildFile; fileRef = 49E3AD244327CBF24B7A2752 /* SpeechService.swift */; };
 D40B4E919DE379C50265CA9F /* SyncToast.swift in Sources */ = {isa = PBXBuildFile; fileRef = 1C4B5204F6B8647C816814F0 /* SyncToast.swift */; };
@@ -78,22 +103,6 @@
 F59655A8B8FCE6264315DD33 /* Assets.xcassets in Resources */ = {isa = PBXBuildFile; fileRef = A014EEC3EE08E945FBBA5335 /* Assets.xcassets */; };
 F84706B47A2156B2138FB8D5 /* GrammarNotesView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 3F1A6221A35699BD8065D064 /* GrammarNotesView.swift */; };
 FC7873F97017532C215DAD34 /* ReviewCard.swift in Sources */ = {isa = PBXBuildFile; fileRef = 0A8A63F750065CA4EF36B4D3 /* ReviewCard.swift */; };
-DDF58F3899FC4B92BF6587D2 /* StudyTimerService.swift in Sources */ = {isa = PBXBuildFile; fileRef = 978FB24DF8D7436CB5210ACE /* StudyTimerService.swift */; };
-8C1E4E7F36D64EFF8D092AC8 /* StoryGenerator.swift in Sources */ = {isa = PBXBuildFile; fileRef = 327659ABFD524514B6D2D505 /* StoryGenerator.swift */; };
-4C2649215B81470195F38ED0 /* StoryLibraryView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 950347251CC94D4A9DFF7CBC /* StoryLibraryView.swift */; };
-8E3D8E8254CF4213B9D9FAD3 /* StoryReaderView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 2A8B6081226847E0A0A174BC /* StoryReaderView.swift */; };
-12D2C9311D5C4764B48B1754 /* StoryQuizView.swift in Sources */ = {isa = PBXBuildFile; fileRef = E292A183ABB24FFE9CB719C8 /* StoryQuizView.swift */; };
-8D7CA0F4496B44C28CD5EBD5 /* DictionaryService.swift in Sources */ = {isa = PBXBuildFile; fileRef = A04370CF6B4E4D38BE3EB0C7 /* DictionaryService.swift */; };
-3EC2A2F4B9C24B029DA49C40 /* VocabReviewView.swift in Sources */ = {isa = PBXBuildFile; fileRef = D3698CE7ACF148318615293E /* VocabReviewView.swift */; };
-53908E41767B438C8BD229CD /* ClozeView.swift in Sources */ = {isa = PBXBuildFile; fileRef = A649B04B8B3C49419AD9219C /* ClozeView.swift */; };
-65ABC39F35804C619DAB3200 /* GrammarExercise.swift in Sources */ = {isa = PBXBuildFile; fileRef = 17E5252282F44ECD9BA70DB8 /* GrammarExercise.swift */; };
-B73F6EED00304B718C6FEFFA /* GrammarExerciseView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 1F71CA5CD67342F18319DB9A /* GrammarExerciseView.swift */; };
-EA07DB964C8940F69C14DE2C /* PronunciationService.swift in Sources */ = {isa = PBXBuildFile; fileRef = D535EF6988A24B47B70209A2 /* PronunciationService.swift */; };
-4DCC5CC233DE4701A12FD7EB /* ListeningView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 02B2179562E54E148C98219D /* ListeningView.swift */; };
-35D6404C60C249D5995AD895 /* ConversationService.swift in Sources */ = {isa = PBXBuildFile; fileRef = E10603F454E54341AA4B9931 /* ConversationService.swift */; };
-C8AF0931F7FD458C80B6EC0D /* ChatLibraryView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 5667AA04211A449A9150BD28 /* ChatLibraryView.swift */; };
-6CCC8D51F5524688A4BC5AF8 /* ChatView.swift in Sources */ = {isa = PBXBuildFile; fileRef = FA5FE6E149F54A6BA7D01D99 /* ChatView.swift */; };
-8510085D78E248D885181E80 /* FeatureReferenceView.swift in Sources */ = {isa = PBXBuildFile; fileRef = 12E9DDEFD53C49E0A48EA655 /* FeatureReferenceView.swift */; };
 /* End PBXBuildFile section */
 
 /* Begin PBXContainerItemProxy section */
@@ -121,26 +130,35 @@
 /* End PBXCopyFilesBuildPhase section */
 
 /* Begin PBXFileReference section */
+02EB3F9305349775E0EB28B9 /* VerbExampleGenerator.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = VerbExampleGenerator.swift; sourceTree = "<group>"; };
 0313D24F96E6A0039C34341F /* DailyLog.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = DailyLog.swift; sourceTree = "<group>"; };
+07F24ED76059609D6857EC97 /* GrammarExerciseView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = GrammarExerciseView.swift; sourceTree = "<group>"; };
 0A8A63F750065CA4EF36B4D3 /* ReviewCard.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = ReviewCard.swift; sourceTree = "<group>"; };
 102F0E136CDFF8CED710210F /* TensePill.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = TensePill.swift; sourceTree = "<group>"; };
 10C16AA6022E4742898745CE /* TypingView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = TypingView.swift; sourceTree = "<group>"; };
 143D06606AE10DCA30A140C2 /* CourseQuizView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = CourseQuizView.swift; sourceTree = "<group>"; };
+15AC27B1E3D332709657F20B /* StoryLibraryView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = StoryLibraryView.swift; sourceTree = "<group>"; };
 16C1F74196C3C5628953BE3F /* Conjuga.app */ = {isa = PBXFileReference; includeInIndex = 0; lastKnownFileType = wrapper.application; path = Conjuga.app; sourceTree = BUILT_PRODUCTS_DIR; };
 180F9D59828C36B44A5E384F /* DailyProgressRing.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = DailyProgressRing.swift; sourceTree = "<group>"; };
 18AC3C548BDB9EF8701BE64C /* DashboardView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = DashboardView.swift; sourceTree = "<group>"; };
 18CCD69C14D1B0CFBD03C92F /* SyncStatusMonitor.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = SyncStatusMonitor.swift; sourceTree = "<group>"; };
 195DA9CDA703DDFAD1B3CD5A /* DailyProgressWidget.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = DailyProgressWidget.swift; sourceTree = "<group>"; };
 1980E8E439EB76ED7330A90D /* WeekProgressWidget.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = WeekProgressWidget.swift; sourceTree = "<group>"; };
+1C3E36BDC2540AF2A67AEEB1 /* FeatureReferenceView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = FeatureReferenceView.swift; sourceTree = "<group>"; };
 1C42EA0EBD4CB1E10A82BA25 /* AchievementService.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = AchievementService.swift; sourceTree = "<group>"; };
 1C4B5204F6B8647C816814F0 /* SyncToast.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = SyncToast.swift; sourceTree = "<group>"; };
 1EA0FA4F9149B9D8E197ADE9 /* PracticeView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = PracticeView.swift; sourceTree = "<group>"; };
 1EB4830F9289AACC82D753F8 /* ConjugaApp.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = ConjugaApp.swift; sourceTree = "<group>"; };
 1F842EB5E566C74658D918BB /* HandwritingView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = HandwritingView.swift; sourceTree = "<group>"; };
+20D1904DF07E0A6816134CF3 /* ListeningView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = ListeningView.swift; sourceTree = "<group>"; };
 2889F2F81673AFF3A58A07A8 /* WidgetDataReader.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = WidgetDataReader.swift; sourceTree = "<group>"; };
 2931634BEB33B93429CE254F /* VocabFlashcardView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = VocabFlashcardView.swift; sourceTree = "<group>"; };
 30EF2362D9FFF9B07A45CE6D /* StreakCalendarView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = StreakCalendarView.swift; sourceTree = "<group>"; };
|
34C67DD1A1CB9B8B5A2BDCED /* CheckpointExamView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = CheckpointExamView.swift; sourceTree = "<group>"; };
|
||||||
|
3644B5ED77F29A65877D926A /* reflexive_verbs.json */ = {isa = PBXFileReference; lastKnownFileType = text.json; path = reflexive_verbs.json; sourceTree = "<group>"; };
|
||||||
3695075616689E72DBB26D4C /* HandwritingRecognizer.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = HandwritingRecognizer.swift; sourceTree = "<group>"; };
|
3695075616689E72DBB26D4C /* HandwritingRecognizer.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = HandwritingRecognizer.swift; sourceTree = "<group>"; };
|
||||||
|
39908548430FDF01D76201FB /* TextbookChapterView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = TextbookChapterView.swift; sourceTree = "<group>"; };
|
||||||
|
3A96C065B8787DEC6818E497 /* ConversationService.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = ConversationService.swift; sourceTree = "<group>"; };
|
||||||
3B16FF4C52457CD8CD703532 /* AdaptiveContainer.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = AdaptiveContainer.swift; sourceTree = "<group>"; };
|
3B16FF4C52457CD8CD703532 /* AdaptiveContainer.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = AdaptiveContainer.swift; sourceTree = "<group>"; };
|
||||||
3BC3247457109FC6BF00D85B /* TenseInfo.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = TenseInfo.swift; sourceTree = "<group>"; };
|
3BC3247457109FC6BF00D85B /* TenseInfo.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = TenseInfo.swift; sourceTree = "<group>"; };
|
||||||
3CC1AD23158CBABBB753FA1E /* ConjugaWidget.entitlements */ = {isa = PBXFileReference; lastKnownFileType = text.plist.entitlements; path = ConjugaWidget.entitlements; sourceTree = "<group>"; };
|
3CC1AD23158CBABBB753FA1E /* ConjugaWidget.entitlements */ = {isa = PBXFileReference; lastKnownFileType = text.plist.entitlements; path = ConjugaWidget.entitlements; sourceTree = "<group>"; };
|
@@ -149,42 +167,54 @@
 42ADC600530309A9B147A663 /* IrregularHighlightText.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = IrregularHighlightText.swift; sourceTree = "<group>"; };
 43345D6C7EAA4017E3A45935 /* CombinedWidget.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = CombinedWidget.swift; sourceTree = "<group>"; };
 43B8AED76C14A05AF2339C27 /* LyricsSearchService.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = LyricsSearchService.swift; sourceTree = "<group>"; };
+496D851D2D95BEA283C9FD45 /* TextbookChapterListView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = TextbookChapterListView.swift; sourceTree = "<group>"; };
 49E3AD244327CBF24B7A2752 /* SpeechService.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = SpeechService.swift; sourceTree = "<group>"; };
+4AE3D1D8723D5C41D3501774 /* PronunciationService.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = PronunciationService.swift; sourceTree = "<group>"; };
 4D389CA5B5C4E7A12CAEA5BC /* GrammarNote.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = GrammarNote.swift; sourceTree = "<group>"; };
+4EC8C4E931AD7A1D87C490BB /* StudyTimerService.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = StudyTimerService.swift; sourceTree = "<group>"; };
 58394296923991E56BAC2B02 /* LyricsReaderView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = LyricsReaderView.swift; sourceTree = "<group>"; };
 5983A534E4836F30B5281ACB /* MainTabView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = MainTabView.swift; sourceTree = "<group>"; };
 5BF946245110C92F087D81E8 /* PracticeHeaderView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = PracticeHeaderView.swift; sourceTree = "<group>"; };
 5C0E6EAFC0D24928BA956FA5 /* SRSEngine.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = SRSEngine.swift; sourceTree = "<group>"; };
 5E7EF4161C73AAC67B3A0004 /* WeekTestView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = WeekTestView.swift; sourceTree = "<group>"; };
-EA1F177F7ABF5D2E4E5466CD /* CheckpointExamView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = CheckpointExamView.swift; sourceTree = "<group>"; };
 626873572466403C0288090D /* QuizType.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = QuizType.swift; sourceTree = "<group>"; };
 631DC0A942DD57C81DECE083 /* DeckStudyView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = DeckStudyView.swift; sourceTree = "<group>"; };
+648436F8326CF95777E2FA58 /* ChatLibraryView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = ChatLibraryView.swift; sourceTree = "<group>"; };
 69D98E1564C6538056D81200 /* TenseEndingTable.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = TenseEndingTable.swift; sourceTree = "<group>"; };
+6A48474D969CEF5F573DF09B /* StoryReaderView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = StoryReaderView.swift; sourceTree = "<group>"; };
 6B9A9F2AB21895E06989A4D5 /* FlashcardView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = FlashcardView.swift; sourceTree = "<group>"; };
 70960F0FD7509310B3F61C48 /* LyricsSearchView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = LyricsSearchView.swift; sourceTree = "<group>"; };
 711CB7539EF5887F6F7B8B82 /* FullTableView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = FullTableView.swift; sourceTree = "<group>"; };
+713F23A9C2935408B136C7C7 /* StoryGenerator.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = StoryGenerator.swift; sourceTree = "<group>"; };
 72CB5F95DF256DF7CD73269D /* NewWordIntent.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = NewWordIntent.swift; sourceTree = "<group>"; };
 731614CACCB73B6FD592D34A /* SentenceBuilderView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = SentenceBuilderView.swift; sourceTree = "<group>"; };
+76BE2A08EC694FF784ED5575 /* DictionaryService.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = DictionaryService.swift; sourceTree = "<group>"; };
 777C696A841803D5B775B678 /* ReferenceStore.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = ReferenceStore.swift; sourceTree = "<group>"; };
+79576893566932D2BE207528 /* ChatView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = ChatView.swift; sourceTree = "<group>"; };
 7E6AF62A3A949630E067DC22 /* Info.plist */ = {isa = PBXFileReference; lastKnownFileType = text.plist; path = Info.plist; sourceTree = "<group>"; };
 80D974250C396589656B8443 /* HandwritingCanvas.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = HandwritingCanvas.swift; sourceTree = "<group>"; };
 833516C5D57F164C8660A479 /* CourseView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = CourseView.swift; sourceTree = "<group>"; };
 83A8C1A048627C8DEB83C12D /* AnswerReviewView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = AnswerReviewView.swift; sourceTree = "<group>"; };
 842DB48F8570C39CDCFF2F57 /* PracticeSessionService.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = PracticeSessionService.swift; sourceTree = "<group>"; };
+854EA2A8D6CF203958BA3C24 /* TextbookExerciseView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = TextbookExerciseView.swift; sourceTree = "<group>"; };
 8C2D88FF9A3B0590B22C7837 /* conjuga_data.json */ = {isa = PBXFileReference; lastKnownFileType = text.json; path = conjuga_data.json; sourceTree = "<group>"; };
 8C935ECDF8A5D8D6FA541E20 /* GuideView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = GuideView.swift; sourceTree = "<group>"; };
 8E9BCDBB9BC24F5C8117767E /* WordOfDayWidget.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = WordOfDayWidget.swift; sourceTree = "<group>"; };
+940826D9ED5C18D2C4E7B2C7 /* ReflexiveVerbStore.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = ReflexiveVerbStore.swift; sourceTree = "<group>"; };
 9708FF3CF33E4765DB225F93 /* ConjugaWidgetExtension.appex */ = {isa = PBXFileReference; includeInIndex = 0; lastKnownFileType = "wrapper.app-extension"; path = ConjugaWidgetExtension.appex; sourceTree = BUILT_PRODUCTS_DIR; };
 9E1FB35614B709E6B1D1D017 /* Conjuga.entitlements */ = {isa = PBXFileReference; lastKnownFileType = text.plist.entitlements; path = Conjuga.entitlements; sourceTree = "<group>"; };
 A014EEC3EE08E945FBBA5335 /* Assets.xcassets */ = {isa = PBXFileReference; lastKnownFileType = folder.assetcatalog; path = Assets.xcassets; sourceTree = "<group>"; };
 A4B95B276C054DBFE508C4D1 /* StartupCoordinator.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = StartupCoordinator.swift; sourceTree = "<group>"; };
 A63061BBC8998DF33E3DCA2B /* VerbListView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = VerbListView.swift; sourceTree = "<group>"; };
+A7CDC5F2660A3009A3ADF048 /* StoryQuizView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = StoryQuizView.swift; sourceTree = "<group>"; };
 AC34396050805693AA4AC582 /* Info.plist */ = {isa = PBXFileReference; lastKnownFileType = text.plist; path = Info.plist; sourceTree = "<group>"; };
+B3EFFA19D0AB2528A868E8ED /* AnswerChecker.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = AnswerChecker.swift; sourceTree = "<group>"; };
 BC273716CD14A99EFF8206CA /* course_data.json */ = {isa = PBXFileReference; lastKnownFileType = text.json; path = course_data.json; sourceTree = "<group>"; };
 BCCC95A95581458E068E0484 /* SettingsView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = SettingsView.swift; sourceTree = "<group>"; };
 C359C051FB157EF447561405 /* PracticeViewModel.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = PracticeViewModel.swift; sourceTree = "<group>"; };
 CBCF6FCFA6B00151C2371E77 /* ReviewStore.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = ReviewStore.swift; sourceTree = "<group>"; };
 CF6D58AEE2F0DFE0F1829A73 /* SharedModels */ = {isa = PBXFileReference; lastKnownFileType = folder; name = SharedModels; path = SharedModels; sourceTree = SOURCE_ROOT; };
+D232CDA43CC9218D748BA121 /* ClozeView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = ClozeView.swift; sourceTree = "<group>"; };
 D570252DA3DCDD9217C71863 /* WidgetDataService.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = WidgetDataService.swift; sourceTree = "<group>"; };
 DA3A33983B2F2078C9EA1A3D /* MultipleChoiceView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = MultipleChoiceView.swift; sourceTree = "<group>"; };
 DADCA82DDD34DF36D59BB283 /* DataLoader.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = DataLoader.swift; sourceTree = "<group>"; };
@@ -193,25 +223,13 @@
 E1DBE662F89F02A0282F5BEE /* VerbDetailView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = VerbDetailView.swift; sourceTree = "<group>"; };
 E325FE0E484DE75009672D02 /* ConjugaWidgetBundle.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = ConjugaWidgetBundle.swift; sourceTree = "<group>"; };
 E536AD1180FE10576EAC884A /* UserProgress.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = UserProgress.swift; sourceTree = "<group>"; };
+E8D95887B18216FCA71643D6 /* VocabReviewView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = VocabReviewView.swift; sourceTree = "<group>"; };
 E8E9833868EB73AF9EB3A611 /* StoreInspector.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = StoreInspector.swift; sourceTree = "<group>"; };
 E972AA745F44586EF0B1B0C8 /* OnboardingView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = OnboardingView.swift; sourceTree = "<group>"; };
+EBEEC9CC9A8C502AF5F42914 /* VerbExampleCache.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = VerbExampleCache.swift; sourceTree = "<group>"; };
+F0A3099BE24A56F9B1F179E0 /* GrammarExercise.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = GrammarExercise.swift; sourceTree = "<group>"; };
+F92BCE1A6720E47FCD26BADC /* StemChangeConjugationView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = StemChangeConjugationView.swift; sourceTree = "<group>"; };
 FC2B1F646394D7C03493F1BF /* LyricsLibraryView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = LyricsLibraryView.swift; sourceTree = "<group>"; };
-978FB24DF8D7436CB5210ACE /* StudyTimerService.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = StudyTimerService.swift; sourceTree = "<group>"; };
-327659ABFD524514B6D2D505 /* StoryGenerator.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = StoryGenerator.swift; sourceTree = "<group>"; };
-950347251CC94D4A9DFF7CBC /* StoryLibraryView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = StoryLibraryView.swift; sourceTree = "<group>"; };
-2A8B6081226847E0A0A174BC /* StoryReaderView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = StoryReaderView.swift; sourceTree = "<group>"; };
-E292A183ABB24FFE9CB719C8 /* StoryQuizView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = StoryQuizView.swift; sourceTree = "<group>"; };
-A04370CF6B4E4D38BE3EB0C7 /* DictionaryService.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = DictionaryService.swift; sourceTree = "<group>"; };
-D3698CE7ACF148318615293E /* VocabReviewView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = VocabReviewView.swift; sourceTree = "<group>"; };
-A649B04B8B3C49419AD9219C /* ClozeView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = ClozeView.swift; sourceTree = "<group>"; };
-17E5252282F44ECD9BA70DB8 /* GrammarExercise.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = GrammarExercise.swift; sourceTree = "<group>"; };
-1F71CA5CD67342F18319DB9A /* GrammarExerciseView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = GrammarExerciseView.swift; sourceTree = "<group>"; };
-D535EF6988A24B47B70209A2 /* PronunciationService.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = PronunciationService.swift; sourceTree = "<group>"; };
-02B2179562E54E148C98219D /* ListeningView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = ListeningView.swift; sourceTree = "<group>"; };
-E10603F454E54341AA4B9931 /* ConversationService.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = ConversationService.swift; sourceTree = "<group>"; };
-5667AA04211A449A9150BD28 /* ChatLibraryView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = ChatLibraryView.swift; sourceTree = "<group>"; };
-FA5FE6E149F54A6BA7D01D99 /* ChatView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = ChatView.swift; sourceTree = "<group>"; };
-12E9DDEFD53C49E0A48EA655 /* FeatureReferenceView.swift */ = {isa = PBXFileReference; lastKnownFileType = sourcecode.swift; path = FeatureReferenceView.swift; sourceTree = "<group>"; };
 /* End PBXFileReference section */
 
 /* Begin PBXFrameworksBuildPhase section */
@@ -243,6 +261,7 @@
 1EB4830F9289AACC82D753F8 /* ConjugaApp.swift */,
 BC273716CD14A99EFF8206CA /* course_data.json */,
 7E6AF62A3A949630E067DC22 /* Info.plist */,
+3644B5ED77F29A65877D926A /* reflexive_verbs.json */,
 353C5DE41FD410FA82E3AED7 /* Models */,
 1994867BC8E985795A172854 /* Services */,
 BFC1AEBE02CE22E6474FFEA6 /* Utilities */,
@@ -255,8 +274,8 @@
 0931AEB5B728C3A03F06A1CA /* Settings */ = {
 isa = PBXGroup;
 children = (
+1C3E36BDC2540AF2A67AEEB1 /* FeatureReferenceView.swift */,
 BCCC95A95581458E068E0484 /* SettingsView.swift */,
-12E9DDEFD53C49E0A48EA655 /* FeatureReferenceView.swift */,
 );
 path = Settings;
 sourceTree = "<group>";
@@ -274,23 +293,27 @@
 isa = PBXGroup;
 children = (
 1C42EA0EBD4CB1E10A82BA25 /* AchievementService.swift */,
+B3EFFA19D0AB2528A868E8ED /* AnswerChecker.swift */,
+3A96C065B8787DEC6818E497 /* ConversationService.swift */,
 DAF7CA1E6F9979CB2C699FDC /* CourseReviewStore.swift */,
 DADCA82DDD34DF36D59BB283 /* DataLoader.swift */,
+76BE2A08EC694FF784ED5575 /* DictionaryService.swift */,
 3695075616689E72DBB26D4C /* HandwritingRecognizer.swift */,
 43B8AED76C14A05AF2339C27 /* LyricsSearchService.swift */,
 842DB48F8570C39CDCFF2F57 /* PracticeSessionService.swift */,
-E10603F454E54341AA4B9931 /* ConversationService.swift */,
-D535EF6988A24B47B70209A2 /* PronunciationService.swift */,
-A04370CF6B4E4D38BE3EB0C7 /* DictionaryService.swift */,
-327659ABFD524514B6D2D505 /* StoryGenerator.swift */,
-978FB24DF8D7436CB5210ACE /* StudyTimerService.swift */,
+4AE3D1D8723D5C41D3501774 /* PronunciationService.swift */,
 777C696A841803D5B775B678 /* ReferenceStore.swift */,
+940826D9ED5C18D2C4E7B2C7 /* ReflexiveVerbStore.swift */,
 CBCF6FCFA6B00151C2371E77 /* ReviewStore.swift */,
 49E3AD244327CBF24B7A2752 /* SpeechService.swift */,
 5C0E6EAFC0D24928BA956FA5 /* SRSEngine.swift */,
 A4B95B276C054DBFE508C4D1 /* StartupCoordinator.swift */,
 E8E9833868EB73AF9EB3A611 /* StoreInspector.swift */,
+713F23A9C2935408B136C7C7 /* StoryGenerator.swift */,
+4EC8C4E931AD7A1D87C490BB /* StudyTimerService.swift */,
 18CCD69C14D1B0CFBD03C92F /* SyncStatusMonitor.swift */,
+EBEEC9CC9A8C502AF5F42914 /* VerbExampleCache.swift */,
+02EB3F9305349775E0EB28B9 /* VerbExampleGenerator.swift */,
 D570252DA3DCDD9217C71863 /* WidgetDataService.swift */,
 );
 path = Services;
|
|||||||
isa = PBXGroup;
|
isa = PBXGroup;
|
||||||
children = (
|
children = (
|
||||||
0313D24F96E6A0039C34341F /* DailyLog.swift */,
|
0313D24F96E6A0039C34341F /* DailyLog.swift */,
|
||||||
|
F0A3099BE24A56F9B1F179E0 /* GrammarExercise.swift */,
|
||||||
4D389CA5B5C4E7A12CAEA5BC /* GrammarNote.swift */,
|
4D389CA5B5C4E7A12CAEA5BC /* GrammarNote.swift */,
|
||||||
626873572466403C0288090D /* QuizType.swift */,
|
626873572466403C0288090D /* QuizType.swift */,
|
||||||
0A8A63F750065CA4EF36B4D3 /* ReviewCard.swift */,
|
0A8A63F750065CA4EF36B4D3 /* ReviewCard.swift */,
|
||||||
@@ -315,8 +339,7 @@
|
|||||||
3BC3247457109FC6BF00D85B /* TenseInfo.swift */,
|
3BC3247457109FC6BF00D85B /* TenseInfo.swift */,
|
||||||
DAFE27F29412021AEC57E728 /* TestResult.swift */,
|
DAFE27F29412021AEC57E728 /* TestResult.swift */,
|
||||||
E536AD1180FE10576EAC884A /* UserProgress.swift */,
|
E536AD1180FE10576EAC884A /* UserProgress.swift */,
|
||||||
17E5252282F44ECD9BA70DB8 /* GrammarExercise.swift */,
|
);
|
||||||
);
|
|
||||||
path = Models;
|
path = Models;
|
||||||
sourceTree = "<group>";
|
sourceTree = "<group>";
|
||||||
};
|
};
|
||||||
@@ -341,6 +364,16 @@
 			path = ViewModels;
 			sourceTree = "<group>";
 		};
+		43E4D263B0AF47E401A51601 /* Stories */ = {
+			isa = PBXGroup;
+			children = (
+				15AC27B1E3D332709657F20B /* StoryLibraryView.swift */,
+				A7CDC5F2660A3009A3ADF048 /* StoryQuizView.swift */,
+				6A48474D969CEF5F573DF09B /* StoryReaderView.swift */,
+			);
+			path = Stories;
+			sourceTree = "<group>";
+		};
 		4B183AB0C56BC2EC302531E7 /* ConjugaWidget */ = {
 			isa = PBXGroup;
 			children = (
@@ -361,52 +394,33 @@
 			isa = PBXGroup;
 			children = (
 				83A8C1A048627C8DEB83C12D /* AnswerReviewView.swift */,
+				D232CDA43CC9218D748BA121 /* ClozeView.swift */,
 				6B9A9F2AB21895E06989A4D5 /* FlashcardView.swift */,
 				711CB7539EF5887F6F7B8B82 /* FullTableView.swift */,
 				1F842EB5E566C74658D918BB /* HandwritingView.swift */,
+				20D1904DF07E0A6816134CF3 /* ListeningView.swift */,
 				DA3A33983B2F2078C9EA1A3D /* MultipleChoiceView.swift */,
 				5BF946245110C92F087D81E8 /* PracticeHeaderView.swift */,
 				1EA0FA4F9149B9D8E197ADE9 /* PracticeView.swift */,
-				D3698CE7ACF148318615293E /* VocabReviewView.swift */,
 				731614CACCB73B6FD592D34A /* SentenceBuilderView.swift */,
 				10C16AA6022E4742898745CE /* TypingView.swift */,
+				E8D95887B18216FCA71643D6 /* VocabReviewView.swift */,
+				8FB89F19B33894DDF27C8EC2 /* Chat */,
 				895E547BEFB5D0FBF676BE33 /* Lyrics */,
-				8A1DED0596E04DDE9536A9A9 /* Stories */,
-				DFD75E32A53845A693D98F48 /* Chat */,
-				02B2179562E54E148C98219D /* ListeningView.swift */,
-				A649B04B8B3C49419AD9219C /* ClozeView.swift */,
+				43E4D263B0AF47E401A51601 /* Stories */,
 			);
 			path = Practice;
 			sourceTree = "<group>";
 		};
 		8102F7FA5BFE6D38B2212AD3 /* Guide */ = {
 			isa = PBXGroup;
 			children = (
+				07F24ED76059609D6857EC97 /* GrammarExerciseView.swift */,
 				3F1A6221A35699BD8065D064 /* GrammarNotesView.swift */,
 				8C935ECDF8A5D8D6FA541E20 /* GuideView.swift */,
-				1F71CA5CD67342F18319DB9A /* GrammarExerciseView.swift */,
 			);
 			path = Guide;
 			sourceTree = "<group>";
 		};
-		DFD75E32A53845A693D98F48 /* Chat */ = {
-			isa = PBXGroup;
-			children = (
-				5667AA04211A449A9150BD28 /* ChatLibraryView.swift */,
-				FA5FE6E149F54A6BA7D01D99 /* ChatView.swift */,
-			);
-			path = Chat;
-			sourceTree = "<group>";
-		};
-		8A1DED0596E04DDE9536A9A9 /* Stories */ = {
-			isa = PBXGroup;
-			children = (
-				950347251CC94D4A9DFF7CBC /* StoryLibraryView.swift */,
-				2A8B6081226847E0A0A174BC /* StoryReaderView.swift */,
-				E292A183ABB24FFE9CB719C8 /* StoryQuizView.swift */,
-			);
-			path = Stories;
-			sourceTree = "<group>";
-		};
 		895E547BEFB5D0FBF676BE33 /* Lyrics */ = {
 			isa = PBXGroup;
@@ -419,6 +433,15 @@
 			path = Lyrics;
 			sourceTree = "<group>";
 		};
+		8FB89F19B33894DDF27C8EC2 /* Chat */ = {
+			isa = PBXGroup;
+			children = (
+				648436F8326CF95777E2FA58 /* ChatLibraryView.swift */,
+				79576893566932D2BE207528 /* ChatView.swift */,
+			);
+			path = Chat;
+			sourceTree = "<group>";
+		};
 		A591A3B6F1F13D23D68D7A9D = {
 			isa = PBXGroup;
 			children = (
@@ -457,12 +480,16 @@
 		BE5A40BAC9DD6884C58A2096 /* Course */ = {
 			isa = PBXGroup;
 			children = (
+				34C67DD1A1CB9B8B5A2BDCED /* CheckpointExamView.swift */,
 				143D06606AE10DCA30A140C2 /* CourseQuizView.swift */,
 				833516C5D57F164C8660A479 /* CourseView.swift */,
 				631DC0A942DD57C81DECE083 /* DeckStudyView.swift */,
+				F92BCE1A6720E47FCD26BADC /* StemChangeConjugationView.swift */,
+				496D851D2D95BEA283C9FD45 /* TextbookChapterListView.swift */,
+				39908548430FDF01D76201FB /* TextbookChapterView.swift */,
+				854EA2A8D6CF203958BA3C24 /* TextbookExerciseView.swift */,
 				2931634BEB33B93429CE254F /* VocabFlashcardView.swift */,
 				5E7EF4161C73AAC67B3A0004 /* WeekTestView.swift */,
-				EA1F177F7ABF5D2E4E5466CD /* CheckpointExamView.swift */,
 			);
 			path = Course;
 			sourceTree = "<group>";
@@ -585,6 +612,7 @@
 				F59655A8B8FCE6264315DD33 /* Assets.xcassets in Resources */,
 				CF9E48ADF0501FB79F3DDB7B /* conjuga_data.json in Resources */,
 				2B5B2D63DC9C290F66890A4A /* course_data.json in Resources */,
+				97A2088134FC6CB41C507182 /* reflexive_verbs.json in Resources */,
 			);
 			runOnlyForDeploymentPostprocessing = 0;
 		};
@@ -597,8 +625,14 @@
 			files = (
 				97EFCF6724CE59DC4F0274FD /* AchievementService.swift in Sources */,
 				261E582449BED6EF41881B04 /* AdaptiveContainer.swift in Sources */,
+				48967E05C65E32F7082716CD /* AnswerChecker.swift in Sources */,
 				CAC69045B74249F121643E88 /* AnswerReviewView.swift in Sources */,
+				B48C0015BE53279B0631C2D7 /* ChatLibraryView.swift in Sources */,
+				8BD4B5A2DDDD4BE4B4A94962 /* ChatView.swift in Sources */,
+				9F0ACDC1F4ACB1E0D331283D /* CheckpointExamView.swift in Sources */,
+				04C74D9E0ED84BF785A8331C /* ClozeView.swift in Sources */,
 				C2B3D97F119EFCE97E3CB1CE /* ConjugaApp.swift in Sources */,
+				ABBE5080E254D1D3E3465E40 /* ConversationService.swift in Sources */,
 				C3851F960C1162239DC2F935 /* CourseQuizView.swift in Sources */,
 				8C43F09F52EA9B537EA27E43 /* CourseReviewStore.swift in Sources */,
 				F0D0778207F144D6AC3D39C3 /* CourseView.swift in Sources */,
@@ -607,8 +641,12 @@
 				35A0F6E7124D989312721F7D /* DashboardView.swift in Sources */,
 				C8C3880535008764B7117049 /* DataLoader.swift in Sources */,
 				84CCBAE22A9E0DA27AE28723 /* DeckStudyView.swift in Sources */,
+				90B76E34F195223580F7CCCF /* DictionaryService.swift in Sources */,
+				14242FD1F500D296D41E927C /* FeatureReferenceView.swift in Sources */,
 				D4DDE25FB2DAD315370AFB74 /* FlashcardView.swift in Sources */,
 				A9959AE6C87B4AD21554E401 /* FullTableView.swift in Sources */,
+				8B516215E0842189DEA0DBB1 /* GrammarExercise.swift in Sources */,
+				8DC1CB93333F94C5297D33BF /* GrammarExerciseView.swift in Sources */,
 				377C4AA000CE9A0D8CC43DA9 /* GrammarNote.swift in Sources */,
 				F84706B47A2156B2138FB8D5 /* GrammarNotesView.swift in Sources */,
 				760628EFE1CF191CE2FC07DC /* GuideView.swift in Sources */,
@@ -616,6 +654,7 @@
 				E7BFEE9A90E1300EFF5B1F32 /* HandwritingRecognizer.swift in Sources */,
 				33E885EB38C3BB0CB058871A /* HandwritingView.swift in Sources */,
 				28D2F489F1927BCCC2B56086 /* IrregularHighlightText.swift in Sources */,
+				A7DF435F99E66E067F2B33E1 /* ListeningView.swift in Sources */,
 				519E68D2DF4C80AB96058C0D /* LyricsConfirmationView.swift in Sources */,
 				B4603AA6EFB134794AA39BF4 /* LyricsLibraryView.swift in Sources */,
 				615D3128ED6E84EF59BB5AA3 /* LyricsReaderView.swift in Sources */,
@@ -628,8 +667,10 @@
 				352A5BAA6E406AA5850653A4 /* PracticeSessionService.swift in Sources */,
 				1A230C01A045F0C095BFBD35 /* PracticeView.swift in Sources */,
 				C0BAEF49A6270D8F64CF13D6 /* PracticeViewModel.swift in Sources */,
+				0D0D3B5CC128D1A1D1252282 /* PronunciationService.swift in Sources */,
 				53A0AC57EAC44B676C997374 /* QuizType.swift in Sources */,
 				DF82C2579F9889DDB06362CC /* ReferenceStore.swift in Sources */,
+				A4DA8CF1957A4FE161830AB2 /* ReflexiveVerbStore.swift in Sources */,
 				FC7873F97017532C215DAD34 /* ReviewCard.swift in Sources */,
 				728702D9AA7A8BDABBA62513 /* ReviewStore.swift in Sources */,
 				51D072AF30F4B12CD3E8F918 /* SRSEngine.swift in Sources */,
@@ -637,39 +678,34 @@
 				60E86BABE2735E2052B99DF3 /* SettingsView.swift in Sources */,
 				D3FFE73A5AD27F1759F50727 /* SpeechService.swift in Sources */,
 				9D9FD3853C5C969C62AE9999 /* StartupCoordinator.swift in Sources */,
+				20B71403A8D305C29C73ADA2 /* StemChangeConjugationView.swift in Sources */,
 				E814A9CF1067313F74B509C6 /* StoreInspector.swift in Sources */,
+				ACE9D8B3116757B5D6F0F766 /* StoryGenerator.swift in Sources */,
+				61328552866DE185B15011A9 /* StoryLibraryView.swift in Sources */,
+				0AD63CAED7C568590A16E879 /* StoryQuizView.swift in Sources */,
+				CC886125F8ECE72D1AAD4861 /* StoryReaderView.swift in Sources */,
 				36F92EBAEB0E5F2B010401EF /* StreakCalendarView.swift in Sources */,
+				5EE41911F3D17224CAB359ED /* StudyTimerService.swift in Sources */,
 				6ED2AC2CAA54688161D4B920 /* SyncStatusMonitor.swift in Sources */,
 				D40B4E919DE379C50265CA9F /* SyncToast.swift in Sources */,
 				AAC6F85A1C3B6C1186E1656A /* TenseEndingTable.swift in Sources */,
 				0A89DCC82BE11605CB866DEF /* TenseInfo.swift in Sources */,
 				46943ACFABF329DE1CBFC471 /* TensePill.swift in Sources */,
 				D7456B289D135CEB3A15122B /* TestResult.swift in Sources */,
+				81E4DB9F64F3FF3AB8BCB03A /* TextbookChapterListView.swift in Sources */,
+				B10A324C06F0957DDE2233F8 /* TextbookChapterView.swift in Sources */,
+				354631F309E625046A3A436B /* TextbookExerciseView.swift in Sources */,
 				27BA7FA9356467846A07697D /* TypingView.swift in Sources */,
 				943A94A8C71919F3EFC0E8FA /* UserProgress.swift in Sources */,
 				50E0095A23E119D1AB561232 /* VerbDetailView.swift in Sources */,
+				BA3DE2DA319AA3B572C19E11 /* VerbExampleCache.swift in Sources */,
+				4C577CF6B137D0A32759A169 /* VerbExampleGenerator.swift in Sources */,
 				81FA7EBCF18F0AAE0BF385C3 /* VerbListView.swift in Sources */,
 				4005E258FDF03C8B3A0D53BD /* VocabFlashcardView.swift in Sources */,
+				78FE99C5D511737B6877EDD5 /* VocabReviewView.swift in Sources */,
 				6BB4B0A655E6CB6F82D81B5A /* WeekTestView.swift in Sources */,
-				968D626462B0ADEC8D7D56AA /* CheckpointExamView.swift in Sources */,
 				E99473B7DF9BCAE150E9D1E1 /* WidgetDataService.swift in Sources */,
-				DDF58F3899FC4B92BF6587D2 /* StudyTimerService.swift in Sources */,
-				8C1E4E7F36D64EFF8D092AC8 /* StoryGenerator.swift in Sources */,
-				4C2649215B81470195F38ED0 /* StoryLibraryView.swift in Sources */,
-				8E3D8E8254CF4213B9D9FAD3 /* StoryReaderView.swift in Sources */,
-				12D2C9311D5C4764B48B1754 /* StoryQuizView.swift in Sources */,
-				8D7CA0F4496B44C28CD5EBD5 /* DictionaryService.swift in Sources */,
-				3EC2A2F4B9C24B029DA49C40 /* VocabReviewView.swift in Sources */,
-				53908E41767B438C8BD229CD /* ClozeView.swift in Sources */,
-				65ABC39F35804C619DAB3200 /* GrammarExercise.swift in Sources */,
-				B73F6EED00304B718C6FEFFA /* GrammarExerciseView.swift in Sources */,
-				EA07DB964C8940F69C14DE2C /* PronunciationService.swift in Sources */,
-				4DCC5CC233DE4701A12FD7EB /* ListeningView.swift in Sources */,
-				35D6404C60C249D5995AD895 /* ConversationService.swift in Sources */,
-				C8AF0931F7FD458C80B6EC0D /* ChatLibraryView.swift in Sources */,
-				6CCC8D51F5524688A4BC5AF8 /* ChatView.swift in Sources */,
-				8510085D78E248D885181E80 /* FeatureReferenceView.swift in Sources */,
 			);
 			runOnlyForDeploymentPostprocessing = 0;
 		};
 		217A29BCEDD9D44B6DD85AF6 /* Sources */ = {
ConjugaApp.swift
@@ -40,6 +40,8 @@ struct ConjugaApp: App {
     @State private var syncMonitor = SyncStatusMonitor()
     @State private var studyTimer = StudyTimerService()
     @State private var dictionary = DictionaryService()
+    @State private var verbExampleCache = VerbExampleCache()
+    @State private var reflexiveStore = ReflexiveVerbStore()
 
     let localContainer: ModelContainer
     let cloudContainer: ModelContainer
@@ -69,12 +71,14 @@ struct ConjugaApp: App {
             schema: Schema([
                 ReviewCard.self, CourseReviewCard.self, UserProgress.self,
                 TestResult.self, DailyLog.self, SavedSong.self, Story.self, Conversation.self,
+                TextbookExerciseAttempt.self,
             ]),
             cloudKitDatabase: .private("iCloud.com.conjuga.app")
         )
         cloudContainer = try ModelContainer(
             for: ReviewCard.self, CourseReviewCard.self, UserProgress.self,
             TestResult.self, DailyLog.self, SavedSong.self, Story.self, Conversation.self,
+            TextbookExerciseAttempt.self,
             configurations: cloudConfig
         )
 
@@ -111,6 +115,8 @@ struct ConjugaApp: App {
         .environment(\.cloudModelContextProvider, { cloudContainer.mainContext })
         .environment(studyTimer)
         .environment(dictionary)
+        .environment(verbExampleCache)
+        .environment(reflexiveStore)
         .task {
             let needsSeed = await DataLoader.needsSeeding(container: localContainer)
             if needsSeed {
@@ -209,6 +215,7 @@ struct ConjugaApp: App {
             schema: Schema([
                 Verb.self, VerbForm.self, IrregularSpan.self,
                 TenseGuide.self, CourseDeck.self, VocabCard.self,
+                TextbookChapter.self,
             ]),
             url: url,
             cloudKitDatabase: .none
@@ -216,6 +223,7 @@ struct ConjugaApp: App {
         return try ModelContainer(
             for: Verb.self, VerbForm.self, IrregularSpan.self,
             TenseGuide.self, CourseDeck.self, VocabCard.self,
+            TextbookChapter.self,
             configurations: localConfig
         )
     }
Info.plist
@@ -2,6 +2,10 @@
 <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
 <plist version="1.0">
 <dict>
+	<key>BGTaskSchedulerPermittedIdentifiers</key>
+	<array>
+		<string>com.conjuga.app.refresh</string>
+	</array>
 	<key>CFBundleDevelopmentRegion</key>
 	<string>$(DEVELOPMENT_LANGUAGE)</string>
 	<key>CFBundleDisplayName</key>
@@ -22,21 +26,13 @@
 	<string>1</string>
 	<key>LSApplicationCategoryType</key>
 	<string>public.app-category.education</string>
-	<key>UILaunchScreen</key>
-	<dict/>
-	<key>NSSpeechRecognitionUsageDescription</key>
-	<string>Conjuga uses speech recognition to check your Spanish pronunciation and transcribe what you say during listening exercises.</string>
-	<key>NSMicrophoneUsageDescription</key>
-	<string>Conjuga needs microphone access to record your voice for pronunciation practice.</string>
 	<key>UIBackgroundModes</key>
 	<array>
 		<string>fetch</string>
 		<string>remote-notification</string>
 	</array>
-	<key>BGTaskSchedulerPermittedIdentifiers</key>
-	<array>
-		<string>com.conjuga.app.refresh</string>
-	</array>
+	<key>UILaunchScreen</key>
+	<dict/>
 	<key>UISupportedInterfaceOrientations</key>
 	<array>
 		<string>UIInterfaceOrientationPortrait</string>
UserProgress.swift
@@ -14,6 +14,7 @@ final class UserProgress {
     var selectedLevel: String = "basic"
     var showVosotros: Bool = true
     var autoFillStem: Bool = false
+    var showReflexiveVerbsOnly: Bool = false
 
     // Legacy CloudKit array-backed fields retained for migration compatibility.
     var enabledTenses: [String] = []
@@ -21,6 +22,10 @@ final class UserProgress {
     var enabledTensesBlob: String = ""
     var unlockedBadgesBlob: String = ""
 
+    // Multi-select level + irregularity filters (Issue #26).
+    var selectedLevelsBlob: String = ""
+    var enabledIrregularCategoriesBlob: String = ""
+
     init() {}
 
     var selectedVerbLevel: VerbLevel {
@@ -44,6 +49,44 @@ final class UserProgress {
         }
     }
+
+    /// Levels currently enabled for practice. Multi-select per Issue #26.
+    /// Setting this also syncs `selectedLevel` to the highest-ranked selection so
+    /// legacy single-level consumers (widget, AI scenarios, word-of-day) stay consistent.
+    var selectedVerbLevels: Set<VerbLevel> {
+        get {
+            let raw = decodeStringArray(from: selectedLevelsBlob, fallback: [])
+            let decoded = Set(raw.compactMap(VerbLevel.init(rawValue:)))
+            if !decoded.isEmpty { return decoded }
+            // Pre-migration users: treat the single selectedLevel as the set.
+            if let legacy = VerbLevel(rawValue: selectedLevel) {
+                return [legacy]
+            }
+            return []
+        }
+        set {
+            let sorted = newValue.map(\.rawValue)
+            selectedLevelsBlob = Self.encodeStringArray(sorted)
+            selectedLevel = VerbLevel.highest(in: newValue)?.rawValue ?? VerbLevel.basic.rawValue
+        }
+    }
+
+    /// The single representative level for callers that need one value
+    /// (word-of-day widget, AI chat/story scenarios). Highest selected level.
+    var primaryLevel: VerbLevel {
+        VerbLevel.highest(in: selectedVerbLevels) ?? selectedVerbLevel
+    }
+
+    var enabledIrregularCategories: Set<IrregularSpan.SpanCategory> {
+        get {
+            let raw = decodeStringArray(from: enabledIrregularCategoriesBlob, fallback: [])
+            return Set(raw.compactMap(IrregularSpan.SpanCategory.init(rawValue:)))
+        }
+        set {
+            let sorted = newValue.map(\.rawValue)
+            enabledIrregularCategoriesBlob = Self.encodeStringArray(sorted)
+        }
+    }
 
     func setTenseEnabled(_ tenseId: String, enabled: Bool) {
         var values = Set(enabledTenseIDs)
         if enabled {
@@ -54,6 +97,26 @@ final class UserProgress {
         enabledTenseIDs = values.sorted()
     }
 
+    func setLevelEnabled(_ level: VerbLevel, enabled: Bool) {
+        var values = selectedVerbLevels
+        if enabled {
+            values.insert(level)
+        } else {
+            values.remove(level)
+        }
+        selectedVerbLevels = values
+    }
+
+    func setIrregularCategoryEnabled(_ category: IrregularSpan.SpanCategory, enabled: Bool) {
+        var values = enabledIrregularCategories
+        if enabled {
+            values.insert(category)
+        } else {
+            values.remove(category)
+        }
+        enabledIrregularCategories = values
+    }
+
     func unlockBadge(_ badgeId: String) {
         var values = Set(unlockedBadgeIDs)
         values.insert(badgeId)
@@ -67,6 +130,9 @@ final class UserProgress {
         if unlockedBadgesBlob.isEmpty && !unlockedBadges.isEmpty {
             unlockedBadgeIDs = unlockedBadges
         }
+        if selectedLevelsBlob.isEmpty, let legacy = VerbLevel(rawValue: selectedLevel) {
+            selectedVerbLevels = [legacy]
+        }
     }
 
     private func decodeStringArray(from blob: String, fallback: [String]) -> [String] {
@@ -86,4 +152,5 @@ final class UserProgress {
         }
         return string
     }
+
 }
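The blob-backed accessors above exist because the CloudKit-synced model stores each multi-select set in a single string field rather than an array. The diff does not show `encodeStringArray`/`decodeStringArray` or `VerbLevel.highest(in:)`, so the following standalone sketch substitutes a comma-joined encoding, a toy `Level` enum, and `max()` for the legacy-field sync — all illustrative assumptions, not the app's real implementation:

```swift
import Foundation

// Toy stand-in for VerbLevel; Comparable order follows declaration order.
enum Level: String, CaseIterable, Comparable {
    case basic, intermediate, advanced
    static func < (lhs: Level, rhs: Level) -> Bool {
        Level.allCases.firstIndex(of: lhs)! < Level.allCases.firstIndex(of: rhs)!
    }
}

struct Progress {
    var selectedLevel = "basic"   // legacy single-level field
    var selectedLevelsBlob = ""   // scalar blob, CloudKit-friendly

    var selectedLevels: Set<Level> {
        get {
            let decoded = Set(selectedLevelsBlob.split(separator: ",")
                .compactMap { Level(rawValue: String($0)) })
            if !decoded.isEmpty { return decoded }
            // Pre-migration users: fall back to the legacy single level.
            if let legacy = Level(rawValue: selectedLevel) { return [legacy] }
            return []
        }
        set {
            selectedLevelsBlob = newValue.map(\.rawValue).sorted().joined(separator: ",")
            // Keep the legacy field pointing at the highest selection.
            selectedLevel = (newValue.max() ?? .basic).rawValue
        }
    }
}

var p = Progress()
p.selectedLevels = [.basic, .advanced]
print(p.selectedLevelsBlob)   // advanced,basic
print(p.selectedLevel)        // advanced
```

The round-trip property keeps both representations in sync on every write, which is why the migration hook above only needs to seed the blob once for pre-existing users.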
Conjuga/Conjuga/Services/AnswerChecker.swift (new file, 10 lines)
@@ -0,0 +1,10 @@
+import Foundation
+import SharedModels
+
+/// Thin app-side wrapper around the SharedModels `AnswerGrader`. All logic
+/// lives in SharedModels so it can be unit tested.
+enum AnswerChecker {
+    static func grade(userText: String, canonical: String, alternates: [String] = []) -> TextbookGrade {
+        AnswerGrader.grade(userText: userText, canonical: canonical, alternates: alternates)
+    }
+}
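`AnswerGrader` and `TextbookGrade` live in SharedModels and are not part of this diff. As a hypothetical illustration only of what a grader with this signature typically does — normalize the input, then match against the canonical answer and any accepted alternates, with partial credit for accent slips — here is a self-contained stand-in (`Grade`, `Grader`, and the accent-tolerance rule are assumptions, not the real SharedModels API):

```swift
import Foundation

// Hypothetical grade values; the real TextbookGrade may differ.
enum Grade { case correct, accentMismatch, incorrect }

enum Grader {
    static func normalize(_ s: String) -> String {
        s.trimmingCharacters(in: .whitespacesAndNewlines).lowercased()
    }
    static func stripAccents(_ s: String) -> String {
        // Diacritic-insensitive fold: "habló" -> "hablo".
        s.folding(options: .diacriticInsensitive, locale: Locale(identifier: "es"))
    }
    static func grade(userText: String, canonical: String, alternates: [String] = []) -> Grade {
        let user = normalize(userText)
        let accepted = [canonical] + alternates
        if accepted.contains(where: { normalize($0) == user }) { return .correct }
        // Credit answers that differ only in accents, e.g. "hablo" vs "habló".
        if accepted.contains(where: { stripAccents(normalize($0)) == stripAccents(user) }) {
            return .accentMismatch
        }
        return .incorrect
    }
}

print(Grader.grade(userText: " Habló ", canonical: "habló"))  // correct
print(Grader.grade(userText: "hablo", canonical: "habló"))    // accentMismatch
```

Keeping the real grading logic in a separate package, as the wrapper above does, is what makes this behavior testable without booting the app target.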
DataLoader.swift
@@ -6,6 +6,9 @@ actor DataLoader {
     static let courseDataVersion = 7
     static let courseDataKey = "courseDataVersion"
 
+    static let textbookDataVersion = 12
+    static let textbookDataKey = "textbookDataVersion"
+
     /// Quick check: does the DB need seeding or course data refresh?
     static func needsSeeding(container: ModelContainer) async -> Bool {
         let context = ModelContext(container)
@@ -15,6 +18,9 @@ actor DataLoader {
         let storedVersion = UserDefaults.standard.integer(forKey: courseDataKey)
         if storedVersion < courseDataVersion { return true }
 
+        let textbookVersion = UserDefaults.standard.integer(forKey: textbookDataKey)
+        if textbookVersion < textbookDataVersion { return true }
+
         return false
     }
 
@@ -133,6 +139,61 @@ actor DataLoader {
 
         // Seed course data (uses the same mainContext so @Query sees it)
         seedCourseData(context: context)
+
+        // Seed textbook data — only bump the version key if the seed
+        // actually inserted rows, so a missing/unparseable bundle doesn't
+        // permanently lock us out of future re-seeds.
+        if seedTextbookData(context: context) {
+            UserDefaults.standard.set(textbookDataVersion, forKey: textbookDataKey)
+        }
     }
 
+    /// Re-seed textbook data if the version has changed OR if the rows are
+    /// missing on disk. The row-count check exists because anything opening
+    /// this store with a subset schema (e.g. an out-of-date widget extension)
+    /// can destructively drop the rows without touching UserDefaults — so a
+    /// pure version-flag trigger would leave us permanently empty.
+    static func refreshTextbookDataIfNeeded(container: ModelContainer) async {
+        let shared = UserDefaults.standard
+        let context = ModelContext(container)
+        let existingCount = (try? context.fetchCount(FetchDescriptor<TextbookChapter>())) ?? 0
+        let versionCurrent = shared.integer(forKey: textbookDataKey) >= textbookDataVersion
+
+        if versionCurrent && existingCount > 0 { return }
+
+        if versionCurrent {
+            print("Textbook data version current but store has \(existingCount) chapters — re-seeding...")
+        } else {
+            print("Textbook data version outdated — re-seeding...")
+        }
+
+        // Fetch + delete individually instead of batch delete. SwiftData's
+        // context.delete(model:) hits the store directly and doesn't always
+        // clear the unique-constraint index before the reseed's save runs,
+        // so re-inserting rows with the same .unique id can throw.
+        let textbookCourseName = "Complete Spanish Step-by-Step"
+        if let existing = try? context.fetch(FetchDescriptor<TextbookChapter>()) {
+            for chapter in existing { context.delete(chapter) }
+        }
+        let deckDescriptor = FetchDescriptor<CourseDeck>(
+            predicate: #Predicate<CourseDeck> { $0.courseName == textbookCourseName }
+        )
+        if let decks = try? context.fetch(deckDescriptor) {
+            for deck in decks { context.delete(deck) }
+        }
+        do {
+            try context.save()
+        } catch {
+            print("[DataLoader] ERROR: textbook wipe save failed: \(error)")
+            return
+        }
+
+        if seedTextbookData(context: context) {
+            shared.set(textbookDataVersion, forKey: textbookDataKey)
+            print("Textbook data re-seeded to version \(textbookDataVersion)")
+        } else {
+            print("Textbook re-seed failed — leaving version key untouched so next launch retries")
+        }
+    }
 
     /// Re-seed course data if the version has changed (e.g. examples were added).
@@ -170,6 +231,10 @@ actor DataLoader {
         // Re-seed course data
         seedCourseData(context: context)
+
+        // Textbook's vocab decks/cards share the same CourseDeck/VocabCard
+        // entities, so they were just wiped above. Reseed them.
seedTextbookVocabDecks(context: context, courseName: "Complete Spanish Step-by-Step")
|
||||||
|
|
||||||
shared.set(courseDataVersion, forKey: courseDataKey)
|
shared.set(courseDataVersion, forKey: courseDataKey)
|
||||||
print("Course data re-seeded to version \(courseDataVersion)")
|
print("Course data re-seeded to version \(courseDataVersion)")
|
||||||
}
|
}
|
||||||
@@ -336,4 +401,169 @@ actor DataLoader {
|
|||||||
context.insert(reviewCard)
|
context.insert(reviewCard)
|
||||||
return reviewCard
|
return reviewCard
|
||||||
}
|
}
|
||||||
|
|
||||||
|
// MARK: - Textbook seeding
|
||||||
|
|
||||||
|
@discardableResult
|
||||||
|
private static func seedTextbookData(context: ModelContext) -> Bool {
|
||||||
|
let url = Bundle.main.url(forResource: "textbook_data", withExtension: "json")
|
||||||
|
?? Bundle.main.bundleURL.appendingPathComponent("textbook_data.json")
|
||||||
|
guard let data = try? Data(contentsOf: url) else {
|
||||||
|
print("[DataLoader] textbook_data.json not bundled — skipping textbook seed")
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
guard let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any] else {
|
||||||
|
print("[DataLoader] ERROR: Could not parse textbook_data.json")
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
let courseName = (json["courseName"] as? String) ?? "Textbook"
|
||||||
|
guard let chapters = json["chapters"] as? [[String: Any]] else {
|
||||||
|
print("[DataLoader] ERROR: textbook_data.json missing chapters")
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
|
||||||
|
var inserted = 0
|
||||||
|
for ch in chapters {
|
||||||
|
guard let id = ch["id"] as? String,
|
||||||
|
let number = ch["number"] as? Int,
|
||||||
|
let title = ch["title"] as? String,
|
||||||
|
let blocksRaw = ch["blocks"] as? [[String: Any]] else { continue }
|
||||||
|
|
||||||
|
let part = (ch["part"] as? Int) ?? 0
|
||||||
|
|
||||||
|
// Normalize each block to canonical keys expected by TextbookBlock decoder.
|
||||||
|
var normalized: [[String: Any]] = []
|
||||||
|
var exerciseCount = 0
|
||||||
|
var vocabTableCount = 0
|
||||||
|
for (i, b) in blocksRaw.enumerated() {
|
||||||
|
var out: [String: Any] = [:]
|
||||||
|
out["index"] = i
|
||||||
|
let kind = (b["kind"] as? String) ?? ""
|
||||||
|
out["kind"] = kind
|
||||||
|
switch kind {
|
||||||
|
case "heading":
|
||||||
|
if let level = b["level"] { out["level"] = level }
|
||||||
|
if let text = b["text"] { out["text"] = text }
|
||||||
|
case "paragraph":
|
||||||
|
if let text = b["text"] { out["text"] = text }
|
||||||
|
case "key_vocab_header":
|
||||||
|
break
|
||||||
|
case "vocab_table":
|
||||||
|
vocabTableCount += 1
|
||||||
|
if let src = b["sourceImage"] { out["sourceImage"] = src }
|
||||||
|
if let lines = b["ocrLines"] { out["ocrLines"] = lines }
|
||||||
|
if let conf = b["ocrConfidence"] { out["ocrConfidence"] = conf }
|
||||||
|
// Paired Spanish→English cards from the bounding-box extractor.
|
||||||
|
if let cards = b["cards"] as? [[String: Any]], !cards.isEmpty {
|
||||||
|
let normalized: [[String: Any]] = cards.compactMap { c in
|
||||||
|
guard let front = c["front"] as? String,
|
||||||
|
let back = c["back"] as? String else { return nil }
|
||||||
|
return ["front": front, "back": back]
|
||||||
|
}
|
||||||
|
if !normalized.isEmpty {
|
||||||
|
out["cards"] = normalized
|
||||||
|
}
|
||||||
|
}
|
||||||
|
case "exercise":
|
||||||
|
exerciseCount += 1
|
||||||
|
if let exId = b["id"] { out["exerciseId"] = exId }
|
||||||
|
if let inst = b["instruction"] { out["instruction"] = inst }
|
||||||
|
if let extra = b["extra"] { out["extra"] = extra }
|
||||||
|
if let prompts = b["prompts"] { out["prompts"] = prompts }
|
||||||
|
if let items = b["answerItems"] { out["answerItems"] = items }
|
||||||
|
if let freeform = b["freeform"] { out["freeform"] = freeform }
|
||||||
|
default:
|
||||||
|
break
|
||||||
|
}
|
||||||
|
normalized.append(out)
|
||||||
|
}
|
||||||
|
|
||||||
|
let bodyJSON: Data
|
||||||
|
do {
|
||||||
|
bodyJSON = try JSONSerialization.data(withJSONObject: normalized, options: [])
|
||||||
|
} catch {
|
||||||
|
print("[DataLoader] failed to encode chapter \(number) blocks: \(error)")
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
|
||||||
|
let chapter = TextbookChapter(
|
||||||
|
id: id,
|
||||||
|
number: number,
|
||||||
|
title: title,
|
||||||
|
part: part,
|
||||||
|
courseName: courseName,
|
||||||
|
bodyJSON: bodyJSON,
|
||||||
|
exerciseCount: exerciseCount,
|
||||||
|
vocabTableCount: vocabTableCount
|
||||||
|
)
|
||||||
|
context.insert(chapter)
|
||||||
|
inserted += 1
|
||||||
|
}
|
||||||
|
|
||||||
|
do {
|
||||||
|
try context.save()
|
||||||
|
} catch {
|
||||||
|
print("[DataLoader] ERROR: textbook chapter save failed: \(error)")
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
|
||||||
|
// Verify rows actually hit the store — guards against the case where
|
||||||
|
// save returned cleanly but no rows were persisted.
|
||||||
|
let persisted = (try? context.fetchCount(FetchDescriptor<TextbookChapter>())) ?? 0
|
||||||
|
guard persisted > 0 else {
|
||||||
|
print("[DataLoader] ERROR: textbook seeded \(inserted) chapters but persisted count is 0")
|
||||||
|
return false
|
||||||
|
}
|
||||||
|
|
||||||
|
// Seed textbook-derived vocabulary flashcards as CourseDecks so the
|
||||||
|
// existing Course UI can surface them alongside LanGo decks.
|
||||||
|
seedTextbookVocabDecks(context: context, courseName: courseName)
|
||||||
|
|
||||||
|
print("Textbook seeding complete: \(inserted) chapters inserted, \(persisted) persisted")
|
||||||
|
return true
|
||||||
|
}
|
||||||
|
|
||||||
|
private static func seedTextbookVocabDecks(context: ModelContext, courseName: String) {
|
||||||
|
let url = Bundle.main.url(forResource: "textbook_vocab", withExtension: "json")
|
||||||
|
?? Bundle.main.bundleURL.appendingPathComponent("textbook_vocab.json")
|
||||||
|
guard let data = try? Data(contentsOf: url),
|
||||||
|
let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
|
||||||
|
let chaptersArr = json["chapters"] as? [[String: Any]]
|
||||||
|
else { return }
|
||||||
|
|
||||||
|
let courseSlug = courseName.lowercased()
|
||||||
|
.replacingOccurrences(of: " ", with: "-")
|
||||||
|
|
||||||
|
var deckCount = 0
|
||||||
|
var cardCount = 0
|
||||||
|
for chData in chaptersArr {
|
||||||
|
guard let chNum = chData["chapter"] as? Int,
|
||||||
|
let cards = chData["cards"] as? [[String: Any]],
|
||||||
|
!cards.isEmpty else { continue }
|
||||||
|
|
||||||
|
let deckId = "textbook_\(courseSlug)_ch\(chNum)"
|
||||||
|
let title = "Chapter \(chNum) vocabulary"
|
||||||
|
let deck = CourseDeck(
|
||||||
|
id: deckId,
|
||||||
|
weekNumber: chNum,
|
||||||
|
title: title,
|
||||||
|
cardCount: cards.count,
|
||||||
|
courseName: courseName,
|
||||||
|
isReversed: false
|
||||||
|
)
|
||||||
|
context.insert(deck)
|
||||||
|
deckCount += 1
|
||||||
|
|
||||||
|
for c in cards {
|
||||||
|
guard let front = c["front"] as? String,
|
||||||
|
let back = c["back"] as? String else { continue }
|
||||||
|
let card = VocabCard(front: front, back: back, deckId: deckId)
|
||||||
|
card.deck = deck
|
||||||
|
context.insert(card)
|
||||||
|
cardCount += 1
|
||||||
|
}
|
||||||
|
}
|
||||||
|
try? context.save()
|
||||||
|
print("Textbook vocab seeding complete: \(deckCount) decks, \(cardCount) cards")
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|||||||
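The re-seed gate above combines two signals: the `UserDefaults` version key and a live row count from the store. Reduced to a pure function, the decision table is easy to check; this is an illustrative sketch (`shouldReseed` is not part of the app's API, which inlines the logic):

```swift
import Foundation

// Pure-function model of the gate at the top of refreshTextbookDataIfNeeded.
// Hypothetical helper name, for illustration only.
func shouldReseed(storedVersion: Int, currentVersion: Int, persistedRowCount: Int) -> Bool {
    let versionCurrent = storedVersion >= currentVersion
    // Re-seed when the content version moved on, OR when the rows vanished
    // (e.g. dropped by an out-of-date extension) despite a current flag.
    return !(versionCurrent && persistedRowCount > 0)
}

// Up to date on both axes: no work.
assert(shouldReseed(storedVersion: 2, currentVersion: 2, persistedRowCount: 10) == false)
// Version bumped: re-seed even though rows exist.
assert(shouldReseed(storedVersion: 1, currentVersion: 2, persistedRowCount: 10) == true)
// Rows destroyed behind our back: re-seed even though the flag looks current.
assert(shouldReseed(storedVersion: 2, currentVersion: 2, persistedRowCount: 0) == true)
```

The row-count leg is what rescues the store after a destructive schema drop, since that path never touches `UserDefaults`.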
@@ -4,14 +4,23 @@ import SwiftData

 struct PracticeSettings: Sendable {
     let selectedLevel: String
+    let selectedLevels: Set<String>
     let enabledTenses: Set<String>
+    let enabledIrregularCategories: Set<IrregularSpan.SpanCategory>
     let showVosotros: Bool
+    let showReflexiveVerbsOnly: Bool
+    let reflexiveBaseInfinitives: Set<String>

-    init(progress: UserProgress?) {
-        let resolved = progress?.enabledTenseIDs ?? []
+    init(progress: UserProgress?, reflexiveBaseInfinitives: Set<String> = []) {
+        let resolvedTenses = progress?.enabledTenseIDs ?? []
+        let resolvedLevels = progress?.selectedVerbLevels ?? []
         self.selectedLevel = progress?.selectedLevel ?? VerbLevel.basic.rawValue
-        self.enabledTenses = Set(resolved)
+        self.selectedLevels = Set(resolvedLevels.map(\.rawValue))
+        self.enabledTenses = Set(resolvedTenses)
+        self.enabledIrregularCategories = progress?.enabledIrregularCategories ?? []
         self.showVosotros = progress?.showVosotros ?? true
+        self.showReflexiveVerbsOnly = progress?.showReflexiveVerbsOnly ?? false
+        self.reflexiveBaseInfinitives = reflexiveBaseInfinitives
     }

     var selectionTenseIDs: [String] {

@@ -36,16 +45,25 @@ struct FullTablePrompt {
 struct PracticeSessionService {
     let localContext: ModelContext
     let cloudContext: ModelContext
+    let reflexiveBaseInfinitives: Set<String>
     private let referenceStore: ReferenceStore

-    init(localContext: ModelContext, cloudContext: ModelContext) {
+    init(
+        localContext: ModelContext,
+        cloudContext: ModelContext,
+        reflexiveBaseInfinitives: Set<String> = []
+    ) {
         self.localContext = localContext
         self.cloudContext = cloudContext
+        self.reflexiveBaseInfinitives = reflexiveBaseInfinitives
         self.referenceStore = ReferenceStore(context: localContext)
     }

     func settings() -> PracticeSettings {
-        PracticeSettings(progress: ReviewStore.fetchOrCreateUserProgress(context: cloudContext))
+        PracticeSettings(
+            progress: ReviewStore.fetchOrCreateUserProgress(context: cloudContext),
+            reflexiveBaseInfinitives: reflexiveBaseInfinitives
+        )
     }

     func nextCard(for focusMode: FocusMode, excludingVerbId lastVerbId: Int? = nil) -> PracticeCardLoad? {

@@ -79,7 +97,12 @@ struct PracticeSessionService {

     func randomFullTablePrompt() -> FullTablePrompt? {
         let settings = settings()
-        let verbs = referenceStore.fetchVerbs(selectedLevel: settings.selectedLevel)
+        // Full Table practice is regular-only, so the irregular-category setting is
+        // deliberately ignored here (applying it would empty the pool).
+        let verbs = applyReflexiveFilter(
+            to: referenceStore.fetchVerbs(selectedLevels: settings.selectedLevels),
+            settings: settings
+        )
         guard !verbs.isEmpty else { return nil }

         for _ in 0..<40 {

@@ -89,6 +112,11 @@ struct PracticeSessionService {

             let forms = referenceStore.fetchForms(verbId: verb.id, tenseId: tenseId)
             if forms.isEmpty { continue }
+
+            // Full Table practice is for regular patterns only — skip combos
+            // where any form in this (verb, tense) is irregular.
+            if forms.contains(where: { $0.regularity != "regular" }) { continue }
+
             return FullTablePrompt(verb: verb, tenseInfo: tenseInfo, forms: forms)
         }

@@ -131,6 +159,27 @@ struct PracticeSessionService {
         return buildCardLoad(verb: verb, form: form)
     }

+    /// When the user has "Reflexive verbs only" enabled, restrict the allowed
+    /// verb-id set to IDs whose infinitive is in the curated list.
+    /// No-op otherwise.
+    private func applyReflexiveFilter(to ids: Set<Int>, settings: PracticeSettings) -> Set<Int> {
+        guard settings.showReflexiveVerbsOnly, !settings.reflexiveBaseInfinitives.isEmpty else {
+            return ids
+        }
+        let matching = ids.filter { id in
+            guard let verb = referenceStore.fetchVerb(id: id) else { return false }
+            return settings.reflexiveBaseInfinitives.contains(verb.infinitive.lowercased())
+        }
+        return matching
+    }
+
+    private func applyReflexiveFilter(to verbs: [Verb], settings: PracticeSettings) -> [Verb] {
+        guard settings.showReflexiveVerbsOnly, !settings.reflexiveBaseInfinitives.isEmpty else {
+            return verbs
+        }
+        return verbs.filter { settings.reflexiveBaseInfinitives.contains($0.infinitive.lowercased()) }
+    }
+
     private func buildCardLoad(verb: Verb, form: VerbForm) -> PracticeCardLoad {
         let spans = referenceStore.fetchSpans(
             verbId: form.verbId,

@@ -152,7 +201,13 @@ struct PracticeSessionService {

     private func fetchDueCard(excluding lastVerbId: Int?) -> ReviewCard? {
         let settings = settings()
-        let allowedVerbIds = referenceStore.allowedVerbIDs(selectedLevel: settings.selectedLevel)
+        let allowedVerbIds = applyReflexiveFilter(
+            to: referenceStore.allowedVerbIDs(
+                selectedLevels: settings.selectedLevels,
+                irregularCategories: settings.enabledIrregularCategories
+            ),
+            settings: settings
+        )
         let now = Date()
         var descriptor = FetchDescriptor<ReviewCard>(
             predicate: #Predicate<ReviewCard> { $0.dueDate <= now },

@@ -179,7 +234,13 @@ struct PracticeSessionService {

     private func pickWeakForm() -> VerbForm? {
         let settings = settings()
-        let allowedVerbIds = referenceStore.allowedVerbIDs(selectedLevel: settings.selectedLevel)
+        let allowedVerbIds = applyReflexiveFilter(
+            to: referenceStore.allowedVerbIDs(
+                selectedLevels: settings.selectedLevels,
+                irregularCategories: settings.enabledIrregularCategories
+            ),
+            settings: settings
+        )

         let descriptor = FetchDescriptor<ReviewCard>(
             predicate: #Predicate<ReviewCard> { $0.easeFactor < 2.0 && $0.repetitions > 0 },

@@ -201,7 +262,15 @@ struct PracticeSessionService {

     private func pickIrregularForm(filter: IrregularityFilter) -> VerbForm? {
         let settings = settings()
-        let allowedVerbIds = referenceStore.allowedVerbIDs(selectedLevel: settings.selectedLevel)
+        // Focus mode explicitly selects one irregular category, so the user's
+        // settings-level irregular filter is deliberately skipped here.
+        let allowedVerbIds = applyReflexiveFilter(
+            to: referenceStore.allowedVerbIDs(
+                selectedLevels: settings.selectedLevels,
+                irregularCategories: []
+            ),
+            settings: settings
+        )
         let typeRange: ClosedRange<Int>

         switch filter {

@@ -238,7 +307,13 @@ struct PracticeSessionService {
     private func pickCommonTenseForm() -> VerbForm? {
         let settings = settings()
         let coreTenseIDs = TenseID.coreTenseIDs
-        let verbs = referenceStore.fetchVerbs(selectedLevel: settings.selectedLevel)
+        let verbs = applyReflexiveFilter(
+            to: referenceStore.fetchVerbs(
+                selectedLevels: settings.selectedLevels,
+                irregularCategories: settings.enabledIrregularCategories
+            ),
+            settings: settings
+        )
         guard let verb = verbs.randomElement() else { return nil }

         let forms = referenceStore.fetchVerbForms(verbId: verb.id).filter { form in

@@ -251,7 +326,13 @@ struct PracticeSessionService {

     private func pickRandomForm() -> VerbForm? {
         let settings = settings()
-        let verbs = referenceStore.fetchVerbs(selectedLevel: settings.selectedLevel)
+        let verbs = applyReflexiveFilter(
+            to: referenceStore.fetchVerbs(
+                selectedLevels: settings.selectedLevels,
+                irregularCategories: settings.enabledIrregularCategories
+            ),
+            settings: settings
+        )
         guard let verb = verbs.randomElement() else { return nil }

         let forms = referenceStore.fetchVerbForms(verbId: verb.id).filter { form in
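Both `applyReflexiveFilter` overloads added above share one rule: an inactive toggle or an empty curated list means no filtering at all. A stand-alone sketch of the array overload, with a stub type in place of the app's `Verb` and `PracticeSettings`:

```swift
import Foundation

// Stub standing in for the app's Verb model.
struct StubVerb { let id: Int; let infinitive: String }

// Mirrors applyReflexiveFilter(to:settings:): pass the pool through untouched
// unless the reflexive-only toggle is on AND a curated list is available.
func reflexiveFilter(
    _ verbs: [StubVerb],
    showReflexiveOnly: Bool,
    curatedBases: Set<String>
) -> [StubVerb] {
    guard showReflexiveOnly, !curatedBases.isEmpty else { return verbs }
    return verbs.filter { curatedBases.contains($0.infinitive.lowercased()) }
}

let pool = [StubVerb(id: 1, infinitive: "hablar"), StubVerb(id: 2, infinitive: "Poner")]
// Toggle off: pool untouched.
assert(reflexiveFilter(pool, showReflexiveOnly: false, curatedBases: ["poner"]).count == 2)
// Toggle on: case-insensitive match against the curated base infinitives.
assert(reflexiveFilter(pool, showReflexiveOnly: true, curatedBases: ["poner"]).map(\.id) == [2])
```

The empty-list guard matters because `PracticeSessionService` defaults `reflexiveBaseInfinitives` to `[]`; without it, an uninjected list plus an enabled toggle would silently empty every practice pool.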
@@ -65,28 +65,26 @@ final class PronunciationService {

         do {
             let audioSession = AVAudioSession.sharedInstance()
-            try audioSession.setCategory(.playAndRecord, mode: .measurement, options: [.duckOthers, .defaultToSpeaker])
+            try audioSession.setCategory(.record, mode: .measurement, options: [.duckOthers])
             try audioSession.setActive(true, options: .notifyOthersOnDeactivation)

-            audioEngine = AVAudioEngine()
+            // Feed the recognizer through SFSpeechAudioBufferRecognitionRequest;
+            // earlier revisions hit zero-length-buffer assertion crashes on some
+            // devices, so the engine tap below is configured defensively.
             request = SFSpeechAudioBufferRecognitionRequest()
-            guard let audioEngine, let request else { return }
+            guard let request else { return }
             request.shouldReportPartialResults = true
+            request.requiresOnDeviceRecognition = recognizer.supportsOnDeviceRecognition
+
+            // Drive the tap from AVAudioEngine's input node.
+            audioEngine = AVAudioEngine()
+            guard let audioEngine else { return }

             let inputNode = audioEngine.inputNode
-            let recordingFormat = inputNode.outputFormat(forBus: 0)
-
-            // Validate format — 0 channels crashes installTap
-            guard recordingFormat.channelCount > 0 else {
-                print("[PronunciationService] invalid recording format (0 channels)")
-                self.audioEngine = nil
-                self.request = nil
-                return
-            }
-
-            inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { buffer, _ in
-                guard buffer.frameLength > 0 else { return }
+
+            // Use nil format — lets the system pick a compatible format
+            // and avoids the mDataByteSize(0) assertion from format mismatches
+            inputNode.installTap(onBus: 0, bufferSize: 4096, format: nil) { buffer, _ in
                 request.append(buffer)
             }
@@ -27,6 +27,50 @@ struct ReferenceStore {
         Set(fetchVerbs(selectedLevel: selectedLevel).map(\.id))
     }

+    /// Union of data-levels for all selected user-facing levels.
+    /// Empty input produces an empty result — callers decide how to handle that.
+    func fetchVerbs(selectedLevels: Set<String>) -> [Verb] {
+        guard !selectedLevels.isEmpty else { return [] }
+        let ids = PracticeFilter.verbIDs(
+            matchingLevels: selectedLevels,
+            in: fetchVerbs().map { .init(id: $0.id, level: $0.level) }
+        )
+        return fetchVerbs().filter { ids.contains($0.id) }
+    }
+
+    /// Practice verb pool intersecting selected levels with selected irregular-span categories.
+    /// Delegates to `PracticeFilter` so the intersection logic is unit-tested
+    /// in SharedModels without a ModelContainer (Issue #26).
+    func allowedVerbIDs(
+        selectedLevels: Set<String>,
+        irregularCategories: Set<IrregularSpan.SpanCategory>
+    ) -> Set<Int> {
+        PracticeFilter.allowedVerbIDs(
+            verbs: fetchVerbs().map { .init(id: $0.id, level: $0.level) },
+            spans: allIrregularSlots(),
+            selectedLevels: selectedLevels,
+            irregularCategories: irregularCategories
+        )
+    }
+
+    /// Convenience: full Verb objects passing both filters.
+    func fetchVerbs(
+        selectedLevels: Set<String>,
+        irregularCategories: Set<IrregularSpan.SpanCategory>
+    ) -> [Verb] {
+        let ids = allowedVerbIDs(
+            selectedLevels: selectedLevels,
+            irregularCategories: irregularCategories
+        )
+        return fetchVerbs().filter { ids.contains($0.id) }
+    }
+
+    private func allIrregularSlots() -> [PracticeFilter.IrregularSlot] {
+        let descriptor = FetchDescriptor<IrregularSpan>()
+        let spans = (try? context.fetch(descriptor)) ?? []
+        return spans.map { .init(verbId: $0.verbId, category: $0.category) }
+    }
+
     func fetchVerb(id: Int) -> Verb? {
         let descriptor = FetchDescriptor<Verb>(predicate: #Predicate<Verb> { $0.id == id })
         return (try? context.fetch(descriptor))?.first
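The `PracticeFilter` that `allowedVerbIDs` delegates to is not shown in this diff, so the following is a sketch of one plausible reading of its semantics: the level filter always applies, and the irregular-category filter applies only when at least one category is selected (consistent with `pickIrregularForm` passing an empty set to opt out). Types here are illustrative stand-ins:

```swift
import Foundation

// Illustrative shapes; the real PracticeFilter types live in SharedModels.
struct VerbRow { let id: Int; let level: String }
struct SpanRow { let verbId: Int; let category: String }

// Assumed semantics: verbs matching any selected level, optionally
// intersected with verbs owning a span in a selected irregular category.
func allowedVerbIDs(
    verbs: [VerbRow],
    spans: [SpanRow],
    selectedLevels: Set<String>,
    irregularCategories: Set<String>
) -> Set<Int> {
    let byLevel = Set(verbs.filter { selectedLevels.contains($0.level) }.map(\.id))
    guard !irregularCategories.isEmpty else { return byLevel }
    let withMatchingSpan = Set(spans.filter { irregularCategories.contains($0.category) }.map(\.verbId))
    return byLevel.intersection(withMatchingSpan)
}

let verbs = [VerbRow(id: 1, level: "basic"), VerbRow(id: 2, level: "basic"), VerbRow(id: 3, level: "advanced")]
let spans = [SpanRow(verbId: 2, category: "stem_change")]
// No categories selected: level filter only.
assert(allowedVerbIDs(verbs: verbs, spans: spans, selectedLevels: ["basic"], irregularCategories: []) == [1, 2])
// Category selected: intersection with span owners.
assert(allowedVerbIDs(verbs: verbs, spans: spans, selectedLevels: ["basic"], irregularCategories: ["stem_change"]) == [2])
```

Keeping this as pure set arithmetic over value types is what lets it be unit-tested without a `ModelContainer`, as the doc comment on `allowedVerbIDs` notes.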
Conjuga/Conjuga/Services/ReflexiveVerbStore.swift (new file, 59 lines)
@@ -0,0 +1,59 @@
+import Foundation
+import SharedModels
+
+/// Loads and queries the curated reflexive-verb list bundled with the app
+/// (Gitea issue #28). One JSON load at init; in-memory lookup thereafter.
+///
+/// `entries(for:)` returns a list because a single base infinitive may map to
+/// multiple reflexive entries — e.g., `ponerse` covers both "to put on
+/// (clothing) / to become" and "to come to an agreement (with)".
+@MainActor
+@Observable
+final class ReflexiveVerbStore {
+
+    /// Process-wide accessor for services that can't use @Environment injection
+    /// (e.g. PracticeSessionService called from ViewModels). Views should still
+    /// prefer @Environment(ReflexiveVerbStore.self) for consistency.
+    static let shared = ReflexiveVerbStore()
+
+    private(set) var entries: [ReflexiveVerb] = []
+    private var indexByBase: [String: [ReflexiveVerb]] = [:]
+
+    /// Set of base infinitives present in the list. Cheap lookup for filters.
+    private(set) var baseInfinitives: Set<String> = []
+
+    init(bundle: Bundle = .main) {
+        load(from: bundle)
+    }
+
+    /// All reflexive entries whose base infinitive matches (case-insensitive).
+    func entries(for baseInfinitive: String) -> [ReflexiveVerb] {
+        indexByBase[baseInfinitive.lowercased()] ?? []
+    }
+
+    /// Convenience — true when the verb's bare infinitive appears in the list.
+    func isReflexive(baseInfinitive: String) -> Bool {
+        baseInfinitives.contains(baseInfinitive.lowercased())
+    }
+
+    private func load(from bundle: Bundle) {
+        guard let url = bundle.url(forResource: "reflexive_verbs", withExtension: "json"),
+              let data = try? Data(contentsOf: url) else {
+            print("[ReflexiveVerbStore] bundled reflexive_verbs.json not found")
+            return
+        }
+        do {
+            let decoded = try JSONDecoder().decode([ReflexiveVerb].self, from: data)
+            entries = decoded
+            var index: [String: [ReflexiveVerb]] = [:]
+            for entry in decoded {
+                index[entry.baseInfinitive.lowercased(), default: []].append(entry)
+            }
+            indexByBase = index
+            baseInfinitives = Set(index.keys)
+            print("[ReflexiveVerbStore] loaded \(decoded.count) entries (\(baseInfinitives.count) distinct base infinitives)")
+        } catch {
+            print("[ReflexiveVerbStore] decode failed: \(error)")
+        }
+    }
+}
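The grouping step inside `load(from:)` is the reason `entries(for:)` returns an array: one base infinitive can carry several reflexive senses. The same grouping in isolation, with a hypothetical stand-in for the app's `ReflexiveVerb` type:

```swift
import Foundation

// Illustrative stand-in for ReflexiveVerb; field names are assumptions.
struct Entry { let baseInfinitive: String; let gloss: String }

// Same shape as the index-building loop in load(from:).
func buildIndex(_ entries: [Entry]) -> [String: [Entry]] {
    var index: [String: [Entry]] = [:]
    for e in entries {
        index[e.baseInfinitive.lowercased(), default: []].append(e)
    }
    return index
}

let index = buildIndex([
    Entry(baseInfinitive: "poner", gloss: "to put on (clothing) / to become"),
    Entry(baseInfinitive: "poner", gloss: "to come to an agreement (with)"),
    Entry(baseInfinitive: "ir", gloss: "to go away, leave"),
])
assert(index["poner"]?.count == 2)
assert(Set(index.keys) == ["poner", "ir"])
```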
@@ -9,6 +9,7 @@ enum StartupCoordinator {
     static func bootstrap(localContainer: ModelContainer) async {
         await DataLoader.seedIfNeeded(container: localContainer)
         await DataLoader.refreshCourseDataIfNeeded(container: localContainer)
+        await DataLoader.refreshTextbookDataIfNeeded(container: localContainer)
     }

     /// Recurring maintenance: legacy migrations, identity repair, cloud dedup.
@@ -26,12 +26,14 @@ enum StoreInspector {
         let hasZVERBFORM = tables.contains("ZVERBFORM")
         let hasZTENSEGUIDE = tables.contains("ZTENSEGUIDE")
         let hasZVOCABCARD = tables.contains("ZVOCABCARD")
+        let hasZTEXTBOOKCHAPTER = tables.contains("ZTEXTBOOKCHAPTER")

         var summary = "[StoreInspector:\(label)] \(tables.count) tables"
         summary += " | ZVERB=\(hasZVERB ? queryInt(db: db, sql: "SELECT COUNT(*) FROM ZVERB") : -1)"
         summary += " ZVERBFORM=\(hasZVERBFORM ? queryInt(db: db, sql: "SELECT COUNT(*) FROM ZVERBFORM") : -1)"
         summary += " ZTENSEGUIDE=\(hasZTENSEGUIDE ? queryInt(db: db, sql: "SELECT COUNT(*) FROM ZTENSEGUIDE") : -1)"
         summary += " ZVOCABCARD=\(hasZVOCABCARD ? queryInt(db: db, sql: "SELECT COUNT(*) FROM ZVOCABCARD") : -1)"
+        summary += " ZTEXTBOOKCHAPTER=\(hasZTEXTBOOKCHAPTER ? queryInt(db: db, sql: "SELECT COUNT(*) FROM ZTEXTBOOKCHAPTER") : -1)"
         print(summary)

         // Log all Z-tables (SwiftData entity tables start with Z, minus Core Data system tables)
65
Conjuga/Conjuga/Services/VerbExampleCache.swift
Normal file
65
Conjuga/Conjuga/Services/VerbExampleCache.swift
Normal file
@@ -0,0 +1,65 @@
import Foundation
import SharedModels

/// Disk-backed cache for verb example sentences (Issue #27). One JSON file
/// in the Caches directory keyed by verb id; lazy-loaded on first access and
/// write-through on every generation. Matches DictionaryService's disk pattern.
///
/// Cache eviction by the OS is acceptable because contents are regenerable.
@MainActor
@Observable
final class VerbExampleCache {

    private var store: [Int: [VerbExample]] = [:]
    private var isLoaded = false

    init() {}

    // MARK: - Public API

    /// Look up cached examples for a verb; returns nil on miss.
    /// Safe to call before `loadIfNeeded()`; it triggers the disk load itself.
    func examples(for verbId: Int) -> [VerbExample]? {
        loadIfNeeded()
        return store[verbId]
    }

    /// Store newly generated examples and persist to disk.
    func setExamples(_ examples: [VerbExample], for verbId: Int) {
        loadIfNeeded()
        store[verbId] = examples
        save()
    }

    // MARK: - Disk I/O

    private static var cacheURL: URL {
        FileManager.default.urls(for: .cachesDirectory, in: .userDomainMask)[0]
            .appendingPathComponent("verb_examples.json")
    }

    private func loadIfNeeded() {
        guard !isLoaded else { return }
        defer { isLoaded = true }

        guard let data = try? Data(contentsOf: Self.cacheURL),
              let decoded = try? JSONDecoder().decode([String: [VerbExample]].self, from: data)
        else { return }

        // Persisted with String keys because JSON object keys are strings;
        // convert back to Int for in-memory lookup.
        var rebuilt: [Int: [VerbExample]] = [:]
        for (key, value) in decoded {
            if let id = Int(key) {
                rebuilt[id] = value
            }
        }
        store = rebuilt
    }

    private func save() {
        let serialized = Dictionary(uniqueKeysWithValues: store.map { (String($0.key), $0.value) })
        guard let data = try? JSONEncoder().encode(serialized) else { return }
        try? data.write(to: Self.cacheURL)
    }
}
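The cache above is deliberately write-through: a call site checks `examples(for:)` first and only generates on a miss. A minimal sketch of that read-through flow, assuming a hypothetical caller (the `generate` closure and `loadExamples` name are illustrative, not part of this diff):

```swift
// Hypothetical call site illustrating the intended read-through pattern.
@MainActor
func loadExamples(
    verbId: Int,
    cache: VerbExampleCache,
    generate: () async throws -> [VerbExample]
) async rethrows -> [VerbExample] {
    // Fast path: disk-backed hit (first call lazily loads the JSON file).
    if let cached = cache.examples(for: verbId) {
        return cached
    }
    // Miss: generate once, then persist write-through for the next launch.
    let fresh = try await generate()
    cache.setExamples(fresh, for: verbId)
    return fresh
}
```

Because `setExamples` saves on every store, a cache evicted by the OS only costs one regeneration per verb.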
78
Conjuga/Conjuga/Services/VerbExampleGenerator.swift
Normal file
@@ -0,0 +1,78 @@
import Foundation
import FoundationModels
import SharedModels

/// Generates a set of example sentences for a single verb, one per core tense
/// (Issue #27). Mirrors the StoryGenerator pattern: @Generable response types,
/// a static availability flag, and a single generate(...) entry point.
@MainActor
struct VerbExampleGenerator {

    // MARK: - Generable Types

    @Generable
    struct GeneratedExampleSet {
        @Guide(
            description: "Six example sentences, one per tense in the exact order requested. Each sentence must actually use the target verb conjugated in that tense.",
            .count(6)
        )
        var examples: [GeneratedExample]
    }

    @Generable
    struct GeneratedExample {
        @Guide(description: "The tense id this sentence demonstrates. Must match one of the ids provided in the prompt exactly (e.g. ind_presente).")
        var tenseId: String

        @Guide(description: "A natural Spanish sentence, 6-14 words, that uses the target verb in the specified tense. For imperative tenses use tú or nosotros — never yo.")
        var spanish: String

        @Guide(description: "An accurate, idiomatic English translation of the Spanish sentence.")
        var english: String
    }

    // MARK: - Generation

    /// Generate one example per tense in `tenseIds`. Returns the examples in the
    /// same order as `tenseIds`; any tense the model skipped is simply omitted.
    static func generate(
        verbInfinitive: String,
        verbEnglish: String,
        tenseIds: [String]
    ) async throws -> [VerbExample] {
        let tenseList = tenseIds
            .compactMap { id in TenseInfo.find(id).map { "\(id) (\($0.english))" } }
            .joined(separator: ", ")

        let session = LanguageModelSession(instructions: """
            You are a Spanish language teacher writing short example sentences for a learner.
            The learner is studying the verb "\(verbInfinitive)" (to \(verbEnglish)).
            Write one sentence per requested tense. Each sentence must:
            - Actually conjugate "\(verbInfinitive)" in that tense (not just mention it).
            - Be 6-14 words, natural and everyday.
            - Use vocabulary appropriate for intermediate learners.
            - Vary subjects and contexts across the set; do not reuse the same subject twice.
            For imperative tenses, address "tú" or "nosotros" — never "yo".
            """)

        let prompt = """
            Write example sentences for "\(verbInfinitive)" in these tenses, in this order:
            \(tenseList)

            Return one GeneratedExample per tense with the matching tenseId, spanish, and english.
            """

        let response = try await session.respond(to: prompt, generating: GeneratedExampleSet.self)

        // Map by tenseId and return in the caller's requested order so the UI
        // renders a predictable sequence even if the model shuffles its output.
        let byTense = Dictionary(uniqueKeysWithValues: response.content.examples.map {
            ($0.tenseId, VerbExample(tenseId: $0.tenseId, spanish: $0.spanish, english: $0.english))
        })
        return tenseIds.compactMap { byTense[$0] }
    }

    static var isAvailable: Bool {
        SystemLanguageModel.default.availability == .available
    }
}
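One caveat in `generate(...)`: `Dictionary(uniqueKeysWithValues:)` traps at runtime if the model ever emits the same `tenseId` twice, which a generative model can do despite the `@Guide` constraints. A defensive variant of just the reordering step, sketched as a standalone helper (not the committed code):

```swift
// Keep the first example per tenseId instead of trapping on duplicate keys,
// then return the examples in the caller's requested order.
func reorder(_ examples: [VerbExample], by tenseIds: [String]) -> [VerbExample] {
    let byTense = Dictionary(examples.map { ($0.tenseId, $0) },
                             uniquingKeysWith: { first, _ in first })
    return tenseIds.compactMap { byTense[$0] }
}
```

`Dictionary(_:uniquingKeysWith:)` makes the duplicate policy explicit, at the cost of one extra closure.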
@@ -96,7 +96,11 @@ final class PracticeViewModel {
         currentSpans = []
         hasCards = true
         isLoading = true
-        let service = PracticeSessionService(localContext: localContext, cloudContext: cloudContext)
+        let service = PracticeSessionService(
+            localContext: localContext,
+            cloudContext: cloudContext,
+            reflexiveBaseInfinitives: ReflexiveVerbStore.shared.baseInfinitives
+        )
         guard let cardLoad = service.nextCard(for: focusMode, excludingVerbId: currentVerb?.id) else {
             clearCurrentCard()
             hasCards = false
@@ -5,9 +5,14 @@ import SwiftData
 struct CourseView: View {
     @Environment(\.cloudModelContextProvider) private var cloudModelContextProvider
     @Query(sort: \CourseDeck.weekNumber) private var decks: [CourseDeck]
+    @Query(sort: \TextbookChapter.number) private var textbookChapters: [TextbookChapter]
     @AppStorage("selectedCourse") private var selectedCourse: String?
     @State private var testResults: [TestResult] = []
+
+    private var textbookCourses: [String] {
+        Array(Set(textbookChapters.map(\.courseName))).sorted()
+    }

     private var cloudModelContext: ModelContext { cloudModelContextProvider() }

     private var courseNames: [String] {
@@ -62,6 +67,32 @@ struct CourseView: View {
                     description: Text("Course data is loading...")
                 )
             } else {
+                // Textbook entry (shown above course picker when available)
+                if !textbookCourses.isEmpty {
+                    Section {
+                        ForEach(textbookCourses, id: \.self) { name in
+                            NavigationLink(value: TextbookDestination(courseName: name)) {
+                                HStack(spacing: 12) {
+                                    Image(systemName: "book.fill")
+                                        .font(.title3)
+                                        .foregroundStyle(.indigo)
+                                        .frame(width: 32)
+                                    VStack(alignment: .leading, spacing: 2) {
+                                        Text(name)
+                                            .font(.subheadline.weight(.semibold))
+                                        Text("Read chapters, do exercises")
+                                            .font(.caption)
+                                            .foregroundStyle(.secondary)
+                                    }
+                                    Spacer()
+                                }
+                            }
+                        }
+                    } header: {
+                        Text("Textbook")
+                    }
+                }
+
                 // Course picker
                 if courseNames.count > 1 {
                     Section {
@@ -155,6 +186,24 @@ struct CourseView: View {
         .navigationDestination(for: CheckpointDestination.self) { dest in
             CheckpointExamView(courseName: dest.courseName, throughWeek: dest.throughWeek)
         }
+        .navigationDestination(for: TextbookDestination.self) { dest in
+            TextbookChapterListView(courseName: dest.courseName)
+        }
+        .navigationDestination(for: TextbookChapter.self) { chapter in
+            TextbookChapterView(chapter: chapter)
+        }
+        .navigationDestination(for: TextbookExerciseDestination.self) { dest in
+            textbookExerciseView(for: dest)
+        }
+    }
+
+    @ViewBuilder
+    private func textbookExerciseView(for dest: TextbookExerciseDestination) -> some View {
+        if let chapter = textbookChapters.first(where: { $0.id == dest.chapterId }) {
+            TextbookExerciseView(chapter: chapter, blockIndex: dest.blockIndex)
+        } else {
+            ContentUnavailableView("Exercise unavailable", systemImage: "questionmark.circle")
+        }
     }
 }

@@ -175,6 +224,10 @@ struct CheckpointDestination: Hashable {
     let throughWeek: Int
 }
+
+struct TextbookDestination: Hashable {
+    let courseName: String
+}

 // MARK: - Deck Row

 private struct DeckRowView: View {
@@ -8,6 +8,11 @@ struct DeckStudyView: View {
     @State private var isStudying = false
     @State private var speechService = SpeechService()
     @State private var deckCards: [VocabCard] = []
+    @State private var expandedConjugations: Set<String> = []
+
+    private var isStemChangingDeck: Bool {
+        deck.title.localizedCaseInsensitiveContains("stem changing")
+    }

     var body: some View {
         cardListView
@@ -19,7 +24,8 @@ struct DeckStudyView: View {
             VocabFlashcardView(
                 cards: deckCards.shuffled(),
                 speechService: speechService,
-                onDone: { isStudying = false }
+                onDone: { isStudying = false },
+                deckTitle: deck.title
             )
             .toolbar {
                 ToolbarItem(placement: .cancellationAction) {
@@ -30,6 +36,24 @@ struct DeckStudyView: View {
                 }
             }
         }
+
+    /// Reversed stem-change decks have `front` as English, so prefer the
+    /// Spanish side when the card is stored that way. Strip parenthetical
+    /// notes and the reflexive `-se` ending for verb-table lookup.
+    private func inferInfinitive(card: VocabCard) -> String {
+        let raw: String
+        if deck.isReversed {
+            raw = card.back
+        } else {
+            raw = card.front
+        }
+        var t = raw.trimmingCharacters(in: .whitespacesAndNewlines)
+        if let paren = t.firstIndex(of: "(") {
+            t = String(t[..<paren]).trimmingCharacters(in: .whitespacesAndNewlines)
+        }
+        if t.hasSuffix("se") && t.count > 4 { t = String(t.dropLast(2)) }
+        return t
+    }

     private func loadCards() {
         let deckId = deck.id
         let descriptor = FetchDescriptor<VocabCard>(
@@ -107,6 +131,36 @@ struct DeckStudyView: View {
                     .multilineTextAlignment(.trailing)
             }

+            // Stem-change conjugation toggle
+            if isStemChangingDeck {
+                let verb = inferInfinitive(card: card)
+                let isOpen = expandedConjugations.contains(verb)
+                Button {
+                    withAnimation(.smooth) {
+                        if isOpen {
+                            expandedConjugations.remove(verb)
+                        } else {
+                            expandedConjugations.insert(verb)
+                        }
+                    }
+                } label: {
+                    Label(
+                        isOpen ? "Hide conjugation" : "Show conjugation",
+                        systemImage: isOpen ? "chevron.up" : "chevron.down"
+                    )
+                    .font(.caption.weight(.medium))
+                }
+                .buttonStyle(.borderless)
+                .tint(.blue)
+                .padding(.leading, 42)
+
+                if isOpen {
+                    StemChangeConjugationView(infinitive: verb)
+                        .padding(.leading, 42)
+                        .transition(.opacity.combined(with: .move(edge: .top)))
+                }
+            }
+
             // Example sentences
             if !card.examplesES.isEmpty {
                 VStack(alignment: .leading, spacing: 6) {
97
Conjuga/Conjuga/Views/Course/StemChangeConjugationView.swift
Normal file
@@ -0,0 +1,97 @@
import SwiftUI
import SharedModels
import SwiftData

/// Shows the present-tense conjugation of a verb (identified by infinitive),
/// with any irregular/stem-change spans highlighted. Designed to drop into
/// stem-changing verb flashcards so learners can see the conjugation in-place.
struct StemChangeConjugationView: View {
    let infinitive: String

    @Environment(\.modelContext) private var modelContext
    @State private var rows: [ConjugationRow] = []

    private static let personLabels = ["yo", "tú", "él/ella/Ud.", "nosotros", "vosotros", "ellos/ellas/Uds."]
    private static let tenseId = "ind_presente"

    var body: some View {
        VStack(alignment: .leading, spacing: 8) {
            HStack {
                Text("Present tense")
                    .font(.subheadline.weight(.semibold))
                    .foregroundStyle(.secondary)
                Spacer()
            }
            if rows.isEmpty {
                Text("Conjugation not available")
                    .font(.caption)
                    .foregroundStyle(.secondary)
                    .padding(.vertical, 4)
            } else {
                VStack(spacing: 6) {
                    ForEach(rows) { row in
                        HStack(alignment: .firstTextBaseline) {
                            Text(row.person)
                                .font(.callout)
                                .foregroundStyle(.secondary)
                                .frame(width: 130, alignment: .leading)
                            IrregularHighlightText(
                                form: row.form,
                                spans: row.spans,
                                font: .callout.monospaced(),
                                showLabels: false
                            )
                            Spacer()
                        }
                    }
                }
            }
        }
        .padding(12)
        .frame(maxWidth: .infinity, alignment: .leading)
        .background(Color.blue.opacity(0.08), in: RoundedRectangle(cornerRadius: 10))
        .onAppear(perform: loadForms)
    }

    private func loadForms() {
        // Find the verb by infinitive (lowercase exact match).
        let normalized = infinitive.lowercased().trimmingCharacters(in: .whitespaces)
        let verbDescriptor = FetchDescriptor<Verb>(
            predicate: #Predicate<Verb> { $0.infinitive == normalized }
        )
        guard let verb = (try? modelContext.fetch(verbDescriptor))?.first else {
            rows = []
            return
        }

        let verbId = verb.id
        let tenseId = Self.tenseId
        let formDescriptor = FetchDescriptor<VerbForm>(
            predicate: #Predicate<VerbForm> { $0.verbId == verbId && $0.tenseId == tenseId },
            sortBy: [SortDescriptor(\VerbForm.personIndex)]
        )
        let forms = (try? modelContext.fetch(formDescriptor)) ?? []

        rows = forms.map { f in
            ConjugationRow(
                id: f.personIndex,
                person: Self.personLabels[safe: f.personIndex] ?? "",
                form: f.form,
                spans: f.spans ?? []
            )
        }
    }
}

private struct ConjugationRow: Identifiable {
    let id: Int
    let person: String
    let form: String
    let spans: [IrregularSpan]
}

private extension Array {
    subscript(safe index: Int) -> Element? {
        indices.contains(index) ? self[index] : nil
    }
}
121
Conjuga/Conjuga/Views/Course/TextbookChapterListView.swift
Normal file
@@ -0,0 +1,121 @@
import SwiftUI
import SharedModels
import SwiftData

struct TextbookChapterListView: View {
    let courseName: String

    @Environment(\.cloudModelContextProvider) private var cloudModelContextProvider
    @Query(sort: \TextbookChapter.number) private var allChapters: [TextbookChapter]

    private var cloudModelContext: ModelContext { cloudModelContextProvider() }
    @State private var attempts: [TextbookExerciseAttempt] = []

    private var chapters: [TextbookChapter] {
        allChapters.filter { $0.courseName == courseName }
    }

    private var byPart: [(part: Int, chapters: [TextbookChapter])] {
        let grouped = Dictionary(grouping: chapters, by: \.part)
        return grouped.keys.sorted().map { p in
            (p, grouped[p]!.sorted { $0.number < $1.number })
        }
    }

    private func progressFor(_ chapter: TextbookChapter) -> (correct: Int, total: Int) {
        let chNum = chapter.number
        let chAttempts = attempts.filter {
            $0.courseName == courseName && $0.chapterNumber == chNum
        }
        let total = chAttempts.reduce(0) { $0 + $1.totalCount }
        let correct = chAttempts.reduce(0) { $0 + $1.correctCount + $1.closeCount }
        return (correct, total)
    }

    var body: some View {
        List {
            if chapters.isEmpty {
                ContentUnavailableView(
                    "Textbook loading",
                    systemImage: "book.closed",
                    description: Text("Textbook content is being prepared…")
                )
            } else {
                ForEach(byPart, id: \.part) { part, partChapters in
                    Section {
                        ForEach(partChapters, id: \.id) { chapter in
                            NavigationLink(value: chapter) {
                                chapterRow(chapter)
                            }
                            .accessibilityIdentifier("textbook-chapter-row-\(chapter.number)")
                        }
                    } header: {
                        if part > 0 {
                            Text("Part \(part)")
                        } else {
                            Text("Chapters")
                        }
                    }
                }
            }
        }
        .navigationTitle("Textbook")
        .onAppear(perform: loadAttempts)
    }

    @ViewBuilder
    private func chapterRow(_ chapter: TextbookChapter) -> some View {
        let p = progressFor(chapter)
        HStack(alignment: .center, spacing: 12) {
            ZStack {
                Circle()
                    .stroke(Color.secondary.opacity(0.2), lineWidth: 3)
                    .frame(width: 36, height: 36)
                if p.total > 0 {
                    Circle()
                        .trim(from: 0, to: CGFloat(p.correct) / CGFloat(p.total))
                        .stroke(.orange, style: StrokeStyle(lineWidth: 3, lineCap: .round))
                        .frame(width: 36, height: 36)
                        .rotationEffect(.degrees(-90))
                }
                Text("\(chapter.number)")
                    .font(.footnote.weight(.bold))
            }

            VStack(alignment: .leading, spacing: 2) {
                Text(chapter.title)
                    .font(.headline)
                HStack(spacing: 10) {
                    if chapter.exerciseCount > 0 {
                        Label("\(chapter.exerciseCount)", systemImage: "pencil.and.list.clipboard")
                            .font(.caption)
                            .foregroundStyle(.secondary)
                    }
                    if chapter.vocabTableCount > 0 {
                        Label("\(chapter.vocabTableCount)", systemImage: "list.bullet.rectangle")
                            .font(.caption)
                            .foregroundStyle(.secondary)
                    }
                    if p.total > 0 {
                        Text("\(p.correct)/\(p.total)")
                            .font(.caption.monospacedDigit())
                            .foregroundStyle(.secondary)
                    }
                }
            }
            Spacer()
        }
        .padding(.vertical, 4)
    }

    private func loadAttempts() {
        attempts = (try? cloudModelContext.fetch(FetchDescriptor<TextbookExerciseAttempt>())) ?? []
    }
}

#Preview {
    NavigationStack {
        TextbookChapterListView(courseName: "Complete Spanish Step-by-Step")
    }
    .modelContainer(for: [TextbookChapter.self], inMemory: true)
}
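`progressFor(_:)` above folds every stored attempt for a chapter into one ratio, and counts `close` answers as correct when drawing the progress ring. The same aggregation as a standalone pure function, with a simplified `Attempt` type standing in for `TextbookExerciseAttempt` (a sketch for illustration, not the committed code):

```swift
struct Attempt { let correctCount: Int; let closeCount: Int; let totalCount: Int }

// Sum all attempts into (correct, total); "close" answers count toward correct,
// matching how the chapter list's ring treats near-miss answers.
func progress(over attempts: [Attempt]) -> (correct: Int, total: Int) {
    attempts.reduce(into: (correct: 0, total: 0)) { acc, a in
        acc.correct += a.correctCount + a.closeCount
        acc.total += a.totalCount
    }
}
```

Keeping the fold pure makes the ring fraction (`correct / total`) trivial to unit-test apart from SwiftData.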
209
Conjuga/Conjuga/Views/Course/TextbookChapterView.swift
Normal file
@@ -0,0 +1,209 @@
import SwiftUI
import SharedModels
import SwiftData

struct TextbookChapterView: View {
    let chapter: TextbookChapter

    @State private var expandedVocab: Set<Int> = []

    private var blocks: [TextbookBlock] { chapter.blocks() }

    var body: some View {
        ScrollView {
            VStack(alignment: .leading, spacing: 12) {
                headerView
                Divider()
                ForEach(blocks) { block in
                    blockView(block)
                }
            }
            .padding(.horizontal)
            .padding(.vertical, 12)
        }
        .navigationTitle(chapter.title)
        .navigationBarTitleDisplayMode(.inline)
    }

    private var headerView: some View {
        VStack(alignment: .leading, spacing: 4) {
            if chapter.part > 0 {
                Text("Part \(chapter.part)")
                    .font(.subheadline)
                    .foregroundStyle(.secondary)
            }
            Text("Chapter \(chapter.number)")
                .font(.subheadline)
                .foregroundStyle(.secondary)
            Text(chapter.title)
                .font(.largeTitle.bold())
        }
    }

    @ViewBuilder
    private func blockView(_ block: TextbookBlock) -> some View {
        switch block.kind {
        case .heading:
            headingView(block)
        case .paragraph:
            paragraphView(block)
        case .keyVocabHeader:
            HStack(spacing: 6) {
                Image(systemName: "star.fill").foregroundStyle(.orange)
                Text("Key Vocabulary")
                    .font(.headline)
                    .foregroundStyle(.orange)
            }
            .padding(.top, 8)
        case .vocabTable:
            vocabTableView(block)
        case .exercise:
            exerciseLinkView(block)
        }
    }

    private func headingView(_ block: TextbookBlock) -> some View {
        let level = block.level ?? 3
        let font: Font
        switch level {
        case 2: font = .title.bold()
        case 3: font = .title2.bold()
        case 4: font = .title3.weight(.semibold)
        default: font = .headline
        }
        return Text(stripInlineEmphasis(block.text ?? ""))
            .font(font)
            .padding(.top, 10)
    }

    private func paragraphView(_ block: TextbookBlock) -> some View {
        Text(attributedFromMarkdownish(block.text ?? ""))
            .font(.body)
            .fixedSize(horizontal: false, vertical: true)
    }

    private func vocabTableView(_ block: TextbookBlock) -> some View {
        let expanded = expandedVocab.contains(block.index)
        let cards = block.cards ?? []
        let lines = block.ocrLines ?? []
        let itemCount = cards.isEmpty ? lines.count : cards.count
        return VStack(alignment: .leading, spacing: 4) {
            Button {
                if expanded { expandedVocab.remove(block.index) } else { expandedVocab.insert(block.index) }
            } label: {
                HStack {
                    Image(systemName: expanded ? "chevron.down" : "chevron.right")
                        .font(.caption)
                    Text("Vocabulary (\(itemCount) items)")
                        .font(.subheadline.weight(.medium))
                        .foregroundStyle(.primary)
                    Spacer()
                }
                .contentShape(Rectangle())
            }
            .buttonStyle(.plain)

            if expanded {
                if cards.isEmpty {
                    // Fallback: no paired cards available — show raw OCR lines.
                    VStack(alignment: .leading, spacing: 2) {
                        ForEach(Array(lines.enumerated()), id: \.offset) { _, line in
                            Text(line)
                                .font(.callout.monospaced())
                                .foregroundStyle(.secondary)
                        }
                    }
                    .padding(.leading, 14)
                } else {
                    vocabGrid(cards: cards)
                        .padding(.leading, 14)
                }
            }
        }
        .padding(10)
        .frame(maxWidth: .infinity, alignment: .leading)
        .background(Color.orange.opacity(0.08), in: RoundedRectangle(cornerRadius: 10))
    }

    @ViewBuilder
    private func vocabGrid(cards: [TextbookVocabPair]) -> some View {
        Grid(alignment: .leading, horizontalSpacing: 16, verticalSpacing: 6) {
            ForEach(Array(cards.enumerated()), id: \.offset) { _, card in
                GridRow {
                    Text(card.front)
                        .font(.callout)
                        .foregroundStyle(.primary)
                    Text(card.back)
                        .font(.callout)
                        .foregroundStyle(.secondary)
                }
            }
        }
    }

    private func exerciseLinkView(_ block: TextbookBlock) -> some View {
        NavigationLink(value: TextbookExerciseDestination(
            chapterId: chapter.id,
            chapterNumber: chapter.number,
            blockIndex: block.index
        )) {
            HStack(spacing: 10) {
                Image(systemName: "pencil.and.list.clipboard")
                    .foregroundStyle(.orange)
                    .font(.title3)
                VStack(alignment: .leading, spacing: 2) {
                    Text("Exercise \(block.exerciseId ?? "")")
                        .font(.headline)
                    if let inst = block.instruction, !inst.isEmpty {
                        Text(stripInlineEmphasis(inst))
                            .font(.caption)
                            .foregroundStyle(.secondary)
                            .lineLimit(2)
                    }
                }
                Spacer()
                Image(systemName: "chevron.right")
                    .foregroundStyle(.secondary)
                    .font(.caption)
            }
            .padding(12)
            .background(Color.orange.opacity(0.1), in: RoundedRectangle(cornerRadius: 10))
        }
        .buttonStyle(.plain)
    }

    // Strip our ad-hoc ** / * markers from parsed text
    private func stripInlineEmphasis(_ s: String) -> String {
        s.replacingOccurrences(of: "**", with: "")
            .replacingOccurrences(of: "*", with: "")
    }

    private func attributedFromMarkdownish(_ s: String) -> AttributedString {
        // Parser emits `**bold**` and `*italic*`. Try to render via AttributedString markdown.
        if let parsed = try? AttributedString(markdown: s, options: .init(allowsExtendedAttributes: true)) {
            return parsed
        }
        return AttributedString(stripInlineEmphasis(s))
    }
}

struct TextbookExerciseDestination: Hashable {
    let chapterId: String
    let chapterNumber: Int
    let blockIndex: Int
}

#Preview {
    NavigationStack {
        TextbookChapterView(chapter: TextbookChapter(
            id: "ch1",
            number: 1,
            title: "Sample",
            part: 1,
            courseName: "Preview",
            bodyJSON: Data(),
            exerciseCount: 0,
            vocabTableCount: 0
        ))
    }
}
360
Conjuga/Conjuga/Views/Course/TextbookExerciseView.swift
Normal file
@@ -0,0 +1,360 @@
import SwiftUI
import SharedModels
import SwiftData
import PencilKit

/// Interactive fill-in-the-blank view for one textbook exercise.
/// Supports keyboard typing OR Apple Pencil handwriting input per prompt.
struct TextbookExerciseView: View {
    let chapter: TextbookChapter
    let blockIndex: Int

    @Environment(\.cloudModelContextProvider) private var cloudModelContextProvider
    @State private var answers: [Int: String] = [:]
    @State private var drawings: [Int: PKDrawing] = [:]
    @State private var grades: [Int: TextbookGrade] = [:]
    @State private var inputMode: InputMode = .keyboard
    @State private var activePencilPromptNumber: Int?
    @State private var isRecognizing = false
    @State private var isChecked = false
    @State private var recognizedTextForActive: String = ""

    private var cloudModelContext: ModelContext { cloudModelContextProvider() }

    enum InputMode: String {
        case keyboard
        case pencil
    }

    private var block: TextbookBlock? {
        chapter.blocks().first { $0.index == blockIndex }
    }

    private var answerByNumber: [Int: TextbookAnswerItem] {
        guard let items = block?.answerItems else { return [:] }
        var out: [Int: TextbookAnswerItem] = [:]
        for it in items {
            out[it.number] = it
        }
        return out
    }

    var body: some View {
        ScrollView {
            VStack(alignment: .leading, spacing: 16) {
                if let b = block {
                    headerView(b)
                    inputModePicker
                    exerciseBody(b)
                    checkButton(b)
                } else {
                    ContentUnavailableView(
                        "Exercise not found",
                        systemImage: "questionmark.circle"
                    )
                }
            }
            .padding()
        }
        .navigationTitle("Exercise \(block?.exerciseId ?? "")")
        .navigationBarTitleDisplayMode(.inline)
        .onAppear(perform: loadPreviousAttempt)
    }

    private func headerView(_ b: TextbookBlock) -> some View {
        VStack(alignment: .leading, spacing: 8) {
            Text("Chapter \(chapter.number): \(chapter.title)")
                .font(.caption)
                .foregroundStyle(.secondary)
            Text("Exercise \(b.exerciseId ?? "")")
                .font(.title2.bold())
            if let inst = b.instruction, !inst.isEmpty {
                Text(stripInlineEmphasis(inst))
                    .font(.callout)
                    .foregroundStyle(.secondary)
                    .fixedSize(horizontal: false, vertical: true)
            }
            if let extra = b.extra, !extra.isEmpty {
                ForEach(Array(extra.enumerated()), id: \.offset) { _, e in
                    Text(stripInlineEmphasis(e))
                        .font(.callout)
                        .fixedSize(horizontal: false, vertical: true)
                        .padding(8)
                        .frame(maxWidth: .infinity, alignment: .leading)
                        .background(Color.secondary.opacity(0.1), in: RoundedRectangle(cornerRadius: 8))
                }
            }
        }
    }

    private var inputModePicker: some View {
        Picker("Input mode", selection: $inputMode) {
            Label("Keyboard", systemImage: "keyboard").tag(InputMode.keyboard)
            Label("Pencil", systemImage: "pencil.tip").tag(InputMode.pencil)
        }
        .pickerStyle(.segmented)
    }

    private func exerciseBody(_ b: TextbookBlock) -> some View {
        VStack(alignment: .leading, spacing: 14) {
            if b.freeform == true {
                VStack(alignment: .leading, spacing: 6) {
                    Label("Freeform exercise", systemImage: "text.bubble")
                        .font(.subheadline.weight(.semibold))
                        .foregroundStyle(.orange)
                    Text("Answers will vary. Use this space to write your own responses; they won't be auto-checked.")
                        .font(.caption)
                        .foregroundStyle(.secondary)
                }
                .padding()
                .background(Color.orange.opacity(0.1), in: RoundedRectangle(cornerRadius: 10))
}
|
||||||
|
let rawPrompts = b.prompts ?? []
|
||||||
|
let prompts = rawPrompts.isEmpty ? synthesizedPrompts(b) : rawPrompts
|
||||||
|
if prompts.isEmpty && b.extra?.isEmpty == false {
|
||||||
|
Text("Fill in the blanks above; answers will be graded when you tap Check.")
|
||||||
|
.font(.caption)
|
||||||
|
.foregroundStyle(.secondary)
|
||||||
|
} else {
|
||||||
|
ForEach(Array(prompts.enumerated()), id: \.offset) { i, prompt in
|
||||||
|
promptRow(index: i, prompt: prompt, expected: answerByNumber[i + 1])
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
/// When the source exercise prompts were embedded in a bitmap (common in
|
||||||
|
/// this textbook), we have no text for each question — only the answer
|
||||||
|
/// key. Synthesize numbered placeholders so the user still gets one input
|
||||||
|
/// field per answer.
|
||||||
|
private func synthesizedPrompts(_ b: TextbookBlock) -> [String] {
|
||||||
|
guard let items = b.answerItems, !items.isEmpty else { return [] }
|
||||||
|
return items.map { "\($0.number)." }
|
||||||
|
}
|
||||||
|
|
||||||
|
private func promptRow(index: Int, prompt: String, expected: TextbookAnswerItem?) -> some View {
|
||||||
|
let number = index + 1
|
||||||
|
let grade = grades[number]
|
||||||
|
return VStack(alignment: .leading, spacing: 8) {
|
||||||
|
HStack(alignment: .top, spacing: 8) {
|
||||||
|
if let grade {
|
||||||
|
Image(systemName: iconFor(grade))
|
||||||
|
.foregroundStyle(colorFor(grade))
|
||||||
|
.font(.title3)
|
||||||
|
.padding(.top, 2)
|
||||||
|
}
|
||||||
|
Text(stripInlineEmphasis(prompt))
|
||||||
|
.font(.body)
|
||||||
|
.fixedSize(horizontal: false, vertical: true)
|
||||||
|
}
|
||||||
|
|
||||||
|
switch inputMode {
|
||||||
|
case .keyboard:
|
||||||
|
TextField("Your answer", text: binding(for: number))
|
||||||
|
.textFieldStyle(.roundedBorder)
|
||||||
|
.textInputAutocapitalization(.never)
|
||||||
|
.disableAutocorrection(true)
|
||||||
|
.font(.body)
|
||||||
|
.disabled(isChecked)
|
||||||
|
case .pencil:
|
||||||
|
pencilRow(number: number)
|
||||||
|
}
|
||||||
|
|
||||||
|
if isChecked, let grade, grade != .correct, let expected {
|
||||||
|
HStack(spacing: 6) {
|
||||||
|
Text("Answer:")
|
||||||
|
.font(.caption.weight(.semibold))
|
||||||
|
Text(expected.answer)
|
||||||
|
.font(.caption)
|
||||||
|
if !expected.alternates.isEmpty {
|
||||||
|
Text("(also: \(expected.alternates.joined(separator: ", ")))")
|
||||||
|
.font(.caption2)
|
||||||
|
.foregroundStyle(.secondary)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
.foregroundStyle(colorFor(grade))
|
||||||
|
}
|
||||||
|
}
|
||||||
|
.padding(10)
|
||||||
|
.background(backgroundFor(grade), in: RoundedRectangle(cornerRadius: 8))
|
||||||
|
}
|
||||||
|
|
||||||
|
private func pencilRow(number: Int) -> some View {
|
||||||
|
VStack(alignment: .leading, spacing: 6) {
|
||||||
|
HandwritingCanvas(
|
||||||
|
drawing: bindingDrawing(for: number),
|
||||||
|
onDrawingChanged: { recognizePencil(for: number) }
|
||||||
|
)
|
||||||
|
.frame(height: 100)
|
||||||
|
.background(.fill.quinary, in: RoundedRectangle(cornerRadius: 10))
|
||||||
|
.overlay(RoundedRectangle(cornerRadius: 10).stroke(.separator, lineWidth: 1))
|
||||||
|
|
||||||
|
HStack {
|
||||||
|
if let typed = answers[number], !typed.isEmpty {
|
||||||
|
Text("Recognized: \(typed)")
|
||||||
|
.font(.caption)
|
||||||
|
.foregroundStyle(.secondary)
|
||||||
|
}
|
||||||
|
Spacer()
|
||||||
|
Button("Clear") {
|
||||||
|
drawings[number] = PKDrawing()
|
||||||
|
answers[number] = ""
|
||||||
|
}
|
||||||
|
.font(.caption)
|
||||||
|
.tint(.secondary)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
private func checkButton(_ b: TextbookBlock) -> some View {
|
||||||
|
let hasAnyAnswer = answers.values.contains { !$0.isEmpty }
|
||||||
|
let disabled = b.freeform == true || (!isChecked && !hasAnyAnswer)
|
||||||
|
return Button {
|
||||||
|
if isChecked {
|
||||||
|
resetExercise()
|
||||||
|
} else {
|
||||||
|
checkAnswers(b)
|
||||||
|
}
|
||||||
|
} label: {
|
||||||
|
Text(isChecked ? "Try again" : "Check answers")
|
||||||
|
.font(.headline)
|
||||||
|
.frame(maxWidth: .infinity)
|
||||||
|
.padding(.vertical, 10)
|
||||||
|
}
|
||||||
|
.buttonStyle(.borderedProminent)
|
||||||
|
.tint(.orange)
|
||||||
|
.disabled(disabled)
|
||||||
|
}
|
||||||
|
|
||||||
|
// MARK: - Actions
|
||||||
|
|
||||||
|
    private func checkAnswers(_ b: TextbookBlock) {
        // Mirror exerciseBody: fall back to synthesized prompts so exercises
        // that ship only an answer key can still be graded.
        let rawPrompts = b.prompts ?? []
        let prompts = rawPrompts.isEmpty ? synthesizedPrompts(b) : rawPrompts
        guard !prompts.isEmpty else { return }
        var newGrades: [Int: TextbookGrade] = [:]
        var states: [TextbookPromptState] = []
        for (i, _) in prompts.enumerated() {
            let number = i + 1
            let user = answers[number] ?? ""
            let expected = answerByNumber[number]
            let canonical = expected?.answer ?? ""
            let alts = expected?.alternates ?? []
            let grade: TextbookGrade
            if canonical.isEmpty {
                grade = .wrong
            } else {
                grade = AnswerChecker.grade(userText: user, canonical: canonical, alternates: alts)
            }
            newGrades[number] = grade
            states.append(TextbookPromptState(number: number, userText: user, grade: grade))
        }
        grades = newGrades
        isChecked = true
        saveAttempt(states: states, exerciseId: b.exerciseId ?? "")
    }

    private func resetExercise() {
        answers.removeAll()
        drawings.removeAll()
        grades.removeAll()
        isChecked = false
    }

    private func recognizePencil(for number: Int) {
        guard let drawing = drawings[number], !drawing.strokes.isEmpty else { return }
        isRecognizing = true
        Task {
            let result = await HandwritingRecognizer.recognize(drawing: drawing)
            await MainActor.run {
                answers[number] = result.text
                isRecognizing = false
            }
        }
    }

    private func saveAttempt(states: [TextbookPromptState], exerciseId: String) {
        let attemptId = TextbookExerciseAttempt.attemptId(
            courseName: chapter.courseName,
            exerciseId: exerciseId
        )
        let descriptor = FetchDescriptor<TextbookExerciseAttempt>(
            predicate: #Predicate<TextbookExerciseAttempt> { $0.id == attemptId }
        )
        let context = cloudModelContext
        let existing = (try? context.fetch(descriptor))?.first
        let attempt = existing ?? TextbookExerciseAttempt(
            id: attemptId,
            courseName: chapter.courseName,
            chapterNumber: chapter.number,
            exerciseId: exerciseId
        )
        if existing == nil { context.insert(attempt) }
        attempt.lastAttemptAt = Date()
        attempt.setPromptStates(states)
        try? context.save()
    }

    private func loadPreviousAttempt() {
        guard let b = block else { return }
        let attemptId = TextbookExerciseAttempt.attemptId(
            courseName: chapter.courseName,
            exerciseId: b.exerciseId ?? ""
        )
        let descriptor = FetchDescriptor<TextbookExerciseAttempt>(
            predicate: #Predicate<TextbookExerciseAttempt> { $0.id == attemptId }
        )
        guard let attempt = (try? cloudModelContext.fetch(descriptor))?.first else { return }
        for s in attempt.promptStates() {
            answers[s.number] = s.userText
            grades[s.number] = s.grade
        }
        isChecked = !grades.isEmpty
    }

    // MARK: - Bindings

    private func binding(for number: Int) -> Binding<String> {
        Binding(
            get: { answers[number] ?? "" },
            set: { answers[number] = $0 }
        )
    }

    private func bindingDrawing(for number: Int) -> Binding<PKDrawing> {
        Binding(
            get: { drawings[number] ?? PKDrawing() },
            set: { drawings[number] = $0 }
        )
    }

    // MARK: - UI helpers

    private func iconFor(_ grade: TextbookGrade) -> String {
        switch grade {
        case .correct: return "checkmark.circle.fill"
        case .close: return "circle.lefthalf.filled"
        case .wrong: return "xmark.circle.fill"
        }
    }

    private func colorFor(_ grade: TextbookGrade) -> Color {
        switch grade {
        case .correct: return .green
        case .close: return .orange
        case .wrong: return .red
        }
    }

    private func backgroundFor(_ grade: TextbookGrade?) -> Color {
        guard let grade else { return Color.secondary.opacity(0.05) }
        switch grade {
        case .correct: return .green.opacity(0.12)
        case .close: return .orange.opacity(0.12)
        case .wrong: return .red.opacity(0.12)
        }
    }

    private func stripInlineEmphasis(_ s: String) -> String {
        s.replacingOccurrences(of: "**", with: "")
            .replacingOccurrences(of: "*", with: "")
    }
}
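The grading path above delegates to `AnswerChecker.grade(userText:canonical:alternates:)`, which is not part of this diff. A minimal sketch of what such a grader could do, assuming accent-tolerant matching (the `Grade` enum and all logic here are hypothetical, not the real `AnswerChecker`):

```swift
import Foundation

// Hypothetical three-way grader: exact match is .correct, a match that
// differs only in accents/case is .close, anything else is .wrong.
enum Grade { case correct, close, wrong }

func grade(userText: String, canonical: String, alternates: [String]) -> Grade {
    let accepted = [canonical] + alternates
    let user = userText.trimmingCharacters(in: .whitespacesAndNewlines).lowercased()
    if accepted.contains(where: { $0.lowercased() == user }) { return .correct }
    // "Close": matches once diacritics are stripped (e.g. "hablo" vs "habló").
    let stripped = user.folding(options: .diacriticInsensitive, locale: nil)
    if accepted.contains(where: {
        $0.lowercased().folding(options: .diacriticInsensitive, locale: nil) == stripped
    }) {
        return .close
    }
    return .wrong
}
```

The `.close` tier matters for Pencil input, where handwriting recognition often drops accent marks.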
@@ -6,11 +6,19 @@ struct VocabFlashcardView: View {
     let cards: [VocabCard]
     let speechService: SpeechService
     let onDone: () -> Void
+    /// Optional deck context — when present and the title indicates a stem-
+    /// changing deck, each card gets an inline conjugation toggle.
+    var deckTitle: String? = nil
+
     @Environment(\.cloudModelContextProvider) private var cloudModelContextProvider
     @State private var currentIndex = 0
     @State private var isRevealed = false
     @State private var sessionCorrect = 0
+    @State private var showConjugation = false
+
+    private var isStemChangingDeck: Bool {
+        (deckTitle ?? "").localizedCaseInsensitiveContains("stem changing")
+    }
+
     private var cloudModelContext: ModelContext { cloudModelContextProvider() }

@@ -61,6 +69,25 @@ struct VocabFlashcardView: View {
                         .padding(12)
                     }
                     .glassEffect(in: .circle)
+
+                    if isStemChangingDeck {
+                        Button {
+                            withAnimation(.smooth) { showConjugation.toggle() }
+                        } label: {
+                            Label(
+                                showConjugation ? "Hide conjugation" : "Show conjugation",
+                                systemImage: showConjugation ? "chevron.up" : "chevron.down"
+                            )
+                            .font(.subheadline.weight(.medium))
+                        }
+                        .buttonStyle(.bordered)
+                        .tint(.blue)
+
+                        if showConjugation {
+                            StemChangeConjugationView(infinitive: stripToInfinitive(card.front))
+                                .transition(.opacity.combined(with: .move(edge: .top)))
+                        }
+                    }
                 }
                 .transition(.blurReplace)
             } else {
@@ -111,6 +138,7 @@ struct VocabFlashcardView: View {
             guard currentIndex > 0 else { return }
             withAnimation(.smooth) {
                 isRevealed = false
+                showConjugation = false
                 currentIndex -= 1
             }
         } label: {
@@ -125,6 +153,7 @@ struct VocabFlashcardView: View {
         Button {
             withAnimation(.smooth) {
                 isRevealed = false
+                showConjugation = false
                 currentIndex += 1
             }
         } label: {
@@ -189,9 +218,25 @@ struct VocabFlashcardView: View {
             // Next card
             withAnimation(.smooth) {
                 isRevealed = false
+                showConjugation = false
                 currentIndex += 1
             }
         }
+
+    /// Card fronts may be plain infinitives ("cerrar") or, in reversed decks,
+    /// stored as English. Strip any reflexive-se suffix or parenthetical notes
+    /// to improve the verb lookup hit rate.
+    private func stripToInfinitive(_ s: String) -> String {
+        var t = s.trimmingCharacters(in: .whitespacesAndNewlines)
+        if let paren = t.firstIndex(of: "(") {
+            t = String(t[..<paren]).trimmingCharacters(in: .whitespacesAndNewlines)
+        }
+        if t.hasSuffix("se") && t.count > 4 {
+            // "acostarse" → "acostar" for verb lookup
+            t = String(t.dropLast(2))
+        }
+        return t
+    }
 }

 #Preview {
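The `stripToInfinitive` helper added above is small enough to trace by hand; a standalone copy (identical logic, extracted here only for illustration) shows how parenthetical notes and the reflexive `-se` suffix are peeled off before the verb lookup:

```swift
import Foundation

// Standalone copy of the stripToInfinitive helper from the diff above:
// drop any "(...)" note, then drop a trailing reflexive "se".
func stripToInfinitive(_ s: String) -> String {
    var t = s.trimmingCharacters(in: .whitespacesAndNewlines)
    if let paren = t.firstIndex(of: "(") {
        t = String(t[..<paren]).trimmingCharacters(in: .whitespacesAndNewlines)
    }
    if t.hasSuffix("se") && t.count > 4 {
        t = String(t.dropLast(2))
    }
    return t
}

// stripToInfinitive("acostarse (to go to bed)") == "acostar"
// stripToInfinitive("cerrar") == "cerrar"
```

Note the `count > 4` guard only protects very short words; non-reflexive nouns ending in "se" (e.g. "clase") would still be truncated, which is acceptable here because the deck filter restricts this path to stem-changing verb decks.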
@@ -128,7 +128,7 @@ struct OnboardingView: View {

     private func completeOnboarding() {
         let progress = ReviewStore.fetchOrCreateUserProgress(context: cloudModelContext)
-        progress.selectedVerbLevel = selectedLevel
+        progress.selectedVerbLevels = [selectedLevel]
         if progress.enabledTenseIDs.isEmpty {
             progress.enabledTenseIDs = ReviewStore.defaultEnabledTenses()
         }
@@ -1,16 +1,27 @@
 import SwiftUI
 import SharedModels
 import SwiftData
+import FoundationModels
+
+@Generable
+private struct ChatWordInfo {
+    @Guide(description: "Dictionary base form") var baseForm: String
+    @Guide(description: "English translation") var english: String
+    @Guide(description: "Part of speech") var partOfSpeech: String
+}
+
 struct ChatView: View {
     let conversation: Conversation

     @Environment(\.cloudModelContextProvider) private var cloudModelContextProvider
+    @Environment(DictionaryService.self) private var dictionary
     @State private var service = ConversationService()
     @State private var messages: [ChatMessage] = []
     @State private var inputText = ""
     @State private var errorMessage: String?
     @State private var hasStarted = false
+    @State private var selectedWord: WordAnnotation?
+    @State private var lookupCache: [String: WordAnnotation] = [:]

     private var cloudContext: ModelContext { cloudModelContextProvider() }

@@ -21,7 +32,9 @@ struct ChatView: View {
             ScrollView {
                 LazyVStack(spacing: 12) {
                     ForEach(messages) { message in
-                        ChatBubble(message: message)
+                        ChatBubble(message: message, dictionary: dictionary, lookupCache: $lookupCache) { word in
+                            selectedWord = word
+                        }
                         .id(message.id)
                     }

@@ -68,6 +81,10 @@ struct ChatView: View {
         }
         .navigationTitle(conversation.scenario)
         .navigationBarTitleDisplayMode(.inline)
+        .sheet(item: $selectedWord) { word in
+            ChatWordDetailSheet(word: word)
+                .presentationDetents([.height(200)])
+        }
         .alert("Error", isPresented: .init(
             get: { errorMessage != nil },
             set: { if !$0 { errorMessage = nil } }
@@ -121,6 +138,9 @@ struct ChatView: View {

 private struct ChatBubble: View {
     let message: ChatMessage
+    let dictionary: DictionaryService
+    @Binding var lookupCache: [String: WordAnnotation]
+    let onWordTap: (WordAnnotation) -> Void

     private var isUser: Bool { message.role == "user" }

@@ -129,11 +149,15 @@ private struct ChatBubble: View {
             if isUser { Spacer(minLength: 60) }

             VStack(alignment: isUser ? .trailing : .leading, spacing: 4) {
+                if isUser {
                 Text(message.content)
                     .font(.body)
                     .padding(.horizontal, 14)
                     .padding(.vertical, 10)
-                    .background(isUser ? AnyShapeStyle(.green.opacity(0.2)) : AnyShapeStyle(.fill.quaternary), in: RoundedRectangle(cornerRadius: 16))
+                    .background(.green.opacity(0.2), in: RoundedRectangle(cornerRadius: 16))
+                } else {
+                    tappableBubble
+                }

                 if let correction = message.correction, !correction.isEmpty {
                     Text(correction)
@@ -147,4 +171,179 @@ private struct ChatBubble: View {
             }
             .padding(.horizontal)
     }
+
+    private var tappableBubble: some View {
+        let words = message.content.components(separatedBy: " ")
+        return ChatFlowLayout(spacing: 0) {
+            ForEach(Array(words.enumerated()), id: \.offset) { _, word in
+                ChatWordButton(word: word, dictionary: dictionary, cache: lookupCache) { annotation in
+                    if annotation.english.isEmpty {
+                        lookupWordAsync(annotation.word)
+                    } else {
+                        onWordTap(annotation)
+                    }
+                }
+            }
+        }
+        .padding(.horizontal, 14)
+        .padding(.vertical, 10)
+        .background(.fill.quaternary, in: RoundedRectangle(cornerRadius: 16))
+    }
+
+    private func lookupWordAsync(_ word: String) {
+        // Try dictionary first
+        if let entry = dictionary.lookup(word) {
+            let annotation = WordAnnotation(word: word, baseForm: entry.baseForm, english: entry.english, partOfSpeech: entry.partOfSpeech)
+            lookupCache[word] = annotation
+            onWordTap(annotation)
+            return
+        }
+
+        // Show loading then AI lookup
+        onWordTap(WordAnnotation(word: word, baseForm: word, english: "Looking up...", partOfSpeech: ""))
+
+        Task {
+            do {
+                let session = LanguageModelSession(instructions: "You are a Spanish dictionary. Provide base form, English translation, and part of speech.")
+                let response = try await session.respond(to: "Word: \"\(word)\"", generating: ChatWordInfo.self)
+                let info = response.content
+                let annotation = WordAnnotation(word: word, baseForm: info.baseForm, english: info.english, partOfSpeech: info.partOfSpeech)
+                lookupCache[word] = annotation
+                onWordTap(annotation)
+            } catch {
+                onWordTap(WordAnnotation(word: word, baseForm: word, english: "Lookup unavailable", partOfSpeech: ""))
+            }
+        }
+    }
+}
+
+// MARK: - Chat Word Button
+
+private struct ChatWordButton: View {
+    let word: String
+    let dictionary: DictionaryService
+    let cache: [String: WordAnnotation]
+    let onTap: (WordAnnotation) -> Void
+
+    private var cleaned: String {
+        word.lowercased()
+            .trimmingCharacters(in: .punctuationCharacters)
+            .trimmingCharacters(in: .whitespaces)
+    }
+
+    private var annotation: WordAnnotation? {
+        if let cached = cache[cleaned] { return cached }
+        if let entry = dictionary.lookup(cleaned) {
+            return WordAnnotation(word: cleaned, baseForm: entry.baseForm, english: entry.english, partOfSpeech: entry.partOfSpeech)
+        }
+        return nil
+    }
+
+    var body: some View {
+        Button {
+            onTap(annotation ?? WordAnnotation(word: cleaned, baseForm: cleaned, english: "", partOfSpeech: ""))
+        } label: {
+            Text(word + " ")
+                .font(.body)
+                .foregroundStyle(.primary)
+                .underline(annotation != nil, color: .teal.opacity(0.3))
+        }
+        .buttonStyle(.plain)
+    }
+}
+
+// MARK: - Word Detail Sheet
+
+private struct ChatWordDetailSheet: View {
+    let word: WordAnnotation
+
+    var body: some View {
+        VStack(spacing: 16) {
+            HStack {
+                Text(word.word)
+                    .font(.title2.bold())
+                Spacer()
+                if !word.partOfSpeech.isEmpty {
+                    Text(word.partOfSpeech)
+                        .font(.caption.weight(.medium))
+                        .foregroundStyle(.secondary)
+                        .padding(.horizontal, 8)
+                        .padding(.vertical, 4)
+                        .background(.fill.tertiary, in: Capsule())
+                }
+            }
+
+            Divider()
+
+            if word.english == "Looking up..." {
+                HStack(spacing: 8) {
+                    ProgressView()
+                    Text("Looking up word...")
+                        .font(.subheadline)
+                        .foregroundStyle(.secondary)
+                }
+                .frame(maxWidth: .infinity)
+            } else {
+                VStack(alignment: .leading, spacing: 8) {
+                    if !word.baseForm.isEmpty && word.baseForm != word.word {
+                        HStack {
+                            Text("Base form:")
+                                .font(.subheadline)
+                                .foregroundStyle(.secondary)
+                            Text(word.baseForm)
+                                .font(.subheadline.weight(.semibold))
+                                .italic()
+                        }
+                    }
+                    if !word.english.isEmpty {
+                        HStack {
+                            Text("English:")
+                                .font(.subheadline)
+                                .foregroundStyle(.secondary)
+                            Text(word.english)
+                                .font(.subheadline.weight(.semibold))
+                        }
+                    }
+                }
+                .frame(maxWidth: .infinity, alignment: .leading)
+            }
+
+            Spacer()
+        }
+        .padding()
+    }
+}
+
+// MARK: - Chat Flow Layout
+
+private struct ChatFlowLayout: Layout {
+    var spacing: CGFloat = 0
+
+    func sizeThatFits(proposal: ProposedViewSize, subviews: Subviews, cache: inout ()) -> CGSize {
+        let rows = computeRows(proposal: proposal, subviews: subviews)
+        var height: CGFloat = 0
+        for row in rows { height += row.map { $0.height }.max() ?? 0 }
+        height += CGFloat(max(0, rows.count - 1)) * spacing
+        return CGSize(width: proposal.width ?? 0, height: height)
+    }
+
+    func placeSubviews(in bounds: CGRect, proposal: ProposedViewSize, subviews: Subviews, cache: inout ()) {
+        let rows = computeRows(proposal: proposal, subviews: subviews)
+        var y = bounds.minY; var idx = 0
+        for row in rows {
+            var x = bounds.minX; let rh = row.map { $0.height }.max() ?? 0
+            for size in row {
+                subviews[idx].place(at: CGPoint(x: x, y: y), proposal: ProposedViewSize(size))
+                x += size.width; idx += 1
+            }
+            y += rh + spacing
+        }
+    }
+
+    private func computeRows(proposal: ProposedViewSize, subviews: Subviews) -> [[CGSize]] {
+        let mw = proposal.width ?? .infinity; var rows: [[CGSize]] = [[]]; var cw: CGFloat = 0
+        for sv in subviews {
+            let s = sv.sizeThatFits(.unspecified)
+            if cw + s.width > mw && !rows[rows.count - 1].isEmpty { rows.append([]); cw = 0 }
+            rows[rows.count - 1].append(s); cw += s.width
+        }
+        return rows
+    }
 }
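The heart of `ChatFlowLayout` is `computeRows`: a greedy left-to-right packer that wraps whenever the next subview would overflow the proposed width. The same algorithm, reduced to plain widths (a standalone illustration, not code from the diff):

```swift
// Greedy row packing as used by ChatFlowLayout.computeRows: items are
// placed left-to-right; when an item would overflow maxWidth and the
// current row is non-empty, a new row is started.
func packRows(widths: [Double], maxWidth: Double) -> [[Double]] {
    var rows: [[Double]] = [[]]
    var current = 0.0
    for w in widths {
        if current + w > maxWidth && !rows[rows.count - 1].isEmpty {
            rows.append([])
            current = 0
        }
        rows[rows.count - 1].append(w)
        current += w
    }
    return rows
}

// packRows(widths: [40, 50, 30, 60], maxWidth: 100) == [[40, 50], [30, 60]]
```

The `!rows[rows.count - 1].isEmpty` guard matters: a single item wider than `maxWidth` still lands in a row of its own rather than triggering an infinite wrap.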
@@ -243,7 +243,11 @@ struct FullTableView: View {
         results = Array(repeating: nil, count: 6)
         correctForms = []
         drawings = Array(repeating: PKDrawing(), count: 6)
-        let service = PracticeSessionService(localContext: modelContext, cloudContext: cloudModelContext)
+        let service = PracticeSessionService(
+            localContext: modelContext,
+            cloudContext: cloudModelContext,
+            reflexiveBaseInfinitives: ReflexiveVerbStore.shared.baseInfinitives
+        )
         guard let prompt = service.randomFullTablePrompt() else {
             currentVerb = nil
             currentTense = nil
@@ -312,7 +316,11 @@ struct FullTableView: View {
         if allCorrect { sessionCorrect += 1 }

         if let verb = currentVerb, let tense = currentTense {
-            let service = PracticeSessionService(localContext: modelContext, cloudContext: cloudModelContext)
+            let service = PracticeSessionService(
+                localContext: modelContext,
+                cloudContext: cloudModelContext,
+                reflexiveBaseInfinitives: ReflexiveVerbStore.shared.baseInfinitives
+            )
             let reviewResults = Dictionary(uniqueKeysWithValues: personsToShow.map { ($0.index, results[$0.index] == true) })
             _ = service.recordFullTableReview(verbId: verb.id, tenseId: tense.id, results: reviewResults)
         }
@@ -4,6 +4,10 @@ import SharedModels
 struct LyricsReaderView: View {
     let song: SavedSong
+
+    @Environment(DictionaryService.self) private var dictionary
+    @State private var selectedWord: LyricsWordLookup?
+    @State private var lookupCache: [String: LyricsWordLookup] = [:]

     var body: some View {
         ScrollView {
             VStack(spacing: 20) {
@@ -15,6 +19,10 @@ struct LyricsReaderView: View {
         }
         .navigationTitle(song.title)
         .navigationBarTitleDisplayMode(.inline)
+        .sheet(item: $selectedWord) { word in
+            LyricsWordDetailSheet(word: word)
+                .presentationDetents([.height(260)])
+        }
     }

     // MARK: - Header
@@ -56,15 +64,6 @@ struct LyricsReaderView: View {
         let spanishLines = song.lyricsES.components(separatedBy: "\n")
         let englishLines = song.lyricsEN.components(separatedBy: "\n")
         let lineCount = max(spanishLines.count, englishLines.count)
-        let _ = {
-            print("[LyricsReader] ES lines: \(spanishLines.count), EN lines: \(englishLines.count), rendering: \(lineCount)")
-            for i in 0..<min(15, lineCount) {
-                let es = i < spanishLines.count ? spanishLines[i] : "(none)"
-                let en = i < englishLines.count ? englishLines[i] : "(none)"
-                print("  [\(i)] ES: \(es.isEmpty ? "(blank)" : es)")
-                print("      EN: \(en.isEmpty ? "(blank)" : en)")
-            }
-        }()

         return VStack(alignment: .leading, spacing: 0) {
             ForEach(0..<lineCount, id: \.self) { index in
@@ -78,8 +77,7 @@ struct LyricsReaderView: View {
             } else {
                 VStack(alignment: .leading, spacing: 2) {
                     if !es.isEmpty {
-                        Text(es)
-                            .font(.body.weight(.medium))
+                        spanishLine(es)
                     }
                     if !en.isEmpty {
                         Text(en)
@@ -94,4 +92,183 @@ struct LyricsReaderView: View {
         .padding()
         .background(.fill.quinary, in: RoundedRectangle(cornerRadius: 12))
     }
+
+    private func spanishLine(_ line: String) -> some View {
+        let tokens = line.components(separatedBy: " ")
+        return LyricsFlowLayout(spacing: 0) {
+            ForEach(Array(tokens.enumerated()), id: \.offset) { _, token in
+                LyricsWordView(token: token, lookup: makeLookup(for: token)) { word in
+                    selectedWord = word
+                }
+            }
+        }
+    }
+
+    // MARK: - Lookup
+
+    private func makeLookup(for rawToken: String) -> LyricsWordLookup? {
+        let cleaned = rawToken.lowercased()
+            .trimmingCharacters(in: .punctuationCharacters)
+            .trimmingCharacters(in: .whitespaces)
+        guard !cleaned.isEmpty else { return nil }
+
+        if let cached = lookupCache[cleaned] { return cached }
+        guard let entry = dictionary.lookup(cleaned) else { return nil }
+
+        let displayWord = rawToken
+            .trimmingCharacters(in: .punctuationCharacters)
+            .trimmingCharacters(in: .whitespaces)

+        let tenseDisplay = entry.tenseId.flatMap { TenseInfo.find($0)?.english }
+
+        let lookup = LyricsWordLookup(
+            word: displayWord.isEmpty ? entry.word : displayWord,
+            baseForm: entry.baseForm,
+            english: entry.english,
+            partOfSpeech: entry.partOfSpeech,
+            tenseDisplay: tenseDisplay,
+            person: entry.person
+        )
+        lookupCache[cleaned] = lookup
+        return lookup
+    }
+}
+
+// MARK: - Word Lookup Model
+
+private struct LyricsWordLookup: Identifiable, Hashable {
+    let word: String
+    let baseForm: String
+    let english: String
+    let partOfSpeech: String
+    let tenseDisplay: String?
+    let person: String?
+
+    var id: String { word }
+}
+
+// MARK: - Word View
+
+private struct LyricsWordView: View {
|
let token: String
|
||||||
|
let lookup: LyricsWordLookup?
|
||||||
|
let onLookup: (LyricsWordLookup) -> Void
|
||||||
|
|
||||||
|
var body: some View {
|
||||||
|
Text(token + " ")
|
||||||
|
.font(.body.weight(.medium))
|
||||||
|
.foregroundStyle(.primary)
|
||||||
|
.underline(lookup != nil, color: .teal.opacity(0.35))
|
||||||
|
.contentShape(Rectangle())
|
||||||
|
.onLongPressGesture(minimumDuration: 0.35) {
|
||||||
|
if let lookup {
|
||||||
|
onLookup(lookup)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// MARK: - Detail Sheet
|
||||||
|
|
||||||
|
private struct LyricsWordDetailSheet: View {
|
||||||
|
let word: LyricsWordLookup
|
||||||
|
|
||||||
|
var body: some View {
|
||||||
|
VStack(alignment: .leading, spacing: 14) {
|
||||||
|
HStack {
|
||||||
|
Text(word.word)
|
||||||
|
.font(.title2.bold())
|
||||||
|
Spacer()
|
||||||
|
if !word.partOfSpeech.isEmpty {
|
||||||
|
Text(word.partOfSpeech)
|
||||||
|
.font(.caption.weight(.medium))
|
||||||
|
.foregroundStyle(.secondary)
|
||||||
|
.padding(.horizontal, 8)
|
||||||
|
.padding(.vertical, 4)
|
||||||
|
.background(.fill.tertiary, in: Capsule())
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
Divider()
|
||||||
|
|
||||||
|
VStack(alignment: .leading, spacing: 10) {
|
||||||
|
if !word.baseForm.isEmpty && word.baseForm.lowercased() != word.word.lowercased() {
|
||||||
|
detailRow(label: "Base form", value: word.baseForm, italic: true)
|
||||||
|
}
|
||||||
|
|
||||||
|
if !word.english.isEmpty {
|
||||||
|
detailRow(label: "English", value: word.english)
|
||||||
|
}
|
||||||
|
|
||||||
|
if let tenseDisplay = word.tenseDisplay {
|
||||||
|
let personSuffix = (word.person?.isEmpty == false) ? " · \(word.person!)" : ""
|
||||||
|
detailRow(label: "Tense", value: tenseDisplay + personSuffix)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
Spacer(minLength: 0)
|
||||||
|
}
|
||||||
|
.padding()
|
||||||
|
.frame(maxWidth: .infinity, alignment: .leading)
|
||||||
|
}
|
||||||
|
|
||||||
|
@ViewBuilder
|
||||||
|
private func detailRow(label: String, value: String, italic: Bool = false) -> some View {
|
||||||
|
HStack(alignment: .firstTextBaseline, spacing: 8) {
|
||||||
|
Text("\(label):")
|
||||||
|
.font(.subheadline)
|
||||||
|
.foregroundStyle(.secondary)
|
||||||
|
.frame(width: 86, alignment: .leading)
|
||||||
|
Text(value)
|
||||||
|
.font(.subheadline.weight(.semibold))
|
||||||
|
.italic(italic)
|
||||||
|
Spacer(minLength: 0)
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
// MARK: - Flow Layout
|
||||||
|
|
||||||
|
private struct LyricsFlowLayout: Layout {
|
||||||
|
var spacing: CGFloat = 0
|
||||||
|
|
||||||
|
func sizeThatFits(proposal: ProposedViewSize, subviews: Subviews, cache: inout ()) -> CGSize {
|
||||||
|
let rows = computeRows(proposal: proposal, subviews: subviews)
|
||||||
|
var height: CGFloat = 0
|
||||||
|
for row in rows { height += row.map { $0.height }.max() ?? 0 }
|
||||||
|
height += CGFloat(max(0, rows.count - 1)) * spacing
|
||||||
|
return CGSize(width: proposal.width ?? 0, height: height)
|
||||||
|
}
|
||||||
|
|
||||||
|
func placeSubviews(in bounds: CGRect, proposal: ProposedViewSize, subviews: Subviews, cache: inout ()) {
|
||||||
|
let rows = computeRows(proposal: proposal, subviews: subviews)
|
||||||
|
var y = bounds.minY
|
||||||
|
var idx = 0
|
||||||
|
for row in rows {
|
||||||
|
var x = bounds.minX
|
||||||
|
let rh = row.map { $0.height }.max() ?? 0
|
||||||
|
for size in row {
|
||||||
|
subviews[idx].place(at: CGPoint(x: x, y: y), proposal: ProposedViewSize(size))
|
||||||
|
x += size.width
|
||||||
|
idx += 1
|
||||||
|
}
|
||||||
|
y += rh + spacing
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
private func computeRows(proposal: ProposedViewSize, subviews: Subviews) -> [[CGSize]] {
|
||||||
|
let mw = proposal.width ?? .infinity
|
||||||
|
var rows: [[CGSize]] = [[]]
|
||||||
|
var cw: CGFloat = 0
|
||||||
|
for sv in subviews {
|
||||||
|
let s = sv.sizeThatFits(.unspecified)
|
||||||
|
if cw + s.width > mw && !rows[rows.count - 1].isEmpty {
|
||||||
|
rows.append([])
|
||||||
|
cw = 0
|
||||||
|
}
|
||||||
|
rows[rows.count - 1].append(s)
|
||||||
|
cw += s.width
|
||||||
|
}
|
||||||
|
return rows
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
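The `LyricsFlowLayout.computeRows` helper in this diff does greedy line breaking: words accumulate into the current row until the next one would overflow the proposed width, with a guard so a single oversized word still gets its own row instead of spinning forever. A minimal sketch of that algorithm in isolation (assumption: plain `Double` widths stand in for SwiftUI `Subviews` so it can run outside a layout pass):

```swift
// Sketch of the greedy row-breaking used by computeRows, with
// plain widths standing in for measured subview sizes.
func computeRows(widths: [Double], maxWidth: Double) -> [[Double]] {
    var rows: [[Double]] = [[]]
    var currentWidth = 0.0
    for w in widths {
        // Wrap only if the current row already has content; an oversized
        // word still lands in its own row rather than looping forever.
        if currentWidth + w > maxWidth && !rows[rows.count - 1].isEmpty {
            rows.append([])
            currentWidth = 0
        }
        rows[rows.count - 1].append(w)
        currentWidth += w
    }
    return rows
}

print(computeRows(widths: [40, 30, 50, 20, 80], maxWidth: 100))
// [[40.0, 30.0], [50.0, 20.0], [80.0]]
```

`sizeThatFits` then sums the tallest item per row (plus spacing) to report the layout's height, and `placeSubviews` walks the same rows left to right.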
@@ -9,9 +9,11 @@ struct SettingsView: View {
     @State private var dailyGoal: Double = 50
     @State private var showVosotros: Bool = true
     @State private var autoFillStem: Bool = false
-    @State private var selectedLevel: VerbLevel = .basic

     private let levels = VerbLevel.allCases
+    private let irregularCategories: [IrregularSpan.SpanCategory] = [
+        .spelling, .stemChange, .uniqueIrregular
+    ]
     private var cloudModelContext: ModelContext { cloudModelContextProvider() }

     var body: some View {
@@ -40,19 +42,26 @@ struct SettingsView: View {
                 }
             }

-            Section("Level") {
-                Picker("Current Level", selection: $selectedLevel) {
+            Section {
                 ForEach(levels, id: \.self) { level in
-                    Text(level.displayName).tag(level)
-                    }
-                }
-                .onChange(of: selectedLevel) { _, newValue in
-                    progress?.selectedVerbLevel = newValue
+                    Toggle(level.displayName, isOn: Binding(
+                        get: {
+                            progress?.selectedVerbLevels.contains(level) ?? false
+                        },
+                        set: { enabled in
+                            guard let progress else { return }
+                            progress.setLevelEnabled(level, enabled: enabled)
                             saveProgress()
                         }
+                    ))
+                }
+            } header: {
+                Text("Levels")
+            } footer: {
+                Text("Practice pulls only from verbs whose level is enabled. Turn on multiple to mix.")
             }

-            Section("Tenses") {
+            Section {
                 ForEach(TenseInfo.all) { tense in
                     Toggle(tense.english, isOn: Binding(
                         get: {
@@ -65,6 +74,41 @@ struct SettingsView: View {
                         }
                     ))
                 }
+            } header: {
+                Text("Tenses")
+            }
+
+            Section {
+                ForEach(irregularCategories, id: \.self) { category in
+                    Toggle(category.rawValue, isOn: Binding(
+                        get: {
+                            progress?.enabledIrregularCategories.contains(category) ?? false
+                        },
+                        set: { enabled in
+                            guard let progress else { return }
+                            progress.setIrregularCategoryEnabled(category, enabled: enabled)
+                            saveProgress()
+                        }
+                    ))
+                }
+            } header: {
+                Text("Irregular Types")
+            } footer: {
+                Text("Leave all off to include regular and irregular verbs. Enable any to restrict practice to those irregularity types.")
+            }
+
+            Section {
+                Toggle("Reflexive verbs only", isOn: Binding(
+                    get: { progress?.showReflexiveVerbsOnly ?? false },
+                    set: { enabled in
+                        progress?.showReflexiveVerbsOnly = enabled
+                        saveProgress()
+                    }
+                ))
+            } header: {
+                Text("Reflexive")
+            } footer: {
+                Text("When on, practice pulls only from the curated list of common reflexive verbs.")
             }

             Section("Stats") {
@@ -96,7 +140,6 @@ struct SettingsView: View {
         dailyGoal = Double(resolved.dailyGoal)
         showVosotros = resolved.showVosotros
         autoFillStem = resolved.autoFillStem
-        selectedLevel = resolved.selectedVerbLevel
     }

     private func saveProgress() {
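The level, tense, irregularity, and reflexive toggles in this SettingsView diff all share one pattern: a SwiftUI `Binding` whose `get` tests membership in a set on the progress model and whose `set` inserts or removes. Stripped of SwiftUI, the idea is just a pair of closures over a `Set`. A sketch (assumption: `Progress` and `SetBinding` here are illustrative stand-ins, not the app's real types):

```swift
// Illustrative stand-ins for the app's progress model and SwiftUI's Binding.
struct Progress {
    var selectedVerbLevels: Set<String> = ["basic"]
    mutating func setLevelEnabled(_ level: String, enabled: Bool) {
        if enabled { selectedVerbLevels.insert(level) }
        else { selectedVerbLevels.remove(level) }
    }
}

// A Binding is essentially this: a get closure and a set closure.
struct SetBinding {
    let get: () -> Bool
    let set: (Bool) -> Void
}

var progress = Progress()
let level = "advanced"
let toggle = SetBinding(
    get: { progress.selectedVerbLevels.contains(level) },       // is this level on?
    set: { enabled in progress.setLevelEnabled(level, enabled: enabled) }
)

toggle.set(true)
print(toggle.get(), progress.selectedVerbLevels.sorted())
```

Each `Toggle` gets its own binding built per element inside the `ForEach`, which is what lets several levels be enabled at once where the old single `Picker` allowed only one.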
@@ -4,14 +4,40 @@ import SwiftData

 struct VerbDetailView: View {
     @Environment(\.modelContext) private var modelContext
+    @Environment(VerbExampleCache.self) private var exampleCache
+    @Environment(ReflexiveVerbStore.self) private var reflexiveStore
     @State private var speechService = SpeechService()
     let verb: Verb
     @State private var selectedTense: TenseInfo = TenseInfo.all[0]

+    @State private var examples: [VerbExample] = []
+    @State private var examplesState: ExamplesState = .idle
+
+    private enum ExamplesState: Equatable {
+        case idle
+        case loading
+        case loaded
+        case unavailable
+        case failed(String)
+    }
+
+    private static let exampleTenseIds: [String] = [
+        TenseID.ind_presente.rawValue,
+        TenseID.ind_preterito.rawValue,
+        TenseID.ind_imperfecto.rawValue,
+        TenseID.ind_futuro.rawValue,
+        TenseID.subj_presente.rawValue,
+        TenseID.imp_afirmativo.rawValue,
+    ]
+
     private var formsForTense: [VerbForm] {
         ReferenceStore(context: modelContext).fetchForms(verbId: verb.id, tenseId: selectedTense.id)
     }
+
+    private var reflexiveEntries: [ReflexiveVerb] {
+        reflexiveStore.entries(for: verb.infinitive)
+    }

     var body: some View {
         List {
             Section {
@@ -25,6 +51,10 @@ struct VerbDetailView: View {
                 Text("Info")
             }

+            if !reflexiveEntries.isEmpty {
+                reflexiveSection
+            }
+
             Section {
                 Picker("Tense", selection: $selectedTense) {
                     ForEach(TenseInfo.all) { tense in
@@ -66,6 +96,8 @@ struct VerbDetailView: View {
             } header: {
                 Text("Conjugation")
             }
+
+            examplesSection
         }
         .navigationTitle(verb.infinitive)
         .toolbar {
@@ -78,6 +110,126 @@ struct VerbDetailView: View {
                 .tint(.secondary)
             }
         }
+        .task(id: verb.id) {
+            await loadExamples()
+        }
+    }
+
+    // MARK: - Reflexive
+
+    private var reflexiveSection: some View {
+        Section {
+            ForEach(Array(reflexiveEntries.enumerated()), id: \.offset) { _, entry in
+                VStack(alignment: .leading, spacing: 4) {
+                    HStack(alignment: .firstTextBaseline, spacing: 8) {
+                        Text(entry.infinitive)
+                            .font(.body.weight(.semibold))
+                            .italic()
+                        if let hint = entry.usageHint, !hint.isEmpty {
+                            Text(hint)
+                                .font(.caption.weight(.medium))
+                                .foregroundStyle(.tint)
+                        }
+                    }
+                    Text(entry.english)
+                        .font(.caption)
+                        .foregroundStyle(.secondary)
+                }
+                .padding(.vertical, 2)
+            }
+        } header: {
+            Text("Reflexive")
+        } footer: {
+            if reflexiveEntries.contains(where: { $0.usageHint != nil }) {
+                Text("Highlighted words are prepositions or phrases this verb commonly pairs with.")
+                    .font(.caption2)
+            }
+        }
+    }
+
+    // MARK: - Examples
+
+    @ViewBuilder
+    private var examplesSection: some View {
+        Section {
+            switch examplesState {
+            case .idle, .loading:
+                HStack(spacing: 10) {
+                    ProgressView()
+                    Text("Generating examples…")
+                        .font(.caption)
+                        .foregroundStyle(.secondary)
+                }
+            case .unavailable:
+                Label("Examples require Apple Intelligence on this device.", systemImage: "sparkles")
+                    .font(.caption)
+                    .foregroundStyle(.secondary)
+            case .failed(let message):
+                Label(message, systemImage: "exclamationmark.triangle")
+                    .font(.caption)
+                    .foregroundStyle(.secondary)
+            case .loaded:
+                if examples.isEmpty {
+                    Label("No examples available.", systemImage: "text.quote")
+                        .font(.caption)
+                        .foregroundStyle(.secondary)
+                } else {
+                    ForEach(Array(examples.enumerated()), id: \.offset) { _, example in
+                        VStack(alignment: .leading, spacing: 4) {
+                            if let info = TenseInfo.find(example.tenseId) {
+                                Text(info.english)
+                                    .font(.caption2.weight(.semibold))
+                                    .foregroundStyle(.tint)
+                            }
+                            Text(example.spanish)
+                                .font(.body)
+                                .italic()
+                            Text(example.english)
+                                .font(.caption)
+                                .foregroundStyle(.secondary)
+                        }
+                        .padding(.vertical, 2)
+                    }
+                }
+            }
+        } header: {
+            Text("Examples")
+        }
+    }
+
+    private func loadExamples() async {
+        // Reset state when navigating between verbs via NavigationSplitView.
+        examples = []
+        examplesState = .idle
+
+        if let cached = exampleCache.examples(for: verb.id), !cached.isEmpty {
+            examples = cached
+            examplesState = .loaded
+            return
+        }
+
+        guard VerbExampleGenerator.isAvailable else {
+            examplesState = .unavailable
+            return
+        }
+
+        examplesState = .loading
+        do {
+            let generated = try await VerbExampleGenerator.generate(
+                verbInfinitive: verb.infinitive,
+                verbEnglish: verb.english,
+                tenseIds: Self.exampleTenseIds
+            )
+            guard !generated.isEmpty else {
+                examplesState = .failed("Could not generate examples.")
+                return
+            }
+            exampleCache.setExamples(generated, for: verb.id)
+            examples = generated
+            examplesState = .loaded
+        } catch {
+            examplesState = .failed("Could not generate examples.")
+        }
     }
 }

@@ -86,4 +238,6 @@ struct VerbDetailView: View {
     VerbDetailView(verb: Verb(id: 1, infinitive: "hablar", english: "to speak", rank: 1, ending: "ar", reflexive: 0, level: "basic"))
     }
     .modelContainer(for: [Verb.self, VerbForm.self], inMemory: true)
+    .environment(VerbExampleCache())
+    .environment(ReflexiveVerbStore())
 }
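The `loadExamples` flow in the VerbDetailView diff is cache-first: return cached examples if present, bail out to `.unavailable` when on-device generation is not supported, otherwise generate, cache, and mark `.loaded`. A condensed, synchronous sketch of that control flow (assumption: `ExampleCache` and `generate` are simplified stand-ins for the app's `VerbExampleCache` and `VerbExampleGenerator`, which are not shown in this diff):

```swift
enum LoadState: Equatable { case idle, loading, loaded, unavailable, failed(String) }

// Stand-in for VerbExampleCache: an in-memory map keyed by verb id.
final class ExampleCache {
    private var storage: [Int: [String]] = [:]
    func examples(for id: Int) -> [String]? { storage[id] }
    func set(_ examples: [String], for id: Int) { storage[id] = examples }
}

// Stand-in for the on-device generator.
func generate(for verb: String) -> [String] { ["Yo hablo.", "Tú hablas."] }

func loadExamples(verbId: Int, verb: String, cache: ExampleCache, available: Bool) -> (LoadState, [String]) {
    if let cached = cache.examples(for: verbId), !cached.isEmpty {
        return (.loaded, cached)                          // cache hit: skip generation
    }
    guard available else { return (.unavailable, []) }    // no Apple Intelligence
    let generated = generate(for: verb)
    guard !generated.isEmpty else { return (.failed("Could not generate examples."), []) }
    cache.set(generated, for: verbId)                     // populate cache for next visit
    return (.loaded, generated)
}
```

Because the cache is checked before the availability guard, a verb generated once stays viewable even if generation later becomes unavailable; the real view also resets its state in `.task(id: verb.id)` so navigating between verbs never shows stale examples.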
@@ -2,11 +2,33 @@ import SwiftUI
 import SwiftData
 import SharedModels

+enum IrregularityCategory: String, CaseIterable, Identifiable {
+    case anyIrregular = "Any Irregular"
+    case spelling = "Spelling Change"
+    case stemChange = "Stem Change"
+    case uniqueIrregular = "Unique Irregular"
+
+    var id: String { rawValue }
+
+    var systemImage: String {
+        switch self {
+        case .anyIrregular: "asterisk"
+        case .spelling: "character.cursor.ibeam"
+        case .stemChange: "arrow.triangle.2.circlepath"
+        case .uniqueIrregular: "star"
+        }
+    }
+}
+
 struct VerbListView: View {
     @Environment(\.modelContext) private var modelContext
+    @Environment(ReflexiveVerbStore.self) private var reflexiveStore
     @State private var verbs: [Verb] = []
+    @State private var irregularityByVerbId: [Int: Set<IrregularityCategory>] = [:]
     @State private var searchText = ""
     @State private var selectedLevel: String?
+    @State private var selectedIrregularity: IrregularityCategory?
+    @State private var reflexiveOnly: Bool = false
     @State private var selectedVerb: Verb?

     private var filteredVerbs: [Verb] {
@@ -14,6 +36,15 @@ struct VerbListView: View {
         if let level = selectedLevel {
             result = result.filter { VerbLevelGroup.matches($0.level, selectedLevel: level) }
         }
+        if let category = selectedIrregularity {
+            result = result.filter { verb in
+                guard let cats = irregularityByVerbId[verb.id] else { return false }
+                return category == .anyIrregular ? !cats.isEmpty : cats.contains(category)
+            }
+        }
+        if reflexiveOnly {
+            result = result.filter { reflexiveStore.isReflexive(baseInfinitive: $0.infinitive) }
+        }
         if !searchText.isEmpty {
             let query = searchText.lowercased()
             result = result.filter {
@@ -30,20 +61,58 @@ struct VerbListView: View {
         NavigationSplitView {
             List(filteredVerbs, selection: $selectedVerb) { verb in
                 NavigationLink(value: verb) {
-                    VerbRowView(verb: verb)
+                    VerbRowView(verb: verb, irregularities: irregularityByVerbId[verb.id] ?? [])
                 }
             }
             .navigationTitle("Verbs")
             .searchable(text: $searchText, prompt: "Search verbs...")
+            .safeAreaInset(edge: .top, spacing: 0) {
+                if hasActiveFilter {
+                    activeFilterBar
+                }
+            }
             .toolbar {
                 ToolbarItem(placement: .topBarTrailing) {
                     Menu {
-                        Button("All Levels") { selectedLevel = nil }
+                        Section("Level") {
+                            Button {
+                                selectedLevel = nil
+                            } label: {
+                                Label("All Levels", systemImage: selectedLevel == nil ? "checkmark" : "")
+                            }
                             ForEach(levels, id: \.self) { level in
-                            Button(level.capitalized) { selectedLevel = level }
+                                Button {
+                                    selectedLevel = level
+                                } label: {
+                                    Label(level.capitalized, systemImage: selectedLevel == level ? "checkmark" : "")
+                                }
+                            }
+                        }
+
+                        Section("Irregularity") {
+                            Button {
+                                selectedIrregularity = nil
+                            } label: {
+                                Label("All Verbs", systemImage: selectedIrregularity == nil ? "checkmark" : "")
+                            }
+                            ForEach(IrregularityCategory.allCases) { category in
+                                Button {
+                                    selectedIrregularity = category
+                                } label: {
+                                    Label(category.rawValue, systemImage: selectedIrregularity == category ? "checkmark" : category.systemImage)
+                                }
+                            }
+                        }
+
+                        Section("Reflexive") {
+                            Button {
+                                reflexiveOnly.toggle()
+                            } label: {
+                                Label("Reflexive verbs only", systemImage: reflexiveOnly ? "checkmark" : "arrow.triangle.2.circlepath")
+                            }
                         }
                     } label: {
-                        Label("Filter", systemImage: "line.3.horizontal.decrease.circle")
+                        Label("Filter", systemImage: hasActiveFilter ? "line.3.horizontal.decrease.circle.fill" : "line.3.horizontal.decrease.circle")
                    }
                 }
             }
@@ -58,6 +127,56 @@ struct VerbListView: View {
         }
     }

+    private var hasActiveFilter: Bool {
+        selectedLevel != nil || selectedIrregularity != nil || reflexiveOnly
+    }
+
+    @ViewBuilder
+    private var activeFilterBar: some View {
+        HStack(spacing: 8) {
+            if let level = selectedLevel {
+                filterChip(text: level.capitalized, systemImage: "graduationcap") {
+                    selectedLevel = nil
+                }
+            }
+            if let cat = selectedIrregularity {
+                filterChip(text: cat.rawValue, systemImage: cat.systemImage) {
+                    selectedIrregularity = nil
+                }
+            }
+            if reflexiveOnly {
+                filterChip(text: "Reflexive", systemImage: "arrow.triangle.2.circlepath") {
+                    reflexiveOnly = false
+                }
+            }
+            Spacer()
+            Text("\(filteredVerbs.count)")
+                .font(.caption.monospacedDigit())
+                .foregroundStyle(.secondary)
+        }
+        .padding(.horizontal)
+        .padding(.vertical, 8)
+        .background(.bar)
+    }
+
+    private func filterChip(text: String, systemImage: String, onClear: @escaping () -> Void) -> some View {
+        Button(action: onClear) {
+            HStack(spacing: 4) {
+                Image(systemName: systemImage)
+                    .font(.caption2)
+                Text(text)
+                    .font(.caption.weight(.medium))
+                Image(systemName: "xmark")
+                    .font(.caption2)
+            }
+            .padding(.horizontal, 10)
+            .padding(.vertical, 5)
+            .background(.blue.opacity(0.15), in: Capsule())
+            .foregroundStyle(.blue)
+        }
+        .buttonStyle(.plain)
+    }
+
     private func loadVerbs() {
         // Hit the shared local container directly, bypassing @Environment.
         guard let container = SharedStore.localContainer else {
@@ -69,12 +188,30 @@ struct VerbListView: View {
         }
         let context = ModelContext(container)
         verbs = ReferenceStore(context: context).fetchVerbs()
-        print("[VerbListView] loaded \(verbs.count) verbs (container: \(ObjectIdentifier(container)))")
+        irregularityByVerbId = buildIrregularityIndex(context: context)
+        print("[VerbListView] loaded \(verbs.count) verbs, \(irregularityByVerbId.count) flagged irregular (container: \(ObjectIdentifier(container)))")
+    }
+
+    private func buildIrregularityIndex(context: ModelContext) -> [Int: Set<IrregularityCategory>] {
+        let spans = (try? context.fetch(FetchDescriptor<IrregularSpan>())) ?? []
+        var index: [Int: Set<IrregularityCategory>] = [:]
+        for span in spans {
+            let category: IrregularityCategory
+            switch span.spanType {
+            case 100..<200: category = .spelling
+            case 200..<300: category = .stemChange
+            case 300..<400: category = .uniqueIrregular
+            default: continue
+            }
+            index[span.verbId, default: []].insert(category)
+        }
+        return index
     }
 }

 struct VerbRowView: View {
     let verb: Verb
+    var irregularities: Set<IrregularityCategory> = []

     var body: some View {
         HStack {
@@ -88,6 +225,15 @@ struct VerbRowView: View {

             Spacer()

+            HStack(spacing: 4) {
+                ForEach(orderedIrregularities, id: \.self) { cat in
+                    Image(systemName: cat.systemImage)
+                        .font(.caption2.weight(.semibold))
+                        .foregroundStyle(irregularityColor(cat))
+                        .help(cat.rawValue)
+                        .accessibilityLabel(cat.rawValue)
+                }
+
             Text(verb.level.prefix(3).uppercased())
                 .font(.caption2)
                 .fontWeight(.semibold)
@@ -98,6 +244,22 @@ struct VerbRowView: View {
                 .clipShape(Capsule())
             }
         }
+    }
+
+    private var orderedIrregularities: [IrregularityCategory] {
+        // Order: unique > stem > spelling (most notable first)
+        let order: [IrregularityCategory] = [.uniqueIrregular, .stemChange, .spelling]
+        return order.filter { irregularities.contains($0) }
+    }
+
+    private func irregularityColor(_ category: IrregularityCategory) -> Color {
+        switch category {
+        case .uniqueIrregular: return .purple
+        case .stemChange: return .orange
+        case .spelling: return .teal
+        case .anyIrregular: return .gray
+        }
+    }

     private func levelColor(_ level: String) -> Color {
         switch level {
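The `buildIrregularityIndex` added in this diff buckets `IrregularSpan.spanType` codes by hundreds range (1xx spelling change, 2xx stem change, 3xx unique irregular, anything else skipped) and collects them into a per-verb set. That mapping in isolation (assumption: `Category` and the `(verbId, spanType)` tuples are stand-ins for the SwiftData fetch, which needs a live container):

```swift
enum Category: Hashable { case spelling, stemChange, uniqueIrregular }

// spanType ranges as used by buildIrregularityIndex:
// 1xx spelling, 2xx stem change, 3xx unique irregular, else skipped.
func category(forSpanType spanType: Int) -> Category? {
    switch spanType {
    case 100..<200: return .spelling
    case 200..<300: return .stemChange
    case 300..<400: return .uniqueIrregular
    default: return nil
    }
}

// Fold spans into a per-verb set of categories, as the real index does.
var index: [Int: Set<Category>] = [:]
for (verbId, spanType) in [(1, 120), (1, 210), (2, 305), (2, 999)] {
    if let cat = category(forSpanType: spanType) {
        index[verbId, default: []].insert(cat)
    }
}
print(index[1] == [.spelling, .stemChange], index[2] == [.uniqueIrregular])
```

Using a `Set` per verb means repeated spans of the same kind collapse to one badge, and the "Any Irregular" filter reduces to a non-empty check on the set.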
Conjuga/Conjuga/reflexive_verbs.json (new file, 104 lines)
@@ -0,0 +1,104 @@
+[
+  {"infinitive": "aburrirse", "baseInfinitive": "aburrir", "english": "to get bored"},
+  {"infinitive": "acercarse", "baseInfinitive": "acercar", "english": "to get close to", "usageHint": "a"},
+  {"infinitive": "acordarse", "baseInfinitive": "acordar", "english": "to remember", "usageHint": "de"},
+  {"infinitive": "acostarse", "baseInfinitive": "acostar", "english": "to lay down / to go to bed"},
+  {"infinitive": "acostumbrarse", "baseInfinitive": "acostumbrar", "english": "to get used to", "usageHint": "a"},
+  {"infinitive": "afeitarse", "baseInfinitive": "afeitar", "english": "to shave"},
+  {"infinitive": "alegrarse", "baseInfinitive": "alegrar", "english": "to be glad / happy / pleased"},
+  {"infinitive": "alejarse", "baseInfinitive": "alejar", "english": "to get away from", "usageHint": "de"},
+  {"infinitive": "animarse", "baseInfinitive": "animar", "english": "to cheer up / to dare to do something", "usageHint": "a"},
+  {"infinitive": "apurarse", "baseInfinitive": "apurar", "english": "to hurry"},
+  {"infinitive": "aprovecharse", "baseInfinitive": "aprovechar", "english": "to take advantage of", "usageHint": "de"},
+  {"infinitive": "asustarse", "baseInfinitive": "asustar", "english": "to get or become afraid"},
+  {"infinitive": "atreverse", "baseInfinitive": "atrever", "english": "to dare to", "usageHint": "a"},
+  {"infinitive": "bañarse", "baseInfinitive": "bañar", "english": "to take a bath / shower"},
+  {"infinitive": "burlarse", "baseInfinitive": "burlar", "english": "to make fun of", "usageHint": "de"},
+  {"infinitive": "caerse", "baseInfinitive": "caer", "english": "to fall down"},
+  {"infinitive": "calmarse", "baseInfinitive": "calmar", "english": "to calm down"},
+  {"infinitive": "cansarse", "baseInfinitive": "cansar", "english": "to get tired (of)", "usageHint": "(de)"},
+  {"infinitive": "casarse", "baseInfinitive": "casar", "english": "to marry", "usageHint": "con"},
+  {"infinitive": "cepillarse", "baseInfinitive": "cepillar", "english": "to brush (hair, teeth)"},
+  {"infinitive": "deprimirse", "baseInfinitive": "deprimir", "english": "to become depressed"},
+  {"infinitive": "conformarse", "baseInfinitive": "conformar", "english": "to resign oneself to", "usageHint": "con"},
+  {"infinitive": "volverse", "baseInfinitive": "volver", "english": "to become / to turn into / to return"},
+  {"infinitive": "darse", "baseInfinitive": "dar", "english": "to realize", "usageHint": "cuenta de"},
+  {"infinitive": "dedicarse", "baseInfinitive": "dedicar", "english": "to dedicate oneself to / to do for a living", "usageHint": "a"},
+  {"infinitive": "despedirse", "baseInfinitive": "despedir", "english": "to say goodbye", "usageHint": "(de)"},
+  {"infinitive": "despertarse", "baseInfinitive": "despertar", "english": "to wake up"},
+  {"infinitive": "desvestirse", "baseInfinitive": "desvestir", "english": "to undress"},
+  {"infinitive": "dirigirse", "baseInfinitive": "dirigir", "english": "to go to / make one's way toward / to address", "usageHint": "a"},
+  {"infinitive": "hacerse", "baseInfinitive": "hacer", "english": "to become / to pretend"},
+  {"infinitive": "divertirse", "baseInfinitive": "divertir", "english": "to have fun"},
+  {"infinitive": "dormirse", "baseInfinitive": "dormir", "english": "to fall asleep / to oversleep"},
+  {"infinitive": "ducharse", "baseInfinitive": "duchar", "english": "to shower"},
+  {"infinitive": "echarse", "baseInfinitive": "echar", "english": "to begin (usually suddenly) to do something / to break into", "usageHint": "a"},
+  {"infinitive": "enamorarse", "baseInfinitive": "enamorar", "english": "to fall in love with", "usageHint": "de"},
+  {"infinitive": "encargarse", "baseInfinitive": "encargar", "english": "to take charge of or be responsible for", "usageHint": "de"},
+  {"infinitive": "encogerse", "baseInfinitive": "encoger", "english": "to shrug (shoulders)", "usageHint": "(de hombros)"},
+  {"infinitive": "encontrarse", "baseInfinitive": "encontrar", "english": "to meet with / to run into someone", "usageHint": "(con)"},
+  {"infinitive": "enfermarse", "baseInfinitive": "enfermar", "english": "to get sick"},
+  {"infinitive": "enojarse", "baseInfinitive": "enojar", "english": "to get or become angry"},
+  {"infinitive": "enterarse", "baseInfinitive": "enterar", "english": "to find out, to realize", "usageHint": "de"},
+  {"infinitive": "exponerse", "baseInfinitive": "exponer", "english": "to expose oneself to or run the risk of", "usageHint": "a"},
+  {"infinitive": "fijarse", "baseInfinitive": "fijar", "english": "to pay attention to / to take a look"},
+  {"infinitive": "jugarse", "baseInfinitive": "jugar", "english": "to risk"},
+  {"infinitive": "lastimarse", "baseInfinitive": "lastimar", "english": "to get hurt or hurt oneself"},
+  {"infinitive": "lavarse", "baseInfinitive": "lavar", "english": "to wash (a body part)"},
+  {"infinitive": "levantarse", "baseInfinitive": "levantar", "english": "to get up"},
+  {"infinitive": "maquillarse", "baseInfinitive": "maquillar", "english": "to put makeup on"},
+  {"infinitive": "meterse", "baseInfinitive": "meter", "english": "to get into / to pick on / to pick a fight with", "usageHint": "en / con"},
|
||||||
|
{"infinitive": "motivarse", "baseInfinitive": "motivar", "english": "to become or get motivated to"},
|
||||||
|
{"infinitive": "moverse", "baseInfinitive": "mover", "english": "to move oneself"},
|
||||||
|
{"infinitive": "mudarse", "baseInfinitive": "mudar", "english": "to move (change residence)"},
|
||||||
|
{"infinitive": "negarse", "baseInfinitive": "negar", "english": "to refuse to", "usageHint": "a"},
|
||||||
|
{"infinitive": "obsesionarse", "baseInfinitive": "obsesionar", "english": "to be or get obsessed with", "usageHint": "con"},
|
||||||
|
{"infinitive": "ocuparse", "baseInfinitive": "ocupar", "english": "to look after", "usageHint": "de"},
|
||||||
|
{"infinitive": "olvidarse", "baseInfinitive": "olvidar", "english": "to forget", "usageHint": "de"},
|
||||||
|
{"infinitive": "parecerse", "baseInfinitive": "parecer", "english": "to look like someone or something", "usageHint": "a"},
|
||||||
|
{"infinitive": "peinarse", "baseInfinitive": "peinar", "english": "to comb your hair"},
|
||||||
|
{"infinitive": "ponerse", "baseInfinitive": "poner", "english": "to put on (clothing) / to get or become"},
|
||||||
|
{"infinitive": "ponerse", "baseInfinitive": "poner", "english": "to come to an agreement with someone", "usageHint": "de acuerdo"},
|
||||||
|
{"infinitive": "preocuparse", "baseInfinitive": "preocupar", "english": "to worry about", "usageHint": "por"},
|
||||||
|
{"infinitive": "prepararse", "baseInfinitive": "preparar", "english": "to prepare to"},
|
||||||
|
{"infinitive": "probarse", "baseInfinitive": "probar", "english": "to try on"},
|
||||||
|
{"infinitive": "quebrarse", "baseInfinitive": "quebrar", "english": "to break (an arm, leg, etc.)"},
|
||||||
|
{"infinitive": "quejarse", "baseInfinitive": "quejar", "english": "to complain about", "usageHint": "de"},
|
||||||
|
{"infinitive": "quedarse", "baseInfinitive": "quedar", "english": "to remain / to stay"},
|
||||||
|
{"infinitive": "quemarse", "baseInfinitive": "quemar", "english": "to burn oneself / one's body"},
|
||||||
|
{"infinitive": "quitarse", "baseInfinitive": "quitar", "english": "to take off (clothing, etc.)"},
|
||||||
|
{"infinitive": "reírse", "baseInfinitive": "reír", "english": "to laugh about", "usageHint": "de"},
|
||||||
|
{"infinitive": "resignarse", "baseInfinitive": "resignar", "english": "to resign oneself to", "usageHint": "a"},
|
||||||
|
{"infinitive": "romperse", "baseInfinitive": "romper", "english": "to break (an arm, leg, etc.)"},
|
||||||
|
{"infinitive": "secarse", "baseInfinitive": "secar", "english": "to dry (a body part)"},
|
||||||
|
{"infinitive": "sentarse", "baseInfinitive": "sentar", "english": "to sit down"},
|
||||||
|
{"infinitive": "sentirse", "baseInfinitive": "sentir", "english": "to feel"},
|
||||||
|
{"infinitive": "servirse", "baseInfinitive": "servir", "english": "to help oneself to (food)"},
|
||||||
|
{"infinitive": "suicidarse", "baseInfinitive": "suicidar", "english": "to commit suicide"},
|
||||||
|
{"infinitive": "tratarse", "baseInfinitive": "tratar", "english": "to be about", "usageHint": "de"},
|
||||||
|
{"infinitive": "vestirse", "baseInfinitive": "vestir", "english": "to get dressed"},
|
||||||
|
{"infinitive": "marearse", "baseInfinitive": "marear", "english": "to get sick, to get dizzy"},
|
||||||
|
{"infinitive": "irse", "baseInfinitive": "ir", "english": "to leave"},
|
||||||
|
{"infinitive": "imaginarse", "baseInfinitive": "imaginar", "english": "to imagine"},
|
||||||
|
{"infinitive": "preguntarse", "baseInfinitive": "preguntar", "english": "to wonder"},
|
||||||
|
{"infinitive": "llamarse", "baseInfinitive": "llamar", "english": "to be called"},
|
||||||
|
{"infinitive": "verse", "baseInfinitive": "ver", "english": "to look or appear"},
|
||||||
|
{"infinitive": "distraerse", "baseInfinitive": "distraer", "english": "to get distracted"},
|
||||||
|
{"infinitive": "concentrarse", "baseInfinitive": "concentrar", "english": "to focus"},
|
||||||
|
{"infinitive": "rendirse", "baseInfinitive": "rendir", "english": "to give up"},
|
||||||
|
{"infinitive": "relajarse", "baseInfinitive": "relajar", "english": "to relax"},
|
||||||
|
{"infinitive": "merecerse", "baseInfinitive": "merecer", "english": "to deserve"},
|
||||||
|
{"infinitive": "suponerse", "baseInfinitive": "suponer", "english": "to suppose"},
|
||||||
|
{"infinitive": "conectarse", "baseInfinitive": "conectar", "english": "to connect"},
|
||||||
|
{"infinitive": "destacarse", "baseInfinitive": "destacar", "english": "to stand out"},
|
||||||
|
{"infinitive": "recibirse", "baseInfinitive": "recibir", "english": "to graduate"},
|
||||||
|
{"infinitive": "graduarse", "baseInfinitive": "graduar", "english": "to graduate"},
|
||||||
|
{"infinitive": "perderse", "baseInfinitive": "perder", "english": "to get lost"},
|
||||||
|
{"infinitive": "cambiarse", "baseInfinitive": "cambiar", "english": "to change (clothing)", "usageHint": "(de ropa)"},
|
||||||
|
{"infinitive": "adaptarse", "baseInfinitive": "adaptar", "english": "to adapt, to adjust", "usageHint": "a"},
|
||||||
|
{"infinitive": "salirse", "baseInfinitive": "salir", "english": "to get away with", "usageHint": "con (la suya)"},
|
||||||
|
{"infinitive": "subirse", "baseInfinitive": "subir", "english": "to get on (the bus, etc.)", "usageHint": "a"},
|
||||||
|
{"infinitive": "tranquilizarse", "baseInfinitive": "tranquilizar", "english": "to relax"},
|
||||||
|
{"infinitive": "equivocarse", "baseInfinitive": "equivocar", "english": "to get something wrong / confused"},
|
||||||
|
{"infinitive": "confundirse", "baseInfinitive": "confundir", "english": "to get something wrong / confused"}
|
||||||
|
]
|
||||||
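The entries above follow a simple invariant: each `infinitive` is the `baseInfinitive` plus the reflexive suffix `se`, and `usageHint` is an optional preposition note. A minimal Python sketch checking that invariant over an abbreviated sample (the field names come from the JSON above; the sample list and check are illustrative, not part of the repo's pipeline):

```python
import json

# Abbreviated sample of the reflexive-verb entries above.
verbs = json.loads("""
[
  {"infinitive": "burlarse", "baseInfinitive": "burlar", "english": "to make fun of", "usageHint": "de"},
  {"infinitive": "caerse", "baseInfinitive": "caer", "english": "to fall down"},
  {"infinitive": "reírse", "baseInfinitive": "reír", "english": "to laugh about", "usageHint": "de"}
]
""")

for v in verbs:
    # Every reflexive infinitive should be its base infinitive plus "se".
    assert v["infinitive"] == v["baseInfinitive"] + "se", v
    # "usageHint" is optional; no other keys are expected.
    assert set(v) <= {"infinitive", "baseInfinitive", "english", "usageHint"}

print("ok")
```

Running a check like this over the full list is a cheap way to catch typos before the data ships in the app bundle.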
1  Conjuga/Conjuga/textbook_data.json  Normal file
File diff suppressed because one or more lines are too long

25099  Conjuga/Conjuga/textbook_vocab.json  Normal file
File diff suppressed because it is too large

95  Conjuga/ConjugaUITests/AllChaptersScreenshotTests.swift  Normal file
@@ -0,0 +1,95 @@
import XCTest

/// Screenshot every chapter of the textbook — one top + one bottom frame each —
/// so you can visually audit parsing / rendering issues across all 30 chapters.
final class AllChaptersScreenshotTests: XCTestCase {

    override func setUpWithError() throws {
        continueAfterFailure = true
    }

    func testScreenshotEveryChapter() throws {
        let app = XCUIApplication()
        app.launchArguments += ["-onboardingComplete", "YES"]
        app.launch()

        let courseTab = app.tabBars.buttons["Course"]
        XCTAssertTrue(courseTab.waitForExistence(timeout: 5))
        courseTab.tap()

        let textbookRow = app.buttons.containing(NSPredicate(
            format: "label CONTAINS[c] 'Complete Spanish'"
        )).firstMatch
        XCTAssertTrue(textbookRow.waitForExistence(timeout: 5))
        textbookRow.tap()

        // NOTE: SwiftUI List preserves scroll position across navigation pushes,
        // so visiting chapters in order means the next one is already visible
        // after we return from the previous one. No need to reset.
        attach(app, name: "00-chapter-list-top")

        for chapter in 1...30 {
            guard let row = findChapterRow(app: app, chapter: chapter) else {
                XCTFail("Chapter \(chapter) row not reachable")
                continue
            }
            row.tap()

            // Chapter body — wait until the chapter's title appears as a nav bar label
            _ = app.navigationBars.firstMatch.waitForExistence(timeout: 3)

            attach(app, name: String(format: "ch%02d-top", chapter))
            // Two big scrolls to sample the bottom of the chapter
            dragFullScreen(app, direction: .up)
            dragFullScreen(app, direction: .up)
            attach(app, name: String(format: "ch%02d-bottom", chapter))

            tapNavBack(app)
            // Small settle wait
            _ = app.navigationBars.firstMatch.waitForExistence(timeout: 2)
        }
    }

    // MARK: - Helpers

    private enum DragDirection { case up, down }

    private func dragFullScreen(_ app: XCUIApplication, direction: DragDirection) {
        let top = app.coordinate(withNormalizedOffset: CGVector(dx: 0.5, dy: 0.12))
        let bot = app.coordinate(withNormalizedOffset: CGVector(dx: 0.5, dy: 0.88))
        switch direction {
        case .up: bot.press(forDuration: 0.1, thenDragTo: top)
        case .down: top.press(forDuration: 0.1, thenDragTo: bot)
        }
    }

    private func findChapterRow(app: XCUIApplication, chapter: Int) -> XCUIElement? {
        // Chapter row accessibility label: "<n>, <title>, ..." (SwiftUI composes
        // the label from inner Texts). Match by the leading number.
        let predicate = NSPredicate(format: "label BEGINSWITH %@", "\(chapter),")
        let row = app.buttons.matching(predicate).firstMatch

        if row.exists && row.isHittable { return row }

        // Scroll down up to 8 times searching for the row — chapters are visited
        // in order, so usually 0–2 swipes suffice.
        for _ in 0..<8 {
            if row.exists && row.isHittable { return row }
            dragFullScreen(app, direction: .up)
        }
        return row.exists ? row : nil
    }

    private func tapNavBack(_ app: XCUIApplication) {
        let back = app.navigationBars.buttons.firstMatch
        if back.exists && back.isHittable { back.tap() }
    }

    private func attach(_ app: XCUIApplication, name: String) {
        let screenshot = app.screenshot()
        let attachment = XCTAttachment(screenshot: screenshot)
        attachment.name = name
        attachment.lifetime = .keepAlways
        add(attachment)
    }
}
66  Conjuga/ConjugaUITests/StemChangeToggleTests.swift  Normal file
@@ -0,0 +1,66 @@
import XCTest

final class StemChangeToggleTests: XCTestCase {

    override func setUpWithError() throws {
        continueAfterFailure = false
    }

    func testStemChangeConjugationToggle() throws {
        let app = XCUIApplication()
        app.launchArguments += ["-onboardingComplete", "YES"]
        app.launch()

        // Course → LanGo Beginner I → Week 4 → E-IE stem-changing verbs
        app.tabBars.buttons["Course"].tap()

        // Locate the E-IE deck row. Deck titles appear as static text / buttons.
        // Scroll until visible, then tap.
        let deckPredicate = NSPredicate(format: "label CONTAINS[c] 'E-IE stem changing verbs' AND NOT label CONTAINS[c] 'REVÉS'")
        let deckRow = app.buttons.matching(deckPredicate).firstMatch

        let listRef = app.coordinate(withNormalizedOffset: CGVector(dx: 0.5, dy: 0.85))
        let topRef = app.coordinate(withNormalizedOffset: CGVector(dx: 0.5, dy: 0.10))
        for _ in 0..<12 {
            if deckRow.exists && deckRow.isHittable { break }
            listRef.press(forDuration: 0.1, thenDragTo: topRef)
        }
        XCTAssertTrue(deckRow.waitForExistence(timeout: 3), "E-IE deck row missing")
        deckRow.tap()

        attach(app, name: "01-deck-top")

        // Tap "Show conjugation" on the first card
        let showBtn = app.buttons.matching(NSPredicate(format: "label BEGINSWITH 'Show conjugation'")).firstMatch
        XCTAssertTrue(showBtn.waitForExistence(timeout: 3), "Show conjugation button missing")
        showBtn.tap()

        // Wait for the conjugation rows + animation to settle.
        let yoLabel = app.staticTexts["yo"].firstMatch
        XCTAssertTrue(yoLabel.waitForExistence(timeout: 3), "yo row not rendered")
        // Give the transition time to complete before snapshotting.
        Thread.sleep(forTimeInterval: 0.6)
        attach(app, name: "02-conjugation-open")

        // Also confirm all expected person labels are rendered.
        for person in ["yo", "tú", "nosotros"] {
            XCTAssertTrue(
                app.staticTexts[person].firstMatch.exists,
                "missing conjugation row for \(person)"
            )
        }

        // Tap again to hide
        let hideBtn = app.buttons.matching(NSPredicate(format: "label BEGINSWITH 'Hide conjugation'")).firstMatch
        XCTAssertTrue(hideBtn.waitForExistence(timeout: 2))
        hideBtn.tap()
    }

    private func attach(_ app: XCUIApplication, name: String) {
        let s = app.screenshot()
        let a = XCTAttachment(screenshot: s)
        a.name = name
        a.lifetime = .keepAlways
        add(a)
    }
}
80  Conjuga/ConjugaUITests/TextbookFlowUITests.swift  Normal file
@@ -0,0 +1,80 @@
import XCTest

final class TextbookFlowUITests: XCTestCase {

    override func setUpWithError() throws {
        continueAfterFailure = false
    }

    func testTextbookFlow() throws {
        let app = XCUIApplication()
        // Skip onboarding via defaults (already set by the run script, but harmless to override)
        app.launchArguments += ["-onboardingComplete", "YES"]
        app.launch()

        // Dashboard should be the default tab. Switch to Course.
        let courseTab = app.tabBars.buttons["Course"]
        XCTAssertTrue(courseTab.waitForExistence(timeout: 5), "Course tab missing")
        courseTab.tap()

        // Attach a screenshot of the Course list
        attach(app, name: "01-course-list")

        // Tap the Textbook entry
        let textbookRow = app.buttons.containing(NSPredicate(
            format: "label CONTAINS[c] 'Complete Spanish'"
        )).firstMatch
        XCTAssertTrue(textbookRow.waitForExistence(timeout: 5), "Textbook row missing in Course")
        textbookRow.tap()

        attach(app, name: "02-textbook-chapter-list")

        // Tap chapter 1 — should navigate to the reader
        let chapterOneRow = app.buttons.containing(NSPredicate(
            format: "label CONTAINS[c] 'Nouns, Articles'"
        )).firstMatch
        XCTAssertTrue(chapterOneRow.waitForExistence(timeout: 5), "Chapter 1 row missing")
        chapterOneRow.tap()

        attach(app, name: "03-chapter-body")

        // Find the first exercise link ("Exercise 1.1")
        let exerciseRow = app.buttons.containing(NSPredicate(
            format: "label CONTAINS[c] 'Exercise 1.1'"
        )).firstMatch
        XCTAssertTrue(exerciseRow.waitForExistence(timeout: 5), "Exercise 1.1 link missing")
        exerciseRow.tap()

        attach(app, name: "04-exercise-view")

        // Check presence of input fields: at least a few numbered prompts.
        // Text fields use the SwiftUI placeholder "Your answer".
        let firstField = app.textFields["Your answer"].firstMatch
        XCTAssertTrue(firstField.waitForExistence(timeout: 5), "No input fields rendered for exercise")
        firstField.tap()
        firstField.typeText("el")

        attach(app, name: "05-exercise-typed-el")

        // Tap Check answers
        let checkButton = app.buttons["Check answers"]
        XCTAssertTrue(checkButton.waitForExistence(timeout: 3), "Check answers button missing")
        checkButton.tap()

        attach(app, name: "06-exercise-graded")

        // The first answer to Exercise 1.1 is "el" — we should see the first prompt
        // graded correct. Iterating too deeply is fragile; just take a screenshot
        // and check for the presence of either a checkmark-like label or "Try again".
        let tryAgain = app.buttons["Try again"]
        XCTAssertTrue(tryAgain.waitForExistence(timeout: 3), "Grading did not complete")
    }

    private func attach(_ app: XCUIApplication, name: String) {
        let screenshot = app.screenshot()
        let attachment = XCTAttachment(screenshot: screenshot)
        attachment.name = name
        attachment.lifetime = .keepAlways
        add(attachment)
    }
}
53  Conjuga/ConjugaUITests/VocabGridTests.swift  Normal file
@@ -0,0 +1,53 @@
import XCTest

final class VocabGridTests: XCTestCase {

    override func setUpWithError() throws {
        continueAfterFailure = false
    }

    /// Verifies the chapter reader renders vocab tables as a paired Spanish↔English grid.
    func testChapter4VocabGrid() throws {
        let app = XCUIApplication()
        app.launchArguments += ["-onboardingComplete", "YES"]
        app.launch()

        app.tabBars.buttons["Course"].tap()

        let textbookRow = app.buttons.containing(NSPredicate(
            format: "label CONTAINS[c] 'Complete Spanish'"
        )).firstMatch
        XCTAssertTrue(textbookRow.waitForExistence(timeout: 5))
        textbookRow.tap()

        let ch4 = app.buttons["textbook-chapter-row-4"]
        XCTAssertTrue(ch4.waitForExistence(timeout: 3))
        ch4.tap()

        attach(app, name: "01-ch4-top")

        // Tap the first vocab disclosure — "Vocabulary (N items)"
        let vocabButton = app.buttons.matching(NSPredicate(
            format: "label BEGINSWITH 'Vocabulary ('"
        )).firstMatch
        XCTAssertTrue(vocabButton.waitForExistence(timeout: 3))
        vocabButton.tap()
        Thread.sleep(forTimeInterval: 0.4)

        attach(app, name: "02-ch4-vocab-open")

        // Scroll a little and screenshot a deeper vocab table — the numbers table
        // is typically a few screens down in chapter 4.
        app.swipeUp(velocity: .fast)
        app.swipeUp(velocity: .fast)
        attach(app, name: "03-ch4-deeper")
    }

    private func attach(_ app: XCUIApplication, name: String) {
        let s = app.screenshot()
        let a = XCTAttachment(screenshot: s)
        a.name = name
        a.lifetime = .keepAlways
        add(a)
    }
}
@@ -41,14 +41,16 @@ struct CombinedProvider: TimelineProvider {
     private func fetchWordOfDay(for date: Date) -> WordOfDay? {
         guard let localURL = SharedStore.localStoreURL() else { return nil }

-        // MUST declare all 6 local entities to match the main app's schema.
-        // Declaring a subset would cause SwiftData to destructively migrate the store
-        // on open, dropping the entities not listed here.
+        // MUST declare all 7 local entities to match the main app's schema.
+        // Declaring a subset would cause SwiftData to destructively migrate the
+        // store on open, dropping the entities not listed here (this is how we
+        // previously lost all TextbookChapter rows on every widget refresh).
         let config = ModelConfiguration(
             "local",
             schema: Schema([
                 Verb.self, VerbForm.self, IrregularSpan.self,
                 TenseGuide.self, CourseDeck.self, VocabCard.self,
+                TextbookChapter.self,
             ]),
             url: localURL,
             cloudKitDatabase: .none
@@ -56,6 +58,7 @@ struct CombinedProvider: TimelineProvider {
         guard let container = try? ModelContainer(
             for: Verb.self, VerbForm.self, IrregularSpan.self,
                 TenseGuide.self, CourseDeck.self, VocabCard.self,
+                TextbookChapter.self,
             configurations: config
         ) else { return nil }
@@ -32,14 +32,16 @@ struct WordOfDayProvider: TimelineProvider {
     private func fetchWordOfDay(for date: Date) -> WordOfDay? {
         guard let localURL = SharedStore.localStoreURL() else { return nil }

-        // MUST declare all 6 local entities to match the main app's schema.
-        // Declaring a subset would cause SwiftData to destructively migrate the store
-        // on open, dropping the entities not listed here.
+        // MUST declare all 7 local entities to match the main app's schema.
+        // Declaring a subset would cause SwiftData to destructively migrate the
+        // store on open, dropping the entities not listed here (this is how we
+        // previously lost all TextbookChapter rows on every widget refresh).
         let config = ModelConfiguration(
             "local",
             schema: Schema([
                 Verb.self, VerbForm.self, IrregularSpan.self,
                 TenseGuide.self, CourseDeck.self, VocabCard.self,
+                TextbookChapter.self,
             ]),
             url: localURL,
             cloudKitDatabase: .none
@@ -47,6 +49,7 @@ struct WordOfDayProvider: TimelineProvider {
         guard let container = try? ModelContainer(
             for: Verb.self, VerbForm.self, IrregularSpan.self,
                 TenseGuide.self, CourseDeck.self, VocabCard.self,
+                TextbookChapter.self,
             configurations: config
         ) else { return nil }
374  Conjuga/Scripts/textbook/build_book.py  Normal file
@@ -0,0 +1,374 @@
#!/usr/bin/env python3
"""Merge chapters.json + answers.json + ocr.json → book.json (single source).

Also emits vocab_cards.json: flashcards derived from vocab_image blocks where
OCR text parses as a clean two-column (Spanish ↔ English) table.
"""

import json
import re
import sys
from pathlib import Path

HERE = Path(__file__).resolve().parent
CHAPTERS_JSON = HERE / "chapters.json"
ANSWERS_JSON = HERE / "answers.json"
OCR_JSON = HERE / "ocr.json"
OUT_BOOK = HERE / "book.json"
OUT_VOCAB = HERE / "vocab_cards.json"

COURSE_NAME = "Complete Spanish Step-by-Step"

# Heuristic: parseable "Spanish | English" vocab rows.
# OCR usually produces "word — translation" or "word  translation" separated
# by 2+ spaces. We detect rows that contain both Spanish and English words.
SPANISH_ACCENT_RE = re.compile(r"[áéíóúñüÁÉÍÓÚÑÜ¿¡]")
SPANISH_ARTICLES = {"el", "la", "los", "las", "un", "una", "unos", "unas"}
ENGLISH_STARTERS = {"the", "a", "an", "to", "my", "his", "her", "our", "their", "your", "some"}
# English-only words that would never appear as Spanish
ENGLISH_ONLY_WORDS = {"the", "he", "she", "it", "we", "they", "I", "is", "are", "was", "were",
                      "been", "have", "has", "had", "will", "would", "should", "could"}
SEP_RE = re.compile(r"[ \t]{2,}|\s[—–−-]\s")


def classify_line(line: str) -> str:
    """Return 'es', 'en', or 'unknown' for the dominant language of a vocab line."""
    line = line.strip()
    if not line:
        return "unknown"
    # Accent = definitely Spanish
    if SPANISH_ACCENT_RE.search(line):
        return "es"
    first = line.split()[0].lower().strip(",.;:")
    if first in SPANISH_ARTICLES:
        return "es"
    if first in ENGLISH_STARTERS:
        return "en"
    # Check if the leading word is an English-only function word
    if first in ENGLISH_ONLY_WORDS:
        return "en"
    return "unknown"
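The language heuristic above is easy to exercise in isolation. A standalone sketch that mirrors `classify_line` with the same rules (constants reproduced here, with abbreviated word sets, so it runs independently of the script):

```python
import re

# Mirrors the script's heuristic; word sets abbreviated for the example.
SPANISH_ACCENT_RE = re.compile(r"[áéíóúñüÁÉÍÓÚÑÜ¿¡]")
SPANISH_ARTICLES = {"el", "la", "los", "las", "un", "una", "unos", "unas"}
ENGLISH_STARTERS = {"the", "a", "an", "to", "my", "his", "her", "our", "their", "your", "some"}
ENGLISH_ONLY_WORDS = {"he", "she", "it", "we", "they", "is", "are", "was", "were"}

def classify_line(line: str) -> str:
    line = line.strip()
    if not line:
        return "unknown"
    if SPANISH_ACCENT_RE.search(line):
        return "es"  # any Spanish accent or inverted punctuation wins immediately
    first = line.split()[0].lower().strip(",.;:")
    if first in SPANISH_ARTICLES:
        return "es"
    if first in ENGLISH_STARTERS or first in ENGLISH_ONLY_WORDS:
        return "en"
    return "unknown"

print(classify_line("la casa"))     # es (Spanish article)
print(classify_line("the house"))   # en (English starter)
print(classify_line("están aquí"))  # es (accented character)
print(classify_line("casa"))        # unknown (no signal either way)
```

Unaccented bare nouns like "casa" land in "unknown", which is why the script smooths classifications with neighbor passes before pairing columns.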
def looks_english(word: str) -> bool:
    """Legacy helper — kept for try_split_row below."""
    w = word.lower().strip()
    if not w:
        return False
    if SPANISH_ACCENT_RE.search(w):
        return False
    if w in SPANISH_ARTICLES:
        return False
    if w in ENGLISH_STARTERS or w in ENGLISH_ONLY_WORDS:
        return True
    return bool(re.match(r"^[a-z][a-z\s'/()\-,.]*$", w))


def try_split_row(line: str) -> "tuple[str, str] | None":
    """Split a line into (spanish, english) if it looks like a vocab entry."""
    line = line.strip()
    if not line or len(line) < 3:
        return None
    # Try explicit separators first
    parts = SEP_RE.split(line)
    parts = [p.strip() for p in parts if p.strip()]
    if len(parts) == 2:
        spanish, english = parts
        if looks_english(english) and not looks_english(spanish.split()[0]):
            return (spanish, english)
    return None
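`try_split_row` hinges on `SEP_RE`: it splits on a run of two or more spaces/tabs, or on a dash with a space on either side. A small sketch of just that split behavior:

```python
import re

# The separator regex from the script: 2+ spaces/tabs, or a spaced dash
# (em dash, en dash, minus sign, or hyphen).
SEP_RE = re.compile(r"[ \t]{2,}|\s[—–−-]\s")

print(SEP_RE.split("la casa — the house"))  # ['la casa', 'the house']
print(SEP_RE.split("el perro    the dog"))  # ['el perro', 'the dog']
print(SEP_RE.split("una palabra"))          # ['una palabra'] (no separator)
```

Note that a hyphen without surrounding spaces (as in "veintidós-two") does not split, so hyphenated words inside a column survive intact.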
def load(p: Path) -> dict:
    return json.loads(p.read_text(encoding="utf-8"))


def build_vocab_cards_for_block(block: dict, ocr_entry: dict, chapter: dict, context_title: str, idx: int) -> list:
    """Given a vocab_image block + its OCR lines, derive flashcards.

    Vision OCR reads top-to-bottom, left-to-right; a two-column vocab table
    produces Spanish lines first, then English lines. We split the list in
    half when one side is predominantly Spanish and the other English.
    Per-line '—' separators are also supported as a fallback.
    """
    cards = []
    if not ocr_entry:
        return cards
    lines = [l.strip() for l in ocr_entry.get("lines", []) if l.strip()]
    if not lines:
        return cards

    def card(front: str, back: str) -> dict:
        return {
            "front": front,
            "back": back,
            "chapter": chapter["number"],
            "chapterTitle": chapter["title"],
            "section": context_title,
            "sourceImage": block["src"],
        }

    # Attempt 1: explicit inline separator (e.g. "la casa — the house")
    inline = []
    all_inline = True
    for line in lines:
        pair = try_split_row(line)
        if pair:
            inline.append(pair)
        else:
            all_inline = False
            break
    if all_inline and inline:
        for es, en in inline:
            cards.append(card(es, en))
        return cards

    # Attempt 2: block-alternating layout.
    # Vision OCR reads columns top-to-bottom, so a 2-col table rendered across
    # 2 visual columns produces runs like: [ES...ES][EN...EN][ES...ES][EN...EN]
    # We classify each line, smooth "unknown" using neighbors, then pair
    # same-sized consecutive ES/EN blocks.
    classes = [classify_line(l) for l in lines]

    # Pass 1: fill unknowns using the nearest non-unknown neighbor (forward)
    last_known = "unknown"
    forward = []
    for c in classes:
        if c != "unknown":
            last_known = c
        forward.append(last_known)
    # Pass 2: backfill leading unknowns (backward)
    last_known = "unknown"
    backward = [""] * len(classes)
    for i in range(len(classes) - 1, -1, -1):
        if classes[i] != "unknown":
            last_known = classes[i]
        backward[i] = last_known
    # Merge: prefer forward unless still unknown
    resolved = []
    for f, b in zip(forward, backward):
        if f != "unknown":
            resolved.append(f)
        elif b != "unknown":
            resolved.append(b)
        else:
            resolved.append("unknown")

    # Group consecutive same-language lines
    blocks: list = []
    cur_lang: "str | None" = None
    cur_block: list = []
    for line, lang in zip(lines, resolved):
        if lang != cur_lang:
            if cur_block and cur_lang is not None:
                blocks.append((cur_lang, cur_block))
            cur_block = [line]
            cur_lang = lang
        else:
            cur_block.append(line)
    if cur_block and cur_lang is not None:
        blocks.append((cur_lang, cur_block))

    # Walk the blocks, pairing an ES block with an EN block of equal length
    i = 0
    while i < len(blocks) - 1:
        lang_a, lines_a = blocks[i]
        lang_b, lines_b = blocks[i + 1]
        if lang_a == "es" and lang_b == "en" and len(lines_a) == len(lines_b):
            for es, en in zip(lines_a, lines_b):
                cards.append(card(es, en))
            i += 2
            continue
        # If reversed order (some pages have the EN column on the left), try that too
        if lang_a == "en" and lang_b == "es" and len(lines_a) == len(lines_b):
            for es, en in zip(lines_b, lines_a):
                cards.append(card(es, en))
            i += 2
            continue
        i += 1

    return cards


def clean_instruction(text: str) -> str:
    """Strip leading/trailing emphasis markers from a parsed instruction."""
    # Our XHTML parser emitted * and ** for emphasis; flatten them
    t = re.sub(r"\*+", "", text)
    return t.strip()


def merge() -> None:
    chapters_data = load(CHAPTERS_JSON)
    answers_data = load(ANSWERS_JSON)
    try:
        ocr_data = load(OCR_JSON)
    except FileNotFoundError:
        print("ocr.json not found — proceeding with empty OCR data")
        ocr_data = {}

    answers = answers_data["answers"]
    chapters = chapters_data["chapters"]
    parts = chapters_data.get("part_memberships", {})

    book_chapters = []
    all_vocab_cards = []
    missing_ocr = set()
    current_section_title = ""

    for ch in chapters:
        out_blocks = []
        current_section_title = ch["title"]
|
||||||
|
|
||||||
|
for bi, block in enumerate(ch["blocks"]):
|
||||||
|
k = block["kind"]
|
||||||
|
|
||||||
|
if k == "heading":
|
||||||
|
current_section_title = block["text"]
|
||||||
|
out_blocks.append(block)
|
||||||
|
continue
|
||||||
|
|
||||||
|
if k == "paragraph":
|
||||||
|
out_blocks.append(block)
|
||||||
|
continue
|
||||||
|
|
||||||
|
if k == "key_vocab_header":
|
||||||
|
out_blocks.append(block)
|
||||||
|
continue
|
||||||
|
|
||||||
|
if k == "vocab_image":
|
||||||
|
ocr_entry = ocr_data.get(block["src"])
|
||||||
|
if ocr_entry is None:
|
||||||
|
missing_ocr.add(block["src"])
|
||||||
|
derived = build_vocab_cards_for_block(
|
||||||
|
block, ocr_entry, ch, current_section_title, bi
|
||||||
|
)
|
||||||
|
all_vocab_cards.extend(derived)
|
||||||
|
out_blocks.append({
|
||||||
|
"kind": "vocab_table",
|
||||||
|
"sourceImage": block["src"],
|
||||||
|
"ocrLines": ocr_entry.get("lines", []) if ocr_entry else [],
|
||||||
|
"ocrConfidence": ocr_entry.get("confidence", 0.0) if ocr_entry else 0.0,
|
||||||
|
"cardCount": len(derived),
|
||||||
|
})
|
||||||
|
continue
|
||||||
|
|
||||||
|
if k == "exercise":
|
||||||
|
ans = answers.get(block["id"])
|
||||||
|
image_ocr_lines = []
|
||||||
|
for src in block.get("image_refs", []):
|
||||||
|
e = ocr_data.get(src)
|
||||||
|
if e is None:
|
||||||
|
missing_ocr.add(src)
|
||||||
|
continue
|
||||||
|
image_ocr_lines.extend(e.get("lines", []))
|
||||||
|
|
||||||
|
# Build the final prompt list. If we have text prompts from
|
||||||
|
# XHTML, prefer them. Otherwise, attempt to use OCR lines.
|
||||||
|
prompts = [p for p in block.get("prompts", []) if p.strip()]
|
||||||
|
extras = [e for e in block.get("extra", []) if e.strip()]
|
||||||
|
if not prompts and image_ocr_lines:
|
||||||
|
# Extract numbered lines from OCR (look for "1. ..." pattern)
|
||||||
|
for line in image_ocr_lines:
|
||||||
|
m = re.match(r"^(\d+)[.)]\s*(.+)", line.strip())
|
||||||
|
if m:
|
||||||
|
prompts.append(f"{m.group(1)}. {m.group(2)}")
|
||||||
|
|
||||||
|
# Cross-reference prompts with answers
|
||||||
|
sub = ans["subparts"] if ans else []
|
||||||
|
answer_items = []
|
||||||
|
for sp in sub:
|
||||||
|
for it in sp["items"]:
|
||||||
|
answer_items.append({
|
||||||
|
"label": sp["label"],
|
||||||
|
"number": it["number"],
|
||||||
|
"answer": it["answer"],
|
||||||
|
"alternates": it["alternates"],
|
||||||
|
})
|
||||||
|
|
||||||
|
out_blocks.append({
|
||||||
|
"kind": "exercise",
|
||||||
|
"id": block["id"],
|
||||||
|
"ansAnchor": block.get("ans_anchor", ""),
|
||||||
|
"instruction": clean_instruction(block.get("instruction", "")),
|
||||||
|
"extra": extras,
|
||||||
|
"prompts": prompts,
|
||||||
|
"ocrLines": image_ocr_lines,
|
||||||
|
"freeform": ans["freeform"] if ans else False,
|
||||||
|
"answerItems": answer_items,
|
||||||
|
"answerRaw": ans["raw"] if ans else "",
|
||||||
|
"answerSubparts": sub,
|
||||||
|
})
|
||||||
|
continue
|
||||||
|
|
||||||
|
out_blocks.append(block)
|
||||||
|
|
||||||
|
book_chapters.append({
|
||||||
|
"id": ch["id"],
|
||||||
|
"number": ch["number"],
|
||||||
|
"title": ch["title"],
|
||||||
|
"part": ch.get("part"),
|
||||||
|
"blocks": out_blocks,
|
||||||
|
})
|
||||||
|
|
||||||
|
book = {
|
||||||
|
"courseName": COURSE_NAME,
|
||||||
|
"totalChapters": len(book_chapters),
|
||||||
|
"totalExercises": sum(
|
||||||
|
1 for ch in book_chapters for b in ch["blocks"] if b["kind"] == "exercise"
|
||||||
|
),
|
||||||
|
"totalVocabTables": sum(
|
||||||
|
1 for ch in book_chapters for b in ch["blocks"] if b["kind"] == "vocab_table"
|
||||||
|
),
|
||||||
|
"totalVocabCards": len(all_vocab_cards),
|
||||||
|
"parts": parts,
|
||||||
|
"chapters": book_chapters,
|
||||||
|
}
|
||||||
|
OUT_BOOK.write_text(json.dumps(book, ensure_ascii=False))
|
||||||
|
|
||||||
|
# Vocab cards as a separate file (grouped per chapter so they can be seeded
|
||||||
|
# as CourseDecks in the existing schema).
|
||||||
|
vocab_by_chapter: dict = {}
|
||||||
|
for card in all_vocab_cards:
|
||||||
|
vocab_by_chapter.setdefault(card["chapter"], []).append(card)
|
||||||
|
OUT_VOCAB.write_text(json.dumps({
|
||||||
|
"courseName": COURSE_NAME,
|
||||||
|
"chapters": [
|
||||||
|
{
|
||||||
|
"chapter": ch_num,
|
||||||
|
"cards": cards,
|
||||||
|
}
|
||||||
|
for ch_num, cards in sorted(vocab_by_chapter.items())
|
||||||
|
],
|
||||||
|
}, ensure_ascii=False, indent=2))
|
||||||
|
|
||||||
|
# Summary
|
||||||
|
print(f"Wrote {OUT_BOOK}")
|
||||||
|
print(f"Wrote {OUT_VOCAB}")
|
||||||
|
print(f"Chapters: {book['totalChapters']}")
|
||||||
|
print(f"Exercises: {book['totalExercises']}")
|
||||||
|
print(f"Vocab tables: {book['totalVocabTables']}")
|
||||||
|
print(f"Vocab cards (auto): {book['totalVocabCards']}")
|
||||||
|
if missing_ocr:
|
||||||
|
print(f"Missing OCR for {len(missing_ocr)} images (first 5): {sorted(list(missing_ocr))[:5]}")
|
||||||
|
|
||||||
|
# Validation
|
||||||
|
total_exercises = book["totalExercises"]
|
||||||
|
exercises_with_prompts = sum(
|
||||||
|
1 for ch in book_chapters for b in ch["blocks"]
|
||||||
|
if b["kind"] == "exercise" and (b["prompts"] or b["extra"])
|
||||||
|
)
|
||||||
|
exercises_with_answers = sum(
|
||||||
|
1 for ch in book_chapters for b in ch["blocks"]
|
||||||
|
if b["kind"] == "exercise" and b["answerItems"]
|
||||||
|
)
|
||||||
|
exercises_freeform = sum(
|
||||||
|
1 for ch in book_chapters for b in ch["blocks"]
|
||||||
|
if b["kind"] == "exercise" and b["freeform"]
|
||||||
|
)
|
||||||
|
print(f"Exercises with prompts: {exercises_with_prompts}/{total_exercises}")
|
||||||
|
print(f"Exercises with answers: {exercises_with_answers}/{total_exercises}")
|
||||||
|
print(f"Freeform exercises: {exercises_freeform}")
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
merge()
|
||||||
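The forward/backward "unknown" smoothing above can be exercised in isolation. This is a minimal stand-alone sketch of the same two-pass fill; the function name `smooth_unknowns` is illustrative, not the script's actual API:

```python
def smooth_unknowns(classes: list) -> list:
    """Resolve 'unknown' labels from neighbors: forward fill, then backfill."""
    # Forward pass: carry the last known label over unknowns.
    forward, last = [], "unknown"
    for c in classes:
        if c != "unknown":
            last = c
        forward.append(last)
    # Backward pass: backfill leading unknowns from the right.
    backward, last = [""] * len(classes), "unknown"
    for i in range(len(classes) - 1, -1, -1):
        if classes[i] != "unknown":
            last = classes[i]
        backward[i] = last
    # Merge: prefer the forward label unless it is still unknown.
    return [f if f != "unknown" else b for f, b in zip(forward, backward)]

print(smooth_unknowns(["unknown", "es", "unknown", "en", "unknown"]))
```

Leading unknowns take the first known label to their right, interior and trailing unknowns take the last known label to their left, which is what lets whole OCR column runs classify cleanly from a few confident lines.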
126 Conjuga/Scripts/textbook/build_review.py Normal file
@@ -0,0 +1,126 @@
#!/usr/bin/env python3
"""Render book.json + ocr.json into a static HTML review page.

The HTML surfaces low-confidence OCR results in red, and shows the parsed
exercise prompts/answers next to the original image. Designed for rapid
visual diffing against the source book.
"""

import html
import json
from pathlib import Path

HERE = Path(__file__).resolve().parent
BOOK = HERE / "book.json"
OCR = HERE / "ocr.json"
OUT_HTML = HERE / "review.html"
EPUB_IMAGES = HERE.parents[2] / "epub_extract" / "OEBPS"


def load(p: Path) -> dict:
    return json.loads(p.read_text(encoding="utf-8"))


def esc(s: str) -> str:
    return html.escape(s or "")


def img_tag(src: str) -> str:
    full = (EPUB_IMAGES / src).resolve()
    return f'<img src="file://{full}" alt="{esc(src)}" class="src"/>'


def render() -> None:
    book = load(BOOK)
    ocr = load(OCR) if OCR.exists() else {}

    out: list = []
    out.append("""<!DOCTYPE html>
<html><head><meta charset='utf-8'><title>Book review</title>
<style>
body { font-family: -apple-system, system-ui, sans-serif; margin: 2em; max-width: 1000px; color: #222; }
h1 { color: #c44; }
h2.chapter { background: #eee; padding: 0.5em; border-left: 4px solid #c44; }
h3.heading { color: #555; }
.para { margin: 0.5em 0; }
.vocab-table { background: #fafff0; padding: 0.5em; margin: 0.5em 0; border: 1px solid #bda; border-radius: 6px; }
.ocr-line { font-family: ui-monospace, monospace; font-size: 12px; }
.lowconf { color: #c44; background: #fee; }
.exercise { background: #fff8e8; padding: 0.5em; margin: 0.75em 0; border: 1px solid #cb9; border-radius: 6px; }
.prompt { font-family: ui-monospace, monospace; font-size: 13px; margin: 2px 0; }
.answer { color: #080; font-family: ui-monospace, monospace; font-size: 13px; }
img.src { max-width: 520px; border: 1px solid #ccc; margin: 4px 0; }
.kv { color: #04a; font-weight: bold; }
summary { cursor: pointer; font-weight: bold; color: #666; }
.card-pair { font-family: ui-monospace, monospace; font-size: 12px; }
.card-es { color: #04a; }
.card-en { color: #555; }
.counts { color: #888; font-size: 12px; }
</style></head><body>""")
    out.append(f"<h1>{esc(book['courseName'])} — review</h1>")
    out.append(f"<p>{book['totalChapters']} chapters · {book['totalExercises']} exercises · {book['totalVocabTables']} vocab tables · {book['totalVocabCards']} auto-derived cards</p>")

    for ch in book["chapters"]:
        part = ch.get("part")
        part_str = f" (Part {part})" if part else ""
        out.append(f"<h2 class='chapter'>Chapter {ch['number']}: {esc(ch['title'])}{esc(part_str)}</h2>")

        for b in ch["blocks"]:
            kind = b["kind"]
            if kind == "heading":
                level = b["level"]
                out.append(f"<h{level} class='heading'>{esc(b['text'])}</h{level}>")
            elif kind == "paragraph":
                out.append(f"<p class='para'>{esc(b['text'])}</p>")
            elif kind == "key_vocab_header":
                out.append("<p class='kv'>★ Key Vocabulary</p>")
            elif kind == "vocab_table":
                src = b["sourceImage"]
                conf = b["ocrConfidence"]
                conf_class = "lowconf" if conf < 0.85 else ""
                out.append("<div class='vocab-table'>")
                out.append(f"<details><summary>vocab {esc(src)} · confidence {conf:.2f} · {b['cardCount']} card(s)</summary>")
                out.append(img_tag(src))
                out.append("<div>")
                for line in b.get("ocrLines", []):
                    out.append(f"<div class='ocr-line {conf_class}'>{esc(line)}</div>")
                out.append("</div>")
                # Show derived pairs (if any). We don't have them inline in book.json,
                # but we can recompute from ocrLines using the same function.
                out.append("</details></div>")
            elif kind == "exercise":
                out.append("<div class='exercise'>")
                out.append(f"<b>Exercise {esc(b['id'])}</b> — <i>{esc(b['instruction'])}</i>")
                if b.get("extra"):
                    for e in b["extra"]:
                        out.append(f"<div class='para'>{esc(e)}</div>")
                if b.get("ocrLines"):
                    out.append("<details><summary>OCR lines from image</summary>")
                    for line in b["ocrLines"]:
                        out.append(f"<div class='ocr-line'>{esc(line)}</div>")
                    out.append("</details>")
                if b.get("prompts"):
                    out.append("<div><b>Parsed prompts:</b></div>")
                    for p in b["prompts"]:
                        out.append(f"<div class='prompt'>• {esc(p)}</div>")
                if b.get("answerItems"):
                    out.append("<div><b>Answer key:</b></div>")
                    for a in b["answerItems"]:
                        label_str = f"{a['label']}. " if a.get("label") else ""
                        alts = ", ".join(a["alternates"])
                        alt_str = f" <span style='color:#999'>(also: {esc(alts)})</span>" if alts else ""
                        out.append(f"<div class='answer'>{esc(label_str)}{a['number']}. {esc(a['answer'])}{alt_str}</div>")
                if b.get("freeform"):
                    out.append("<div style='color:#c44'>(Freeform — answers will vary)</div>")
                for img_src in b.get("image_refs", []):
                    out.append(img_tag(img_src))
                out.append("</div>")

    out.append("</body></html>")
    OUT_HTML.write_text("\n".join(out), encoding="utf-8")
    print(f"Wrote {OUT_HTML}")


if __name__ == "__main__":
    render()
205 Conjuga/Scripts/textbook/extract_answers.py Normal file
@@ -0,0 +1,205 @@
#!/usr/bin/env python3
"""Parse ans.xhtml into structured answers.json.

Output schema:
{
  "answers": {
    "1.1": {
      "id": "1.1",
      "anchor": "ch1ans1",
      "chapter": 1,
      "subparts": [
        {"label": null, "items": [
          {"number": 1, "answer": "el", "alternates": []},
          {"number": 2, "answer": "el", "alternates": []},
          ...
        ]}
      ],
      "freeform": false,   # true if "Answers will vary"
      "raw": "..."         # raw text for fallback
    },
    "2.4": {               # multi-part exercise
      "subparts": [
        {"label": "A", "items": [...]},
        {"label": "B", "items": [...]},
        {"label": "C", "items": [...]}
      ]
    }
  }
}
"""

import json
import re
from pathlib import Path
from bs4 import BeautifulSoup

ROOT = Path(__file__).resolve().parents[3] / "epub_extract" / "OEBPS"
OUT = Path(__file__).resolve().parent / "answers.json"

ANSWER_CLASSES = {"answerq", "answerq1", "answerq2", "answerqa"}
EXERCISE_ID_RE = re.compile(r"^([0-9]+)\.([0-9]+)$")
SUBPART_LABEL_RE = re.compile(r"^([A-Z])\b")
NUMBERED_ITEM_RE = re.compile(r"(?:^|\s)(\d+)\.\s+")
FREEFORM_PATTERNS = [
    re.compile(r"answers? will vary", re.IGNORECASE),
]
OR_TOKEN = "{{OR}}"


def render_with_or(p) -> str:
    """Convert <p> to plain text, replacing 'OR' span markers with a sentinel."""
    soup = BeautifulSoup(str(p), "lxml")
    # Replace <span class="small">OR</span> with the sentinel
    for span in soup.find_all("span"):
        cls = span.get("class") or []
        if "small" in cls and span.get_text(strip=True).upper() == "OR":
            span.replace_with(f" {OR_TOKEN} ")
    # Drop pagebreak spans
    for span in soup.find_all("span", attrs={"epub:type": "pagebreak"}):
        span.decompose()
    # Drop emphasis but keep the text
    for tag in soup.find_all(["em", "i", "strong", "b"]):
        tag.unwrap()
    text = soup.get_text(separator=" ", strip=False)
    text = re.sub(r"\s+", " ", text).strip()
    return text


def split_numbered_items(text: str) -> "list[dict]":
    """Given '1. el 2. la 3. el ...' return [{'number': 1, 'answer': 'el'}, ...]."""
    # Find positions of "N." tokens
    matches = list(NUMBERED_ITEM_RE.finditer(text))
    items = []
    for i, m in enumerate(matches):
        num = int(m.group(1))
        start = m.end()
        end = matches[i + 1].start() if i + 1 < len(matches) else len(text)
        body = text[start:end].strip().rstrip(".,;")
        # Split alternates on the OR token
        parts = [p.strip() for p in body.split(OR_TOKEN) if p.strip()]
        if not parts:
            continue
        items.append({
            "number": num,
            "answer": parts[0],
            "alternates": parts[1:],
        })
    return items


def parse_subpart_label(text: str) -> "tuple[str | None, str]":
    """Try to peel a leading subpart label (A, B, C) from the text.

    Returns (label_or_None, remaining_text)."""
    # Pattern at start: a capital letter, whitespace, then a digit
    # (the whitespace run comes from <em>A</em> followed by a tab).
    m = re.match(r"^([A-Z])\s+(?=\d)", text)
    if m:
        return m.group(1), text[m.end():]
    return None, text


def parse_answer_paragraph(p, exercise_id: str) -> "list[dict]":
    """Convert one <p> into a list of subparts.

    For p.answerq, the text typically starts with the exercise id, then items.
    For p.answerqa, the text starts with a subpart label letter."""
    raw = render_with_or(p)
    # Strip the leading exercise id if present
    raw = re.sub(rf"^{re.escape(exercise_id)}\s*", "", raw)

    label, body = parse_subpart_label(raw)

    # Detect freeform
    freeform = any(pat.search(body) for pat in FREEFORM_PATTERNS)
    if freeform:
        return [{"label": label, "items": [], "freeform": True, "raw": body}]

    items = split_numbered_items(body)
    return [{"label": label, "items": items, "freeform": False, "raw": body}]


def main() -> None:
    src = ROOT / "ans.xhtml"
    soup = BeautifulSoup(src.read_text(encoding="utf-8"), "lxml")
    body = soup.find("body")

    answers: dict = {}
    current_chapter = None
    current_exercise_id: "str | None" = None

    for el in body.find_all(["h3", "p"]):
        classes = set(el.get("class") or [])

        # Chapter boundary
        if el.name == "h3" and "h3b" in classes:
            text = el.get_text(strip=True)
            m = re.search(r"Chapter\s+(\d+)", text)
            if m:
                current_chapter = int(m.group(1))
                current_exercise_id = None
            continue

        if el.name != "p" or not (classes & ANSWER_CLASSES):
            continue

        # Find the exercise-id anchor (only present on p.answerq, not on continuations)
        a = el.find("a", href=True)
        ex_link = None
        if a:
            link_text = a.get_text(strip=True)
            if EXERCISE_ID_RE.match(link_text):
                ex_link = link_text

        if ex_link:
            current_exercise_id = ex_link
            href = a.get("href", "")
            anchor_m = re.search(r"#(ch\d+ans\d+)", href + " " + (a.get("id") or ""))
            anchor = anchor_m.group(1) if anchor_m else (a.get("id") or "")
            # Use the anchor's `id` attr if it's the entry id (e.g. "ch1ans1")
            entry_id = a.get("id") or anchor

            answers[ex_link] = {
                "id": ex_link,
                "anchor": entry_id,
                "chapter": current_chapter,
                "subparts": [],
                "freeform": False,
                "raw": "",
            }
            new_subparts = parse_answer_paragraph(el, ex_link)
            answers[ex_link]["subparts"].extend(new_subparts)
            answers[ex_link]["raw"] = render_with_or(el)
            answers[ex_link]["freeform"] = any(sp["freeform"] for sp in new_subparts)
        else:
            # Continuation paragraph for the current exercise
            if current_exercise_id and current_exercise_id in answers:
                more = parse_answer_paragraph(el, current_exercise_id)
                answers[current_exercise_id]["subparts"].extend(more)
                if any(sp["freeform"] for sp in more):
                    answers[current_exercise_id]["freeform"] = True

    out = {"answers": answers}
    OUT.write_text(json.dumps(out, ensure_ascii=False, indent=2))

    total = len(answers)
    freeform = sum(1 for v in answers.values() if v["freeform"])
    multipart = sum(1 for v in answers.values() if len(v["subparts"]) > 1)
    total_items = sum(
        len(sp["items"]) for v in answers.values() for sp in v["subparts"]
    )
    with_alternates = sum(
        1 for v in answers.values()
        for sp in v["subparts"] for it in sp["items"]
        if it["alternates"]
    )
    print(f"Exercises with answers: {total}")
    print(f"  freeform: {freeform}")
    print(f"  multi-part (A/B/C): {multipart}")
    print(f"  total numbered items: {total_items}")
    print(f"  items with alternates: {with_alternates}")
    print(f"Wrote {OUT}")


if __name__ == "__main__":
    main()
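The numbered-item splitter in extract_answers.py can be checked against a synthetic answer string. This is a self-contained copy of the same regex and OR-token logic, reproduced here only for a quick stand-alone run:

```python
import re

NUMBERED_ITEM_RE = re.compile(r"(?:^|\s)(\d+)\.\s+")
OR_TOKEN = "{{OR}}"

def split_numbered_items(text: str) -> list:
    """Split '1. el 2. la {{OR}} las ...' into numbered answer dicts."""
    matches = list(NUMBERED_ITEM_RE.finditer(text))
    items = []
    for i, m in enumerate(matches):
        start = m.end()
        # Each item's body runs up to the start of the next "N." token.
        end = matches[i + 1].start() if i + 1 < len(matches) else len(text)
        body = text[start:end].strip().rstrip(".,;")
        # Alternate answers are separated by the OR sentinel.
        parts = [p.strip() for p in body.split(OR_TOKEN) if p.strip()]
        if parts:
            items.append({"number": int(m.group(1)),
                          "answer": parts[0],
                          "alternates": parts[1:]})
    return items

print(split_numbered_items("1. el 2. la {{OR}} las 3. el"))
```

Note the `(?:^|\s)` guard: it keeps decimals inside an answer (like "1.5") from being mistaken for a new item number unless they follow whitespace or start the string.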
369 Conjuga/Scripts/textbook/extract_chapters.py Normal file
@@ -0,0 +1,369 @@
#!/usr/bin/env python3
"""Parse all chapter XHTMLs + appendix into structured chapters.json.

Output schema:
{
  "chapters": [
    {
      "id": "ch1",
      "number": 1,
      "title": "Nouns, Articles, and Adjectives",
      "part": 1,            # part 1/2/3 or null
      "blocks": [           # ordered content
        {"kind": "heading", "level": 3, "text": "..."},
        {"kind": "paragraph", "text": "...", "hasItalic": false},
        {"kind": "key_vocab_header", "title": "Los colores (The colors)"},
        {"kind": "vocab_image", "src": "f0010-03.jpg"},
        {
          "kind": "exercise",
          "id": "1.1",
          "ans_anchor": "ch1ans1",
          "instruction": "Write the appropriate...",
          "image_refs": ["f0005-02.jpg"]
        },
        {"kind": "image", "src": "...", "alt": "..."}
      ]
    }
  ]
}
"""

import json
import re
from pathlib import Path
from bs4 import BeautifulSoup

ROOT = Path(__file__).resolve().parents[3] / "epub_extract" / "OEBPS"
OUT = Path(__file__).resolve().parent / "chapters.json"

# Common icon images embedded in headings — ignore when collecting content images
ICON_IMAGES = {"Common01.jpg", "Common02.jpg", "Common03.jpg", "Common04.jpg", "Common05.jpg"}

EXERCISE_ID_RE = re.compile(r"Exercise\s+([0-9]+\.[0-9]+)")
ANS_REF_RE = re.compile(r"ch(\d+)ans(\d+)")


def clean_text(el) -> str:
    """Extract plain text, flattening all inline emphasis."""
    if el is None:
        return ""
    html = str(el)
    soup = BeautifulSoup(html, "lxml")
    # First: flatten nested emphasis so we don't emit overlapping markers.
    # For <strong><em>X</em></strong>, drop the inner em (the bold wrapping
    # already carries the emphasis visually). Same for <em><strong>...</strong></em>.
    for tag in soup.find_all(["strong", "b"]):
        for inner in tag.find_all(["em", "i"]):
            inner.unwrap()
    for tag in soup.find_all(["em", "i"]):
        for inner in tag.find_all(["strong", "b"]):
            inner.unwrap()
    # Drop ALL inline emphasis. The source has nested/sibling em/strong
    # patterns that CommonMark can't reliably parse, causing markers to leak
    # into the UI. Plain text renders cleanly everywhere.
    for tag in soup.find_all(["em", "i", "strong", "b"]):
        tag.unwrap()
    # Drop pagebreak spans
    for tag in soup.find_all("span", attrs={"epub:type": "pagebreak"}):
        tag.decompose()
    # Replace <br/> with newline
    for br in soup.find_all("br"):
        br.replace_with("\n")
    # Use a separator so adjacent inline tags don't concatenate without spaces
    # (e.g. "<strong><em>Ir</em></strong> and" would otherwise become "Irand").
    text = soup.get_text(separator=" ", strip=False)
    # Collapse runs of whitespace first.
    text = re.sub(r"\s+", " ", text).strip()
    # Strip any stray asterisks that sneak through (e.g. the author's literal *).
    text = text.replace("*", "")
    # De-space punctuation
    text = re.sub(r"\s+([,.;:!?])", r"\1", text)
    # Tighten brackets that picked up separator spaces: "( foo )" -> "(foo)"
    text = re.sub(r"([(\[])\s+", r"\1", text)
    text = re.sub(r"\s+([)\]])", r"\1", text)
    # Collapse any double spaces
    text = re.sub(r" +", " ", text).strip()
    return text


def is_exercise_header(h) -> bool:
    """Heading with an <a href='ans.xhtml#...'>Exercise N.N</a> link.

    Chapters 1-16 use h3.h3k; chapters 17+ use h4.h4."""
    if h.name not in ("h3", "h4"):
        return False
    a = h.find("a", href=True)
    return bool(a and "ans.xhtml" in a["href"])


def is_key_vocab_header(h) -> bool:
    """Heading with 'Key Vocabulary' text (no anchor link to answers)."""
    if h.name not in ("h3", "h4"):
        return False
    text = h.get_text(strip=True)
    return "Key Vocabulary" in text and not h.find("a", href=lambda v: v and "ans.xhtml" in v)


def extract_image_srcs(parent) -> list:
    """Return the list of image src attributes, skipping icon images."""
    srcs = []
    for img in parent.find_all("img"):
        src = img.get("src", "")
        if not src or Path(src).name in ICON_IMAGES:
            continue
        srcs.append(src)
    return srcs


def parse_chapter(path: Path) -> "dict | None":
    """Parse one chapter file into structured blocks."""
    html = path.read_text(encoding="utf-8")
    soup = BeautifulSoup(html, "lxml")
    body = soup.find("body")
    if body is None:
        return None

    # Chapter number + title
    number = None
    title = ""
    h2s = body.find_all("h2")
    for h2 in h2s:
        classes = h2.get("class") or []
        # Use a separator so consecutive inline tags don't concatenate
        # (e.g. "<strong><em>Ir</em></strong> and the Future" → "Ir and the Future")
        text_with_sep = re.sub(r"\s+", " ", h2.get_text(" ", strip=True))
        # Strip spaces that were inserted before punctuation
        text_with_sep = re.sub(r"\s+([,.;:!?])", r"\1", text_with_sep).strip()
        if "h2c" in classes and text_with_sep.isdigit():
            number = int(text_with_sep)
        # Chapters 1–16 use h2c1; chapters 17+ use h2-c
        elif ("h2c1" in classes or "h2-c" in classes) and not title:
            title = text_with_sep
    if number is None:
        # Try the id on the chapter header (ch1 → 1)
        for h2 in h2s:
            id_ = h2.get("id", "")
            m = re.match(r"ch(\d+)", id_)
            if m:
                number = int(m.group(1))
                break

    chapter_id = path.stem  # ch1, ch2, ...

    # Walk section content in document order
    section = body.find("section") or body
    blocks: list = []
    pending_instruction = None  # the exercise block still awaiting its instruction line

    for el in section.descendants:
        if el.name is None:
            continue

        classes = el.get("class") or []

        # Skip nested tags already captured via parent processing.
        # We operate only on direct h2/h3/h4/h5/p elements.
        if el.name not in ("h2", "h3", "h4", "h5", "p"):
            continue

        # Exercise header detection (h3 in ch1-16, h4 in ch17+)
        if is_exercise_header(el):
            a = el.find("a", href=True)
            href = a["href"] if a else ""
            m = EXERCISE_ID_RE.search(el.get_text())
            ex_id = m.group(1) if m else ""
            anchor_m = ANS_REF_RE.search(href)
            ans_anchor = anchor_m.group(0) if anchor_m else ""
            blocks.append({
                "kind": "exercise",
                "id": ex_id,
                "ans_anchor": ans_anchor,
                "instruction": "",
                "image_refs": [],
                "prompts": [],
            })
            pending_instruction = blocks[-1]
            continue

        # Key Vocabulary header
        if is_key_vocab_header(el):
            blocks.append({"kind": "key_vocab_header", "title": "Key Vocabulary"})
            pending_instruction = None
            continue

        # Other headings
        if el.name in ("h2", "h3", "h4", "h5"):
            if el.name == "h2":
                # Skip the chapter-number/chapter-title h2s we already captured
                continue
            txt = clean_text(el)
            if txt:
                blocks.append({
                    "kind": "heading",
                    "level": int(el.name[1]),
                    "text": txt,
                })
            pending_instruction = None
            continue

        # Paragraphs
        if el.name == "p":
            imgs = extract_image_srcs(el)
            text = clean_text(el)
            p_classes = set(classes)

            # Skip pure blank-line classes ("nump" = underscore lines under numbered prompts)
            if p_classes & {"nump", "numpa"} and not text:
                continue

            # Exercise prompt: <p class="number">1. Prompt text</p>
            # Also number1, number2 (continuation numbering), numbera, numbert
            if pending_instruction is not None and p_classes & {"number", "number1", "number2", "numbera", "numbert"}:
                if text:
                    pending_instruction["prompts"].append(text)
                continue

            # Image container for a pending exercise
            if pending_instruction is not None and imgs and not text:
                pending_instruction["image_refs"].extend(imgs)
                continue

            # Instruction line right after the exercise header
            if pending_instruction is not None and text and not imgs and not pending_instruction["instruction"]:
                pending_instruction["instruction"] = text
                continue

            # While in the pending-exercise state, extra text paragraphs are word
            # banks / context ("from the following list:" etc.) — keep pending alive.
            if pending_instruction is not None and text and not imgs:
                pending_instruction.setdefault("extra", []).append(text)
                continue

            # Paragraphs that contain an image belong to vocab/key-vocab callouts
            if imgs and not text:
                for src in imgs:
                    blocks.append({"kind": "vocab_image", "src": src})
                continue

            # Mixed paragraph: image with caption
            if imgs and text:
                for src in imgs:
                    blocks.append({"kind": "vocab_image", "src": src})
                blocks.append({"kind": "paragraph", "text": text})
                continue

            # Plain paragraph — outside any exercise
            if text:
                blocks.append({"kind": "paragraph", "text": text})

    return {
        "id": chapter_id,
        "number": number,
        "title": title,
        "blocks": blocks,
    }


def assign_parts(chapters: list, part_files: "dict[int, list[int]]") -> None:
    """Annotate chapters with a part number based on TOC membership."""
    for part_num, chapter_nums in part_files.items():
        for ch in chapters:
            if ch["number"] in chapter_nums:
                ch["part"] = part_num
    for ch in chapters:
        ch.setdefault("part", None)


def read_part_memberships() -> "dict[int, list[int]]":
    """Derive part→chapter grouping from the OPF spine order."""
|
||||||
|
opf = next(ROOT.glob("*.opf"), None)
|
||||||
|
if opf is None:
|
||||||
|
return {}
|
||||||
|
soup = BeautifulSoup(opf.read_text(encoding="utf-8"), "xml")
|
||||||
|
memberships: dict = {}
|
||||||
|
current_part: "int | None" = None
|
||||||
|
for item in soup.find_all("item"):
|
||||||
|
href = item.get("href", "")
|
||||||
|
m_part = re.match(r"part(\d+)\.xhtml", href)
|
||||||
|
m_ch = re.match(r"ch(\d+)\.xhtml", href)
|
||||||
|
if m_part:
|
||||||
|
current_part = int(m_part.group(1))
|
||||||
|
memberships.setdefault(current_part, [])
|
||||||
|
elif m_ch and current_part is not None:
|
||||||
|
memberships[current_part].append(int(m_ch.group(1)))
|
||||||
|
# Manifest order tends to match spine order for this book; verify via spine just in case
|
||||||
|
spine = soup.find("spine")
|
||||||
|
if spine is not None:
|
||||||
|
order = []
|
||||||
|
for ref in spine.find_all("itemref"):
|
||||||
|
idref = ref.get("idref")
|
||||||
|
item = soup.find("item", attrs={"id": idref})
|
||||||
|
if item is not None:
|
||||||
|
order.append(item.get("href", ""))
|
||||||
|
# Rebuild from spine order
|
||||||
|
memberships = {}
|
||||||
|
current_part = None
|
||||||
|
for href in order:
|
||||||
|
m_part = re.match(r"part(\d+)\.xhtml", href)
|
||||||
|
m_ch = re.match(r"ch(\d+)\.xhtml", href)
|
||||||
|
if m_part:
|
||||||
|
current_part = int(m_part.group(1))
|
||||||
|
memberships.setdefault(current_part, [])
|
||||||
|
elif m_ch and current_part is not None:
|
||||||
|
memberships[current_part].append(int(m_ch.group(1)))
|
||||||
|
return memberships
|
||||||
|
|
||||||
|
|
||||||
|
def main() -> None:
|
||||||
|
chapter_files = sorted(
|
||||||
|
ROOT.glob("ch*.xhtml"),
|
||||||
|
key=lambda p: int(re.match(r"ch(\d+)", p.stem).group(1))
|
||||||
|
)
|
||||||
|
chapters = []
|
||||||
|
for path in chapter_files:
|
||||||
|
ch = parse_chapter(path)
|
||||||
|
if ch:
|
||||||
|
chapters.append(ch)
|
||||||
|
|
||||||
|
part_memberships = read_part_memberships()
|
||||||
|
assign_parts(chapters, part_memberships)
|
||||||
|
|
||||||
|
out = {
|
||||||
|
"chapters": chapters,
|
||||||
|
"part_memberships": part_memberships,
|
||||||
|
}
|
||||||
|
OUT.write_text(json.dumps(out, ensure_ascii=False, indent=2))
|
||||||
|
|
||||||
|
# Summary
|
||||||
|
ex_total = sum(1 for ch in chapters for b in ch["blocks"] if b["kind"] == "exercise")
|
||||||
|
ex_with_prompts = sum(
|
||||||
|
1 for ch in chapters for b in ch["blocks"]
|
||||||
|
if b["kind"] == "exercise" and b["prompts"]
|
||||||
|
)
|
||||||
|
ex_with_images = sum(
|
||||||
|
1 for ch in chapters for b in ch["blocks"]
|
||||||
|
if b["kind"] == "exercise" and b["image_refs"]
|
||||||
|
)
|
||||||
|
ex_empty = sum(
|
||||||
|
1 for ch in chapters for b in ch["blocks"]
|
||||||
|
if b["kind"] == "exercise" and not b["prompts"] and not b["image_refs"]
|
||||||
|
)
|
||||||
|
para_total = sum(1 for ch in chapters for b in ch["blocks"] if b["kind"] == "paragraph")
|
||||||
|
vocab_img_total = sum(1 for ch in chapters for b in ch["blocks"] if b["kind"] == "vocab_image")
|
||||||
|
print(f"Chapters: {len(chapters)}")
|
||||||
|
print(f"Exercises total: {ex_total}")
|
||||||
|
print(f" with text prompts: {ex_with_prompts}")
|
||||||
|
print(f" with image prompts: {ex_with_images}")
|
||||||
|
print(f" empty: {ex_empty}")
|
||||||
|
print(f"Paragraphs: {para_total}")
|
||||||
|
print(f"Vocab images: {vocab_img_total}")
|
||||||
|
print(f"Parts: {part_memberships}")
|
||||||
|
print(f"Wrote {OUT}")
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
main()
|
||||||
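The spine walk in `read_part_memberships` boils down to a small fold over hrefs: a `partN.xhtml` entry opens a part, and every following `chN.xhtml` joins it. A minimal standalone sketch of that grouping logic (the href list here is made up, not the real OPF):

```python
import re

def group_by_part(hrefs):
    # Walk hrefs in spine order; part files open a bucket,
    # chapter files fall into the most recently opened bucket.
    memberships, current = {}, None
    for href in hrefs:
        m_part = re.match(r"part(\d+)\.xhtml", href)
        m_ch = re.match(r"ch(\d+)\.xhtml", href)
        if m_part:
            current = int(m_part.group(1))
            memberships.setdefault(current, [])
        elif m_ch and current is not None:
            memberships[current].append(int(m_ch.group(1)))
    return memberships

spine = ["cover.xhtml", "part1.xhtml", "ch1.xhtml", "ch2.xhtml",
         "part2.xhtml", "ch3.xhtml"]
print(group_by_part(spine))  # {1: [1, 2], 2: [3]}
```

Chapters that appear before any part file (front matter) are simply skipped, which matches the `current_part is not None` guard in the script.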
94
Conjuga/Scripts/textbook/extract_pdf_text.py
Normal file
@@ -0,0 +1,94 @@
#!/usr/bin/env python3
"""Extract clean text from the PDF source and map each PDF page to the
book's printed page number.

Output: pdf_text.json
{
  "pdfPageCount": 806,
  "bookPages": {
    "3": { "text": "...", "pdfIndex": 29 },
    "4": { ... },
    ...
  },
  "unmapped": [list of pdfIndex values with no detectable book page number]
}
"""

import json
import re
from pathlib import Path
import pypdf

HERE = Path(__file__).resolve().parent
PDF = next(
    Path(__file__).resolve().parents[3].glob("Complete Spanish Step-By-Step*.pdf"),
    None,
)
OUT = HERE / "pdf_text.json"

ROMAN_RE = re.compile(r"^[ivxlcdmIVXLCDM]+$")
# Match a page number on its own line at top/bottom of the page.
# The book uses Arabic numerals for main chapters (e.g., "3") and Roman for front matter.
PAGE_NUM_LINE_RE = re.compile(r"^\s*(\d{1,4})\s*$", re.MULTILINE)


def detect_book_page(text: str) -> "int | None":
    """Find the printed page number from standalone page-number lines at the
    top or bottom of a page."""
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    # Check first 2 lines and last 2 lines
    for candidate in lines[:2] + lines[-2:]:
        m = re.match(r"^(\d{1,4})$", candidate)
        if m:
            return int(m.group(1))
    return None


def main() -> None:
    if PDF is None:
        print("No PDF found in project root")
        return

    print(f"Reading {PDF.name}")
    reader = pypdf.PdfReader(str(PDF))
    pages = reader.pages
    print(f"PDF has {len(pages)} pages")

    by_book_page: dict = {}
    unmapped: list = []
    last_seen: "int | None" = None
    missed_count = 0

    for i, page in enumerate(pages):
        text = page.extract_text() or ""
        book_page = detect_book_page(text)

        if book_page is None:
            # Carry forward sequence: if we saw page N last, assume N+1.
            if last_seen is not None:
                book_page = last_seen + 1
                missed_count += 1
            else:
                unmapped.append(i)
                continue
        last_seen = book_page
        # Strip the detected page number from text to clean the output
        cleaned = re.sub(r"(?m)^\s*\d{1,4}\s*$", "", text).strip()
        by_book_page[str(book_page)] = {
            "text": cleaned,
            "pdfIndex": i,
        }

    out = {
        "pdfPageCount": len(pages),
        "bookPages": by_book_page,
        "unmapped": unmapped,
        "inferredPages": missed_count,
    }
    OUT.write_text(json.dumps(out, ensure_ascii=False))
    print(f"Mapped {len(by_book_page)} book pages; {missed_count} inferred; {len(unmapped)} unmapped")
    print(f"Wrote {OUT}")


if __name__ == "__main__":
    main()
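The folio detection above only trusts a bare run of digits in the first or last two non-blank lines of a page; anything embedded in prose is ignored. A self-contained sketch of just that check, with toy page text:

```python
import re

def detect_book_page(text):
    # A standalone 1-4 digit line near the top or bottom is taken as the folio.
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    for candidate in lines[:2] + lines[-2:]:
        m = re.match(r"^(\d{1,4})$", candidate)
        if m:
            return int(m.group(1))
    return None

print(detect_book_page("12\nEl presente\nbody text..."))  # 12
print(detect_book_page("Chapter 3 begins on page 41."))   # None
```

Returning `None` is what triggers the script's carry-forward path (`last_seen + 1`), so pages with a cropped or unextractable folio still land on a plausible book page.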
252
Conjuga/Scripts/textbook/fix_vocab.py
Normal file
@@ -0,0 +1,252 @@
#!/usr/bin/env python3
"""Apply high-confidence auto-fixes from vocab_validation.json to vocab_cards.json.

Auto-fix rules (conservative):
1. If a flagged word has exactly one suggestion AND that suggestion differs by
   <= 2 characters AND has the same starting letter (high-confidence character swap).
2. If a card is detected as reversed (Spanish on EN side, English on ES side),
   swap front/back.

Cards that aren't auto-fixable end up in manual_review.json.
"""

import json
import re
import unicodedata
from pathlib import Path

HERE = Path(__file__).resolve().parent
VOCAB = HERE / "vocab_cards.json"
VALIDATION = HERE / "vocab_validation.json"
OUT_VOCAB = HERE / "vocab_cards.json"
OUT_REVIEW = HERE / "manual_review.json"
OUT_QUARANTINE = HERE / "quarantined_cards.json"


def _strip_accents(s: str) -> str:
    return "".join(c for c in unicodedata.normalize("NFD", s) if unicodedata.category(c) != "Mn")


def _levenshtein(a: str, b: str) -> int:
    if a == b: return 0
    if not a: return len(b)
    if not b: return len(a)
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost))
        prev = curr
    return prev[-1]


SPANISH_ACCENT_RE = re.compile(r"[áéíóúñüÁÉÍÓÚÑÜ¿¡]")
SPANISH_ARTICLES = {"el", "la", "los", "las", "un", "una", "unos", "unas"}
ENGLISH_STARTERS = {"the", "a", "an", "to", "my", "his", "her", "our", "their"}


def language_score(s: str) -> "tuple[int, int]":
    """Return (es_score, en_score) for a string."""
    es = 0
    en = 0
    if SPANISH_ACCENT_RE.search(s):
        es += 3
    words = s.lower().split()
    if not words:
        return (es, en)
    first = words[0].strip(",.;:")
    if first in SPANISH_ARTICLES:
        es += 2
    if first in ENGLISH_STARTERS:
        en += 2
    # Spanish-likely endings on later words
    for w in words:
        w = w.strip(",.;:")
        if not w: continue
        if w.endswith(("ción", "sión", "dad", "tud")):
            es += 1
        if w.endswith(("ing", "tion", "ness", "ment", "able", "ly")):
            en += 1
    return (es, en)


def is_reversed(front: str, back: str) -> bool:
    """True when front looks like English and back looks like Spanish (i.e. swapped)."""
    fes, fen = language_score(front)
    bes, ben = language_score(back)
    # Front English-leaning AND back Spanish-leaning
    return fen > fes and bes > ben


def best_replacement(word: str, suggestions: list) -> "str | None":
    """Pick the one safe correction, or None to leave it alone."""
    if not suggestions:
        return None
    # Prefer suggestions that share the same first letter
    same_initial = [s for s in suggestions if s and word and s[0].lower() == word[0].lower()]
    candidates = same_initial or suggestions
    # Single best: short edit distance
    best = None
    best_d = 99
    for s in candidates:
        d = _levenshtein(word.lower(), s.lower())
        # Don't apply if the "fix" changes too much
        if d == 0:
            continue
        if d > 2:
            continue
        if d < best_d:
            best = s
            best_d = d
    return best


def side_language_match(text: str, expected_side: str) -> bool:
    """Return True when `text` looks like the expected language (es/en).
    Guards against applying Spanish spell-fix to English words on a mis-paired card.
    """
    es, en = language_score(text)
    if expected_side == "es":
        return es > en  # require clear Spanish signal
    if expected_side == "en":
        return en >= es  # allow equal when text has no strong signal (common for English)
    return False


def apply_word_fixes(text: str, bad_words: list, expected_side: str) -> "tuple[str, list]":
    """Apply word-level corrections inside a string. Skips fixes entirely when
    the side's actual language doesn't match the dictionary used, to avoid
    corrupting mis-paired cards."""
    if not side_language_match(text, expected_side):
        return (text, [])

    new_text = text
    applied = []
    for bw in bad_words:
        word = bw["word"]
        sugg = bw["suggestions"]
        replacement = best_replacement(word, sugg)
        if replacement is None:
            continue
        # Match standalone word including the (possibly-omitted) trailing period:
        # `Uds` in the text should be replaced with `Uds.` even when adjacent to `.`.
        escaped = re.escape(word)
        # Allow an optional existing period that we'd otherwise duplicate.
        pattern = re.compile(rf"(?<![A-Za-zÁ-ú]){escaped}\.?(?![A-Za-zÁ-ú])")
        if pattern.search(new_text):
            new_text = pattern.sub(replacement, new_text, count=1)
            applied.append({"from": word, "to": replacement})
    return (new_text, applied)


def main() -> None:
    vocab_data = json.loads(VOCAB.read_text(encoding="utf-8"))
    val_data = json.loads(VALIDATION.read_text(encoding="utf-8"))

    # Index validation by (chapter, front, back, sourceImage) for lookup
    val_index: dict = {}
    for f in val_data["flags"]:
        key = (f["chapter"], f["front"], f["back"], f["sourceImage"])
        val_index[key] = f

    # Walk the cards in place
    auto_fixed_word = 0
    auto_swapped = 0
    quarantined = 0
    manual_review_cards = []
    quarantined_cards = []

    for ch in vocab_data["chapters"]:
        kept_cards = []
        for card in ch["cards"]:
            key = (ch["chapter"], card["front"], card["back"], card.get("sourceImage", ""))
            flag = val_index.get(key)

            # 1) Reversal swap (apply even when not flagged)
            if is_reversed(card["front"], card["back"]):
                card["front"], card["back"] = card["back"], card["front"]
                auto_swapped += 1
                # Re-key for any further validation lookup (no-op here)

            if flag is None:
                kept_cards.append(card)
                continue

            # Quarantine only clear mis-pairs: both sides EXPLICITLY the wrong
            # language (both Spanish or both English). "unknown" sides stay —
            # the bounding-box pipeline already handled orientation correctly
            # and many valid pairs lack the article/accent markers we classify on.
            fes, fen = language_score(card["front"])
            bes, ben = language_score(card["back"])
            front_lang = "es" if fes > fen else ("en" if fen > fes else "unknown")
            back_lang = "es" if bes > ben else ("en" if ben > bes else "unknown")
            bothSameLang = (front_lang == "es" and back_lang == "es") or (front_lang == "en" and back_lang == "en")
            reversed_pair = front_lang == "en" and back_lang == "es"
            if bothSameLang or reversed_pair:
                quarantined_cards.append({
                    "chapter": ch["chapter"],
                    "front": card["front"],
                    "back": card["back"],
                    "sourceImage": card.get("sourceImage", ""),
                    "reason": f"language-mismatch front={front_lang} back={back_lang}",
                })
                quarantined += 1
                continue

            # 2) Word-level fixes (language-aware)
            new_front, applied_front = apply_word_fixes(card["front"], flag["badFront"], "es")
            new_back, applied_back = apply_word_fixes(card["back"], flag["badBack"], "en")
            card["front"] = new_front
            card["back"] = new_back
            auto_fixed_word += len(applied_front) + len(applied_back)

            # If after auto-fix there are STILL flagged words with no
            # confident replacement, flag for manual review.
            unresolved_front = [
                bw for bw in flag["badFront"]
                if not any(a["from"] == bw["word"] for a in applied_front)
                and best_replacement(bw["word"], bw["suggestions"]) is None
            ]
            unresolved_back = [
                bw for bw in flag["badBack"]
                if not any(a["from"] == bw["word"] for a in applied_back)
                and best_replacement(bw["word"], bw["suggestions"]) is None
            ]
            if unresolved_front or unresolved_back:
                manual_review_cards.append({
                    "chapter": ch["chapter"],
                    "front": card["front"],
                    "back": card["back"],
                    "sourceImage": card.get("sourceImage", ""),
                    "unresolvedFront": unresolved_front,
                    "unresolvedBack": unresolved_back,
                })
            kept_cards.append(card)

        ch["cards"] = kept_cards

    OUT_VOCAB.write_text(json.dumps(vocab_data, ensure_ascii=False, indent=2))
    OUT_REVIEW.write_text(json.dumps({
        "totalManualReview": len(manual_review_cards),
        "cards": manual_review_cards,
    }, ensure_ascii=False, indent=2))

    OUT_QUARANTINE.write_text(json.dumps({
        "totalQuarantined": len(quarantined_cards),
        "cards": quarantined_cards,
    }, ensure_ascii=False, indent=2))

    total_cards = sum(len(c["cards"]) for c in vocab_data["chapters"])
    print(f"Active cards (after quarantine): {total_cards}")
    print(f"Auto-swapped (reversed): {auto_swapped}")
    print(f"Auto-fixed words: {auto_fixed_word}")
    print(f"Quarantined (mis-paired): {quarantined}")
    print(f"Cards needing manual review: {len(manual_review_cards)}")
    print(f"Wrote {OUT_VOCAB}")
    print(f"Wrote {OUT_REVIEW}")
    print(f"Wrote {OUT_QUARANTINE}")


if __name__ == "__main__":
    main()
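The "safe correction" rule in fix_vocab.py (same first letter preferred, edit distance between 1 and 2) is easy to exercise in isolation. A standalone sketch with a made-up misspelling ("entonses" is an invented OCR error, not from the real validation data):

```python
def _levenshtein(a, b):
    # Standard single-row dynamic-programming edit distance.
    if a == b: return 0
    if not a: return len(b)
    if not b: return len(a)
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost))
        prev = curr
    return prev[-1]

def best_replacement(word, suggestions):
    # Prefer same-initial suggestions; accept only distance 1-2, closest wins.
    same_initial = [s for s in suggestions if s and word and s[0].lower() == word[0].lower()]
    candidates = same_initial or suggestions
    best, best_d = None, 99
    for s in candidates:
        d = _levenshtein(word.lower(), s.lower())
        if 0 < d <= 2 and d < best_d:
            best, best_d = s, d
    return best

print(best_replacement("entonses", ["entonces", "intenso"]))  # entonces
print(best_replacement("casa", ["perro"]))                    # None (too far)
```

Because distance-0 candidates are skipped and anything beyond distance 2 is rejected, the function either returns one near-identical spelling or nothing, which is what keeps the auto-fix pass conservative.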
147
Conjuga/Scripts/textbook/integrate_repaired.py
Normal file
@@ -0,0 +1,147 @@
#!/usr/bin/env python3
"""Merge repaired_cards.json into vocab_cards.json.

Rules:
1. New pairs are added to their chapter's deck if they don't duplicate an existing pair.
2. Duplicate detection uses normalize(front)+normalize(back).
3. Pairs whose back side starts with a Spanish-article or front side starts
   with an English article are dropped (pairer got orientation wrong).
4. Emits integrate_report.json with counts.
"""

import json
import re
import unicodedata
from pathlib import Path

HERE = Path(__file__).resolve().parent
VOCAB = HERE / "vocab_cards.json"
REPAIRED = HERE / "repaired_cards.json"
QUARANTINED = HERE / "quarantined_cards.json"
OUT = HERE / "vocab_cards.json"
REPORT = HERE / "integrate_report.json"


def _strip_accents(s: str) -> str:
    return "".join(c for c in unicodedata.normalize("NFD", s) if unicodedata.category(c) != "Mn")


def norm(s: str) -> str:
    return _strip_accents(s.lower()).strip()


SPANISH_ACCENT_RE = re.compile(r"[áéíóúñüÁÉÍÓÚÑÜ¿¡]")
SPANISH_ARTICLES = {"el", "la", "los", "las", "un", "una", "unos", "unas"}
ENGLISH_STARTERS = {"the", "a", "an", "to", "my", "his", "her", "our", "their"}


def looks_swapped(front: str, back: str) -> bool:
    """True if front looks English and back looks Spanish (pair should be swapped)."""
    fl = front.lower().split()
    bl = back.lower().split()
    if not fl or not bl:
        return False
    f_first = fl[0].strip(",.;:")
    b_first = bl[0].strip(",.;:")
    front_is_en = f_first in ENGLISH_STARTERS
    back_is_es = (
        SPANISH_ACCENT_RE.search(back) is not None
        or b_first in SPANISH_ARTICLES
    )
    return front_is_en and back_is_es


def looks_good(pair: dict) -> bool:
    """Basic sanity filter on a repaired pair before it enters the deck."""
    es = pair["es"].strip()
    en = pair["en"].strip()
    if not es or not en: return False
    if len(es) < 2 or len(en) < 2: return False
    # Drop if both sides obviously same language (neither has clear orientation)
    es_has_accent = SPANISH_ACCENT_RE.search(es) is not None
    en_has_accent = SPANISH_ACCENT_RE.search(en) is not None
    if en_has_accent and not es_has_accent:
        # The "en" side has accents — likely swapped
        return False
    return True


def main() -> None:
    vocab = json.loads(VOCAB.read_text(encoding="utf-8"))
    repaired = json.loads(REPAIRED.read_text(encoding="utf-8"))
    quarantined = json.loads(QUARANTINED.read_text(encoding="utf-8"))

    # Map image → chapter (from the quarantine list — all images here belong to the
    # chapter they were quarantined from).
    image_chapter: dict = {}
    for c in quarantined["cards"]:
        image_chapter[c["sourceImage"]] = c["chapter"]

    # Build existing key set
    existing_keys = set()
    chapter_map: dict = {c["chapter"]: c for c in vocab["chapters"]}
    for c in vocab["chapters"]:
        for card in c["cards"]:
            existing_keys.add((c["chapter"], norm(card["front"]), norm(card["back"])))

    added_per_image: dict = {}
    dropped_swapped = 0
    dropped_sanity = 0
    dropped_dup = 0

    for image_name, data in repaired["byImage"].items():
        ch_num = image_chapter.get(image_name)
        if ch_num is None:
            # Image not in quarantine list (shouldn't happen, but bail)
            continue
        deck = chapter_map.setdefault(ch_num, {"chapter": ch_num, "cards": []})
        added = 0
        for p in data.get("pairs", []):
            es = p["es"].strip()
            en = p["en"].strip()
            if looks_swapped(es, en):
                es, en = en, es
            pair = {"es": es, "en": en}
            if not looks_good(pair):
                dropped_sanity += 1
                continue
            key = (ch_num, norm(pair["es"]), norm(pair["en"]))
            if key in existing_keys:
                dropped_dup += 1
                continue
            existing_keys.add(key)
            card = {
                "front": pair["es"],
                "back": pair["en"],
                "chapter": ch_num,
                "chapterTitle": "",
                "section": "",
                "sourceImage": image_name,
            }
            deck["cards"].append(card)
            added += 1
        if added:
            added_per_image[image_name] = added

    # If any new chapter was created, ensure ordered insertion
    vocab["chapters"] = sorted(chapter_map.values(), key=lambda c: c["chapter"])
    OUT.write_text(json.dumps(vocab, ensure_ascii=False, indent=2))

    total_added = sum(added_per_image.values())
    report = {
        "totalRepairedInput": repaired["totalPairs"],
        "added": total_added,
        "dropped_duplicate": dropped_dup,
        "dropped_sanity": dropped_sanity,
        "addedPerImage": added_per_image,
    }
    REPORT.write_text(json.dumps(report, ensure_ascii=False, indent=2))
    print(f"Repaired pairs in: {repaired['totalPairs']}")
    print(f"Added to deck: {total_added}")
    print(f"Dropped as duplicate: {dropped_dup}")
    print(f"Dropped as swapped/bad: {dropped_sanity}")
    print(f"Wrote {OUT}")


if __name__ == "__main__":
    main()
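Duplicate detection in integrate_repaired.py hinges on `norm()`: lowercase, strip combining accents via NFD decomposition, trim whitespace. That makes "Teléfono" and "telefono " collide on the same key, which is exactly what the dedup set needs. A minimal standalone check:

```python
import unicodedata

def _strip_accents(s):
    # NFD splits "é" into "e" + combining acute; drop the combining marks (Mn).
    return "".join(c for c in unicodedata.normalize("NFD", s)
                   if unicodedata.category(c) != "Mn")

def norm(s):
    return _strip_accents(s.lower()).strip()

print(norm("Teléfono"))                      # telefono
print(norm("Teléfono") == norm("telefono ")) # True
```

Note this is lossy by design: "año" and "ano" also collide, so it is only safe as a duplicate key, never as the stored card text (the scripts store the original spelling and only compare on the normalized form).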
435
Conjuga/Scripts/textbook/merge_pdf_into_book.py
Normal file
@@ -0,0 +1,435 @@
#!/usr/bin/env python3
"""Second-pass extractor: use PDF OCR (from ocr_pdf.swift) as a supplementary
source of clean text, then re-build book.json with PDF-derived content where it
improves on the EPUB's image-based extraction.

Inputs:
  chapters.json — EPUB structural extraction (narrative text + exercise prompts + image refs)
  answers.json  — EPUB answer key
  ocr.json      — EPUB image OCR (first pass)
  pdf_ocr.json  — PDF page-level OCR (this pass, higher DPI + cleaner)

Outputs:
  book.json        — merged book used by the app
  vocab_cards.json — derived vocabulary flashcards
"""

import json
import re
import sys
from pathlib import Path

HERE = Path(__file__).resolve().parent
sys.path.insert(0, str(HERE))
from build_book import (  # reuse the helpers defined in build_book.py
    COURSE_NAME,
    build_vocab_cards_for_block,
    clean_instruction,
    classify_line,
    load,
)

CHAPTERS_JSON = HERE / "chapters.json"
ANSWERS_JSON = HERE / "answers.json"
OCR_JSON = HERE / "ocr.json"
PDF_OCR_JSON = HERE / "pdf_ocr.json"
PAIRED_VOCAB_JSON = HERE / "paired_vocab.json"  # bounding-box pairs (preferred)
OUT_BOOK = HERE / "book.json"
OUT_VOCAB = HERE / "vocab_cards.json"

IMAGE_NAME_RE = re.compile(r"^f(\d{4})-(\d{2})\.jpg$")


def extract_book_page(image_src: str) -> "int | None":
    m = IMAGE_NAME_RE.match(image_src)
    return int(m.group(1)) if m else None


def build_pdf_page_index(pdf_ocr: dict) -> "dict[int, dict]":
    """Map bookPage → {lines, confidence, pdfIndex}.

    Strategy: use chapter-start alignments as anchors. For each chapter N,
    anchor[N] = (pdf_idx_where_chapter_starts, book_page_where_chapter_starts).
    Between anchors we interpolate page-by-page (pages run sequentially within
    a chapter in this textbook's layout).
    """
    pages: "dict[int, dict]" = {}
    sorted_keys = sorted(pdf_ocr.keys(), key=lambda k: int(k))

    # --- Detect chapter starts in the PDF OCR ---
    pdf_ch_start: "dict[int, int]" = {}
    for k in sorted_keys:
        entry = pdf_ocr[k]
        lines = entry.get("lines", [])
        if len(lines) < 2:
            continue
        first = lines[0].strip()
        second = lines[1].strip()
        if first.isdigit() and 1 <= int(first) <= 30 and len(second) > 5 and second[0:1].isupper():
            ch = int(first)
            if ch not in pdf_ch_start:
                pdf_ch_start[ch] = int(k)

    # --- Load EPUB's authoritative book-page starts ---
    import re as _re
    from bs4 import BeautifulSoup as _BS
    epub_root = HERE.parents[2] / "epub_extract" / "OEBPS"
    book_ch_start: "dict[int, int]" = {}
    for ch in sorted(pdf_ch_start.keys()):
        p = epub_root / f"ch{ch}.xhtml"
        if not p.exists():
            continue
        soup = _BS(p.read_text(encoding="utf-8"), "lxml")
        for span in soup.find_all(True):
            id_ = span.get("id", "") or ""
            m = _re.match(r"page_(\d+)$", id_)
            if m:
                book_ch_start[ch] = int(m.group(1))
                break

    # Build per-chapter (pdf_anchor, book_anchor, next_pdf_anchor) intervals
    anchors = []  # list of (ch, pdf_start, book_start)
    for ch in sorted(pdf_ch_start.keys()):
        if ch in book_ch_start:
            anchors.append((ch, pdf_ch_start[ch], book_ch_start[ch]))

    for i, (ch, pdf_s, book_s) in enumerate(anchors):
        next_pdf = anchors[i + 1][1] if i + 1 < len(anchors) else pdf_s + 50
        # Interpolate book page for each pdf index in [pdf_s, next_pdf)
        for pdf_idx in range(pdf_s, next_pdf):
            book_page = book_s + (pdf_idx - pdf_s)
            entry = pdf_ocr.get(str(pdf_idx))
            if entry is None:
                continue
            if book_page in pages:
                continue
            pages[book_page] = {
                "lines": entry["lines"],
                "confidence": entry.get("confidence", 0),
                "pdfIndex": pdf_idx,
            }
    return pages


def merge_ocr(epub_lines: list, pdf_lines: list) -> list:
    """EPUB per-image OCR is our primary (targeted, no prose bleed). PDF
    page-level OCR is only used when EPUB is missing. Per-line accent repair
    is handled separately via `repair_accents_from_pdf`.
    """
    if epub_lines:
        return epub_lines
    return pdf_lines


import unicodedata as _u


def _strip_accents(s: str) -> str:
    return "".join(c for c in _u.normalize("NFD", s) if _u.category(c) != "Mn")


def _levenshtein(a: str, b: str) -> int:
    if a == b: return 0
    if not a: return len(b)
    if not b: return len(a)
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost))
        prev = curr
    return prev[-1]


def repair_accents_from_pdf(epub_lines: list, pdf_page_lines: list) -> "tuple[list, int]":
    """For each EPUB OCR line, find a near-match in the PDF page OCR and
    prefer the PDF version. Repairs include:
    1. exact accent/case differences (e.g. 'iglesia' vs 'Iglesia')
    2. single-character OCR errors (e.g. 'the hrother' -> 'the brother')
    3. two-character OCR errors when the target is long enough
    """
    if not epub_lines or not pdf_page_lines:
        return (epub_lines, 0)
    # Pre-normalize PDF lines for matching
    pdf_cleaned = [p.strip() for p in pdf_page_lines if p.strip()]
    pdf_by_stripped: dict = {}
    for p in pdf_cleaned:
        key = _strip_accents(p.lower())
        pdf_by_stripped.setdefault(key, p)

    out: list = []
    repairs = 0
    for e in epub_lines:
        e_stripped = e.strip()
        e_key = _strip_accents(e_stripped.lower())
        # Pass 1: exact accent-only difference
        if e_key and e_key in pdf_by_stripped and pdf_by_stripped[e_key] != e_stripped:
            out.append(pdf_by_stripped[e_key])
            repairs += 1
            continue
        # Pass 2: fuzzy — find best PDF line within edit distance 1 or 2
        if len(e_key) >= 4:
            max_distance = 1 if len(e_key) < 10 else 2
            best_match = None
            best_d = max_distance + 1
            for p in pdf_cleaned:
                p_key = _strip_accents(p.lower())
                # Only match lines of similar length
                if abs(len(p_key) - len(e_key)) > max_distance:
                    continue
                d = _levenshtein(e_key, p_key)
                if d < best_d:
                    best_d = d
                    best_match = p
                    if d == 0:
                        break
|
if best_match and best_match != e_stripped and best_d <= max_distance:
|
||||||
|
out.append(best_match)
|
||||||
|
repairs += 1
|
||||||
|
continue
|
||||||
|
out.append(e)
|
||||||
|
return (out, repairs)
|
||||||
|
|
||||||
|
|
||||||
|
def vocab_lines_from_pdf_page(
|
||||||
|
pdf_page_entry: dict,
|
||||||
|
epub_narrative_lines: set
|
||||||
|
) -> list:
|
||||||
|
"""Extract likely vocab-table lines from a PDF page's OCR by filtering out
|
||||||
|
narrative-looking lines (long sentences) and already-known EPUB content."""
|
||||||
|
lines = pdf_page_entry.get("lines", [])
|
||||||
|
out: list = []
|
||||||
|
for raw in lines:
|
||||||
|
line = raw.strip()
|
||||||
|
if not line:
|
||||||
|
continue
|
||||||
|
# Skip lines that look like body prose (too long)
|
||||||
|
if len(line) > 80:
|
||||||
|
continue
|
||||||
|
# Skip narrative we already captured in the EPUB
|
||||||
|
if line in epub_narrative_lines:
|
||||||
|
continue
|
||||||
|
# Skip page-number-only lines
|
||||||
|
if re.fullmatch(r"\d{1,4}", line):
|
||||||
|
continue
|
||||||
|
# Skip standalone chapter headers (e.g. "Nouns, Articles, and Adjectives")
|
||||||
|
out.append(line)
|
||||||
|
return out
|
||||||
|
|
||||||
|
|
||||||
|
def main() -> None:
|
||||||
|
chapters_data = load(CHAPTERS_JSON)
|
||||||
|
answers = load(ANSWERS_JSON)["answers"]
|
||||||
|
epub_ocr = load(OCR_JSON)
|
||||||
|
pdf_ocr_raw = load(PDF_OCR_JSON) if PDF_OCR_JSON.exists() else {}
|
||||||
|
pdf_pages = build_pdf_page_index(pdf_ocr_raw) if pdf_ocr_raw else {}
|
||||||
|
paired_vocab = load(PAIRED_VOCAB_JSON) if PAIRED_VOCAB_JSON.exists() else {}
|
||||||
|
print(f"Mapped {len(pdf_pages)} PDF pages to book page numbers")
|
||||||
|
print(f"Loaded bounding-box pairs for {len(paired_vocab)} vocab images")
|
||||||
|
|
||||||
|
# Build a global set of EPUB narrative lines (for subtraction when pulling vocab)
|
||||||
|
narrative_set = set()
|
||||||
|
for ch in chapters_data["chapters"]:
|
||||||
|
for b in ch["blocks"]:
|
||||||
|
if b["kind"] == "paragraph" and b.get("text"):
|
||||||
|
narrative_set.add(b["text"].strip())
|
||||||
|
|
||||||
|
book_chapters = []
|
||||||
|
all_vocab_cards = []
|
||||||
|
pdf_hits = 0
|
||||||
|
pdf_misses = 0
|
||||||
|
merged_pages = 0
|
||||||
|
|
||||||
|
for ch in chapters_data["chapters"]:
|
||||||
|
out_blocks = []
|
||||||
|
current_section_title = ch["title"]
|
||||||
|
|
||||||
|
for bi, block in enumerate(ch["blocks"]):
|
||||||
|
k = block["kind"]
|
||||||
|
|
||||||
|
if k == "heading":
|
||||||
|
current_section_title = block["text"]
|
||||||
|
out_blocks.append(block)
|
||||||
|
continue
|
||||||
|
|
||||||
|
if k == "paragraph":
|
||||||
|
out_blocks.append(block)
|
||||||
|
continue
|
||||||
|
|
||||||
|
if k == "key_vocab_header":
|
||||||
|
out_blocks.append(block)
|
||||||
|
continue
|
||||||
|
|
||||||
|
if k == "vocab_image":
|
||||||
|
src = block["src"]
|
||||||
|
epub_entry = epub_ocr.get(src)
|
||||||
|
epub_lines = epub_entry.get("lines", []) if epub_entry else []
|
||||||
|
epub_conf = epub_entry.get("confidence", 0.0) if epub_entry else 0.0
|
||||||
|
|
||||||
|
book_page = extract_book_page(src)
|
||||||
|
pdf_entry = pdf_pages.get(book_page) if book_page else None
|
||||||
|
pdf_lines = pdf_entry["lines"] if pdf_entry else []
|
||||||
|
|
||||||
|
# Primary: EPUB per-image OCR. Supplementary: PDF page OCR
|
||||||
|
# used only for accent/diacritic repair where keys match.
|
||||||
|
if pdf_lines:
|
||||||
|
pdf_hits += 1
|
||||||
|
else:
|
||||||
|
pdf_misses += 1
|
||||||
|
repaired_lines, repairs = repair_accents_from_pdf(epub_lines, pdf_lines)
|
||||||
|
merged_lines = repaired_lines if repaired_lines else pdf_lines
|
||||||
|
merged_conf = max(epub_conf, pdf_entry.get("confidence", 0) if pdf_entry else 0.0)
|
||||||
|
if repairs > 0:
|
||||||
|
merged_pages += 1
|
||||||
|
|
||||||
|
# Prefer bounding-box pairs (from paired_vocab.json) when
|
||||||
|
# present. Fall back to the block-alternation heuristic.
|
||||||
|
bbox = paired_vocab.get(src, {})
|
||||||
|
bbox_pairs = bbox.get("pairs", []) if isinstance(bbox, dict) else []
|
||||||
|
heuristic = build_vocab_cards_for_block(
|
||||||
|
{"src": src},
|
||||||
|
{"lines": merged_lines, "confidence": merged_conf},
|
||||||
|
ch, current_section_title, bi
|
||||||
|
)
|
||||||
|
|
||||||
|
if bbox_pairs:
|
||||||
|
cards_for_block = [
|
||||||
|
{"front": p["es"], "back": p["en"]}
|
||||||
|
for p in bbox_pairs
|
||||||
|
if p.get("es") and p.get("en")
|
||||||
|
]
|
||||||
|
# Also feed the flashcard deck
|
||||||
|
for p in bbox_pairs:
|
||||||
|
if p.get("es") and p.get("en"):
|
||||||
|
all_vocab_cards.append({
|
||||||
|
"front": p["es"],
|
||||||
|
"back": p["en"],
|
||||||
|
"chapter": ch["number"],
|
||||||
|
"chapterTitle": ch["title"],
|
||||||
|
"section": current_section_title,
|
||||||
|
"sourceImage": src,
|
||||||
|
})
|
||||||
|
pair_source = "bbox"
|
||||||
|
else:
|
||||||
|
cards_for_block = [{"front": c["front"], "back": c["back"]} for c in heuristic]
|
||||||
|
all_vocab_cards.extend(heuristic)
|
||||||
|
pair_source = "heuristic"
|
||||||
|
|
||||||
|
out_blocks.append({
|
||||||
|
"kind": "vocab_table",
|
||||||
|
"sourceImage": src,
|
||||||
|
"ocrLines": merged_lines,
|
||||||
|
"ocrConfidence": merged_conf,
|
||||||
|
"cardCount": len(cards_for_block),
|
||||||
|
"cards": cards_for_block,
|
||||||
|
"columnCount": bbox.get("columnCount", 2) if isinstance(bbox, dict) else 2,
|
||||||
|
"source": pair_source,
|
||||||
|
"bookPage": book_page,
|
||||||
|
"repairs": repairs,
|
||||||
|
})
|
||||||
|
continue
|
||||||
|
|
||||||
|
if k == "exercise":
|
||||||
|
ans = answers.get(block["id"])
|
||||||
|
# EPUB image OCR (if any image refs)
|
||||||
|
image_ocr_lines: list = []
|
||||||
|
for src in block.get("image_refs", []):
|
||||||
|
ee = epub_ocr.get(src)
|
||||||
|
if ee:
|
||||||
|
image_ocr_lines.extend(ee.get("lines", []))
|
||||||
|
# Add PDF-page OCR for that page if available
|
||||||
|
bp = extract_book_page(src)
|
||||||
|
if bp and pdf_pages.get(bp):
|
||||||
|
# Only add lines not already present from EPUB OCR
|
||||||
|
pdf_lines = pdf_pages[bp]["lines"]
|
||||||
|
for line in pdf_lines:
|
||||||
|
line = line.strip()
|
||||||
|
if not line or line in image_ocr_lines:
|
||||||
|
continue
|
||||||
|
if line in narrative_set:
|
||||||
|
continue
|
||||||
|
image_ocr_lines.append(line)
|
||||||
|
|
||||||
|
prompts = [p for p in block.get("prompts", []) if p.strip()]
|
||||||
|
extras = [e for e in block.get("extra", []) if e.strip()]
|
||||||
|
if not prompts and image_ocr_lines:
|
||||||
|
# Extract numbered lines from OCR
|
||||||
|
for line in image_ocr_lines:
|
||||||
|
m = re.match(r"^(\d+)[.)]\s*(.+)", line.strip())
|
||||||
|
if m:
|
||||||
|
prompts.append(f"{m.group(1)}. {m.group(2)}")
|
||||||
|
|
||||||
|
sub = ans["subparts"] if ans else []
|
||||||
|
answer_items = []
|
||||||
|
for sp in sub:
|
||||||
|
for it in sp["items"]:
|
||||||
|
answer_items.append({
|
||||||
|
"label": sp["label"],
|
||||||
|
"number": it["number"],
|
||||||
|
"answer": it["answer"],
|
||||||
|
"alternates": it["alternates"],
|
||||||
|
})
|
||||||
|
|
||||||
|
out_blocks.append({
|
||||||
|
"kind": "exercise",
|
||||||
|
"id": block["id"],
|
||||||
|
"ansAnchor": block.get("ans_anchor", ""),
|
||||||
|
"instruction": clean_instruction(block.get("instruction", "")),
|
||||||
|
"extra": extras,
|
||||||
|
"prompts": prompts,
|
||||||
|
"ocrLines": image_ocr_lines,
|
||||||
|
"freeform": ans["freeform"] if ans else False,
|
||||||
|
"answerItems": answer_items,
|
||||||
|
"answerRaw": ans["raw"] if ans else "",
|
||||||
|
"answerSubparts": sub,
|
||||||
|
})
|
||||||
|
continue
|
||||||
|
|
||||||
|
out_blocks.append(block)
|
||||||
|
|
||||||
|
book_chapters.append({
|
||||||
|
"id": ch["id"],
|
||||||
|
"number": ch["number"],
|
||||||
|
"title": ch["title"],
|
||||||
|
"part": ch.get("part"),
|
||||||
|
"blocks": out_blocks,
|
||||||
|
})
|
||||||
|
|
||||||
|
book = {
|
||||||
|
"courseName": COURSE_NAME,
|
||||||
|
"totalChapters": len(book_chapters),
|
||||||
|
"totalExercises": sum(1 for ch in book_chapters for b in ch["blocks"] if b["kind"] == "exercise"),
|
||||||
|
"totalVocabTables": sum(1 for ch in book_chapters for b in ch["blocks"] if b["kind"] == "vocab_table"),
|
||||||
|
"totalVocabCards": len(all_vocab_cards),
|
||||||
|
"parts": chapters_data.get("part_memberships", {}),
|
||||||
|
"chapters": book_chapters,
|
||||||
|
"sources": {
|
||||||
|
"epub_images_ocr": bool(epub_ocr),
|
||||||
|
"pdf_pages_ocr": bool(pdf_ocr_raw),
|
||||||
|
"pdf_pages_mapped": len(pdf_pages),
|
||||||
|
},
|
||||||
|
}
|
||||||
|
OUT_BOOK.write_text(json.dumps(book, ensure_ascii=False))
|
||||||
|
|
||||||
|
vocab_by_chapter: dict = {}
|
||||||
|
for card in all_vocab_cards:
|
||||||
|
vocab_by_chapter.setdefault(card["chapter"], []).append(card)
|
||||||
|
OUT_VOCAB.write_text(json.dumps({
|
||||||
|
"courseName": COURSE_NAME,
|
||||||
|
"chapters": [
|
||||||
|
{"chapter": n, "cards": cs}
|
||||||
|
for n, cs in sorted(vocab_by_chapter.items())
|
||||||
|
],
|
||||||
|
}, ensure_ascii=False, indent=2))
|
||||||
|
|
||||||
|
print(f"Wrote {OUT_BOOK}")
|
||||||
|
print(f"Wrote {OUT_VOCAB}")
|
||||||
|
print(f"Chapters: {book['totalChapters']}")
|
||||||
|
print(f"Exercises: {book['totalExercises']}")
|
||||||
|
print(f"Vocab tables: {book['totalVocabTables']}")
|
||||||
|
print(f"Vocab cards (derived): {book['totalVocabCards']}")
|
||||||
|
print(f"PDF hits vs misses: {pdf_hits} / {pdf_misses}")
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
main()
|
||||||
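As a sanity check of the two-pass repair logic above, here is a minimal self-contained sketch: the two helpers are re-inlined (the pipeline script is not importable as a module), and the sample strings are invented examples, not OCR output from the book.

```python
import unicodedata

def strip_accents(s: str) -> str:
    # Decompose, then drop combining marks (Unicode category "Mn")
    return "".join(c for c in unicodedata.normalize("NFD", s)
                   if unicodedata.category(c) != "Mn")

def levenshtein(a: str, b: str) -> int:
    # Standard two-row dynamic-programming edit distance
    if a == b:
        return 0
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + cost))
        prev = curr
    return prev[-1]

# Pass 1: the EPUB line differs from the PDF line only by accents, so the
# accent-stripped lowercase keys collide and the accented PDF form wins.
epub_line = "la educacion"
pdf_line = "la educación"
assert strip_accents(epub_line.lower()) == strip_accents(pdf_line.lower())

# Pass 2: a single-character OCR misread is within edit distance 1, so the
# fuzzy pass would substitute the PDF line.
assert levenshtein("the hrother", "the brother") == 1
```

Note that skipping the `if not a` / `if not b` guards is safe here: with one empty input the DP rows degenerate to the other string's length.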
232
Conjuga/Scripts/textbook/ocr_all_vocab.swift
Normal file
@@ -0,0 +1,232 @@
#!/usr/bin/env swift
// Bounding-box OCR over every vocab image, producing Spanish→English pairs.
// Much higher accuracy than the flat-OCR block-alternation heuristic because
// we use each recognized line's position on the page: rows are clustered by
// Y-coordinate and cells within a row are split by the biggest X gap.
//
// Usage: swift ocr_all_vocab.swift <image_list.json> <oebps_dir> <output.json>

import Foundation
import Vision
import AppKit

guard CommandLine.arguments.count >= 4 else {
    print("Usage: swift ocr_all_vocab.swift <image_list.json> <oebps_dir> <output.json>")
    exit(1)
}

let imageListURL = URL(fileURLWithPath: CommandLine.arguments[1])
let oebpsDir = URL(fileURLWithPath: CommandLine.arguments[2])
let outputURL = URL(fileURLWithPath: CommandLine.arguments[3])

guard let listData = try? Data(contentsOf: imageListURL),
      let imageNames = try? JSONDecoder().decode([String].self, from: listData) else {
    print("Could not load image list at \(imageListURL.path)")
    exit(1)
}
print("Processing \(imageNames.count) images...")

struct RecognizedLine {
    let text: String
    let cx: Double
    let cy: Double
    let confidence: Double
}

struct Pair: Encodable {
    var es: String
    var en: String
    var confidence: Double
}

struct ImageResult: Encodable {
    var pairs: [Pair]
    var columnCount: Int
    var strategy: String
    var lineCount: Int
}

let spanishAccents = Set<Character>(["á","é","í","ó","ú","ñ","ü","Á","É","Í","Ó","Ú","Ñ","Ü","¿","¡"])
let spanishArticles: Set<String> = ["el","la","los","las","un","una","unos","unas"]
let englishStarters: Set<String> = ["the","a","an","to","my","his","her","our","their","your"]
let englishOnly: Set<String> = ["the","he","she","it","we","they","is","are","was","were","been","have","has","had","will","would"]

func classify(_ s: String) -> String {
    let lower = s.lowercased()
    if lower.contains(where: { spanishAccents.contains($0) }) { return "es" }
    let first = lower.split(separator: " ").first.map(String.init)?.trimmingCharacters(in: .punctuationCharacters) ?? ""
    if spanishArticles.contains(first) { return "es" }
    if englishStarters.contains(first) || englishOnly.contains(first) { return "en" }
    return "?"
}

func recognize(_ cgImage: CGImage) -> [RecognizedLine] {
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    let req = VNRecognizeTextRequest()
    req.recognitionLevel = .accurate
    req.recognitionLanguages = ["es-ES", "es", "en-US"]
    req.usesLanguageCorrection = true
    if #available(macOS 13.0, *) { req.automaticallyDetectsLanguage = true }
    try? handler.perform([req])
    var out: [RecognizedLine] = []
    for obs in req.results ?? [] {
        guard let top = obs.topCandidates(1).first else { continue }
        let s = top.string.trimmingCharacters(in: .whitespaces)
        if s.isEmpty { continue }
        let bb = obs.boundingBox
        out.append(RecognizedLine(
            text: s,
            cx: Double(bb.origin.x + bb.width / 2),
            cy: Double(1.0 - (bb.origin.y + bb.height / 2)),
            confidence: Double(top.confidence)
        ))
    }
    return out
}

/// Split a sorted-by-X line group into cells by finding the largest gap(s).
/// `desiredCells` = 2 for 2-col, 4 for 2-pair, etc.
func splitRow(_ lines: [RecognizedLine], into desiredCells: Int) -> [String] {
    guard lines.count >= desiredCells else {
        // Merge into fewer cells: just concatenate left-to-right.
        return [lines.map(\.text).joined(separator: " ")]
    }
    let sorted = lines.sorted { $0.cx < $1.cx }
    // Find (desiredCells - 1) biggest gaps
    var gaps: [(idx: Int, gap: Double)] = []
    for i in 1..<sorted.count {
        gaps.append((i, sorted[i].cx - sorted[i - 1].cx))
    }
    let splitAt = gaps.sorted { $0.gap > $1.gap }
        .prefix(desiredCells - 1)
        .map(\.idx)
        .sorted()
    var cells: [[RecognizedLine]] = []
    var start = 0
    for s in splitAt {
        cells.append(Array(sorted[start..<s]))
        start = s
    }
    cells.append(Array(sorted[start..<sorted.count]))
    return cells.map { $0.map(\.text).joined(separator: " ").trimmingCharacters(in: .whitespaces) }
}

/// Cluster lines into rows by Y proximity. Returns rows in top-to-bottom order.
func groupRows(_ lines: [RecognizedLine], tol: Double = 0.025) -> [[RecognizedLine]] {
    let sorted = lines.sorted { $0.cy < $1.cy }
    var rows: [[RecognizedLine]] = []
    var current: [RecognizedLine] = []
    for l in sorted {
        if let last = current.last, abs(l.cy - last.cy) > tol {
            rows.append(current)
            current = [l]
        } else {
            current.append(l)
        }
    }
    if !current.isEmpty { rows.append(current) }
    return rows
}

/// Detect likely column count: look at how many x-cluster peaks exist across all rows.
/// Clusters X-coords from all lines into buckets of 10% width.
func detectColumnCount(_ lines: [RecognizedLine]) -> Int {
    guard !lines.isEmpty else { return 2 }
    let step = 0.10
    var buckets = [Int](repeating: 0, count: Int(1.0 / step) + 1)
    for l in lines {
        let b = min(max(0, Int(l.cx / step)), buckets.count - 1)
        buckets[b] += 1
    }
    // A peak = a bucket with count > 10% of total lines
    let threshold = max(2, lines.count / 10)
    let peaks = buckets.filter { $0 >= threshold }.count
    // Most tables are 2-col (peaks = 2). Some 4-col (2 ES/EN pairs side by side → peaks = 4).
    // Roman/decorative layouts may show 1 peak; treat as 2.
    switch peaks {
    case 0, 1, 2: return 2
    case 3: return 3
    default: return 4
    }
}

/// Merge label-less cells into Spanish→English pairs.
/// `cells` is a row's cells (length = columnCount). For N=2, [es, en]. For N=4,
/// [es1, en1, es2, en2] (two pairs). For N=3, [es, en_short, en_long] (rare, merge).
func cellsToPairs(_ cells: [String], columnCount: Int) -> [(String, String)] {
    switch columnCount {
    case 2 where cells.count >= 2:
        return [(cells[0], cells[1])]
    case 3 where cells.count >= 3:
        // 3-col source: es | en | en-alternate. Keep all three by merging EN sides.
        return [(cells[0], [cells[1], cells[2]].joined(separator: " / "))]
    case 4 where cells.count >= 4:
        return [(cells[0], cells[1]), (cells[2], cells[3])]
    default:
        if cells.count >= 2 { return [(cells[0], cells.dropFirst().joined(separator: " "))] }
        return []
    }
}

/// Swap pair if orientation is backwards (English on left, Spanish on right).
func orientPair(_ pair: (String, String)) -> (String, String) {
    let (a, b) = pair
    let ca = classify(a), cb = classify(b)
    if ca == "en" && cb == "es" { return (b, a) }
    return pair
}

var results: [String: ImageResult] = [:]
var processed = 0
let startTime = Date()

for name in imageNames {
    processed += 1
    let url = oebpsDir.appendingPathComponent(name)
    guard let nsImg = NSImage(contentsOf: url),
          let tiff = nsImg.tiffRepresentation,
          let rep = NSBitmapImageRep(data: tiff),
          let cg = rep.cgImage else {
        continue
    }
    let lines = recognize(cg)
    if lines.isEmpty {
        results[name] = ImageResult(pairs: [], columnCount: 2, strategy: "empty", lineCount: 0)
        continue
    }

    let columnCount = detectColumnCount(lines)
    let rows = groupRows(lines, tol: 0.025)
    var pairs: [Pair] = []
    for row in rows {
        guard row.count >= 2 else { continue }
        let cells = splitRow(row, into: columnCount)
        let rawPairs = cellsToPairs(cells, columnCount: columnCount)
        for p in rawPairs {
            let (es, en) = orientPair(p)
            if es.count < 1 || en.count < 1 { continue }
            let avgConf = row.reduce(0.0) { $0 + $1.confidence } / Double(row.count)
            pairs.append(Pair(es: es, en: en, confidence: avgConf))
        }
    }
    results[name] = ImageResult(
        pairs: pairs,
        columnCount: columnCount,
        strategy: "bbox-row-split",
        lineCount: lines.count
    )

    if processed % 50 == 0 || processed == imageNames.count {
        let elapsed = Date().timeIntervalSince(startTime)
        let rate = Double(processed) / max(elapsed, 0.001)
        let eta = Double(imageNames.count - processed) / max(rate, 0.001)
        print(String(format: "%d/%d %.1f img/s eta %.0fs", processed, imageNames.count, rate, eta))
    }
}

let enc = JSONEncoder()
enc.outputFormatting = [.sortedKeys]
try enc.encode(results).write(to: outputURL)
let totalPairs = results.values.reduce(0) { $0 + $1.pairs.count }
let emptyTables = results.values.filter { $0.pairs.isEmpty }.count
print("Wrote \(results.count) results, \(totalPairs) total pairs, \(emptyTables) unpaired")
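The row/column geometry used above is language-agnostic, so here is a small Python sketch of the same idea (cluster lines by y, then cut each row at the largest x gap). The coordinates are invented normalized values for illustration, not real Vision output:

```python
def group_rows(lines, tol=0.025):
    """Cluster (text, cx, cy) tuples into rows by y proximity, top to bottom."""
    rows, current = [], []
    for line in sorted(lines, key=lambda l: l[2]):
        if current and abs(line[2] - current[-1][2]) > tol:
            rows.append(current)
            current = [line]
        else:
            current.append(line)
    if current:
        rows.append(current)
    return rows

def split_row(row, cells=2):
    """Sort a row by x and cut it at the (cells - 1) largest gaps."""
    row = sorted(row, key=lambda l: l[1])
    cuts = sorted(range(1, len(row)),
                  key=lambda i: row[i][1] - row[i - 1][1],
                  reverse=True)[:cells - 1]
    out, start = [], 0
    for cut in sorted(cuts):
        out.append(" ".join(l[0] for l in row[start:cut]))
        start = cut
    out.append(" ".join(l[0] for l in row[start:]))
    return out

# Two vocabulary rows; left column near x=0.2, right column near x=0.7.
lines = [
    ("el libro", 0.20, 0.10), ("the book", 0.70, 0.10),
    ("la mesa", 0.20, 0.16), ("the table", 0.70, 0.16),
]
rows = [split_row(r) for r in group_rows(lines)]
# rows == [["el libro", "the book"], ["la mesa", "the table"]]
```

The 0.025 tolerance mirrors the Swift script's default; it works because Vision bounding boxes are normalized to [0, 1], so the gap between adjacent table rows comfortably exceeds the jitter within one row.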
110
Conjuga/Scripts/textbook/ocr_images.swift
Normal file
@@ -0,0 +1,110 @@
#!/usr/bin/env swift
// OCR every JPG in the given input directory using the macOS Vision framework.
// Output: JSON map of { "<filename>": { "lines": [...], "confidence": Double } }
//
// Usage: swift ocr_images.swift <input_dir> <output_json>
// Example: swift ocr_images.swift ../../../epub_extract/OEBPS ocr.json

import Foundation
import Vision
import AppKit

guard CommandLine.arguments.count >= 3 else {
    print("Usage: swift ocr_images.swift <input_dir> <output_json>")
    exit(1)
}

let inputDir = URL(fileURLWithPath: CommandLine.arguments[1])
let outputURL = URL(fileURLWithPath: CommandLine.arguments[2])

// Skip images that are icons/inline markers — not real content
let skipSubstrings = ["Common", "cover", "title"]

let fileManager = FileManager.default
guard let enumerator = fileManager.enumerator(at: inputDir, includingPropertiesForKeys: nil) else {
    print("Could not enumerate \(inputDir.path)")
    exit(1)
}

var jpgs: [URL] = []
for case let url as URL in enumerator {
    let name = url.lastPathComponent
    guard name.hasSuffix(".jpg") || name.hasSuffix(".jpeg") || name.hasSuffix(".png") else { continue }
    if skipSubstrings.contains(where: { name.contains($0) }) { continue }
    jpgs.append(url)
}
jpgs.sort { $0.lastPathComponent < $1.lastPathComponent }
print("Found \(jpgs.count) images to OCR")

struct OCRResult: Encodable {
    var lines: [String]
    var confidence: Double
}

var results: [String: OCRResult] = [:]
let total = jpgs.count
var processed = 0
let startTime = Date()

for url in jpgs {
    processed += 1
    let name = url.lastPathComponent

    guard let nsImage = NSImage(contentsOf: url),
          let tiffData = nsImage.tiffRepresentation,
          let bitmap = NSBitmapImageRep(data: tiffData),
          let cgImage = bitmap.cgImage else {
        print("\(processed)/\(total) \(name) — could not load")
        continue
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate
    request.recognitionLanguages = ["es-ES", "es", "en-US"]
    request.usesLanguageCorrection = true
    // For the 2020 book, automaticallyDetectsLanguage helps with mixed content
    if #available(macOS 13.0, *) {
        request.automaticallyDetectsLanguage = true
    }

    do {
        try handler.perform([request])
        let observations = request.results ?? []
        var lines: [String] = []
        var totalConfidence: Float = 0
        var count = 0
        for obs in observations {
            if let top = obs.topCandidates(1).first {
                let s = top.string.trimmingCharacters(in: .whitespaces)
                if !s.isEmpty {
                    lines.append(s)
                    totalConfidence += top.confidence
                    count += 1
                }
            }
        }
        let avg = count > 0 ? Double(totalConfidence) / Double(count) : 0.0
        results[name] = OCRResult(lines: lines, confidence: avg)
    } catch {
        print("\(processed)/\(total) \(name) — error: \(error)")
    }

    if processed % 50 == 0 || processed == total {
        let elapsed = Date().timeIntervalSince(startTime)
        let rate = Double(processed) / max(elapsed, 0.001)
        let remaining = Double(total - processed) / max(rate, 0.001)
        print(String(format: "%d/%d %.1f img/s eta %.0fs", processed, total, rate, remaining))
    }
}

let encoder = JSONEncoder()
encoder.outputFormatting = [.prettyPrinted, .sortedKeys]
do {
    let data = try encoder.encode(results)
    try data.write(to: outputURL)
    print("Wrote \(results.count) OCR entries to \(outputURL.path)")
} catch {
    print("Error writing output: \(error)")
    exit(1)
}
133
Conjuga/Scripts/textbook/ocr_pdf.swift
Normal file
@@ -0,0 +1,133 @@
|
|||||||
|
#!/usr/bin/env swift
|
||||||
|
// Rasterize each page of a PDF at high DPI and OCR it with Vision.
|
||||||
|
// Output: { "<pdfIndex>": { "lines": [...], "confidence": Double, "bookPage": Int? } }
|
||||||
|
//
|
||||||
|
// Usage: swift ocr_pdf.swift <pdf_path> <output_json> [dpi]
|
||||||
|
// Example: swift ocr_pdf.swift "book.pdf" pdf_ocr.json 240
|
||||||
|
|
||||||
|
import Foundation
|
||||||
|
import Vision
|
||||||
|
import AppKit
|
||||||
|
import Quartz
|
||||||
|
|
||||||
|
guard CommandLine.arguments.count >= 3 else {
|
||||||
|
print("Usage: swift ocr_pdf.swift <pdf_path> <output_json> [dpi]")
|
||||||
|
exit(1)
|
||||||
|
}
|
||||||
|
|
||||||
|
let pdfURL = URL(fileURLWithPath: CommandLine.arguments[1])
|
||||||
|
let outputURL = URL(fileURLWithPath: CommandLine.arguments[2])
|
||||||
|
let dpi: CGFloat = CommandLine.arguments.count >= 4 ? CGFloat(Double(CommandLine.arguments[3]) ?? 240.0) : 240.0
|
||||||
|
|
||||||
|
guard let pdfDoc = PDFDocument(url: pdfURL) else {
|
||||||
|
print("Could not open PDF at \(pdfURL.path)")
|
||||||
|
exit(1)
|
||||||
|
}
|
||||||
|
|
||||||
|
let pageCount = pdfDoc.pageCount
|
||||||
|
print("PDF has \(pageCount) pages. Rendering at \(dpi) DPI.")
|
||||||
|
|
||||||
|
struct PageResult: Encodable {
|
||||||
|
var lines: [String]
|
||||||
|
var confidence: Double
|
||||||
|
var bookPage: Int?
|
||||||
|
}
|
||||||
|
|
||||||
|
var results: [String: PageResult] = [:]
|
||||||
|
let startTime = Date()
|
||||||
|
|
||||||
|
// Render at scale = dpi / 72 (72 is default PDF DPI)
|
||||||
|
let scale: CGFloat = dpi / 72.0
|
||||||
|
|
||||||
|
for i in 0..<pageCount {
|
||||||
|
guard let page = pdfDoc.page(at: i) else { continue }
|
||||||
|
let pageBounds = page.bounds(for: .mediaBox)
|
||||||
|
let scaledSize = CGSize(width: pageBounds.width * scale, height: pageBounds.height * scale)
|
||||||
|
|
||||||
|
// Render the page into a CGImage
|
||||||
|
let colorSpace = CGColorSpaceCreateDeviceRGB()
|
||||||
|
let bitmapInfo = CGImageAlphaInfo.noneSkipLast.rawValue
|
||||||
|
guard let context = CGContext(
|
||||||
|
data: nil,
|
||||||
|
width: Int(scaledSize.width),
|
||||||
|
height: Int(scaledSize.height),
|
||||||
|
bitsPerComponent: 8,
|
||||||
|
bytesPerRow: 0,
|
||||||
|
space: colorSpace,
|
||||||
|
bitmapInfo: bitmapInfo
|
||||||
|
) else {
|
||||||
|
print("\(i): could not create CGContext")
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
context.setFillColor(CGColor(gray: 1.0, alpha: 1.0))
|
||||||
|
context.fill(CGRect(origin: .zero, size: scaledSize))
|
||||||
|
context.scaleBy(x: scale, y: scale)
|
||||||
|
page.draw(with: .mediaBox, to: context)
|
||||||
|
|
||||||
|
guard let cgImage = context.makeImage() else {
|
||||||
|
print("\(i): could not create CGImage")
|
||||||
|
continue
|
||||||
|
}
|
||||||
|
|
||||||
|
let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
|
||||||
|
let request = VNRecognizeTextRequest()
|
        request.recognitionLevel = .accurate
        request.recognitionLanguages = ["es-ES", "es", "en-US"]
        request.usesLanguageCorrection = true
        if #available(macOS 13.0, *) {
            request.automaticallyDetectsLanguage = true
        }

        do {
            try handler.perform([request])
            let observations = request.results ?? []
            var lines: [String] = []
            var totalConfidence: Float = 0
            var count = 0
            for obs in observations {
                if let top = obs.topCandidates(1).first {
                    let s = top.string.trimmingCharacters(in: .whitespaces)
                    if !s.isEmpty {
                        lines.append(s)
                        totalConfidence += top.confidence
                        count += 1
                    }
                }
            }
            let avg = count > 0 ? Double(totalConfidence) / Double(count) : 0.0

            // Try to detect book page number: a short numeric line in the first
            // 3 or last 3 entries (typical page-number placement).
            var bookPage: Int? = nil
            let candidates = Array(lines.prefix(3)) + Array(lines.suffix(3))
            for c in candidates {
                let trimmed = c.trimmingCharacters(in: .whitespaces)
                if let n = Int(trimmed), n >= 1 && n <= 1000 {
                    bookPage = n
                    break
                }
            }

            results[String(i)] = PageResult(lines: lines, confidence: avg, bookPage: bookPage)
        } catch {
            print("\(i): \(error)")
        }

        if (i + 1) % 25 == 0 || (i + 1) == pageCount {
            let elapsed = Date().timeIntervalSince(startTime)
            let rate = Double(i + 1) / max(elapsed, 0.001)
            let remaining = Double(pageCount - (i + 1)) / max(rate, 0.001)
            print(String(format: "%d/%d %.1f pg/s eta %.0fs", i + 1, pageCount, rate, remaining))
        }
    }

    let encoder = JSONEncoder()
    encoder.outputFormatting = [.sortedKeys]
    do {
        let data = try encoder.encode(results)
        try data.write(to: outputURL)
        print("Wrote \(results.count) pages to \(outputURL.path)")
    } catch {
        print("Error writing output: \(error)")
        exit(1)
    }
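The page-number heuristic in the OCR script above (look for a short, purely numeric line near the top or bottom of the page) is easy to sanity-check outside Swift. A minimal Python sketch, with Swift's `Int(trimmed)` parse approximated by `str.isdigit()`:

```python
def detect_book_page(lines, max_page=1000):
    """Heuristic from the PDF OCR script: a printed page number is usually a
    short purely-numeric line near the top or bottom of the page, so only the
    first three and last three OCR lines are inspected."""
    candidates = list(lines[:3]) + list(lines[-3:])
    for c in candidates:
        t = c.strip()
        if t.isdigit() and 1 <= int(t) <= max_page:
            return int(t)
    return None

print(detect_book_page(["CAPÍTULO 3", "El alfabeto", "...", "42"]))  # 42
print(detect_book_page(["no", "page", "number", "here"]))            # None
```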
177
Conjuga/Scripts/textbook/repair_quarantined.swift
Normal file
@@ -0,0 +1,177 @@
#!/usr/bin/env swift
// Re-OCR the images referenced in quarantined_cards.json using Vision with
// bounding-box info, then pair lines by column position (left = Spanish,
// right = English) instead of by document read order.
//
// Output: repaired_cards.json — {"byImage": {"f0142-02.jpg": [{"es":..., "en":...}, ...]}}

import Foundation
import Vision
import AppKit

guard CommandLine.arguments.count >= 4 else {
    print("Usage: swift repair_quarantined.swift <quarantined.json> <epub_oebps_dir> <output.json>")
    exit(1)
}

let quarantinedURL = URL(fileURLWithPath: CommandLine.arguments[1])
let imageDir = URL(fileURLWithPath: CommandLine.arguments[2])
let outputURL = URL(fileURLWithPath: CommandLine.arguments[3])

guard let data = try? Data(contentsOf: quarantinedURL),
      let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
      let cards = json["cards"] as? [[String: Any]] else {
    print("Could not load \(quarantinedURL.path)")
    exit(1)
}

var uniqueImages = Set<String>()
for card in cards {
    if let src = card["sourceImage"] as? String { uniqueImages.insert(src) }
}
print("Unique images to re-OCR: \(uniqueImages.count)")

struct RecognizedLine {
    let text: String
    let cx: CGFloat // center X (normalized 0..1)
    let cy: CGFloat // center Y (normalized 0..1 from top)
    let confidence: Float
}

struct Pair: Encodable {
    var es: String
    var en: String
    var confidence: Double
}

struct ImageResult: Encodable {
    var pairs: [Pair]
    var lineCount: Int
    var strategy: String
}

func classify(_ s: String) -> String {
    // "es" if the text has Spanish accents or starts with a Spanish article;
    // "en" if it starts with a common English function word; else "?".
    let lower = s.lowercased()
    let accentChars: Set<Character> = ["á", "é", "í", "ó", "ú", "ñ", "ü", "¿", "¡"]
    if lower.contains(where: { accentChars.contains($0) }) { return "es" }
    let first = lower.split(separator: " ").first.map(String.init) ?? ""
    let esArticles: Set<String> = ["el", "la", "los", "las", "un", "una", "unos", "unas"]
    let enStarters: Set<String> = ["the", "a", "an", "to", "my", "his", "her", "our", "their"]
    if esArticles.contains(first) { return "es" }
    if enStarters.contains(first) { return "en" }
    return "?"
}

func recognizeLines(cgImage: CGImage) -> [RecognizedLine] {
    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate
    request.recognitionLanguages = ["es-ES", "es", "en-US"]
    request.usesLanguageCorrection = true
    if #available(macOS 13.0, *) {
        request.automaticallyDetectsLanguage = true
    }
    do { try handler.perform([request]) } catch { return [] }
    var out: [RecognizedLine] = []
    for obs in request.results ?? [] {
        guard let top = obs.topCandidates(1).first else { continue }
        let s = top.string.trimmingCharacters(in: .whitespaces)
        if s.isEmpty { continue }
        // Vision's boundingBox is normalized with origin at lower-left
        let bb = obs.boundingBox
        let cx = bb.origin.x + bb.width / 2
        let cyTop = 1.0 - (bb.origin.y + bb.height / 2) // flip to top-origin
        out.append(RecognizedLine(text: s, cx: cx, cy: cyTop, confidence: top.confidence))
    }
    return out
}

/// Pair lines by column position: left column = Spanish, right column = English.
/// Groups lines into rows by Y proximity, then within each row pairs left-right.
func pairByPosition(_ lines: [RecognizedLine]) -> ([Pair], String) {
    guard !lines.isEmpty else { return ([], "empty") }

    // Cluster by Y into rows using a fixed vertical tolerance.
    let sortedByY = lines.sorted { $0.cy < $1.cy }
    var rows: [[RecognizedLine]] = []
    var current: [RecognizedLine] = []
    let rowTol: CGFloat = 0.015 // 1.5% of page height
    for l in sortedByY {
        if let last = current.last, abs(l.cy - last.cy) > rowTol {
            rows.append(current)
            current = [l]
        } else {
            current.append(l)
        }
    }
    if !current.isEmpty { rows.append(current) }

    var pairs: [Pair] = []
    var strategy = "row-pair"
    for row in rows {
        guard row.count >= 2 else { continue }
        // Sort row by X and split at the biggest x-gap; left = Spanish, right = English.
        let sortedX = row.sorted { $0.cx < $1.cx }
        var maxGap: CGFloat = 0
        var splitIdx = 1
        for i in 1..<sortedX.count {
            let gap = sortedX[i].cx - sortedX[i - 1].cx
            if gap > maxGap {
                maxGap = gap
                splitIdx = i
            }
        }
        let leftLines = Array(sortedX[0..<splitIdx])
        let rightLines = Array(sortedX[splitIdx..<sortedX.count])
        let leftText = leftLines.map(\.text).joined(separator: " ").trimmingCharacters(in: .whitespaces)
        let rightText = rightLines.map(\.text).joined(separator: " ").trimmingCharacters(in: .whitespaces)
        if leftText.isEmpty || rightText.isEmpty { continue }
        // Verify language orientation — swap if we got it backwards
        var es = leftText
        var en = rightText
        let lc = classify(es)
        let rc = classify(en)
        if lc == "en" && rc == "es" {
            es = rightText
            en = leftText
        }
        let avgConf = (leftLines + rightLines).reduce(Float(0)) { $0 + $1.confidence } / Float(leftLines.count + rightLines.count)
        pairs.append(Pair(es: es, en: en, confidence: Double(avgConf)))
    }

    if pairs.isEmpty { strategy = "no-rows" }
    return (pairs, strategy)
}

var results: [String: ImageResult] = [:]

for name in uniqueImages.sorted() {
    let url = imageDir.appendingPathComponent(name)
    guard let img = NSImage(contentsOf: url),
          let tiff = img.tiffRepresentation,
          let rep = NSBitmapImageRep(data: tiff),
          let cg = rep.cgImage else {
        print("\(name): could not load")
        continue
    }
    let lines = recognizeLines(cgImage: cg)
    let (pairs, strategy) = pairByPosition(lines)
    results[name] = ImageResult(pairs: pairs, lineCount: lines.count, strategy: strategy)
    print("\(name): \(lines.count) lines -> \(pairs.count) pairs via \(strategy)")
}

struct Output: Encodable {
    var byImage: [String: ImageResult]
    var totalPairs: Int
}
let output = Output(
    byImage: results,
    totalPairs: results.values.reduce(0) { $0 + $1.pairs.count }
)

let enc = JSONEncoder()
enc.outputFormatting = [.prettyPrinted, .sortedKeys]
try enc.encode(output).write(to: outputURL)
print("Wrote \(output.totalPairs) repaired pairs to \(outputURL.path)")
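The core of `pairByPosition` above is two passes: cluster OCR lines into rows by Y proximity, then split each row at its widest X gap so the left side becomes the Spanish column and the right side the English column. A minimal Python sketch of that geometry (dicts with `text`/`cx`/`cy` keys stand in for `RecognizedLine`; the language-orientation swap is omitted):

```python
def pair_rows(lines, row_tol=0.015):
    """Cluster OCR lines into rows by Y proximity, then split each row at its
    widest X gap: text left of the gap is one column, text right of it the
    other."""
    rows, current = [], []
    for line in sorted(lines, key=lambda l: l["cy"]):
        if current and abs(line["cy"] - current[-1]["cy"]) > row_tol:
            rows.append(current)
            current = [line]
        else:
            current.append(line)
    if current:
        rows.append(current)

    pairs = []
    for row in rows:
        if len(row) < 2:
            continue  # a row needs at least one line per column
        row.sort(key=lambda l: l["cx"])
        gaps = [row[i]["cx"] - row[i - 1]["cx"] for i in range(1, len(row))]
        split = gaps.index(max(gaps)) + 1
        left = " ".join(l["text"] for l in row[:split])
        right = " ".join(l["text"] for l in row[split:])
        pairs.append((left, right))
    return pairs

lines = [
    {"text": "el libro",  "cx": 0.2, "cy": 0.10},
    {"text": "the book",  "cx": 0.7, "cy": 0.11},
    {"text": "la mesa",   "cx": 0.2, "cy": 0.20},
    {"text": "the table", "cx": 0.7, "cy": 0.21},
]
print(pair_rows(lines))  # [('el libro', 'the book'), ('la mesa', 'the table')]
```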
54
Conjuga/Scripts/textbook/run_pipeline.sh
Executable file
@@ -0,0 +1,54 @@
#!/usr/bin/env bash
# End-to-end textbook extraction pipeline.
#
# Requires: Python 3 + lxml/beautifulsoup4/pypdf installed.
# macOS for Vision + NSSpellChecker (Swift).
#
# Inputs: EPUB extracted to epub_extract/OEBPS/ and the PDF at project root.
# Outputs: book.json, vocab_cards.json, manual_review.json, quarantined_cards.json

set -e

SCRIPT_DIR="$(cd "$(dirname "$0")" && pwd)"
ROOT="$(cd "$SCRIPT_DIR/../../.." && pwd)"
cd "$ROOT"

echo "=== Phase 1a: parse XHTML chapters ==="
python3 "$SCRIPT_DIR/extract_chapters.py"

echo "=== Phase 1b: parse answer key ==="
python3 "$SCRIPT_DIR/extract_answers.py"

if [ ! -f "$SCRIPT_DIR/ocr.json" ]; then
    echo "=== Phase 1c: OCR EPUB images (first-time only) ==="
    swift "$SCRIPT_DIR/ocr_images.swift" "$ROOT/epub_extract/OEBPS" "$SCRIPT_DIR/ocr.json"
else
    echo "=== Phase 1c: EPUB OCR already cached ==="
fi

PDF_FILE="$(ls "$ROOT"/Complete\ Spanish\ Step-By-Step*.pdf 2>/dev/null | head -1 || true)"
if [ -n "$PDF_FILE" ] && [ ! -f "$SCRIPT_DIR/pdf_ocr.json" ]; then
    echo "=== Phase 1d: OCR PDF pages (first-time only) ==="
    swift "$SCRIPT_DIR/ocr_pdf.swift" "$PDF_FILE" "$SCRIPT_DIR/pdf_ocr.json" 240
fi

echo "=== Phase 1e: merge into book.json ==="
python3 "$SCRIPT_DIR/merge_pdf_into_book.py"

echo "=== Phase 2: spell-check validation ==="
swift "$SCRIPT_DIR/validate_vocab.swift" "$SCRIPT_DIR/vocab_cards.json" "$SCRIPT_DIR/vocab_validation.json"

echo "=== Phase 3: auto-fix + quarantine pass 1 ==="
python3 "$SCRIPT_DIR/fix_vocab.py"

echo "=== Phase 3: auto-fix + quarantine pass 2 (convergence) ==="
swift "$SCRIPT_DIR/validate_vocab.swift" "$SCRIPT_DIR/vocab_cards.json" "$SCRIPT_DIR/vocab_validation.json"
python3 "$SCRIPT_DIR/fix_vocab.py"

echo ""
echo "=== Copy to app bundle ==="
cp "$SCRIPT_DIR/book.json" "$ROOT/Conjuga/Conjuga/textbook_data.json"
cp "$SCRIPT_DIR/vocab_cards.json" "$ROOT/Conjuga/Conjuga/textbook_vocab.json"
ls -lh "$ROOT/Conjuga/Conjuga/textbook_"*.json
echo ""
echo "Done. Bump textbookDataVersion in DataLoader.swift to trigger re-seed."
156
Conjuga/Scripts/textbook/validate_vocab.swift
Normal file
@@ -0,0 +1,156 @@
#!/usr/bin/env swift
// Validate every Spanish/English word in vocab_cards.json using NSSpellChecker.
// For each flagged word, produce up to 3 candidate corrections.
//
// Usage: swift validate_vocab.swift <vocab_cards.json> <output_report.json>

import Foundation
import AppKit

guard CommandLine.arguments.count >= 3 else {
    print("Usage: swift validate_vocab.swift <vocab_cards.json> <output_report.json>")
    exit(1)
}

let inputURL = URL(fileURLWithPath: CommandLine.arguments[1])
let outputURL = URL(fileURLWithPath: CommandLine.arguments[2])

guard let data = try? Data(contentsOf: inputURL),
      let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
      let chapters = json["chapters"] as? [[String: Any]] else {
    print("Could not load \(inputURL.path)")
    exit(1)
}

let checker = NSSpellChecker.shared

// Tokenize — only letter runs (Unicode aware for Spanish accents)
func tokens(_ s: String) -> [String] {
    let letters = CharacterSet.letters
    return s.unicodeScalars
        .split { !letters.contains($0) }
        .map { String(String.UnicodeScalarView($0)) }
        .filter { !$0.isEmpty }
}

// Minimal stopword sets of short function words the spell checker should skip;
// single-char and digit-containing tokens are filtered separately in checkWord.
let stopES: Set<String> = [
    "el", "la", "los", "las", "un", "una", "unos", "unas", "del", "al", "de",
    "a", "en", "y", "o", "que", "no", "se", "con", "por", "para", "lo", "le",
    "su", "mi", "tu", "yo", "te", "me", "es", "son", "está", "están",
]
let stopEN: Set<String> = [
    "the", "a", "an", "to", "of", "in", "and", "or", "is", "are", "was", "were",
    "be", "been", "my", "his", "her", "our", "their", "your",
]

func checkWord(_ w: String, lang: String, stop: Set<String>) -> [String]? {
    // Return nil if the word is OK, else a list of candidate corrections.
    if w.count < 2 { return nil }
    if stop.contains(w.lowercased()) { return nil }
    if w.rangeOfCharacter(from: .decimalDigits) != nil { return nil }

    let range = checker.checkSpelling(
        of: w,
        startingAt: 0,
        language: lang,
        wrap: false,
        inSpellDocumentWithTag: 0,
        wordCount: nil
    )
    // A location of NSNotFound (or a zero-length range) means no misspelling.
    if range.location == NSNotFound || range.length == 0 { return nil }

    let guesses = checker.guesses(
        forWordRange: NSRange(location: 0, length: (w as NSString).length),
        in: w,
        language: lang,
        inSpellDocumentWithTag: 0
    ) ?? []
    return Array(guesses.prefix(3))
}

struct Flag: Encodable {
    var chapter: Int
    var front: String
    var back: String
    var badFront: [BadWord]
    var badBack: [BadWord]
    var sourceImage: String
}
struct BadWord: Encodable {
    var word: String
    var suggestions: [String]
    var side: String // "es" or "en"
}

var flags: [Flag] = []
var totalCards = 0
var totalBadES = 0
var totalBadEN = 0

for ch in chapters {
    guard let chNum = ch["chapter"] as? Int,
          let cards = ch["cards"] as? [[String: Any]] else { continue }
    for card in cards {
        totalCards += 1
        let front = (card["front"] as? String) ?? ""
        let back = (card["back"] as? String) ?? ""
        let img = (card["sourceImage"] as? String) ?? ""

        var badFront: [BadWord] = []
        for w in tokens(front) {
            if let sugg = checkWord(w, lang: "es", stop: stopES) {
                badFront.append(BadWord(word: w, suggestions: sugg, side: "es"))
                totalBadES += 1
            }
        }
        var badBack: [BadWord] = []
        for w in tokens(back) {
            if let sugg = checkWord(w, lang: "en", stop: stopEN) {
                badBack.append(BadWord(word: w, suggestions: sugg, side: "en"))
                totalBadEN += 1
            }
        }
        if !badFront.isEmpty || !badBack.isEmpty {
            flags.append(Flag(
                chapter: chNum,
                front: front,
                back: back,
                badFront: badFront,
                badBack: badBack,
                sourceImage: img
            ))
        }
    }
}

struct Report: Encodable {
    var totalCards: Int
    var flaggedCards: Int
    var flaggedSpanishWords: Int
    var flaggedEnglishWords: Int
    var flags: [Flag]
}
let report = Report(
    totalCards: totalCards,
    flaggedCards: flags.count,
    flaggedSpanishWords: totalBadES,
    flaggedEnglishWords: totalBadEN,
    flags: flags
)

let encoder = JSONEncoder()
encoder.outputFormatting = [.prettyPrinted, .sortedKeys]
do {
    let data = try encoder.encode(report)
    try data.write(to: outputURL)
    print("Cards: \(totalCards)")
    let pct = totalCards > 0 ? Double(flags.count) / Double(totalCards) * 100.0 : 0.0
    print(String(format: "Flagged cards: %d (%.1f%%)", flags.count, pct))
    print("Flagged ES words: \(totalBadES)")
    print("Flagged EN words: \(totalBadEN)")
    print("Wrote \(outputURL.path)")
} catch {
    print("Error writing output: \(error)")
    exit(1)
}
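The `tokens(_:)` helper in `validate_vocab.swift` splits on anything that is not a letter, so accented Spanish characters stay inside one token and digits are never emitted. The same behavior is one regex in Python (an illustrative re-implementation, not part of the pipeline):

```python
import re

def tokens(s):
    """Unicode-aware letter-run tokenizer mirroring tokens(_:) in
    validate_vocab.swift: keep maximal runs of letters, dropping digits,
    punctuation, and inverted Spanish punctuation."""
    # [^\W\d_] matches Unicode letters only (word chars minus digits/underscore)
    return re.findall(r"[^\W\d_]+", s)

print(tokens("¡está bien! (más o menos)"))  # ['está', 'bien', 'más', 'o', 'menos']
```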
68
Conjuga/SharedModels/Sources/SharedModels/AnswerGrader.swift
Normal file
@@ -0,0 +1,68 @@
import Foundation

/// On-device deterministic answer grader with partial-credit support.
/// No network calls, no API keys. Handles accent stripping and single-char typos.
public enum AnswerGrader {

    /// Evaluate `userText` against the canonical answer (plus alternates).
    /// Returns `.correct` for exact/normalized match, `.close` for accent-strip
    /// match or Levenshtein distance 1, `.wrong` otherwise.
    public static func grade(userText: String, canonical: String, alternates: [String] = []) -> TextbookGrade {
        let candidates = [canonical] + alternates
        let normalizedUser = normalize(userText)
        if normalizedUser.isEmpty { return .wrong }

        for c in candidates {
            if normalize(c) == normalizedUser { return .correct }
        }
        for c in candidates {
            if stripAccents(normalize(c)) == stripAccents(normalizedUser) {
                return .close
            }
        }
        for c in candidates {
            if levenshtein(normalizedUser, normalize(c)) <= 1 {
                return .close
            }
        }
        return .wrong
    }

    /// Lowercase, collapse whitespace, strip leading/trailing punctuation.
    public static func normalize(_ s: String) -> String {
        let lowered = s.lowercased(with: Locale(identifier: "es"))
        let collapsed = lowered.replacingOccurrences(of: "\\s+", with: " ", options: .regularExpression)
        let trimmed = collapsed.trimmingCharacters(in: .whitespacesAndNewlines)
        let punct = CharacterSet(charactersIn: ".,;:!?¿¡\"'()[]{}—–-")
        return trimmed.trimmingCharacters(in: punct)
    }

    /// Remove combining diacritics (á→a, ñ→n, ü→u).
    public static func stripAccents(_ s: String) -> String {
        s.folding(options: .diacriticInsensitive, locale: Locale(identifier: "en"))
    }

    /// Standard Levenshtein edit distance.
    public static func levenshtein(_ a: String, _ b: String) -> Int {
        if a == b { return 0 }
        if a.isEmpty { return b.count }
        if b.isEmpty { return a.count }
        let aa = Array(a)
        let bb = Array(b)
        var prev = Array(0...bb.count)
        var curr = Array(repeating: 0, count: bb.count + 1)
        for i in 1...aa.count {
            curr[0] = i
            for j in 1...bb.count {
                let cost = aa[i - 1] == bb[j - 1] ? 0 : 1
                curr[j] = min(
                    prev[j] + 1,
                    curr[j - 1] + 1,
                    prev[j - 1] + cost
                )
            }
            swap(&prev, &curr)
        }
        return prev[bb.count]
    }
}
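The grading cascade in `AnswerGrader` can be summarized as: exact normalized match wins, then an accent-stripped match or an edit distance of 1 earns partial credit, and anything else is wrong. A compact Python sketch of the same cascade (normalization is simplified here to lowercasing and trimming, without the punctuation stripping the Swift version does):

```python
import unicodedata

def strip_accents(s):
    """Drop combining diacritics: á -> a, ñ -> n, ü -> u."""
    return "".join(c for c in unicodedata.normalize("NFD", s)
                   if unicodedata.category(c) != "Mn")

def levenshtein(a, b):
    """Two-row dynamic-programming edit distance, as in the Swift version."""
    if a == b:
        return 0
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i] + [0] * len(b)
        for j, cb in enumerate(b, 1):
            curr[j] = min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb))
        prev = curr
    return prev[len(b)]

def grade(user, canonical, alternates=()):
    """Cascade: exact -> 'correct'; accent-stripped match or distance <= 1 ->
    'close'; otherwise 'wrong'."""
    cands = [canonical, *alternates]
    u = user.strip().lower()
    if not u:
        return "wrong"
    if any(c.strip().lower() == u for c in cands):
        return "correct"
    if any(strip_accents(c.strip().lower()) == strip_accents(u) for c in cands):
        return "close"
    if any(levenshtein(u, c.strip().lower()) <= 1 for c in cands):
        return "close"
    return "wrong"

print(grade("Tengo", "tengo"))  # correct
print(grade("esta", "está"))    # close
print(grade("tenko", "tengo"))  # close
print(grade("hablo", "tengo"))  # wrong
```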
@@ -0,0 +1,81 @@
import Foundation

/// Pure practice-pool filtering (Issue #26).
///
/// Takes plain value snapshots of the verb + irregular-span data and computes
/// the set of verb IDs eligible for practice under the user's selected filters.
/// Deliberately decoupled from SwiftData so the same logic is directly testable
/// without a ModelContainer.
public enum PracticeFilter {

    /// Minimal verb snapshot for filtering.
    public struct VerbSlot: Sendable, Hashable {
        public let id: Int
        public let level: String
        public init(id: Int, level: String) {
            self.id = id
            self.level = level
        }
    }

    /// Minimal irregular-span snapshot for filtering.
    public struct IrregularSlot: Sendable, Hashable {
        public let verbId: Int
        public let category: IrregularSpan.SpanCategory
        public init(verbId: Int, category: IrregularSpan.SpanCategory) {
            self.verbId = verbId
            self.category = category
        }
    }

    /// Union of `VerbLevelGroup.dataLevels(for:)` across every user-facing level.
    /// An empty input produces an empty result; callers decide the empty semantics.
    public static func dataLevels(forSelectedLevels levels: Set<String>) -> Set<String> {
        levels.reduce(into: Set<String>()) { acc, level in
            acc.formUnion(VerbLevelGroup.dataLevels(for: level))
        }
    }

    /// Verb IDs whose `level` falls inside any of the selected level groups.
    public static func verbIDs(
        matchingLevels selectedLevels: Set<String>,
        in verbs: [VerbSlot]
    ) -> Set<Int> {
        guard !selectedLevels.isEmpty else { return [] }
        let expanded = dataLevels(forSelectedLevels: selectedLevels)
        return Set(verbs.filter { expanded.contains($0.level) }.map(\.id))
    }

    /// Verb IDs that have at least one irregular span in the requested categories.
    /// Returns an empty set when `categories` is empty — caller decides whether
    /// that means "no constraint" or "no matches".
    public static func verbIDs(
        matchingIrregularCategories categories: Set<IrregularSpan.SpanCategory>,
        in spans: [IrregularSlot]
    ) -> Set<Int> {
        guard !categories.isEmpty else { return [] }
        var ids = Set<Int>()
        for slot in spans where categories.contains(slot.category) {
            ids.insert(slot.verbId)
        }
        return ids
    }

    /// Practice pool: verbs at the selected levels, intersected with irregular
    /// categories when that filter is active.
    ///
    /// Semantics (Issue #26):
    /// - `selectedLevels` empty → empty pool (literal).
    /// - `irregularCategories` empty → no irregular constraint (all verbs at level).
    public static func allowedVerbIDs(
        verbs: [VerbSlot],
        spans: [IrregularSlot],
        selectedLevels: Set<String>,
        irregularCategories: Set<IrregularSpan.SpanCategory>
    ) -> Set<Int> {
        let levelIDs = verbIDs(matchingLevels: selectedLevels, in: verbs)
        guard !irregularCategories.isEmpty else { return levelIDs }
        let irregularIDs = verbIDs(matchingIrregularCategories: irregularCategories, in: spans)
        return levelIDs.intersection(irregularIDs)
    }
}
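The set semantics documented on `allowedVerbIDs` (empty level selection means an empty pool, empty irregular selection means no constraint) reduce to plain set algebra. A Python sketch with tuples standing in for `VerbSlot`/`IrregularSlot`; the `VerbLevelGroup.dataLevels` expansion is omitted, so levels are matched literally:

```python
def allowed_verb_ids(verbs, spans, selected_levels, irregular_categories):
    """Practice-pool set semantics from PracticeFilter (Issue #26):
    - empty selected_levels  -> empty pool (literal)
    - empty irregular_categories -> no irregular constraint
    - otherwise intersect level matches with irregular matches."""
    if not selected_levels:
        return set()
    level_ids = {vid for vid, level in verbs if level in selected_levels}
    if not irregular_categories:
        return level_ids
    irregular_ids = {vid for vid, cat in spans if cat in irregular_categories}
    return level_ids & irregular_ids

verbs = [(1, "A1"), (2, "A1"), (3, "B1")]
spans = [(1, "stem-change"), (3, "stem-change")]
print(allowed_verb_ids(verbs, spans, {"A1"}, set()))            # {1, 2}
print(allowed_verb_ids(verbs, spans, {"A1"}, {"stem-change"}))  # {1}
print(allowed_verb_ids(verbs, spans, set(), {"stem-change"}))   # set()
```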
@@ -0,0 +1,23 @@
import Foundation

/// A single entry from the curated "100 most common reflexive verbs" list
/// (Gitea issue #28). Sourced from spanishwithdaniel.com.
///
/// `baseInfinitive` is the stem without the reflexive "-se" suffix, used to
/// match this entry to the app's Verb records (which store bare infinitives).
/// `usageHint` captures trailing prepositions or set-phrase completions — e.g.,
/// "a" for `acercarse a`, "de acuerdo" for `ponerse de acuerdo`. Nil when the
/// reflexive form has no commonly paired preposition.
public struct ReflexiveVerb: Codable, Hashable, Sendable {
    public let infinitive: String
    public let baseInfinitive: String
    public let english: String
    public let usageHint: String?

    public init(infinitive: String, baseInfinitive: String, english: String, usageHint: String? = nil) {
        self.infinitive = infinitive
        self.baseInfinitive = baseInfinitive
        self.english = english
        self.usageHint = usageHint
    }
}
@@ -0,0 +1,86 @@
import Foundation
import SwiftData

/// One chapter of the textbook. Ordered content blocks are stored as JSON in `bodyJSON`
/// (encoded [TextbookBlock]) since SwiftData @Model doesn't support heterogeneous arrays.
@Model
public final class TextbookChapter {
    @Attribute(.unique) public var id: String = ""
    public var number: Int = 0
    public var title: String = ""
    public var part: Int = 0 // 0 = no part assignment
    public var courseName: String = ""
    public var bodyJSON: Data = Data()
    public var exerciseCount: Int = 0
    public var vocabTableCount: Int = 0

    public init(
        id: String,
        number: Int,
        title: String,
        part: Int,
        courseName: String,
        bodyJSON: Data,
        exerciseCount: Int,
        vocabTableCount: Int
    ) {
        self.id = id
        self.number = number
        self.title = title
        self.part = part
        self.courseName = courseName
        self.bodyJSON = bodyJSON
        self.exerciseCount = exerciseCount
        self.vocabTableCount = vocabTableCount
    }

    public func blocks() -> [TextbookBlock] {
        (try? JSONDecoder().decode([TextbookBlock].self, from: bodyJSON)) ?? []
    }
}

/// One content block within a chapter. Polymorphic via `kind`.
public struct TextbookBlock: Codable, Identifiable, Sendable {
    public enum Kind: String, Codable, Sendable {
        case heading
        case paragraph
        case keyVocabHeader = "key_vocab_header"
        case vocabTable = "vocab_table"
        case exercise
    }

    public var id: String { "\(kind.rawValue):\(index)" }
    public var index: Int
    public var kind: Kind

    // heading
    public var level: Int?
    // heading / paragraph
    public var text: String?

    // vocab_table
    public var sourceImage: String?
    public var ocrLines: [String]?
    public var ocrConfidence: Double?
    public var cards: [TextbookVocabPair]?

    // exercise
    public var exerciseId: String?
    public var instruction: String?
    public var extra: [String]?
    public var prompts: [String]?
    public var answerItems: [TextbookAnswerItem]?
    public var freeform: Bool?
}

public struct TextbookVocabPair: Codable, Sendable {
    public var front: String
    public var back: String
}

public struct TextbookAnswerItem: Codable, Sendable {
    public var label: String? // A/B/C subpart label or nil
    public var number: Int
    public var answer: String
    public var alternates: [String]
}
@@ -0,0 +1,83 @@
import Foundation
import SwiftData

/// Per-prompt grading state recorded after the user submits an exercise.
public enum TextbookGrade: Int, Codable, Sendable {
    case wrong = 0
    case close = 1
    case correct = 2
}

/// User's attempt for one exercise. Stored in the cloud container so progress
/// syncs across devices.
@Model
public final class TextbookExerciseAttempt {
    /// Deterministic id: "<courseName>|<exerciseId>". CloudKit-synced models can't
    /// use @Attribute(.unique); code that writes attempts must fetch-or-create.
    public var id: String = ""
    public var courseName: String = ""
    public var chapterNumber: Int = 0
    public var exerciseId: String = ""

    /// JSON-encoded per-prompt state array.
    /// Each entry: { "number": Int, "userText": String, "grade": Int }
    public var stateJSON: Data = Data()

    public var lastAttemptAt: Date = Date()
    public var correctCount: Int = 0
    public var closeCount: Int = 0
    public var wrongCount: Int = 0
    public var totalCount: Int = 0

    public init(
        id: String,
        courseName: String,
        chapterNumber: Int,
        exerciseId: String,
        stateJSON: Data = Data(),
        lastAttemptAt: Date = Date(),
        correctCount: Int = 0,
        closeCount: Int = 0,
        wrongCount: Int = 0,
        totalCount: Int = 0
    ) {
        self.id = id
        self.courseName = courseName
        self.chapterNumber = chapterNumber
        self.exerciseId = exerciseId
        self.stateJSON = stateJSON
        self.lastAttemptAt = lastAttemptAt
        self.correctCount = correctCount
        self.closeCount = closeCount
        self.wrongCount = wrongCount
        self.totalCount = totalCount
    }

    public func promptStates() -> [TextbookPromptState] {
        (try? JSONDecoder().decode([TextbookPromptState].self, from: stateJSON)) ?? []
    }

    public func setPromptStates(_ states: [TextbookPromptState]) {
        stateJSON = (try? JSONEncoder().encode(states)) ?? Data()
        correctCount = states.filter { $0.grade == .correct }.count
        closeCount = states.filter { $0.grade == .close }.count
        wrongCount = states.filter { $0.grade == .wrong }.count
        totalCount = states.count
    }

    public static func attemptId(courseName: String, exerciseId: String) -> String {
        "\(courseName)|\(exerciseId)"
    }
}

public struct TextbookPromptState: Codable, Sendable {
    public var number: Int
    public var userText: String
    public var grade: TextbookGrade

    public init(number: Int, userText: String, grade: TextbookGrade) {
        self.number = number
        self.userText = userText
        self.grade = grade
    }
}
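`setPromptStates` keeps the serialized per-prompt array and the denormalized counters in lockstep: every write re-derives `correctCount`/`closeCount`/`wrongCount`/`totalCount` from the same states it encodes. A Python sketch of that invariant (dicts stand in for `TextbookPromptState`; grades use the same raw values 0 = wrong, 1 = close, 2 = correct):

```python
import json

def set_prompt_states(states):
    """Serialize the per-prompt states and derive the grade counters from the
    same array, so the counters can never drift from the stored JSON."""
    grades = [s["grade"] for s in states]
    return {
        "stateJSON": json.dumps(states),
        "correctCount": grades.count(2),
        "closeCount": grades.count(1),
        "wrongCount": grades.count(0),
        "totalCount": len(grades),
    }

states = [
    {"number": 1, "userText": "tengo", "grade": 2},
    {"number": 2, "userText": "esta",  "grade": 1},
    {"number": 3, "userText": "vamos", "grade": 0},
]
summary = set_prompt_states(states)
print(summary["correctCount"], summary["closeCount"], summary["wrongCount"])  # 1 1 1
```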
17
Conjuga/SharedModels/Sources/SharedModels/VerbExample.swift
Normal file
@@ -0,0 +1,17 @@
import Foundation

/// A single Spanish / English example sentence pair for a verb at a specific tense.
/// Used by the Verb detail view (Issue #27). Generated at runtime via Foundation
/// Models and cached to disk; shape is intentionally simple Codable for easy
/// JSON persistence and cross-module sharing.
public struct VerbExample: Codable, Hashable, Sendable {
    public let tenseId: String
    public let spanish: String
    public let english: String

    public init(tenseId: String, spanish: String, english: String) {
        self.tenseId = tenseId
        self.spanish = spanish
        self.english = english
    }
}
||||||
@@ -0,0 +1,10 @@
import Foundation

public extension VerbLevel {
    /// The highest-ranked `VerbLevel` in `set` per `allCases` ordering.
    /// Used when a single representative level is required (word-of-day
    /// widget, AI chat/story scenario generation).
    static func highest(in set: Set<VerbLevel>) -> VerbLevel? {
        allCases.last { set.contains($0) }
    }
}
@@ -0,0 +1,80 @@
import Testing
@testable import SharedModels

@Suite("AnswerGrader")
struct AnswerGraderTests {

    @Test("exact match is correct")
    func exact() {
        #expect(AnswerGrader.grade(userText: "tengo", canonical: "tengo") == .correct)
        #expect(AnswerGrader.grade(userText: "Tengo", canonical: "tengo") == .correct)
        #expect(AnswerGrader.grade(userText: "  tengo  ", canonical: "tengo") == .correct)
    }

    @Test("missing accent is close")
    func missingAccent() {
        #expect(AnswerGrader.grade(userText: "esta", canonical: "está") == .close)
        #expect(AnswerGrader.grade(userText: "nino", canonical: "niño") == .close)
        #expect(AnswerGrader.grade(userText: "asi", canonical: "así") == .close)
    }

    @Test("single-char typo is close")
    func singleCharTypo() {
        // deletion
        #expect(AnswerGrader.grade(userText: "tngo", canonical: "tengo") == .close)
        // insertion
        #expect(AnswerGrader.grade(userText: "tengoo", canonical: "tengo") == .close)
        // substitution
        #expect(AnswerGrader.grade(userText: "tengu", canonical: "tengo") == .close)
    }

    @Test("two-char typo is wrong")
    func twoCharTypo() {
        #expect(AnswerGrader.grade(userText: "tngu", canonical: "tengo") == .wrong)
    }

    @Test("empty is wrong")
    func empty() {
        #expect(AnswerGrader.grade(userText: "", canonical: "tengo") == .wrong)
        #expect(AnswerGrader.grade(userText: "   ", canonical: "tengo") == .wrong)
    }

    @Test("alternates accepted")
    func alternates() {
        #expect(AnswerGrader.grade(userText: "flaca", canonical: "delgada", alternates: ["flaca"]) == .correct)
        #expect(AnswerGrader.grade(userText: "flacca", canonical: "delgada", alternates: ["flaca"]) == .close)
    }

    @Test("punctuation stripped")
    func punctuation() {
        #expect(AnswerGrader.grade(userText: "el libro.", canonical: "el libro") == .correct)
        #expect(AnswerGrader.grade(userText: "¿dónde?", canonical: "dónde") == .correct)
    }

    @Test("very different text is wrong")
    func wrong() {
        #expect(AnswerGrader.grade(userText: "hola", canonical: "tengo") == .wrong)
        #expect(AnswerGrader.grade(userText: "casa", canonical: "perro") == .wrong)
    }

    @Test("normalize produces expected output")
    func normalize() {
        #expect(AnswerGrader.normalize("  Hola  ") == "hola")
        #expect(AnswerGrader.normalize("ABC!") == "abc")
    }

    @Test("stripAccents handles common Spanish diacritics")
    func stripAccents() {
        #expect(AnswerGrader.stripAccents("niño") == "nino")
        #expect(AnswerGrader.stripAccents("está") == "esta")
        #expect(AnswerGrader.stripAccents("güero") == "guero")
    }

    @Test("levenshtein computes edit distance")
    func levenshtein() {
        #expect(AnswerGrader.levenshtein("kitten", "sitting") == 3)
        #expect(AnswerGrader.levenshtein("flaw", "lawn") == 2)
        #expect(AnswerGrader.levenshtein("abc", "abc") == 0)
        #expect(AnswerGrader.levenshtein("", "abc") == 3)
    }
}
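The suite above pins down the grader's observable contract. For readers skimming the diff, here is a minimal, hypothetical sketch of grading logic consistent with these tests; the shipped `AnswerGrader` in SharedModels may be implemented differently:

```swift
import Foundation

// Hypothetical stand-ins for AnswerGrader's helpers; names and thresholds are
// taken from the tests above, not from the shipped implementation.
enum Grade { case correct, close, wrong }

func normalize(_ s: String) -> String {
    s.lowercased()
        .filter { !$0.isPunctuation }
        .trimmingCharacters(in: .whitespaces)
}

func stripAccents(_ s: String) -> String {
    s.folding(options: .diacriticInsensitive, locale: Locale(identifier: "es"))
}

func levenshtein(_ lhs: String, _ rhs: String) -> Int {
    let a = Array(lhs), b = Array(rhs)
    if a.isEmpty { return b.count }
    if b.isEmpty { return a.count }
    var row = Array(0...b.count)          // distances for the previous text row
    for i in 1...a.count {
        var diagonal = row[0]
        row[0] = i
        for j in 1...b.count {
            let above = row[j]
            row[j] = min(above + 1,        // deletion
                         row[j - 1] + 1,   // insertion
                         diagonal + (a[i - 1] == b[j - 1] ? 0 : 1)) // substitution
            diagonal = above
        }
    }
    return row[b.count]
}

func grade(userText: String, canonical: String, alternates: [String] = []) -> Grade {
    let user = normalize(userText)
    guard !user.isEmpty else { return .wrong }
    for target in [canonical] + alternates {
        let t = normalize(target)
        if user == t { return .correct }                            // exact, case/punct-insensitive
        if stripAccents(user) == stripAccents(t) { return .close }  // accent-only miss
        if levenshtein(user, t) <= 1 { return .close }              // single-char typo
    }
    return .wrong
}
```

The ordering matters: exact match first, then accent-insensitive match, then an edit-distance budget of 1. Running alternates through the same pipeline is what makes "flacca" against alternate "flaca" come out `.close`.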
@@ -0,0 +1,212 @@
import Testing
@testable import SharedModels

// Practice pool = selected levels ∩ selected tenses ∩ selected irregular types
// (Issue #26). This suite covers the level + irregular intersection; tense
// filtering is a separate concern handled at the review-card layer.

@Suite("PracticeFilter — level selection")
struct PracticeFilterLevelTests {

    @Test("single level expands to that group's data-levels")
    func singleLevelExpansion() {
        let expected = VerbLevelGroup.dataLevels(for: "elementary")
        #expect(PracticeFilter.dataLevels(forSelectedLevels: ["elementary"]) == expected)
    }

    @Test("multi-level union merges every group's data-levels")
    func multiLevelUnion() {
        let union = PracticeFilter.dataLevels(forSelectedLevels: ["elementary", "intermediate"])
        #expect(union.isSuperset(of: VerbLevelGroup.dataLevels(for: "elementary")))
        #expect(union.isSuperset(of: VerbLevelGroup.dataLevels(for: "intermediate")))
    }

    @Test("empty selected-levels produces empty data-levels")
    func emptyLevelsProducesEmpty() {
        #expect(PracticeFilter.dataLevels(forSelectedLevels: []).isEmpty)
    }

    @Test("unknown level passes through to its raw value")
    func unknownLevelPassthrough() {
        // Preserves VerbLevelGroup's fallback contract.
        #expect(PracticeFilter.dataLevels(forSelectedLevels: ["custom"]) == ["custom"])
    }
}

@Suite("PracticeFilter — verb IDs by level")
struct PracticeFilterVerbLevelTests {

    // Fixtures: four verbs spanning basic + elementary subgroups + intermediate.
    private let verbs: [PracticeFilter.VerbSlot] = [
        .init(id: 1, level: "basic"),
        .init(id: 2, level: "elementary"),
        .init(id: 3, level: "elementary_2"),
        .init(id: 4, level: "intermediate_1"),
    ]

    @Test("elementary selection matches elementary base and subgroup levels")
    func elementarySubgroupMatch() {
        let ids = PracticeFilter.verbIDs(matchingLevels: ["elementary"], in: verbs)
        #expect(ids == [2, 3])
    }

    @Test("multi-level selection unions matching verb IDs")
    func multiLevelUnionIds() {
        let ids = PracticeFilter.verbIDs(matchingLevels: ["basic", "intermediate"], in: verbs)
        #expect(ids == [1, 4])
    }

    @Test("empty level selection returns no verbs (literal semantics)")
    func emptySelectionReturnsEmpty() {
        let ids = PracticeFilter.verbIDs(matchingLevels: [], in: verbs)
        #expect(ids.isEmpty)
    }
}

@Suite("PracticeFilter — verb IDs by irregular category")
struct PracticeFilterIrregularTests {

    // Verb 10: regular (no spans).
    // Verb 20: one spelling-change span.
    // Verb 30: stem-change + unique-irregular spans.
    private let spans: [PracticeFilter.IrregularSlot] = [
        .init(verbId: 20, category: .spelling),
        .init(verbId: 30, category: .stemChange),
        .init(verbId: 30, category: .uniqueIrregular),
    ]

    @Test("empty category set returns empty — caller decides the semantics")
    func emptyCategoriesReturnsEmpty() {
        let ids = PracticeFilter.verbIDs(matchingIrregularCategories: [], in: spans)
        #expect(ids.isEmpty)
    }

    @Test("single category picks matching verbs only")
    func singleCategoryMatch() {
        let ids = PracticeFilter.verbIDs(matchingIrregularCategories: [.spelling], in: spans)
        #expect(ids == [20])
    }

    @Test("multiple categories union their matches")
    func multipleCategoriesUnion() {
        let ids = PracticeFilter.verbIDs(
            matchingIrregularCategories: [.spelling, .stemChange],
            in: spans
        )
        #expect(ids == [20, 30])
    }

    @Test("a verb with multiple matching spans is returned once")
    func verbWithMultipleSpansDeduped() {
        let ids = PracticeFilter.verbIDs(
            matchingIrregularCategories: [.stemChange, .uniqueIrregular],
            in: spans
        )
        #expect(ids == [30])
    }
}

@Suite("PracticeFilter — allowedVerbIDs (levels ∩ irregulars)")
struct PracticeFilterIntersectionTests {

    // Realistic fixture:
    // #1 — basic, regular
    // #2 — basic, spelling-change
    // #3 — elementary, spelling-change
    // #4 — elementary, stem-change
    // #5 — intermediate, unique-irregular
    private let verbs: [PracticeFilter.VerbSlot] = [
        .init(id: 1, level: "basic"),
        .init(id: 2, level: "basic"),
        .init(id: 3, level: "elementary"),
        .init(id: 4, level: "elementary_1"),
        .init(id: 5, level: "intermediate"),
    ]
    private let spans: [PracticeFilter.IrregularSlot] = [
        .init(verbId: 2, category: .spelling),
        .init(verbId: 3, category: .spelling),
        .init(verbId: 4, category: .stemChange),
        .init(verbId: 5, category: .uniqueIrregular),
    ]

    @Test("no irregular filter keeps every verb at the selected level")
    func noIrregularConstraint() {
        let ids = PracticeFilter.allowedVerbIDs(
            verbs: verbs,
            spans: spans,
            selectedLevels: ["basic"],
            irregularCategories: []
        )
        #expect(ids == [1, 2])
    }

    @Test("Issue #26 worked example: beginner + spelling-change → only #2")
    func issueWorkedExample() {
        let ids = PracticeFilter.allowedVerbIDs(
            verbs: verbs,
            spans: spans,
            selectedLevels: ["basic"],
            irregularCategories: [.spelling]
        )
        #expect(ids == [2])
    }

    @Test("filter is an intersection, not a union: level-mismatched spans are excluded")
    func intersectionExcludesOtherLevels() {
        // Elementary has a spelling-change verb (#3). Selecting basic + spelling
        // must NOT leak #3 through the irregular filter alone.
        let ids = PracticeFilter.allowedVerbIDs(
            verbs: verbs,
            spans: spans,
            selectedLevels: ["basic"],
            irregularCategories: [.spelling]
        )
        #expect(!ids.contains(3))
    }

    @Test("empty level selection produces empty pool regardless of irregular filter")
    func emptyLevelsLocksOutPractice() {
        let ids = PracticeFilter.allowedVerbIDs(
            verbs: verbs,
            spans: spans,
            selectedLevels: [],
            irregularCategories: [.spelling]
        )
        #expect(ids.isEmpty)
    }

    @Test("multi-level + multi-category pulls every matching pair")
    func multiLevelMultiCategory() {
        let ids = PracticeFilter.allowedVerbIDs(
            verbs: verbs,
            spans: spans,
            selectedLevels: ["elementary", "intermediate"],
            irregularCategories: [.stemChange, .uniqueIrregular]
        )
        #expect(ids == [4, 5])
    }
}

@Suite("VerbLevel.highest")
struct VerbLevelHighestTests {

    @Test("returns the highest-ranked level in the set")
    func highestOfMany() {
        #expect(VerbLevel.highest(in: [.basic, .intermediate, .elementary]) == .intermediate)
    }

    @Test("returns the sole element when set is a singleton")
    func singleton() {
        #expect(VerbLevel.highest(in: [.advanced]) == .advanced)
    }

    @Test("returns nil for the empty set")
    func empty() {
        #expect(VerbLevel.highest(in: []) == nil)
    }

    @Test("ranks expert above advanced above intermediate above elementary above basic")
    func fullRanking() {
        #expect(VerbLevel.highest(in: [.basic, .elementary, .intermediate, .advanced, .expert]) == .expert)
    }
}
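Taken together, the suites above specify `allowedVerbIDs` as a levels × irregulars intersection with asymmetric empty-set semantics. A compact, hypothetical reimplementation of that contract (the subgroup "level prefix" rule is an assumption inferred from the fixtures; the shipped PracticeFilter may differ):

```swift
// Hypothetical reimplementation of the allowedVerbIDs contract, shaped by the
// test suites above — a sketch, not the SharedModels source.
struct VerbSlot { let id: Int; let level: String }
enum IrregularCategory: Hashable { case spelling, stemChange, uniqueIrregular }
struct IrregularSlot { let verbId: Int; let category: IrregularCategory }

func allowedVerbIDs(
    verbs: [VerbSlot],
    spans: [IrregularSlot],
    selectedLevels: Set<String>,
    irregularCategories: Set<IrregularCategory>
) -> [Int] {
    // Level match: "elementary" also covers subgroup levels like "elementary_2".
    let byLevel = verbs
        .filter { verb in
            selectedLevels.contains { verb.level == $0 || verb.level.hasPrefix($0 + "_") }
        }
        .map(\.id)

    // Empty irregular set means "no irregularity constraint", while an empty
    // level set has already short-circuited byLevel to an empty pool.
    guard !irregularCategories.isEmpty else { return byLevel }

    // Intersect with irregular verbs; Set membership dedupes multi-span verbs.
    let irregularIDs = Set(spans.filter { irregularCategories.contains($0.category) }.map(\.verbId))
    return byLevel.filter(irregularIDs.contains)
}
```

With the five-verb fixture from the intersection suite, `selectedLevels: ["basic"]` plus `irregularCategories: [.spelling]` reproduces the Issue #26 worked example and returns only verb #2.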
@@ -47,6 +47,8 @@ targets:
        buildPhase: resources
      - path: Conjuga/course_data.json
        buildPhase: resources
      - path: Conjuga/reflexive_verbs.json
        buildPhase: resources
    info:
      path: Conjuga/Info.plist
      properties:
57
README.md

@@ -1,32 +1,57 @@
# Conjuga

A Spanish-learning iOS app that combines verb conjugation practice, a full textbook reader, an AI conversation partner, offline dictionary lookups, grammar exercises, and more. Runs entirely on-device where possible (Foundation Models, Speech framework, Vision OCR).

## Features

### Verb practice
- **Six conjugation modes** — flashcards, typing, multiple choice, handwriting (Apple Pencil / finger), sentence builder, and full-table (conjugate all persons at once)
- **Focus modes** — weak verbs (SM-2 spaced repetition), irregularity drills (spelling / stem / unique), common tenses
- **1,750 verbs** across 5 levels (Basic → Expert) with 209K pre-conjugated forms and 23K irregular-span annotations
- **20 tenses** — every indicative, subjunctive, conditional, and imperative tense, each with character-level irregular highlighting
- **Irregularity filter** — search the verb list by Any Irregular / Spelling Change / Stem Change / Unique Irregular, combinable with the level filter
- **Text-to-speech** on any form

### Content & study
- **Textbook reader** — 30 chapters of *Complete Spanish Step-by-Step* with 251 interactive exercises (keyboard + Apple Pencil) and 931 OCR'd vocab tables rendered as Spanish→English grids (~3,100 paired cards extracted via bounding-box OCR)
- **Course decks** — weekly vocab decks with example sentences, week tests, and cumulative checkpoint exams
- **Stem-change toggle** on Week 4 flashcard decks (E-IE, E-I, O-UE, U-UE) showing inline present-tense conjugations
- **Grammar guide** — 20 tense guides with usage rules and examples, plus 20+ grammar topic notes (ser/estar, por/para, preterite/imperfect, etc.), each with 100+ practice exercises
- **Grammar exercises** — interactive quizzes for 5 core topics (ser/estar, por/para, preterite/imperfect, subjunctive, personal *a*)

### AI & speech
- **Conversational practice** — on-device AI chat partner (Apple Foundation Models) with 10 scenario types; chat bubbles have tappable words that open dictionary or on-demand AI lookups
- **AI short stories** — generated stories with tappable words and comprehension quizzes
- **Listening practice** — listen-and-type plus pronunciation scoring via the Speech framework
- **Pronunciation check** — word-by-word match scoring

### Vocabulary & tools
- **Offline dictionary** — reverse index of 175K verb forms + 200 common words, cached to disk for instant lookups
- **Vocab SRS review** — spaced repetition over course vocabulary with Again / Hard / Good / Easy ratings
- **Cloze practice** — fill-in-the-blank sentences with distractor generation
- **Lyrics practice** — search, translate, and read Spanish song lyrics

### Tracking & sync
- **Progress** — streaks, daily goals, accuracy stats, achievement badges, and per-day study-time tracking
- **CloudKit sync** — review progress, test results, saved stories, conversations, and textbook attempts sync across devices
- **Widgets** — combined dashboard + word-of-the-day, refreshed daily and on backgrounding

## Architecture

- **SwiftUI** + **SwiftData** with a dual-store configuration:
  - **Local store** (App Group `group.com.conjuga.app`) — reference data: verbs, forms, irregular spans, tense guides, course decks, vocab cards, textbook chapters. Seeded from bundled JSON on first launch. Self-healing re-seeds trigger on version bumps *or* if rows are missing on disk.
  - **Cloud store** (CloudKit `iCloud.com.conjuga.app`, private database) — user data: review cards, course reviews, user progress, test results, daily logs, saved songs, stories, conversations, textbook exercise attempts.
- **SharedModels** Swift package shared between the app and the widget extension. The widget schema must include every local-store entity, or SwiftData destructively migrates the shared store.
- **Foundation Models** for on-device AI generation (`@Generable` structs for typed output).
- **Vision** framework for OCR of textbook pages and vocabulary images.
- **Speech** framework for recognition and pronunciation scoring.
- **Textbook extraction pipeline** (`Conjuga/Scripts/textbook/`) — XHTML and answer-key parsers, macOS Vision image OCR + PDF page OCR, a bounding-box vocab pair extractor, an NSSpellChecker-based validator, and a language-aware auto-fixer.

## Requirements

- iOS 18+ (iOS 26 for Foundation Models features)
- Xcode 16+

## Building

Open `Conjuga/Conjuga.xcodeproj` in Xcode and run on a simulator or device. Reference data seeds automatically on first launch. To regenerate textbook content, run `Conjuga/Scripts/textbook/run_pipeline.sh` locally — the generated `textbook_data.json` / `textbook_vocab.json` are committed so fresh clones build without the pipeline.
137
app_features.md

@@ -1,6 +1,85 @@
# Spanish Conjugation Apps — Feature Comparison

Side-by-side feature analysis of **Conjuga** (this project), **ConjuGato**, and **Conjuu ES**, based on App Store screenshots, extracted app data, and this repository's source.

---

## Conjuga (this project)

**Platform:** iOS 18+ (iOS 26 required for Foundation Models features)
**Monetization:** —
**Tech stack:** SwiftUI + SwiftData (dual local / CloudKit stores), SharedModels Swift package, Foundation Models, Vision, Speech, WidgetKit

### Practice Modes

- **Six core conjugation modes** — flashcards, typing, multiple choice, handwriting (Apple Pencil + finger), sentence builder, full table (all persons at once)
- **Focus modes** — weak verbs (SM-2 SRS), irregularity drills (spelling / stem / unique, selectable), common tenses
- **Quick answer review** with per-form irregular-span highlighting
- **Vocab SRS review** — spaced repetition over course vocabulary with Again / Hard / Good / Easy ratings
- **Cloze practice** — fill-in-the-blank with distractor generation from the vocab pool
- **Listening practice** — listen-and-type plus pronunciation scoring via the Speech framework, with word-by-word matching
- **Lyrics practice** — search Spanish songs, translate line by line
- **Conversational practice** — on-device AI chat partner (Apple Foundation Models) with 10 scenarios; tappable words open dictionary or on-demand AI lookups
- **AI short stories** — generated stories with tappable words + comprehension quiz

### Verb Reference

- **1,750 verbs**, 5 levels (Basic / Elementary / Intermediate / Advanced / Expert), 209K pre-conjugated forms
- **20 tenses** with character-level irregular-span highlighting (spelling / stem / unique)
- **Irregularity filter** — All / Any Irregular / Spelling Change / Stem Change / Unique Irregular — combinable with level
- **Per-verb detail** with per-form English translations and a tense-grouped table
- **Text-to-speech** on any conjugated form

### Grammar & Content

- **20 tense guides** with usage rules and examples
- **20+ grammar topic notes** (ser/estar, por/para, preterite/imperfect, subjunctive, personal *a*, suffixes, irregular yo forms, stem-changers, etc.), each with 100+ practice exercises
- **Grammar exercises** — interactive quizzes for 5 core topics
- **Course decks** — weekly vocabulary with example sentences, week tests, cumulative checkpoint exams
- **Stem-change toggle** on Week 4 decks (E-IE, E-I, O-UE, U-UE) with inline present-tense conjugations
- **Textbook reader** — 30 chapters of *Complete Spanish Step-by-Step* with 251 interactive exercises (keyboard + Apple Pencil), 931 OCR'd vocab tables rendered as paired Spanish→English grids (~3,100 cards)
- **Textbook extraction pipeline** — XHTML + answer-key parsers, macOS Vision image OCR, PDF page OCR, bounding-box vocab pair extractor, NSSpellChecker validator, language-aware auto-fixer

### Offline Dictionary

- **Reverse index of 175K verb forms + 200 common words**, cached to disk
- **Tappable-word lookup** in chat bubbles and stories; falls back to Foundation Models `@Generable ChatWordInfo` when a word isn't in the dictionary

### Progress & Sync

- **SM-2 spaced repetition** for verb review
- **Streaks, daily goals, accuracy stats, achievement badges**
- **Study-time tracking** per day (foreground time)
- **CloudKit private-database sync** — review cards, user progress, test results, daily logs, saved songs, stories, conversations, textbook exercise attempts
- **Background app refresh** for widget data

### Widgets

- **Combined dashboard** (word of day + stats)
- **Word-of-day** standalone
- Both refresh daily and on app backgrounding

### Settings & Filters

- **Selectable verb level** and **enabled tenses**
- **Include vosotros** toggle
- **Auto-fill verb stem** toggle for Full Table practice
- **Feature reference** page in Settings documenting every feature and which settings affect it

### Data (in repo)

| Asset | Count |
|-------|-------|
| Verbs | 1,750 |
| Verb forms (pre-conjugated) | 209,014 |
| Irregular span annotations | 23,795 |
| Tenses | 20 |
| Tense guides | 20 |
| Grammar notes | 20+ (each with 100+ exercises) |
| Textbook chapters | 30 |
| Textbook exercises | 251 |
| Textbook vocab pairs | ~3,118 |
| Offline dictionary forms | 175,425 |

---
@@ -184,30 +263,44 @@ Side-by-side feature analysis of **ConjuGato** and **Conjuu ES**, based on App S
## Feature Comparison

| Feature | Conjuga | ConjuGato | Conjuu ES |
|---------|---------|-----------|-----------|
| **Verb count** | 1,750 | 1,750 | 621 |
| **Tenses** | 20 | 27 | 20 |
| **Practice style** | Flashcards, typing, MC, handwriting, sentence builder, full table | Flashcard (tap to reveal) | Typing (fill in forms) |
| **Input method** | All four (type / tap / write / speak) | Self-rate (no typing) | Type conjugation |
| **Daily goals** | Numeric counter + streaks + study-time | Numeric counter (25/100) | 4 presets (Casual→Committed) |
| **Spaced repetition** | SM-2 (verb forms + vocab cards) | Yes (star + heart ratings) | Session-based review |
| **Grammar breakdown** | Character-level irregular span highlighting | Color-coded span highlighting | Ending table + usage rules |
| **Irregularity detail** | 3-level (spelling/stem/unique) + per-verb badges + filter | 3-level with character spans | 4-label (ordinary/irregular/regular/orto) |
| **Tense explanations** | 20 guides + 20 grammar notes with 100+ exercises each | In-app guides with mnemonics | Modal popups with usage categories |
| **Example sentences** | Per-tense + per-card + per-story + in textbook | Per-tense with audio | Per-tense in guide popups |
| **Audio** | TTS on any form + pronunciation scoring | Tap any conjugation + auto-pronounce | Not prominent |
| **Difficulty levels** | 5 levels + irregularity filter (combinable) | Filter by irregularity type | 8 graded word lists |
| **Custom lists** | — | No | Yes (My Lists + Pinboard) |
| **Verb filters** | Level + irregularity category + tense selection + search | Ending type + irregularity + tense | Tense checkboxes + level |
| **AI features** | On-device chat, stories, word lookups (Foundation Models) | — | — |
| **Textbook** | 30 chapters, 251 exercises, ~3,100 vocab pairs (OCR pipeline) | — | — |
| **Listening / speech** | Listen-and-type + pronunciation scoring | — | — |
| **Offline dictionary** | 175K forms | — | — |
| **Lyrics** | Search + translate | — | — |
| **CloudKit sync** | Yes (private database) | — | — |
| **Widgets** | Combined + word-of-day | — | — |
| **Platform** | iOS 18+ (iOS 26 for AI) | iOS (Catalyst) | macOS + iOS |
| **Monetization** | — | One-time purchase | Free tier + paid |
| **vosotros toggle** | Yes | Yes | Not visible |

## Unique Strengths

### Conjuga excels at:
- **Breadth of modes** — six conjugation practice styles plus cloze, listening, lyrics, chat, stories, textbook, and vocab review in one app
- **AI integration** — on-device Foundation Models power the chat partner, story generator, and dictionary fallback lookup with no cloud round-trips
- **Textbook reader** — the full *Complete Spanish Step-by-Step* textbook with 251 interactive exercises and ~3,100 OCR-paired vocab cards, built via an in-repo extraction pipeline
- **Combinable filters** — Level × Irregularity × search on the verb list, with per-row category badges
- **Multi-input practice** — type, tap, Apple Pencil handwriting, and voice (pronunciation scoring)
- **Offline dictionary** — a 175K verb-form reverse index makes word lookups instant and network-free
- **CloudKit sync** — progress, tests, saved content, and conversations travel between devices

### ConjuGato excels at:
- **Breadth** — nearly 3× the verbs
- **Irregularity teaching** — character-level color-coded highlighting showing exactly *why* each form is irregular