Hi devs,
I'm working on an educational app (Matsorik) where I wanted to move beyond simple text-based chat and have the AI generate actual video explanations when a student is stuck.
The Challenge: Integrating video generation typically means pulling in huge libraries or relying on heavy caching. My goal was to keep the app lightweight (target <20MB) and fast.
The Implementation:
- Socratic Logic: The app first tries to guide the user via text (low bandwidth).
- Fallback to Video: Only if the user fails specific checks do I trigger the video generation API.
- Optimization: I used aggressive caching for generated assets to prevent re-fetching (see the sketch after this list).
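
Here's a minimal Kotlin sketch of the text-first / video-fallback flow. The names (`ExplanationEngine`, `VideoApi`, the placeholder prompt logic) are simplified stand-ins, not the actual production classes:

```kotlin
import java.io.File
import java.security.MessageDigest

// Illustrative types; the real app has its own models and networking layer.
sealed class Explanation {
    data class Text(val prompt: String) : Explanation()
    data class Video(val file: File) : Explanation()
}

interface VideoApi {
    // Calls the remote video generation endpoint and returns the encoded clip.
    suspend fun generate(topicId: String): ByteArray
}

class ExplanationEngine(
    private val videoApi: VideoApi,
    private val cacheDir: File,
    private val maxTextAttempts: Int = 3
) {
    suspend fun explain(topicId: String, failedChecks: Int): Explanation {
        // 1) Socratic text path: cheap, low bandwidth, always tried first.
        if (failedChecks < maxTextAttempts) {
            return Explanation.Text(nextSocraticPrompt(topicId, failedChecks))
        }

        // 2) Video fallback, served from the on-device cache when possible
        //    so a generated asset is never fetched twice.
        val cached = File(cacheDir, sha256("$topicId-video") + ".mp4")
        if (cached.exists()) return Explanation.Video(cached)

        // 3) Generate once, persist, reuse on later requests.
        val bytes = videoApi.generate(topicId)
        cached.writeBytes(bytes)
        return Explanation.Video(cached)
    }

    private fun nextSocraticPrompt(topicId: String, attempt: Int): String =
        "Guiding question #${attempt + 1} for $topicId"   // placeholder prompt logic

    private fun sha256(s: String): String =
        MessageDigest.getInstance("SHA-256").digest(s.toByteArray())
            .joinToString("") { "%02x".format(it) }
}
```

The generation itself stays behind the API, so none of that machinery ships in the APK; only the finished clip lands in the on-device cache.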
Result: The current production build is ~16MB.
I'm curious how you guys handle "heavy" media generation features in your apps. Do you stream everything, or do you try to generate on-device?
I'd love some feedback on the performance if anyone wants to test the video latency.
Link: https://play.google.com/store/apps/details?id=com.matsorik.sokratikzeka