Releases
All official releases of the Abserny Android app, the AbserneyVision model, and upcoming platforms.
Abserny Mobile
Complete voice-first Android application: Gemini 2.0 Flash Lite AI, spoken gesture-driven onboarding, bilingual Arabic/English support, four detection modes, and automatic offline ML Kit fallback.
- Spoken onboarding
- Gemini 2.0 Flash Lite
- ML Kit offline fallback
- Arabic & English
- 4 detection modes
- Spoken settings
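The cloud-first, offline-fallback behavior above can be sketched as follows. `describeWithGemini` and `describeWithMlKit` are hypothetical stand-ins for the app's real Gemini 2.0 Flash Lite and on-device ML Kit calls, stubbed here so the pattern is self-contained:

```typescript
type Description = { text: string; source: "gemini" | "mlkit" };

async function describeWithGemini(frame: Uint8Array): Promise<Description> {
  // The real app would call the Gemini API here; this stub simulates
  // being offline so the fallback path is exercised.
  throw new Error("network unavailable");
}

function describeWithMlKit(frame: Uint8Array): Description {
  // The real app would run on-device ML Kit labeling; stubbed for illustration.
  return { text: "a chair near a table", source: "mlkit" };
}

async function describeScene(frame: Uint8Array): Promise<Description> {
  try {
    return await describeWithGemini(frame);
  } catch {
    // Automatic offline fallback: degrade to on-device labels.
    return describeWithMlKit(frame);
  }
}
```

The key property is that the caller never sees the failure: the spoken result just comes from the on-device model instead of the cloud one.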
First Gemini AI integration: two-mode gesture switching (Scene and Object), basic offline ML Kit fallback, and English-only operation. Visual onboarding, later replaced by spoken onboarding in v2.
- Gemini AI (first integration)
- Scene + Object modes
- Basic ML Kit fallback
- English only
Initial prototype establishing the gesture input system, FSM architecture, and native TTS pipeline. No AI yet; detection was rule-based. Proved the core interaction model.
- Gesture + FSM architecture
- Native TTS pipeline
- Rule-based detection
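A gesture-driven FSM of the kind described above can be sketched as a small transition table. The state and event names here are hypothetical illustrations, not the prototype's actual identifiers:

```typescript
type State = "idle" | "detecting" | "speaking";
type Event = "singleTap" | "detectionDone" | "speechDone";

// Each state maps the events it handles to the next state.
const transitions: Record<State, Partial<Record<Event, State>>> = {
  idle: { singleTap: "detecting" },
  detecting: { detectionDone: "speaking" },
  speaking: { speechDone: "idle" },
};

function step(state: State, event: Event): State {
  // Events a state does not handle leave it unchanged, which keeps the
  // machine robust to stray or repeated gestures.
  return transitions[state][event] ?? state;
}
```

Centralizing transitions in one table makes the interaction model easy to audit: every gesture's effect in every state is visible at a glance.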
AbserneyVision
MobileNetV2 trained via transfer learning on ImageNet + COCO. Currently at 37% validation accuracy across 13 indoor categories; dataset expansion to 500+ images per class is in progress.
- MobileNetV2
- 13 indoor categories
- 37% accuracy (current)
- 80%+ target
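For context on those figures, a quick back-of-envelope check (assuming roughly balanced classes, which the source does not state):

```typescript
// Illustrative arithmetic only; class balance is an assumption.
const classes = 13;
const chance = 1 / classes;            // random-guess baseline, ~7.7%
const currentLift = 0.37 / chance;     // 37% accuracy is ~4.8x chance
const targetLift = 0.8 / chance;       // the 80% target is ~10.4x chance
const imagesNeeded = classes * 500;    // 500+ images per class -> 6500+ total

console.log({ chance, currentLift, targetLift, imagesNeeded });
```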
iOS & Roadmap
Architecturally iOS-compatible: all Expo SDK 54 modules support iOS, and no code changes are anticipated. Blocked only by Apple Developer Program enrollment and EAS iOS build configuration.
- Expo cross-platform ready
- No code changes needed
- Pending ADP enrollment
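The pending EAS configuration would amount to adding an iOS-capable build profile to `eas.json`; a minimal sketch of that shape (profile contents here are illustrative assumptions, not the project's actual configuration):

```json
{
  "cli": { "version": ">= 5.0.0" },
  "build": {
    "production": {
      "distribution": "store",
      "ios": { "resourceClass": "m-medium" }
    }
  }
}
```

Once ADP enrollment completes, `eas build --platform ios` against a profile like this would produce the App Store build.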
Planned next: auto-scan with scene-change detection, on-device LLM integration (Gemma 2B / Phi-3 Mini) for fully offline AI, expanded AbserneyVision classes, and formal usability research with visually impaired participants.
- Auto-scan mode
- On-device LLM
- Expanded model classes
- Usability research
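The planned auto-scan trigger could work along these lines: re-describe the scene only when consecutive camera frames differ enough. The mean-absolute-difference metric and the threshold value are assumptions for illustration, not the project's chosen design:

```typescript
// Mean absolute per-pixel difference between two grayscale frames (0-255).
function meanAbsDiff(a: Uint8Array, b: Uint8Array): number {
  let sum = 0;
  for (let i = 0; i < a.length; i++) sum += Math.abs(a[i] - b[i]);
  return sum / a.length;
}

// Fire a new description only when the frames differ beyond the threshold,
// so the user is not spammed with repeated announcements of a static scene.
function sceneChanged(prev: Uint8Array, next: Uint8Array, threshold = 25): boolean {
  return meanAbsDiff(prev, next) > threshold;
}
```

In practice the threshold would need tuning against camera noise and lighting changes, which is part of what the usability research could inform.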