We're thrilled to announce the release of Abserny version 1.0, marking the first stable release of our offline object detection tool designed specifically for visually impaired users.
What's New in v1.0
This release includes a complete set of features that make Abserny a fully functional accessibility tool:
Voice Activation
- Five Arabic trigger words for hands-free activation
- Continuous listening mode with low CPU usage
- High-accuracy voice recognition using Vosk
- Customizable trigger word sensitivity
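Conceptually, the continuous-listening loop only needs to check each recognized phrase against the trigger set. A minimal sketch of that matching step, with placeholder trigger words (not Abserny's actual list) and assuming the recognizer (e.g. Vosk) has already produced plain text:

```python
# Placeholder Arabic trigger words for illustration only; Abserny's
# real five-word list may differ.
TRIGGER_WORDS = {"ابصرني", "صف", "انظر", "ساعدني", "ماذا"}

def contains_trigger(recognized_text: str) -> bool:
    """Return True if any trigger word appears in the recognized phrase.

    The recognizer's output is split on whitespace and each token is
    compared against the trigger set.
    """
    tokens = recognized_text.split()
    return any(token in TRIGGER_WORDS for token in tokens)
```

Set-membership checks per token keep this step cheap, which matters for a loop that runs continuously at low CPU usage.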
Object Detection
- Real-time detection using YOLOv8 nano model
- Support for 80+ common object categories
- Confidence threshold customization
- Fast processing (typically 1-2 seconds)
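The confidence threshold customization mentioned above amounts to a simple filter over detection results. A hedged sketch, assuming the (label, confidence) pairs have already been extracted from the YOLOv8 output (`filter_detections` is an illustrative helper, not Abserny's API):

```python
def filter_detections(detections, threshold=0.5):
    """Keep only detections whose confidence meets the threshold.

    `detections` is a list of (label, confidence) pairs; raising the
    threshold trades recall for fewer false announcements.
    """
    return [(label, conf) for label, conf in detections if conf >= threshold]
```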
Natural Language Output
- Contextual Arabic descriptions of detected objects
- Natural-sounding text-to-speech
- Adjustable speech rate and volume
- Clear pronunciation of Arabic terms
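Before synthesis, the detected labels have to be condensed into a short spoken description. A minimal illustration of that step, using English labels for readability (the real tool emits Arabic, and `describe` is a hypothetical helper, not Abserny's actual code):

```python
from collections import Counter

def describe(labels):
    """Build a compact description such as '2 chair, 1 person'.

    Duplicate labels are counted so the spoken output stays short even
    when many objects of the same class are detected.
    """
    counts = Counter(labels)
    return ", ".join(f"{n} {label}" for label, n in counts.items())
```

The resulting string would then be handed to the text-to-speech engine (pyttsx3 in Abserny's case) at the configured rate and volume.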
Complete Offline Operation
- All processing happens locally
- No internet connection required after setup
- Complete privacy - no data sent to servers
- Works in any environment
Cross-Platform Support
- Windows 10/11
- macOS 10.14 or later
- Linux (Ubuntu 20.04+, Fedora, Arch)
Installation
Installation is straightforward with our comprehensive guide:
- Clone the repository or download the release
- Install Python 3.8+ if not already installed
- Run pip install -r requirements.txt
- Download the required models with python download_models.py
- Launch with python main.py
See the complete installation guide for detailed instructions.
Technical Details
Abserny v1.0 is built with carefully selected technologies:
Core Components
- Vosk - Offline Arabic speech recognition
- YOLOv8 - Fast and accurate object detection
- pyttsx3 - Cross-platform text-to-speech
- OpenCV - Camera capture and image processing
- KivyMD - Accessible user interface framework
Performance
- Detection latency: 1-2 seconds on modern hardware
- Voice recognition latency: < 500ms
- Memory usage: ~500MB during operation
- CPU usage: 15-30% on mid-range processors
What's Next
Version 1.0 is just the beginning. We're already working on exciting improvements:
Version 1.1 (Planned)
- Custom ASR model integration for better accuracy
- Additional trigger words and languages
- Performance optimizations
- Enhanced natural language descriptions
Mobile App (In Development)
- Native Android application
- Optimized for mobile hardware
- Haptic feedback integration
- Expected release: Q4 2024
Try It Today
Download Abserny v1.0 and experience accessible object detection:
Acknowledgments
This release wouldn't be possible without:
- Our beta testers who provided invaluable feedback
- The open-source community for excellent tools and libraries
- Contributors who helped with documentation and testing
- Academic advisors for their guidance and support
Thank you for your interest in Abserny. We're excited to see how it helps visually impaired users navigate their world with greater confidence and independence.