February 1, 2024

Abserny v1.0 Released

First Stable Release

We're thrilled to announce the release of Abserny version 1.0, marking the first stable release of our offline object detection tool designed specifically for visually impaired users.

What's New in v1.0

This release includes a complete set of features that make Abserny a fully functional accessibility tool:

Voice Activation

  • Five Arabic trigger words for hands-free activation
  • Continuous listening mode with low CPU usage
  • High accuracy voice recognition using Vosk
  • Customizable trigger word sensitivity
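Once Vosk returns a transcript, the trigger check itself is plain string matching; a minimal sketch of that step (the five Arabic words here are illustrative placeholders, not necessarily Abserny's shipped trigger words):

```python
# Illustrative trigger words -- the five actually shipped with Abserny may differ.
TRIGGER_WORDS = {"اكشف", "انظر", "صور", "حلل", "ابحث"}

def contains_trigger(transcript: str, triggers=TRIGGER_WORDS) -> bool:
    """Return True if any word in the recognized transcript is a trigger word."""
    return any(word in triggers for word in transcript.split())
```

In the real app, Vosk's recognizer produces the transcript from microphone audio; only this cheap set-membership test runs on every utterance, which is part of what keeps continuous listening light on the CPU.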

Object Detection

  • Real-time detection using YOLOv8 nano model
  • Support for 80+ common object categories
  • Confidence threshold customization
  • Fast processing (typically 1-2 seconds)
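The confidence-threshold customization amounts to filtering the raw model output; a simplified sketch, where (label, confidence) pairs stand in for the richer result objects the ultralytics YOLOv8 API actually returns:

```python
def filter_detections(detections, conf_threshold=0.5):
    """Keep only detections at or above the confidence threshold.

    Each detection is a (label, confidence) pair -- a simplified stand-in
    for what would be read out of a YOLOv8 result's boxes.
    """
    return [(label, conf) for label, conf in detections if conf >= conf_threshold]
```

Raising the threshold trades recall for fewer false announcements, which matters when every detection is spoken aloud.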

Natural Language Output

  • Contextual Arabic descriptions of detected objects
  • Natural sounding text-to-speech
  • Adjustable speech rate and volume
  • Clear pronunciation of Arabic terms
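At its core, the description step is counting labels and templating a sentence; a deliberately simplified sketch (real Arabic pluralization is richer than this numeric form), with the pyttsx3 hand-off noted in comments:

```python
from collections import Counter

def describe(labels):
    """Turn a list of detected object labels into a short Arabic description."""
    if not labels:
        return "لا أرى شيئا"  # "I see nothing"
    counts = Counter(labels)
    parts = [f"{n} {label}" if n > 1 else label for label, n in counts.items()]
    return "أرى " + " و".join(parts)  # "I see ..." joined with "and"

# The result would then be spoken with pyttsx3, e.g.:
#   engine = pyttsx3.init()
#   engine.setProperty("rate", 150)    # adjustable speech rate
#   engine.setProperty("volume", 0.9)  # adjustable volume
#   engine.say(describe(labels)); engine.runAndWait()
```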

Complete Offline Operation

  • All processing happens locally
  • No internet connection required after setup
  • Complete privacy - no data sent to servers
  • Works in any environment

Cross-Platform Support

  • Windows 10/11
  • macOS 10.14 or later
  • Linux (Ubuntu 20.04+, Fedora, Arch)

Installation

Installation takes just a few steps:

  1. Clone the repository or download the release
  2. Install Python 3.8+ if not already installed
  3. Run pip install -r requirements.txt
  4. Download required models with python download_models.py
  5. Launch with python main.py

See the complete installation guide for detailed instructions.

Technical Details

Abserny v1.0 is built with carefully selected technologies:

Core Components

  • Vosk - Offline Arabic speech recognition
  • YOLOv8 - Fast and accurate object detection
  • pyttsx3 - Cross-platform text-to-speech
  • OpenCV - Camera capture and image processing
  • KivyMD - Accessible user interface framework
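One way to picture how these components fit together is a single activation cycle with each dependency behind a small callable, so the flow can be exercised without a camera or microphone (the function boundaries here are an illustration, not Abserny's actual internal API):

```python
def run_cycle(heard_trigger, capture_frame, detect_objects, speak):
    """One voice-to-speech cycle: Vosk -> OpenCV -> YOLOv8 -> pyttsx3."""
    if not heard_trigger():          # Vosk: was a trigger word recognized?
        return []
    frame = capture_frame()          # OpenCV: grab one camera frame
    labels = detect_objects(frame)   # YOLOv8: labels of detected objects
    speak("، ".join(labels))         # pyttsx3: read the result aloud
    return labels
```

Passing the dependencies in as callables keeps each stage independently replaceable, which is also how the planned custom ASR model could slot in later.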

Performance

  • Detection latency: 1-2 seconds on modern hardware
  • Voice recognition latency: < 500ms
  • Memory usage: ~500MB during operation
  • CPU usage: 15-30% on mid-range processors
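These figures will vary by machine; a tiny helper for checking the detection budget on your own hardware, wrapped around whatever call you want to measure:

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Example: result, latency = timed(detect, frame) to verify the 1-2 s target,
# where detect is your detection call.
```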

What's Next

Version 1.0 is just the beginning. We're already working on exciting improvements:

Version 1.1 (Planned)

  • Custom ASR model integration for better accuracy
  • Additional trigger words and languages
  • Performance optimizations
  • Enhanced natural language descriptions

Mobile App (In Development)

  • Native Android application
  • Optimized for mobile hardware
  • Haptic feedback integration
  • Expected release: Q4 2024

Try It Today

Download Abserny v1.0 and experience accessible object detection for yourself.

Acknowledgments

This release wouldn't be possible without:

  • Our beta testers who provided invaluable feedback
  • The open-source community for excellent tools and libraries
  • Contributors who helped with documentation and testing
  • Academic advisors for their guidance and support

Thank you for your interest in Abserny. We're excited to see how it helps visually impaired users navigate their world with greater confidence and independence.
