New assistive tech merges AI and mobility to empower visually impaired users



CO-EDP, VisionRI | Updated: 13-07-2025 21:07 IST | Created: 13-07-2025 21:07 IST

A research team from the Italian Institute of Technology has developed a groundbreaking assistive technology that could transform daily mobility for blind and visually impaired individuals. The project introduces an innovative device called SnapStick, an AI-integrated smart cane system that bridges artificial intelligence and accessible design.

The study, titled “SnapStick: Merging AI and Accessibility to Enhance Navigation for Blind Users”, is published in Technologies. The researchers present SnapStick as a complete, real-time spatial guidance system, offering blind users not only mobility support but also detailed environmental awareness through intelligent scene interpretation and voice feedback.

How does SnapStick address gaps in traditional mobility aids?

Blind and visually impaired individuals often rely on white canes or guide dogs for navigation. However, these aids typically provide only basic spatial awareness, leaving users vulnerable to missing key information in complex or dynamic environments. SnapStick seeks to address these limitations through an integrated system that combines hardware and AI-powered software.

At its core, the SnapStick system features a Bluetooth-enabled cane, bone-conduction headphones, and a smartphone application embedded with the Florence-2 Vision Language Model (VLM). This setup allows users to receive contextual audio feedback without obstructing their natural hearing. The AI model interprets visual scenes in real time, enabling features such as object recognition, optical character reading, public transit identification, and detailed descriptions of surroundings.
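To illustrate how such a prompt-driven pipeline might be organized, here is a minimal sketch of routing a user request through a vision-language model to spoken feedback. The intent names, the `run_vlm` callback, and the stubbed responses are illustrative assumptions, not the authors' implementation; the task tokens mirror the prompt style Florence-2 uses for captioning, OCR, and object detection.

```python
from typing import Callable

# Map user intents to Florence-2 style task prompts (assumed mapping).
TASK_PROMPTS = {
    "describe": "<DETAILED_CAPTION>",
    "read_text": "<OCR>",
    "find_objects": "<OD>",
}

def handle_request(intent: str, image_bytes: bytes,
                   run_vlm: Callable[[str, bytes], str]) -> str:
    """Route a user intent to the VLM and return text for speech output."""
    prompt = TASK_PROMPTS.get(intent)
    if prompt is None:
        return "Sorry, I did not understand that request."
    return run_vlm(prompt, image_bytes)

# Stub standing in for the on-phone model, for demonstration only.
def fake_vlm(prompt: str, image: bytes) -> str:
    canned = {"<DETAILED_CAPTION>": "A crosswalk ahead with a green signal.",
              "<OCR>": "Bus 42 to Central Station"}
    return canned.get(prompt, "")

print(handle_request("describe", b"", fake_vlm))
# A crosswalk ahead with a green signal.
```

In a real deployment the stub would be replaced by an actual model call, and the returned string would be fed to a text-to-speech engine driving the bone-conduction headphones.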

This design aims to enhance both mobility and confidence for users, with special attention given to ergonomics, ease of use, and real-world practicality. The hands-free, audio-based feedback system avoids overwhelming the user while offering richer information than traditional tools.

What did user testing reveal about usability and effectiveness?

To assess the real-world performance of the SnapStick system, the team conducted trials with 11 blind participants in urban environments. Each participant engaged in a series of navigational tasks while using SnapStick, and their experiences were evaluated using the System Usability Scale (SUS), a widely accepted benchmark for assessing product usability.
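For readers unfamiliar with the SUS, the score is computed from ten 1-to-5 Likert responses using a standard formula; the sketch below shows that calculation (a generic illustration, not the study's analysis code).

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses.

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The summed contributions (0-40) are scaled by 2.5 to a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([3] * 10))  # neutral answers -> 50.0
```

Scores above roughly 80 are conventionally read as excellent usability, which is why SnapStick's result stands out.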

The results demonstrated notable success: SnapStick achieved a SUS score of 84.7 out of 100, indicating excellent ease of use, learnability, and satisfaction among users. In terms of technical performance, the system reached an object recognition accuracy rate of 94%, outperforming many commercially available solutions.

Participants reported marked improvements in their ability to locate landmarks, identify objects, read nearby text (such as signage), and understand the layout of their environment. These enhancements directly translated into increased spatial confidence and independence.

The feedback underscored not only the functional reliability of SnapStick but also its intuitive interface and physical comfort. The use of bone-conduction headphones was especially praised for maintaining users' environmental auditory awareness - a critical safety concern often overlooked in wearable tech.

What are the broader implications for smart accessibility?

By merging advanced AI capabilities with practical human-centric design, SnapStick represents a new class of assistive technologies that go beyond simple mobility support to deliver contextual awareness and active decision assistance.

The integration of the Florence-2 VLM into a mobile interface suggests that AI can serve as a real-time cognitive aid, not just a reactive tool. Rather than automating behavior, SnapStick empowers the user with better information, allowing them to make safer and more confident navigation decisions.

The authors plan to further refine the SnapStick design to optimize performance across diverse environments and conditions. Potential developments may include enhancements in obstacle classification, integration with public infrastructure data, and personalization based on user behavior or preferences.

  • FIRST PUBLISHED IN:
  • Devdiscourse