AI-driven health monitoring sets new standard for 6G hospital networks

CO-EDP, VisionRI | Updated: 26-05-2025 09:42 IST | Created: 26-05-2025 09:42 IST

Researchers at Isik University have unveiled a pioneering approach to intelligent health monitoring using machine learning (ML) techniques integrated with Visible Light Communication (VLC) systems for next-generation Medical Body Sensor Networks (MBSNs). The study, titled “Intelligent Health Monitoring in 6G Networks: Machine Learning-Enhanced VLC-Based Medical Body Sensor Networks,” published in Sensors, introduces a transformative dual-framework system combining Q-learning and Long Short-Term Memory (LSTM) networks for real-time adaptive modulation and channel estimation in dynamic hospital environments.

This model is designed to overcome serious challenges in hospital communication infrastructures, especially the limitations of Radio Frequency (RF) networks in medical environments where electromagnetic interference (EMI) from equipment can disrupt data transmission. The proposed system aims to ensure ultra-reliable, low-latency communication essential for modern healthcare.

How does the model improve communication in hospitals?

The research introduces a VLC-based communication architecture built for 6G networks. VLC uses modulated light from LED fixtures to transmit data, making it naturally immune to EMI, a common issue with RF-based systems in hospitals. However, fluctuating light conditions due to staff movement or patient repositioning can cause distortion in VLC signals.

To address this, the study combines:

  • A Q-learning-based adaptive modulation strategy that dynamically selects the best modulation order based on real-time observations, optimizing for spectral efficiency and minimal error rates.

  • An LSTM-based channel estimator that predicts time-varying channel parameters such as path loss and RMS delay spread, offering accurate data needed for equalization and interference mitigation.

The experimental setup simulated two real hospital environments: an Intensive Care Unit (ICU) and a Family-Type Patient Room (FTPR), using Zemax® for optical modeling and MATLAB® for ML integration. Patient movement paths were mapped, and sensors were mounted at three body locations (shoulder, wrist, ankle) to evaluate variability in data reception under real-world mobility conditions.

In the FTPR, the Q-learning model achieved up to a 304% gain in spectral efficiency compared to static modulation systems, while the LSTM estimator recorded significantly lower root-mean-square errors (RMSE) in channel estimation than baseline models such as GRU and KNN.
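The scale of that gain follows directly from how M-PAM spectral efficiency grows with modulation order: each symbol carries log2(M) bits, so stepping up the order when the channel is clean multiplies the achievable rate. A minimal sketch with illustrative orders (not the paper's measured figures):

```python
import math

def pam_spectral_efficiency(m: int) -> float:
    """Bits per symbol for M-PAM: log2(M)."""
    return math.log2(m)

# A static link stuck at 2-PAM carries 1 bit per symbol.
static = pam_spectral_efficiency(2)

# An adaptive link that can reach 16-PAM under good conditions
# carries 4 bits per symbol, a 300% gain over the static baseline.
adaptive = pam_spectral_efficiency(16)

gain_pct = (adaptive - static) / static * 100
print(f"{gain_pct:.0f}% gain")
```

The reported 304% figure reflects the time-averaged behavior over the simulated mobility traces rather than a single best-case step like the one above.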

What role do machine learning models play?

Machine learning is central to the system’s adaptability and precision. The Q-learning agent operates without needing prior training data, relying instead on real-time interaction with the environment to refine its decisions. It selects from multiple M-PAM modulation schemes depending on channel feedback and a predefined Symbol Error Rate (SER) threshold.
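The loop described above can be sketched in a few lines. In this hypothetical toy version (state bins, reward shape, and hyperparameters are all illustrative assumptions, not the paper's implementation), states are coarse channel-quality bins, actions are candidate M-PAM orders, and the reward favors spectral efficiency only while the symbol error rate (SER) stays under the threshold:

```python
import math
import random

ACTIONS = [2, 4, 8, 16]           # candidate M-PAM modulation orders
STATES = ["poor", "fair", "good"]  # coarse channel-quality bins (assumed)
SER_THRESHOLD = 1e-3               # illustrative SER target
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

# Q-table starts at zero: no prior training data is needed.
Q = {(s, m): 0.0 for s in STATES for m in ACTIONS}

def choose_action(state: str) -> int:
    """Epsilon-greedy: mostly exploit the best-known order, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda m: Q[(state, m)])

def reward(m: int, ser: float) -> float:
    """Spectral efficiency (log2 M) while the SER target holds, else a penalty."""
    return math.log2(m) if ser <= SER_THRESHOLD else -1.0

def update(state: str, m: int, r: float, next_state: str) -> None:
    """Standard one-step Q-learning update."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, m)] += ALPHA * (r + GAMMA * best_next - Q[(state, m)])

# One illustrative step: good channel, 16-PAM met the SER target.
update("good", 16, reward(16, 5e-4), "good")
```

Because the agent learns from channel feedback alone, it needs no pre-collected dataset, which matches the paper's point about operating without prior training data.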

In the LSTM module, the model is trained on sequences of past channel parameters to predict future trends. In the ICU scenario, the LSTM estimator performed best, predicting path loss to within 1.1679 dB and RMS delay spread to within 0.8784 ns, outperforming GRU, SVR, and standard RNNs. This capability allows the system to forecast and compensate for fluctuations in signal behavior caused by movement or changing lighting conditions.
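The windowed training setup can be illustrated without any deep-learning library: the estimator sees a fixed-length history of channel samples and is scored by RMSE against the next value. In this sketch a naive persistence baseline stands in for the LSTM (which the paper implements in MATLAB), and the path-loss trace is toy data, not values from the ray-traced channel model:

```python
import math

def make_windows(series, length):
    """Slice a channel-parameter trace into (history, next-value) pairs,
    the supervision format a sequence estimator trains on."""
    return [(series[i:i + length], series[i + length])
            for i in range(len(series) - length)]

def rmse(predictions, targets):
    """Root-mean-square error, the metric the study reports."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(predictions, targets))
                     / len(targets))

# Illustrative path-loss trace in dB (toy numbers).
trace = [62.0, 62.4, 63.1, 62.8, 63.5, 64.0, 63.7]
pairs = make_windows(trace, length=3)

# Persistence baseline: predict that the next sample equals the last seen.
preds = [history[-1] for history, _ in pairs]
targets = [t for _, t in pairs]
print(f"baseline RMSE: {rmse(preds, targets):.3f} dB")
```

A trained LSTM earns its keep by beating such simple baselines on exactly this kind of windowed prediction task, which is what the reported RMSE comparisons against GRU, SVR, and RNN measure.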

The use of these ML models reduces the dependence on rigid, pre-calibrated systems. Instead, it introduces an autonomous framework capable of learning and adapting in dynamic, interference-prone hospital settings.

What are the real-world implications?

The study’s proposed model holds vast potential for transforming clinical environments. Key implications include:

  • Safe ICU operations: VLC eliminates EMI risks, making it suitable for sensitive zones with critical medical equipment.
  • Continuous patient monitoring: Wearable photodetectors powered by ML algorithms enable round-the-clock health tracking without sacrificing patient mobility or comfort.
  • Reliable diagnostics and alerts: The low-latency model allows for near-instantaneous health data updates, facilitating quicker medical responses.
  • Privacy and security: As VLC requires line-of-sight and operates within confined spaces, it reduces the likelihood of signal interception, addressing growing privacy concerns in IoMT deployments.

The placement of photodetectors was also found to be significant. In the FTPR scenario, ankle-mounted sensors (D3) produced the highest RMSE, while in ICU settings, shoulder-mounted sensors (D1) showed the widest variability. These insights suggest future systems may benefit from dynamic sensor placement optimization.

First published in: Devdiscourse