In recent times, there has been a notable rise in the use of Internet of Medical Things (IoMT) frameworks, particularly those based on edge computing, to enhance remote monitoring in healthcare applications. Most existing models in this field have developed temperature screening methods using RCNN, a face temperature encoder (FTE), and a combination of data from wearable sensors for predicting respiratory rate (RR) and monitoring blood pressure. These methods aim to facilitate remote screening and monitoring of Severe Acute Respiratory Syndrome Coronavirus (SARS-CoV) and COVID-19. However, these models require substantial computing resources and are not suitable for lightweight environments. We propose a multimodal screening framework that leverages deep learning-inspired data fusion models to enhance screening results. A Variation Encoder (VEN) design is proposed to measure skin temperature using Regions of Interest (RoI) identified by YOLO. Subsequently, the multi-data fusion model integrates electronic record features with data from wearable human sensors. To optimize computational efficiency, a data reduction mechanism is added to eliminate unnecessary features. Furthermore, we employ a contingent probability method to estimate distinct feature weights for each cluster, deepening our understanding of variations in thermal and sensory data to assess the prediction of abnormal COVID-19 instances. Simulation results using our lab dataset demonstrate a precision of 95.2%, surpassing state-of-the-art models due to the thoughtful design of the multimodal data-based feature fusion model, weight prediction factor, and feature selection model.
Achyut Shankar, Rizwan Patan, M. S. Mekala, Eyad Elyan, Amir Gandomi, Carsten Maple, Joel J. P. C. Rodrigues (2024). A Multimodel-Based Screening Framework for C-19 Using Deep Learning-Inspired Data Fusion. IEEE Journal of Biomedical and Health Informatics, pp. 1-10, DOI: 10.1109/jbhi.2024.3400878.
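The abstract outlines a fusion pipeline: thermal features from YOLO-detected RoIs, wearable-sensor and electronic-record features, a data-reduction step, and per-cluster feature weighting before classification. The authors' code is not provided on this page, so the Python sketch below is only an illustrative, assumption-laden reconstruction of that flow; the synthetic inputs, the variance-based pruning threshold, the KMeans clustering, and the logistic-regression classifier are stand-ins chosen for self-containment, not the paper's actual VEN, fusion, or weighting components.

# Illustrative sketch (not the authors' released code): a minimal multimodal
# fusion pipeline combining thermal RoI features with wearable-sensor and
# electronic-record features, followed by variance-based feature reduction
# and per-cluster feature weighting. All names and thresholds are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in inputs (in the paper these come from YOLO RoIs / VEN and sensors).
n = 200
thermal_feats = rng.normal(36.8, 0.6, size=(n, 4))   # e.g. RoI skin temperatures
sensor_feats  = rng.normal(0.0, 1.0, size=(n, 6))    # e.g. RR, SpO2, heart rate
record_feats  = rng.normal(0.0, 1.0, size=(n, 5))    # e.g. encoded record fields
labels        = rng.integers(0, 2, size=n)           # 1 = abnormal case (synthetic)

# 1) Feature-level fusion: concatenate the per-modality feature vectors.
fused = np.hstack([thermal_feats, sensor_feats, record_feats])

# 2) Data-reduction step: drop low-variance (uninformative) features.
variances = fused.var(axis=0)
keep = variances > np.quantile(variances, 0.2)        # assumed 20% pruning threshold
reduced = fused[:, keep]

# 3) Cluster the fused samples and estimate per-cluster feature weights from
#    the class-conditional means inside each cluster (a crude stand-in for the
#    paper's contingent-probability weighting).
k = 3
clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(reduced)
weights = np.ones((k, reduced.shape[1]))
for c in range(k):
    idx = clusters == c
    if labels[idx].min() != labels[idx].max():        # both classes present
        gap = np.abs(reduced[idx][labels[idx] == 1].mean(axis=0)
                     - reduced[idx][labels[idx] == 0].mean(axis=0))
        weights[c] = gap / (gap.sum() + 1e-9) * reduced.shape[1]

# 4) Apply cluster-specific weights and fit a simple classifier to flag
#    abnormal instances from the weighted, fused features.
weighted = reduced * weights[clusters]
clf = LogisticRegression(max_iter=1000).fit(weighted, labels)
print("training accuracy:", clf.score(weighted, labels))

Treating the within-cluster class gap as a proxy for per-cluster feature weights keeps the example runnable without the original dataset; in the paper, the VEN, the multi-data fusion model, and the contingent probability method would replace steps 1 through 3.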
Type: Article
Year: 2024
Authors: 7
Datasets: 0
Total Files: 0
Language: English
Journal: IEEE Journal of Biomedical and Health Informatics
DOI: 10.1109/jbhi.2024.3400878