Featured Projects
Autonomous Oil Palm Harvesting Robot
This proposal focuses on the research and development of a novel autonomous oil palm harvesting robot. In our previous research, we developed an outdoor fruit harvesting robot with SLAM, navigation, and visual servoing capabilities, and ported and optimised these functionalities onto a four-wheel differential-drive AGV fitted with a cutting-plier manipulator. In this proposal, we reuse the robot's outdoor SLAM, navigation, and servoing capabilities and apply them to an oil palm fresh fruit bunch (FFB) harvesting robot. The project proposes a specialised cutter head that detaches from the robot body and climbs the oil palm tree to harvest FFBs. We will also retrofit a manually steered tracked dump truck, mechanising its human steering controls for AI-driven navigation. In addition, we propose 6D image processing for effective FFB detection and cutting by the cutter head. The objective is a fully automated oil palm harvester whose efficiency matches or exceeds that of a human harvester.
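The four-wheel differential-drive base described above steers by driving its left and right wheel pairs at different speeds. A minimal kinematic sketch in Python may help illustrate the idea (parameter names and the unicycle model are illustrative; this summary does not specify the AGV's actual controller):

```python
import math

def diff_drive_step(x, y, theta, v_left, v_right, track_width, dt):
    """One kinematic update for a differential-drive vehicle.

    Equal wheel speeds drive straight; unequal speeds turn. This is the
    standard unicycle approximation, not the project's actual controller.
    """
    v = (v_left + v_right) / 2.0              # forward speed (m/s)
    omega = (v_right - v_left) / track_width  # yaw rate (rad/s)
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta
```

With equal wheel speeds the pose advances straight ahead; opposite speeds rotate the vehicle in place, which is what makes this drive layout attractive for tight plantation rows.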
Strategic Research Project 12R299, United Arab Emirates University (UAEU): Enhancing Rainfall Forecasting Performance Using Deep Learning Models Across Various Climate Change Scenarios
Understanding climate change impacts on rainfall patterns is critical for effective water resource management. Accurate forecasting of rainfall distribution and intensity supports decision-makers in planning and sustainability efforts. Existing studies show climate change significantly affects spatial and temporal rainfall patterns but often fail to provide reliable short- and long-term forecasts. This research aims to develop an accurate rainfall forecasting model for various climatic zones using deep learning. An innovative training method with non-overlapping moving windows and cross-validation will ensure reliable performance. The main contribution is providing consistent multi-lead rainfall forecasts for diverse environmental zones, including arid and tropical regions.
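The non-overlapping moving-window idea can be sketched as follows (a simplified Python illustration; the project's actual window lengths, lead times, and cross-validation folds are not specified in this summary):

```python
def make_windows(series, input_len, lead):
    """Split a rainfall series into non-overlapping (input, target) pairs.

    Each window uses `input_len` past observations to predict the value
    `lead` steps ahead. Because windows do not overlap, no observation
    appears in more than one sample, which avoids leakage between
    training and validation splits. Illustrative sketch only.
    """
    step = input_len + lead
    pairs = []
    for start in range(0, len(series) - step + 1, step):
        x = series[start:start + input_len]
        y = series[start + input_len + lead - 1]
        pairs.append((x, y))
    return pairs
```

Cross-validation would then rotate which of these disjoint windows serve as the held-out fold, giving a leakage-free estimate of multi-lead forecast skill.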
FloodIntel: Drones and AI-driven Framework to Optimize Flood Disaster Response in Malaysia
Flood disasters remain one of the most pressing global challenges, with Malaysia being highly vulnerable due to its geographical conditions. This research focuses on enhancing the response phase of flood disaster management by integrating drones and artificial intelligence (AI) into a unified framework called FloodIntel. The study addresses critical gaps, including the lack of structured training frameworks, limited flood image datasets, and challenges in algorithm efficiency under diverse imaging conditions. Using YOLO-based deep learning models and Design-Expert-driven experiments, the research investigates key parameters such as flight altitude and image resolution. Ultimately, FloodIntel aims to deliver a scalable, automated system that supports rapid and resource-efficient flood response operations.
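Evaluating how flight altitude and image resolution affect detection quality typically means matching YOLO predictions to ground-truth boxes by intersection-over-union (IoU). A minimal sketch of that metric (illustrative; the study's exact evaluation protocol is governed by its Design-Expert experiments):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2).

    Returns 1.0 for identical boxes and 0.0 for disjoint ones; detection
    benchmarks commonly count a prediction as correct when IoU with a
    ground-truth box exceeds a threshold such as 0.5.
    """
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0
```

Sweeping altitude and resolution while tracking IoU-based precision and recall is one way such a framework can identify operating points that keep detections reliable under diverse imaging conditions.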
ELYRA – Top 20 Startup Nominee, Malaysia Startup League 2025
ELYRA is an AI-based monitoring system that ensures appropriate attire compliance, specifically detecting sleeved shirts, long pants, and covered shoes. ELYRA is designed to run efficiently on edge devices, using a Raspberry Pi 5 for real-time processing without relying on cloud infrastructure. Compared to conventional systems, ELYRA is lightweight, cost-effective, and scalable, requiring significantly fewer computational resources. Its compact design makes it ideal for deployment in settings such as factories, labs, and offices where dress code enforcement is needed in a simple, automated, and reliable way.
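Downstream of detection, compliance reduces to a set check over the attire classes found for each person. A sketch with hypothetical class labels (ELYRA's actual label set and decision logic are not given in this summary):

```python
# Hypothetical attire classes; ELYRA's real label names may differ.
REQUIRED = {"sleeved_shirt", "long_pants", "covered_shoes"}

def is_compliant(detected_labels):
    """True only when every required attire class was detected."""
    return REQUIRED.issubset(detected_labels)
```

Keeping the rule this simple is part of what makes the system feasible on a Raspberry Pi 5: the detector does the heavy lifting, and the compliance decision is a constant-time set operation.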
Federated Edge Intelligence for Scalable and Sustainable Water Quality Monitoring in Tropical Remote Lakes
The project develops and deploys a federated edge-intelligence system for water-quality monitoring at Tasik Chini, Malaysia’s UNESCO Biosphere Reserve. Building on IoT sensor nodes and LoRaWAN connectivity, the system moves analytics to the edge and trains models collaboratively via Federated Learning—reducing bandwidth, preserving data privacy, and improving resilience in remote, infrastructure-poor terrain. A proof-of-concept across seven stations will optimize communication reliability, energy efficiency, and model accuracy in field conditions. Outcomes will deliver scalable monitoring that supports biodiversity protection, indigenous livelihoods, and sustainable water management, advancing Malaysia MADANI priorities and UN SDGs, including SDG 6 (Clean Water) and SDG 13 (Climate Action).
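At the core of such a system is federated averaging: each station trains locally on its own sensor data and shares only model weights, which a coordinator combines weighted by local sample counts. A simplified sketch (real deployments typically add secure aggregation, compression, and client scheduling, none of which are specified in this summary):

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging (FedAvg-style) over flat weight vectors.

    Each client's contribution is weighted by its local sample count,
    so raw readings never leave the station -- only parameters move,
    which is what saves bandwidth and preserves privacy.
    """
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]
```

Over a LoRaWAN link, exchanging a small weight vector per round is far cheaper than streaming continuous water-quality readings to a central server.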
Federated Learning for Environmental Monitoring
This project delivers a federated edge-intelligence platform for environmental monitoring in remote tropical landscapes. Models are trained collaboratively across IoT stations without moving raw data, improving privacy, resilience, and real-time insight while reducing bandwidth. A proof-of-concept integrates LoRaWAN/LoRa P2P with UAV-assisted aggregation to sustain communications over large areas and intermittent links. The system targets reliable, energy-efficient sensing with scalable deployment and robust field workflows—advancing environmental stewardship and national sustainability goals.
TinyML for Environmental Monitoring
This project advances TinyML for environmental monitoring by deploying real-time, low-power intelligence on microcontrollers. We demonstrate two prototypes: (1) a bioacoustic system that classifies Malaysian hornbill calls on an Arduino Nano 33 BLE Sense to support wildlife conservation, and (2) an edge ozone-prediction node using low-cost CO, temperature, and pressure sensors. Models are trained and optimized with Edge Impulse, then deployed for on-device inference to reduce bandwidth, energy use, and cloud dependence while enabling field-ready operation in remote settings.
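A key step in fitting such models onto a microcontroller is int8 quantization, which TinyML toolchains like Edge Impulse apply during deployment. A simplified sketch of the underlying affine mapping (illustrative only; the toolchain's actual calibration and per-tensor handling are more involved):

```python
def quantize_int8(values):
    """Affine int8 quantization: map floats to [-128, 127] via a scale
    and zero-point, then dequantize to show the precision cost.

    Simplified sketch of the scheme TinyML toolchains use to shrink
    models by roughly 4x versus float32 weights.
    """
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0 or 1.0            # guard the constant case
    zero_point = round(-128 - lo / scale)       # int that represents lo
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    dq = [(qi - zero_point) * scale for qi in q]
    return q, dq
```

Comparing the dequantized values against the originals shows the quantization error traded for the memory and energy savings that make on-device inference viable.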
Implementing VTOL Drone Technology and Hyperspectral Imaging to Assess Plant Health in Rural Mangrove Ecosystems in Sarawak (Japan APT Category II)
In the current project (Category II), the team aims to assess rural mangrove plant health through the implementation of a VTOL drone and hyperspectral imaging. Through a series of on-site test flights, our team established multiple flight missions across a range of parameters. Our findings identified the optimal altitudes (70 m and 120 m) for capturing high-quality VTOL drone data, which will guide future hyperspectral data collection efforts. In the meantime, we are testing the previous model on the new dataset obtained from KWNP, which has demonstrated promising accuracy. Simultaneously, we are training YOLOv5 to detect multiple species of mangroves. This dual approach allows us to enhance the model's accuracy and adapt it to the specific characteristics of the new dataset, contributing to more precise and comprehensive species detection. In addition to the machine learning approach above, Category II introduces a spectroscopy method using a drone-mounted hyperspectral sensor and a handheld spectrometer for species classification and plant health monitoring. We have successfully classified Bakau and Api-Api with 92% accuracy, paving the way for the establishment of d-MRV for “Blue Carbon” credits. This approach has therefore allowed the creation of new ways to ascertain plant health using spectral data from mangrove trees.
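One common first-pass health indicator derivable from such reflectance spectra is NDVI, computed from near-infrared and red bands. A minimal sketch (the specific bands and any hyperspectral indices the project actually uses are not stated in this summary):

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from reflectance in [0, 1].

    Healthy vegetation reflects strongly in the near-infrared and absorbs
    red light, so NDVI approaches 1 for vigorous canopy and falls toward
    0 (or below) for stressed vegetation, bare ground, or water.
    """
    return (nir - red) / (nir + red)
```

A hyperspectral sensor offers many narrow bands beyond the two used here, which is what enables the finer species-level separation (e.g., Bakau versus Api-Api) reported above.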
A Multi-Model, Human-in-the-Loop algorithm for Automated and Scalable Microscopy Diagnostics in Clinical Laboratories
Microscopy remains a labor-intensive bottleneck in diagnostics, unlike other automated lab workflows. This project proposes a multi-model AI framework with human-in-the-loop learning to automate and scale microscopy analysis. By combining vision models such as GPT-based tools, DeepSeek-Vision, and Gemini, the system uses cross-model consensus to boost accuracy. A robotic module will handle slides, capturing images for AI analysis and integration with the laboratory information system (LIS). Human validation ensures reliability and continuous learning. Targeting mid-sized and resource-limited labs, the project reduces technologist workload, supports SDG 3, and will deliver a prototype, a validated pipeline, and high-impact publications from Ƶ University.
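The cross-model consensus step can be sketched as majority voting with a human-review fallback (the labels, vote threshold, and fallback policy here are hypothetical, not the project's finalized design):

```python
from collections import Counter

def consensus(predictions, threshold=2):
    """Accept the majority label only when at least `threshold` models
    agree; otherwise route the slide to a technologist for review --
    the human-in-the-loop step, which also yields new training labels.
    """
    label, votes = Counter(predictions).most_common(1)[0]
    return label if votes >= threshold else "NEEDS_HUMAN_REVIEW"
```

Disagreement among the models thus becomes a triage signal: confident cases flow straight to the LIS, while ambiguous slides are escalated to a human, whose verdict feeds continuous learning.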
AI-Enabled Robotics for Medical Waste Sorting
This paper reviews the role of the YOLO algorithm in advancing healthcare applications, addressing challenges in disease screening, surgery and patient monitoring, food inspection, and medical waste management. By integrating YOLO with hospital machines and the Internet of Medical Things (IoMT), a comprehensive system can be achieved to enhance medical efficiency and patient care. The study highlights opportunities, challenges, and practical considerations of applying YOLO in real medical contexts. This review supports SDG3 (Good Health and Well-Being), emphasizing AI’s potential to strengthen healthcare management, enable early disease detection, and improve overall health outcomes.
Investigating The Synergistic Effects Of Haptic And Olfactory Cues In Immersive Virtual Reality Environments To Improve Emotional Well-Being
This study investigates the synergistic potential of combining haptic and olfactory cues in immersive virtual reality (VR) environments to enhance emotional well-being. While most VR interventions rely heavily on visual and auditory stimuli, the integration of touch and smell remains underexplored, despite their proven influence on human perception and emotional states. By synchronizing tactile sensations with contextually relevant scents, this research aims to create more immersive, emotionally resonant VR experiences. This multimodal approach holds significant promise for advancing mental health interventions, particularly in stress reduction, anxiety management, and relaxation therapies, by addressing critical gaps in current multisensory VR research.
A Controllable, Adaptable, And Annotation-Efficient Generative Model for Object Detection
This project develops an image-based Generative AI (GenAI) framework to enhance the performance and generalizability of YOLO object detectors. By focusing on annotation efficiency and controllable synthetic data generation, the framework reduces reliance on costly manual labeling while enabling the creation of challenging training samples. The approach not only maximizes the value of existing datasets but also strengthens YOLO’s detection accuracy across multiple domains, including disaster rescue management, environmental monitoring, and biodiversity studies, where data scarcity often limits progress.