Robotics & Computer Vision Engineer specialising in SLAM, machine learning, and AI. MSc Robotics, University of Bristol.
I'm a robotics engineer working at the intersection of machine learning, computer vision, and autonomous systems - across the full stack, from sensor calibration and SLAM pipelines to training and deploying ML models in real environments.
For my MSc dissertation at Bristol I built a dynamic visual SLAM system integrating YOLOv8, DeepSORT, and MiDaS with ORB-SLAM, achieving robust localisation and 3D mapping in environments with moving objects. I have also authored six peer-reviewed publications (Wiley, RSC, SSRN) spanning antenna design, deep learning, and federated AI governance.
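A minimal sketch of the dynamic-object masking idea at the core of that pipeline, assuming the ultralytics YOLOv8 Python API and OpenCV's ORB detector (the class filter and feature count are illustrative, not the dissertation's tuned values):

```python
# Suppress ORB features on detected movers so SLAM tracks only static structure.
# DYNAMIC_CLASSES and nfeatures are illustrative assumptions.
import cv2
import numpy as np
from ultralytics import YOLO

DYNAMIC_CLASSES = {0, 2}           # COCO ids for person and car (assumed set)
detector = YOLO("yolov8n.pt")      # any YOLOv8 detection weights
orb = cv2.ORB_create(nfeatures=2000)

def static_keypoints(frame_bgr):
    """Detect ORB keypoints only outside regions covered by dynamic objects."""
    mask = np.full(frame_bgr.shape[:2], 255, dtype=np.uint8)
    result = detector(frame_bgr, verbose=False)[0]
    for box, cls in zip(result.boxes.xyxy, result.boxes.cls):
        if int(cls) in DYNAMIC_CLASSES:
            x1, y1, x2, y2 = map(int, box)
            mask[y1:y2, x1:x2] = 0             # zero out dynamic regions
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return orb.detectAndCompute(gray, mask)    # keypoints, descriptors
```

Filtering features off moving objects before tracking is what keeps the pose estimate and map stable when people or vehicles cross the scene.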
Previously Associate Software Engineer at DeGould, Junior Drone Systems Engineer at HVN Labs, Research Fellow at CMET, and Research Intern at HEMRL, part of DRDO, India's defence research organisation.

Automotive manufacturers need fast, accurate, automated defect localisation across vehicle bodies. Manual inspection is slow and inconsistent. CV-based systems must handle complex 3D geometry and be precisely calibrated.
Designed and deployed image segmentation pipelines to precisely isolate defect regions across complex vehicle surfaces, improving detection accuracy and repeatability in production.
Built a Blender plugin automating 3D model processing workflows, compressing a two-day manual task (about 16 working hours) to under one hour, a roughly 16× improvement; now deployed in live inspection pipelines.
Implemented hybrid fusion of 3D models and 2D images for pose estimation, significantly reducing reliance on expensive LiDAR hardware while maintaining accuracy across inspection stations (a minimal PnP sketch follows this list).
Developed calibration and localisation algorithms across multiple inspection pipelines, reducing manual intervention and enabling scalable deployment.
Improved codebase architecture and version control workflows, enabling faster, more reliable production deployments across the R&D team.
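At its core, the 3D-model/2D-image fusion above is a perspective-n-point problem; a minimal sketch with OpenCV's solvePnP, using synthetic placeholder geometry and intrinsics rather than production values:

```python
# Estimate camera pose from known 3D model points and their 2D image projections.
# All coordinates, intrinsics, and the "true" pose below are synthetic placeholders.
import cv2
import numpy as np

# Four known 3D points on the vehicle model (metres, model frame).
object_points = np.array([[0.0, 0.0, 0.0],
                          [0.5, 0.0, 0.0],
                          [0.5, 0.3, 0.0],
                          [0.0, 0.3, 0.0]], dtype=np.float64)

# Pinhole intrinsics and zero distortion for the inspection camera.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(4)

# Ground-truth pose used only to synthesise matching 2D detections.
rvec_true = np.array([0.1, -0.2, 0.05]).reshape(3, 1)
tvec_true = np.array([0.1, 0.0, 2.0]).reshape(3, 1)
image_points, _ = cv2.projectPoints(object_points, rvec_true, tvec_true, K, dist)

# Recover the camera pose from the 2D-3D correspondences.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
print("recovered translation:", tvec.ravel())
```

With the camera pose known relative to the model, image-space defect detections can be mapped onto the vehicle surface without a LiDAR scan, which is one way image-only pose can stand in for LiDAR at an inspection station.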
Autonomous drones need to land precisely in GPS-degraded, dynamic environments. Single-sensor approaches fail in practice. The system must fuse heterogeneous sensors in real time on embedded hardware.
Designed and implemented a reverse landing system using GPS + Raspberry Pi camera fusion, enabling precise autonomous landings in dynamic conditions.
Developed stereo camera-based depth estimation to replace monocular approaches, providing sub-metre altitude control for low-altitude navigation (see the stereo depth sketch after this list).
Built LiDAR + camera sensor fusion pipeline for robust drone detection and accurate landing zone identification in cluttered environments.
Integrated MAVLink protocol and Wi-Fi communication for seamless ground-to-drone control and real-time telemetry transmission.
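A minimal sketch of the stereo depth stage, assuming rectified grayscale pairs from the drone's stereo rig (baseline, focal length, and matcher settings are illustrative, not the flight-tested values):

```python
# Dense disparity from a rectified stereo pair, converted to metric depth,
# then reduced to an altitude estimate over the central image patch.
import cv2
import numpy as np

BASELINE_M = 0.12      # stereo baseline in metres (assumed rig value)
FOCAL_PX = 700.0       # focal length in pixels after rectification (assumed)

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96, blockSize=7)

def depth_map(left_gray, right_gray):
    """Per-pixel depth in metres from a rectified 8-bit grayscale pair."""
    disparity = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan           # invalid or unmatched pixels
    return FOCAL_PX * BASELINE_M / disparity     # Z = f * B / d

def altitude(left_gray, right_gray):
    """Median depth over the central patch as a robust altitude estimate."""
    depth = depth_map(left_gray, right_gray)
    h, w = depth.shape
    return float(np.nanmedian(depth[h//2 - 20:h//2 + 20, w//2 - 20:w//2 + 20]))
```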
Building a coaxial drone system for synchronised lighting displays requires tight control system integration and reliable autonomous landing in GPS-noisy environments.
Contributed to coaxial drone R&D: assembly, control system tuning, and performance testing across varied flight conditions for synchronised display applications.
Developed a precision autonomous landing system using GPS and camera data fusion for reliable operation in dynamic, GPS-noisy environments.
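The fusion scheme itself isn't spelled out above; as a rough illustration of the GPS + camera idea, a complementary filter can lean on the vision-derived pad estimate when the pad is in view and fall back to GPS otherwise (the gain and frame conventions are assumptions):

```python
# Blend a noisy GPS fix of the landing pad with a camera-derived relative offset.
# ALPHA and the north/east frame convention are illustrative assumptions.
import numpy as np

ALPHA = 0.85   # weight on the vision estimate when the pad is detected

def fused_pad_position(gps_pad_ne, drone_ne, vision_offset_ne):
    """All inputs are 2-D north/east vectors in metres; vision_offset_ne is the
    pad position relative to the drone from the camera, or None if not seen."""
    if vision_offset_ne is None:                     # pad not detected: GPS only
        return np.asarray(gps_pad_ne, dtype=float)
    vision_pad = np.asarray(drone_ne, dtype=float) + np.asarray(vision_offset_ne, dtype=float)
    return ALPHA * vision_pad + (1.0 - ALPHA) * np.asarray(gps_pad_ne, dtype=float)
```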

Designing compact, high-accuracy microwave antennas for 5G requires novel substrate materials and precise simulation. Existing designs were too large and imprecise for next-generation communication standards.
Optimised antenna designs using CST Microwave Studio, achieving a 20% size reduction and a 70% accuracy improvement across GPS, Wi-Fi, Bluetooth, and 5G frequency bands.
Operated 3D printers (FDM, Inkjet) and characterisation tools (XRD, VNA) for antenna prototyping and dielectric material testing.
Fabricated a biodegradable-ink strain sensor via 3D inkjet printing - a novel approach to sustainable flexible electronics with defence and wearable applications.
The research led to three publications in Wiley and RSC journals, including work on novel dielectric nanocomposite materials for microwave applications.
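For context on why the substrate work drives miniaturisation: the textbook design relation for a rectangular microstrip patch ties the resonant length to the effective permittivity, so a higher-permittivity nanocomposite substrate shortens the patch at a fixed operating frequency (standard formula, not a result from these papers):

$$ L \approx \frac{c}{2 f_r \sqrt{\varepsilon_{\mathrm{eff}}}} - 2\,\Delta L $$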

Defence applications require reliable object tracking and velocity estimation from video footage under variable lighting, without access to additional sensors or GPS, on constrained hardware.
Designed and deployed a MATLAB-based video tracking system using Gaussian mixture model (GMM) background subtraction and point tracking for moving-object recognition in defence research contexts (an OpenCV analogue of the GMM stage is sketched after this list).
Achieved robust accuracy under variable lighting conditions through adaptive background subtraction and GMM tuning.
Performed feature extraction and depth estimation using projective geometry across multiple image planes for accurate 3D reconstruction.
Validated precise velocity measurement across 20 independent video datasets, with reliable real-world performance.
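The original stack was MATLAB; in Python/OpenCV, the GMM background-subtraction stage looks roughly like the sketch below, with the history length and blob-area threshold as illustrative values rather than the tuned ones:

```python
# Adaptive GMM background subtraction (MOG2) to pull out moving objects,
# followed by simple blob filtering; thresholds are illustrative.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=300, varThreshold=25,
                                                detectShadows=True)
MIN_AREA_PX = 150   # ignore tiny blobs caused by noise

def moving_object_boxes(frame_bgr):
    """Return bounding boxes of moving objects in the current frame."""
    fg = subtractor.apply(frame_bgr)                          # per-pixel GMM decision
    fg = cv2.medianBlur(fg, 5)                                # suppress speckle
    _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)    # drop shadow label
    contours, _ = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= MIN_AREA_PX]
```
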
I'm open to roles in Robotics, Machine Learning, Computer Vision, and AI. If you're working on autonomous systems, perception, or hard engineering problems — reach out directly.