The core idea behind my Ph.D. was not to build isolated lab prototypes, but to make advanced crop monitoring deployable at scale by reusing mature sensing technologies already developed for other industries.
In practical terms, this means transforming existing agricultural machines into data-driven platforms: lower adoption cost, faster field integration, and measurable reliability through metrology and real-world validation.
From a business perspective, the value is clear: better agronomic decisions, reduced operational uncertainty, and a realistic path to precision agriculture for farms that need robust solutions rather than custom one-off systems.
LANGUAGES
COMPUTER VISION & ML
3D & MAPPING
EMBEDDED & HARDWARE
SOFTWARE ENGINEERING
TOOLS & INFRA
IoT home automation system: remote gate control, environmental monitoring, crypto portfolio tracker, and self-updating deployment — all managed via Telegram bot on Raspberry Pi.
Embedded control system for automated larvae cultivation as sustainable fish feed — valve/pump scheduling, safety interlocks, and sensor-driven cycles.
Problem: Measuring vine woody volume and monitoring spring buds in real field conditions currently relies on slow manual inspections and inconsistent visual assessments.
Method: Developed an RGB-D and deep-learning pipeline for 3D vine volume estimation, plus automated bud detection, tracking, recognition, and counting during early seasonal growth.
Result: The wood-volume pipeline achieved 2.1 cm³ RMSE with 1.8 cm³ mean deviation, while the fine-tuned YOLOv8 bud detector reached an F1-score of 0.79 on a custom vineyard dataset.
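The F1-score quoted above combines the detector's precision (how many predicted buds were real) and recall (how many real buds were found). A minimal sketch of the metric; the counts here are hypothetical, chosen only so the result reproduces an F1 of 0.79:

```python
def detection_f1(tp, fp, fn):
    """F1 for an object detector from matched true positives,
    false positives (spurious boxes), and false negatives (missed buds)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts (illustrative, not the thesis dataset):
f1 = detection_f1(tp=79, fp=21, fn=21)  # precision = recall = 0.79
```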
Problem: Accurate 3D orchard reconstruction is expensive and fragile in harsh field environments.
Method: Low-cost multi-sensor fusion (RGB-D, GNSS, IMU) with a custom SLAM approach tailored for agriculture.
Result: Validated 3D reconstructions and geometric measurements with edge-compatible acquisition workflows.
GitHub Repository: Hierarchy-Robust-SLAM
Problem: Low-cost depth estimation from a single moving camera is noisy and lacks formal measurement uncertainty — a gap for agricultural end-users who need reliable readings without expensive LiDAR or stereo hardware.
Method: Simplified optical-flow model relating image-plane pixel speed to real-world depth via a single calibration parameter K. Validated in laboratory with a UR10e robot at five speeds (0.25–0.97 m/s), four camera-target distances, and five ArUco markers at different depths. Monte Carlo uncertainty propagation with 10,000 synthetic realizations per data point. Window-based moving average filter (effective 20 fps from 60 fps) to reduce noise.
Result: Best-case depth uncertainty of 4 cm (filtered, 0.50 m/s) and 7 cm (filtered, 0.75 m/s). Image speeds above 500–800 px/s keep uncertainty below 20 cm. Two practical examples provided for end-users to select camera and vehicle speed. Code publicly released on GitHub.
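The uncertainty figures above come from Monte Carlo propagation through the depth model. A minimal sketch of that idea, assuming an inverse-proportional model Z = K·v/s (camera speed v, image-plane pixel speed s, single calibration parameter K); all numeric values and uncertainties below are illustrative, not the calibrated parameters from the thesis:

```python
import random
import statistics

def depth_from_pixel_speed(v_cam, s_px, K):
    """Depth Z (m) from camera speed v_cam (m/s) and image-plane
    pixel speed s_px (px/s), via one calibration parameter K."""
    return K * v_cam / s_px

def monte_carlo_depth(v_cam, s_px, K, sigma_v, sigma_s, sigma_K,
                      n=10_000, seed=0):
    """Propagate input uncertainties to depth by sampling each input
    from a Gaussian and collecting the resulting depth distribution."""
    rng = random.Random(seed)
    samples = [
        depth_from_pixel_speed(
            rng.gauss(v_cam, sigma_v),
            rng.gauss(s_px, sigma_s),
            rng.gauss(K, sigma_K),
        )
        for _ in range(n)
    ]
    return statistics.mean(samples), statistics.stdev(samples)

# Illustrative operating point: 0.50 m/s vehicle, 600 px/s image speed
mean_z, std_z = monte_carlo_depth(
    v_cam=0.50, s_px=600.0, K=1200.0,
    sigma_v=0.01, sigma_s=15.0, sigma_K=10.0,
)
```

The returned standard deviation plays the role of the propagated depth uncertainty that an end-user would check against their accuracy requirement before choosing camera and vehicle speed.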
Problem: Distinguishing crop from weed in real time on agricultural machinery requires efficient onboard intelligence.
Method: Embedded computer-vision system using deep neural networks optimized for constrained edge hardware.
Result: Practical field-ready segmentation workflow to support precision agriculture operations.
Problem: Manual supervision of surgical handwashing is inconsistent and can increase infection risk.
Method: Vision-based hand and gesture recognition system with machine-learning analysis for procedure monitoring.
Result: Automated and objective assessment pipeline for handwashing compliance in clinical workflows.
Problem: Reliable automatic recognition of exercise gestures is challenging in practical gym environments.
Method: Vision-based pose estimation and signal-processing pipeline for movement analysis and repetition evaluation.
Result: Functional smart-mirror prototype supporting real-time feedback for fitness training tasks.
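Repetition evaluation of the kind described above can be reduced to counting cycles in a joint-angle signal extracted by pose estimation. A minimal sketch with a two-threshold state machine (thresholds and the synthetic elbow-angle trace are assumptions for illustration, not the prototype's actual pipeline):

```python
def count_repetitions(angle_series, low_thr, high_thr):
    """Count exercise repetitions from a joint-angle signal (degrees).
    A rep is a drop below low_thr (flexed) followed by a rise above
    high_thr (extended); the hysteresis makes it robust to jitter."""
    reps, in_rep = 0, False
    for angle in angle_series:
        if not in_rep and angle < low_thr:    # entered flexed phase
            in_rep = True
        elif in_rep and angle > high_thr:     # returned to extended phase
            reps += 1
            in_rep = False
    return reps

# Synthetic trace of two bicep curls (extended ~170 deg, flexed ~40 deg)
trace = [170, 150, 90, 45, 40, 80, 140, 168, 160, 100, 50, 38, 95, 155, 172]
reps = count_repetitions(trace, low_thr=60, high_thr=150)  # -> 2
```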
Topics: Computer Vision, Metrology, Predictive Maintenance, Sensor Fusion, Statistics, Modal Analysis, Python, MATLAB.
Guest lecture: Probabilistic Sensor Fusion — from Naive Bayes to Kalman Filter (2023).
Co-supervised M.Sc. and B.Sc. theses in computer vision and industrial metrology.
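The guest lecture above ends at the Kalman filter; a minimal scalar sketch of the predict/update cycle it builds up to (the constant-state model and all numeric values are illustrative, not lecture material reproduced verbatim):

```python
def kalman_1d(z_measurements, x0, p0, q, r):
    """Scalar Kalman filter for a constant hidden state:
    predict inflates the variance by process noise q, update blends
    in each measurement z weighted by measurement-noise variance r."""
    x, p = x0, p0
    for z in z_measurements:
        p = p + q               # predict: state unchanged, variance grows
        k = p / (p + r)         # Kalman gain
        x = x + k * (z - x)     # update: correct estimate toward z
        p = (1 - k) * p         # posterior variance shrinks
    return x, p

# Noisy readings of a constant true value of 5.0 (illustrative)
est, var = kalman_1d([5.2, 4.8, 5.1, 4.9, 5.0],
                     x0=0.0, p0=1.0, q=0.01, r=0.25)
```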
Email: bernardo.lanza.tech@gmail.com
LinkedIn: linkedin.com/in/bernardo-lanza-554064163
ORCID: 0009-0005-3561-754X
Publications (Google Scholar): Publications-Google-Scholar
GitHub: Bernardo Lanza