Venkataraman M
@Venkat
DATA ENGINEER II at Honeywell
Bengaluru, Karnataka, India
Data Engineer with 3 years of experience at Honeywell, specializing in MLOps, ETL, and Azure. Skilled in optimizing model inference with ONNX and Parquet and in building scalable microservices that reduced manual workloads by 80%.
Experience
Data Engineer II
Honeywell
Design and maintain scalable data pipelines and MLOps workflows supporting machine learning and automation platforms.
- Developed an incremental model training pipeline that ingests production data from PostgreSQL into Delta Lake, enabling continuous model retraining.
- Optimized ML inference performance by converting models to ONNX format and using Parquet-based storage for faster data access.
- Architected middleware services in a stateless microservices environment using FastAPI and Celery to support Sentence Transformer-based applications.
- Deployed and managed APIs on Azure Kubernetes Service (AKS) with KEDA-driven autoscaling, integrating Redis for caching and message handling.
- Built automated performance, scalability, and reliability testing pipelines with PySpark and Databricks to simulate production workloads, automate data provisioning, and validate API performance.
- Focus: reliable, high-performance data systems that improve scalability and significantly reduce manual operational effort.
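The stateless-middleware pattern described above — workers pulling jobs from a queue, computing embeddings, and writing results back — can be sketched with standard-library stand-ins for Celery and Redis. All names here are illustrative assumptions, not from any actual codebase, and the embedding function is a placeholder for a Sentence Transformer `encode()` call:

```python
import queue
import threading


def fake_embed(text: str) -> list[float]:
    """Placeholder for a Sentence Transformer encode() call."""
    return [float(len(text)), float(sum(map(ord, text)) % 100)]


def worker(jobs: queue.Queue, results: dict, lock: threading.Lock) -> None:
    """Stateless worker: pull a job, compute, store the result, repeat."""
    while True:
        try:
            job_id, text = jobs.get_nowait()
        except queue.Empty:
            return  # queue drained; worker exits (KEDA would scale pods to zero here)
        vector = fake_embed(text)
        with lock:
            results[job_id] = vector
        jobs.task_done()


def run_pool(texts: list[str], n_workers: int = 4) -> dict:
    """Fan a batch of texts out to a pool of workers and collect results."""
    jobs: queue.Queue = queue.Queue()
    for i, text in enumerate(texts):
        jobs.put((i, text))
    results: dict = {}
    lock = threading.Lock()
    threads = [
        threading.Thread(target=worker, args=(jobs, results, lock))
        for _ in range(n_workers)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results


if __name__ == "__main__":
    print(run_pool(["hello", "world", "data"]))
```

In production the in-process `queue.Queue` would be a Redis-backed Celery broker, so any replica can pick up any job — that statelessness is what lets KEDA scale the worker deployment on queue depth.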
Software Engineer I
Honeywell
Built data engineering solutions and ETL pipelines supporting machine learning–driven automation systems.
- Designed and implemented a robust ETL pipeline to extract and process complex JSON data from a Neo4j graph database using Cypher queries, enabling structured ingestion into PostgreSQL and Delta Lake tables.
- Leveraged PySpark, Pandas, and SQLAlchemy to build scalable data processing workflows and optimize large-scale transformations.
- Contributed to a building management system onboarding tool by integrating machine learning models that achieved 85% accuracy, cutting onboarding time by nearly 80%.
- Collaborated with cross-functional teams to design efficient data storage, implement versioned datasets, and ensure data quality and reliability across pipelines.
- Focus: scalable, maintainable data processing systems that support enterprise-level automation and analytics.
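The Neo4j-to-relational step above ultimately comes down to flattening nested JSON node records into flat rows before loading them into PostgreSQL or Delta Lake. A minimal standard-library sketch of that transformation — field names and the sample record are hypothetical; the real pipeline used Cypher and PySpark:

```python
import json


def flatten(record: dict, prefix: str = "") -> dict:
    """Recursively flatten a nested JSON record into a single flat row,
    joining nested keys with '_' so the row maps onto relational columns."""
    row = {}
    for key, value in record.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            row.update(flatten(value, prefix=f"{name}_"))
        else:
            row[name] = value
    return row


if __name__ == "__main__":
    # Hypothetical node export shaped like a Neo4j JSON dump.
    raw = json.loads(
        '{"id": 42, "labels": ["Device"],'
        ' "properties": {"name": "AHU-1",'
        ' "location": {"building": "B7", "floor": 3}}}'
    )
    print(flatten(raw))
```

In a PySpark pipeline the same effect would come from exploding and selecting nested struct fields, but the column-naming convention (`parent_child`) carries over directly.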
Software Engineer Intern
Honeywell
Education
Amrita University
Bachelor of Technology
Computer Science