
Chinmay Bhardwaj

@ChinmayBhardwaj

Data Engineer at YodaPlus

India

https://www.linkedin.com/in/bhardwaj-chinmay


Data Engineer with 3 years of experience designing and building scalable data pipelines, ETL workflows, and cloud data platforms on AWS. Proficient in processing and transforming large-scale datasets using PySpark, AWS Glue, Apache Airflow, and Informatica. Experienced in building data lakes on Amazon S3 and developing cloud data warehouses using Snowflake and Amazon Redshift. Strong expertise in SQL, Python, and distributed data processing, delivering high-quality analytics-ready datasets, improving data pipeline performance, and enabling reliable business intelligence and reporting solutions.

Experience

Data Engineer

Yodaplus

Full-time · Mar 2025 - Present · Mumbai

• Designed and deployed end-to-end ETL pipelines to ingest data from multiple enterprise sources into an AWS S3-based data lake, enabling scalable analytics and reporting.
• Architected a multi-layered data lake (Landing, Transient, Curated) using AWS Glue and PySpark, performing complex transformations to enable structured data processing and downstream analytics.
• Built optimized Spark-based transformation pipelines, improving data processing performance by 40% through partitioning, caching, and job tuning.
• Delivered curated datasets to Amazon Redshift using the Glue Data Catalog and Redshift Spectrum, enabling low-latency BI queries and dashboarding.
• Implemented Apache Airflow DAGs to orchestrate full and incremental data loads, leveraging config-driven dynamic pipelines to reduce manual intervention by 80%.
• Developed data validation, logging, and observability frameworks, improving pipeline reliability, monitoring, and failure recovery.
• Implemented real-time pipeline alerting using AWS SNS, ensuring faster incident detection and operational stability.
• Optimized S3 lifecycle policies and storage tiers, reducing data storage costs while maintaining compliance and retention requirements.
• Secured sensitive credentials using AWS Secrets Manager and enforced least-privilege IAM access controls for enterprise-grade security.
• Collaborated with product owners, analytics teams, and business stakeholders in an Agile/Scrum environment, translating business needs into scalable data solutions.

Tech Stack: AWS S3, AWS Glue, PySpark, Python, SQL, Amazon Redshift, Apache Airflow, AWS SNS, AWS Secrets Manager
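The config-driven full/incremental orchestration described above can be sketched as follows. This is a minimal illustration only; the table names, config keys, and watermark columns are hypothetical, not taken from the actual pipelines.

```python
from datetime import date
from typing import Optional

# Hypothetical per-table pipeline config: load mode and watermark column.
# Adding a new source table means adding one entry here, not new DAG code.
PIPELINE_CONFIG = {
    "orders":    {"mode": "incremental", "watermark_col": "updated_at"},
    "customers": {"mode": "full",        "watermark_col": None},
}


def build_extract_query(table: str, last_run: Optional[date] = None) -> str:
    """Build the extraction SQL for a table based on its configured load mode.

    Incremental tables filter on the watermark column so only rows changed
    since the last successful run are pulled; full-load tables re-extract
    everything on each run.
    """
    cfg = PIPELINE_CONFIG[table]
    query = f"SELECT * FROM {table}"
    if cfg["mode"] == "incremental" and last_run is not None:
        query += f" WHERE {cfg['watermark_col']} > DATE '{last_run.isoformat()}'"
    return query
```

A single Airflow DAG can loop over `PIPELINE_CONFIG` and generate one task per table from this query builder, which is the pattern that removes per-table manual intervention.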

Data Engineer

Wipro

Full-time · Apr 2022 - May 2024 · Mumbai, Maharashtra, India

• Designed and implemented end-to-end ETL pipelines to extract data from Oracle Fusion and load it into Azure Data Lake Storage.
• Developed a multi-layered data architecture (Landing, Transient, Curated) in ADLS using Azure Databricks, performing complex transformations to enable structured data processing and downstream analytics.
• Pushed curated data into Azure SQL Database, enabling seamless Power BI connectivity for reporting and analysis.
• Orchestrated end-to-end data pipelines in Azure Data Factory for both full and incremental loads, ensuring efficient and reliable data integration.
• Used Azure Key Vault to securely manage and access credentials and secrets in data pipelines.
• Interacted with clients to understand business processes, gather requirements, and resolve queries.
• Designed and optimized database indexes to enhance query efficiency and overall system performance.
• Collaborated with business stakeholders to gather requirements and delivered multiple Power BI reports, following an agile approach with continuous feedback and iterations to meet evolving business needs.
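The Landing/Transient/Curated layering described above implies a consistent path convention in the lake. A minimal sketch of such a convention follows; the storage account, container, and source names are hypothetical placeholders, not the actual ADLS layout.

```python
from datetime import date

# The three lake layers named above: raw extracts land first, intermediate
# transforms live in transient, and analytics-ready output goes to curated.
LAYERS = ("landing", "transient", "curated")


def lake_path(layer: str, source: str, table: str, run_date: date) -> str:
    """Build a date-partitioned ADLS Gen2 path for one layer of the lake.

    Date partitioning (year/month/day) lets downstream jobs read only the
    partitions for a given run instead of scanning the whole table.
    """
    if layer not in LAYERS:
        raise ValueError(f"unknown layer: {layer!r}")
    return (
        "abfss://datalake@account.dfs.core.windows.net/"
        f"{layer}/{source}/{table}/"
        f"year={run_date.year}/month={run_date.month:02d}/day={run_date.day:02d}"
    )
```

Centralizing the convention in one helper keeps Databricks notebooks and Data Factory datasets pointing at identical paths, so a layer rename or partition change happens in one place.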

Education

Mumbai University

Bachelor of Engineering

Grade: 7.6

CDAC

PG Diploma in Big Data Analytics

Big Data

Grade: 6.5

Licenses & Certifications

AWS Academy Graduate - AWS Academy Cloud Data Pipeline Builder

Snowflake

SQL (Intermediate)

Skills

Python
Cloud
SQL
Big Data
PySpark
AI/ML
Power BI
Tableau
Databricks
Airflow
AWS
Azure