
Bhukya Mohan

@bhukyamohan

Data Engineer at Intelliflo (Invesco)

Hyderabad, Telangana


Bhukya is a proficient Data Engineer skilled in SQL, Snowflake, Spark, and ETL/ELT processes. He handles end-to-end data integration, extracting data from APIs and FTP servers and transforming it into structured, queryable tables using cloud storage such as AWS S3. He possesses strong analytical skills and a proven ability to deliver accurate data solutions while optimizing performance with tools like PySpark and DBT.

Experience

Data Engineer

Intelliflo (Invesco)

May 2024 – Present · Hyderabad, Telangana

Developed dynamic Python scripts to extract various QuickSight asset definitions and transformed them with a recursive Python function to enable cross-deployment. Engineered robust transformation logic to process and normalize JSON data, preparing it for seamless integration into Snowflake tables.
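The recursive normalization described above can be sketched roughly as follows; the asset structure and function name are illustrative, not the actual production code:

```python
# Hypothetical sketch: recursively flatten a nested asset definition
# into dot-separated key/value pairs ready for loading into a table.
def flatten_asset(node, prefix=""):
    rows = {}
    if isinstance(node, dict):
        for key, value in node.items():
            rows.update(flatten_asset(value, f"{prefix}{key}."))
    elif isinstance(node, list):
        for i, value in enumerate(node):
            rows.update(flatten_asset(value, f"{prefix}{i}."))
    else:
        # Leaf value: record it under its fully qualified path.
        rows[prefix.rstrip(".")] = node
    return rows

asset = {"DataSet": {"Name": "sales", "Columns": [{"Name": "amount"}]}}
print(flatten_asset(asset))
# {'DataSet.Name': 'sales', 'DataSet.Columns.0.Name': 'amount'}
```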

Data Engineer

Deloitte

Jan 2022 – Apr 2024 · Hyderabad, Telangana

- Designed, developed, and managed API-driven ETL pipelines, integrating CI/CD pipelines for automated deployment, testing, and monitoring of data workflows.
- Automated multi-source data extraction from cloud storage buckets, API servers, and SFTP servers, using Python's multithreading and multiprocessing features for optimal concurrency, cutting processing time by 14 hours.
- Streamlined transformation of JSON data to CSV using Python and Snowflake's FLATTEN function for warehouse ingestion, enhancing data portability and accessibility.
- Conceptualized and developed a dynamic data quality (DQ) framework built on complex Snowflake queries, reducing manual data-profiling time from 16 hours to 30 minutes, with comprehensive checks for compliance with business constraints, and built Tableau dashboards on top of it.
- Developed an ETL audit and logging framework using Snowpark, Event Tables, and Python that automatically sends alerts based on severity level, ensuring data accuracy and minimizing errors.
- Engineered advanced data models using DBT on Snowflake, resulting in a 30% improvement in query performance and data processing speed.
- Developed advanced SQL scripts for data analysis; reduced report generation time by 40% by optimizing long-running SQL queries and making efficient use of temporary tables, CTEs, clustering, and materialized views.
- Worked with Python libraries including Pandas, Openpyxl, XlsxWriter, snowflake.connector, json, shutil, os, re, sys, and concurrent.futures.
- Developed configuration-driven Talend frameworks to ensure safe data migration from Amazon S3 buckets to the Snowflake warehouse.
- Engineered large-scale data transformations with Apache Spark and PySpark, leveraging Spark DataFrames and RDDs to handle massive, distributed datasets efficiently and optimize batch-processing times.
- Optimized Spark workloads by fine-tuning cluster configurations (e.g., executors, partitions) and employing techniques like caching and broadcast.
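The concurrent multi-source extraction pattern mentioned above can be sketched with `concurrent.futures` from the standard library; the `fetch` helper and source URLs here are placeholders, not the real connectors:

```python
# Hypothetical sketch: pull files from several sources in parallel
# with a thread pool, the pattern behind multithreaded extraction.
from concurrent.futures import ThreadPoolExecutor, as_completed

def fetch(source):
    # Placeholder for an API / SFTP / cloud-storage download.
    return f"downloaded:{source}"

sources = ["s3://bucket/a.json", "sftp://host/b.csv", "https://api.example/c"]

with ThreadPoolExecutor(max_workers=4) as pool:
    # Submit all downloads, then collect results as each completes.
    futures = {pool.submit(fetch, s): s for s in sources}
    results = [f.result() for f in as_completed(futures)]

print(sorted(results))
```

Threads suit I/O-bound downloads like these; CPU-bound transformation steps would use `ProcessPoolExecutor` instead.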

Education

National Institute of Technology Hamirpur

Bachelor of Technology

Computer Science

Aug 2020

Licenses & Certifications

Snowflake SnowPro

Snowflake

No expiration

Skills

SQL
Python
Snowflake
AWS Services
Apache Spark
PySpark
Talend (ETL)
Airflow
DBT
ETL/ELT Processes
Distributed Computing Concepts
Pandas
Snowpark