
Sravani Katragadda

@sravanikatragadda

Data Engineer at HCLTech

Chennai, Tamil Nadu, India

HCLTech · QIS College of Engineering And Technology

Data Engineer with 3+ years of experience building high-performance ETL pipelines using Spark and Ab Initio. Proficient in SQL, Python, and AWS services (Redshift, S3, Lambda, EMR, Glue) for data transformation and analytics. Skilled in data modeling, OLAP, and metadata management to support business intelligence and reporting. Passionate about optimizing big data workflows and enabling self-service analytics.

Experience

Data Engineer

HCLTech

Present

Chennai

On-Premises to Cloud Migration — Client: Commonwealth Bank of Australia
• Managed job execution in AWS, establishing the necessary prerequisites to ensure smooth operational flow.
• Conducted data reconciliation between on-premises and cloud systems, ensuring data integrity and accuracy.
• Used SQL queries and Unix commands to create reports based on requirements, filtering, grouping, and modifying data.
• Identified and documented defects related to data discrepancies, tracking them effectively to resolution.
• Led and contributed to the successful end-to-end migration of enterprise data infrastructure from on-premises servers, enhancing scalability, reliability, and cost efficiency.
• Orchestrated large-scale data migration workflows using custom Spark ETL scripts, achieving a 40–60% improvement in system performance while significantly reducing operational overhead.
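The reconciliation step above can be sketched in plain Python. This is an illustrative, minimal approach (row-count plus an order-insensitive content fingerprint per table), not the project's actual tooling; field separators and hashing choices are assumptions.

```python
import hashlib

def table_fingerprint(rows):
    """Return (row_count, fingerprint) for a table extract.

    Each row is hashed individually and the digests are XOR-combined,
    so the fingerprint is independent of row order.
    """
    digest = 0
    for row in rows:
        h = hashlib.sha256("|".join(map(str, row)).encode("utf-8")).hexdigest()
        digest ^= int(h, 16)
    return len(rows), digest

def reconcile(source_rows, target_rows):
    """True when the migrated table matches the source on count and content."""
    return table_fingerprint(source_rows) == table_fingerprint(target_rows)
```

Comparing counts as well as fingerprints matters: XOR-combined hashes alone would let pairs of duplicate rows cancel out, so the row count catches that case.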

ETL Developer

HCLTech

Chennai, IN

Client: Commonwealth Bank of Australia
• Collaborated with the Technical BA to analyze the Business Requirements Document (BRD) and resolve queries before initiating build activities.
• Developed data ingestion pipelines from various source systems, transitioning file-based and batch processing feeds.
• Gained hands-on experience with big data technologies, primarily Hive.
• Created SQL queries, Spark objects, and packages; automated jobs with AutoSys and configured the existing Spark ETL framework within the IntelliJ IDE.
• Optimized data structures for analytics and reporting, improving query performance by 40%.
• Implemented ETL pipelines on Spark, using AutoSys to schedule workflows per project requirements.
• Validated and transformed data from diverse systems, ensuring alignment with business logic before loading it into Amazon Aurora.
• Merged code into the master branch to establish a production-ready version after completing unit, system, and end-to-end testing and securing testing sign-off.
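The validate-then-transform step before loading can be sketched as follows. The field names (`account_id`, `txn_date`, `amount`) and the specific rules are hypothetical stand-ins for the business logic, not details from the project.

```python
def validate_record(rec, required=("account_id", "txn_date", "amount")):
    """Reject records missing required fields or carrying a zero/unparseable
    amount — a stand-in for the business-logic checks applied pre-load."""
    if any(rec.get(f) in (None, "") for f in required):
        return False
    try:
        return float(rec["amount"]) != 0
    except (TypeError, ValueError):
        return False

def transform(records):
    """Keep only valid records and normalise the amount to a rounded float,
    producing rows ready to load into the target database."""
    cleaned = []
    for rec in records:
        if validate_record(rec):
            cleaned.append({**rec, "amount": round(float(rec["amount"]), 2)})
    return cleaned
```

In the actual pipeline this kind of logic would run as Spark transformations before the JDBC write to Aurora; the sketch shows only the shape of the validation gate.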

Education

QIS College of Engineering And Technology

Bachelor of Technology

Electronic And Communication Engineering

Licenses & Certifications

Snowflake SnowPro Core Certification [COF-C02]

Snowflake

• No expiration

Credential ID: COF-C02

AWS Certified Cloud Practitioner

AWS

• No expiration

AWS Certified Data Engineer

AWS

• No expiration

Google Cloud Professional Data Engineer

Google

• No expiration

Skills

SQL
PL/SQL
Python
Spark
Shell Basics
Unix commands
Hive
Oracle
MySQL
HDFS
PySpark
Scala
Zeppelin
Hue
PuTTY
WinSCP
Azure Databricks
AWS Glue
AWS EMR
AWS Athena
AutoSys
Control Center
Airflow
GitHub
IntelliJ
AWS Redshift
AWS S3
AWS Lambda
Snowflake