
Data Engineer AWS

Location:
Toronto, ON, Canada
Posted:
March 29, 2024


Resume:

SAI CHAND DEVARAPALLI

AWS Data Engineer

Email: ad4ny1@r.postjobfree.com

Phone: +1-647-***-****

LinkedIn: https://www.linkedin.com/in/saidevarapalli/

Location: Toronto, Canada

PROFESSIONAL SUMMARY:

•4+ years of experience as an AWS Data Engineer designing, building, and maintaining data pipelines on the AWS platform.

•Skilled in using various AWS services such as EC2, S3, Redshift, Glue, Athena, EMR, and Lambda to develop and manage big data solutions.

•Proficient in programming languages such as Python, Scala, and SQL, leveraging them for data processing, analysis, and sophisticated data engineering solutions.

•Utilized Amazon S3 as a scalable, cost-effective storage layer for data lakes, storing both raw and processed data.

•Leveraged Amazon EMR for distributed data processing and analysis, including running Apache Spark and Hadoop jobs, resulting in a 20% improvement in data processing speed.

•Orchestrated and managed complex workflows with AWS Step Functions, coordinating multiple AWS services such as Lambda functions and Glue jobs and reducing workflow execution time.

•Built real-time data streaming applications using Amazon Kinesis for ingesting and processing streaming data, leading to a 32% reduction in time-to-insight.
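As a minimal sketch of this kind of Kinesis ingestion path (stream, event shape, and partition-key field are hypothetical examples, not taken from the role above), producers typically chunk events into batches before calling PutRecords:

```python
import json

# Kinesis PutRecords accepts at most 500 records per call, so incoming
# events are chunked into batches before sending. The partition-key
# field ("user_id") is an illustrative assumption.
MAX_BATCH = 500

def build_batches(events, partition_key_field="user_id", max_batch=MAX_BATCH):
    """Turn raw event dicts into PutRecords-ready entry batches."""
    entries = [
        {"Data": json.dumps(event).encode("utf-8"),
         "PartitionKey": str(event[partition_key_field])}
        for event in events
    ]
    return [entries[i:i + max_batch] for i in range(0, len(entries), max_batch)]
```

With boto3, each batch would then be sent with `kinesis.put_records(StreamName=..., Records=batch)`; keeping the batching logic separate from the client call makes it easy to unit-test.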

•Implemented data governance and security policies using AWS Lake Formation to manage access controls and auditing for data lakes.

•Managed and optimized Amazon RDS databases, including MySQL and PostgreSQL, for various applications and workloads, resulting in improved database performance.

•Designed and developed applications using Amazon DynamoDB for NoSQL database requirements, ensuring high availability and scalability.

•Created and optimized queries in Amazon Athena for ad-hoc analysis and reporting on data stored in Amazon S3, resulting in a 30% reduction in query execution time.
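A common form this kind of Athena optimization takes is restricting queries to partition columns so Athena prunes S3 prefixes instead of scanning the whole table. A small sketch (table and partition-column names are illustrative assumptions):

```python
from datetime import date

def daily_count_query(table, day):
    """Build an Athena query that filters only on partition columns.

    When the table is partitioned by year/month/day, this predicate lets
    Athena scan a single day's S3 prefix rather than the full dataset,
    which is the typical source of large query-time reductions.
    """
    return (
        f"SELECT count(*) FROM {table} "
        f"WHERE year = '{day:%Y}' AND month = '{day:%m}' AND day = '{day:%d}'"
    )
```

Pairing partition pruning with a columnar format such as Parquet further cuts the bytes scanned (and therefore Athena cost).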

•Set up and configured CloudWatch for monitoring AWS resources and applications, creating custom dashboards and setting up alarms for proactive issue detection.

•Worked in an agile environment with good insight into Agile methodologies and Lean techniques; participated in Agile ceremonies and Scrum meetings.

SKILLS

•Programming Languages: Python, SQL

•Cloud: AWS

•Big Data Technologies: Hadoop, Spark

•AWS Cloud Stack: S3, EC2, Lambda, Glue, Athena, Redshift, EMR

•ETL: SSIS, Apache Spark, AWS Glue

•Database Technologies: MySQL, PostgreSQL

•Data Modelling Tools: ER/Studio, Visio

•Data Visualization Tools: Tableau, Power BI

•Atlassian Tools: Jira, Confluence

EDUCATION

•BSc Computer Science, Bhavan’s Vivekananda College, 2012–2015

PROFESSIONAL EXPERIENCE

AWS Data Engineer

Moneris, Toronto, Canada Aug 2021 – Present

•Developed and implemented end-to-end data pipelines, enabling seamless data flow from various sources to storage and analytical systems and improving data availability and accuracy.

•Developed and managed ETL workflows using tools such as AWS Glue, Apache Spark, and Apache Airflow, reducing data processing time.

•Designed and built data storage solutions such as data lakes, data warehouses, and data marts on AWS.

•Designed and built real-time data streaming solutions using technologies such as Kinesis and Lambda, reducing time-to-insight.

•Worked with data governance and data quality frameworks and implemented them in AWS environments.

•Implemented error handling and retry mechanisms within AWS Step Functions to ensure robust workflow execution and reduce workflow failures.
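In Step Functions, this pattern is usually expressed with Retry and Catch fields on a task state. A minimal Amazon States Language fragment, written here as a Python dict (the ARN, state names, and error list are hypothetical placeholders):

```python
# Task state with exponential-backoff retries for transient errors and a
# Catch that routes anything unrecovered to a failure-handling state.
LOAD_STATE = {
    "Type": "Task",
    "Resource": "arn:aws:lambda:us-east-1:123456789012:function:load-to-redshift",
    "Retry": [{
        "ErrorEquals": ["Lambda.ServiceException", "States.Timeout"],
        "IntervalSeconds": 5,
        "MaxAttempts": 3,
        "BackoffRate": 2.0,   # waits 5s, 10s, 20s between attempts
    }],
    "Catch": [{
        "ErrorEquals": ["States.ALL"],   # anything Retry did not recover
        "Next": "NotifyFailure",         # hypothetical alerting state
    }],
    "Next": "Done",
}
```

Scoping Retry to transient error names while letting Catch handle everything else keeps permanent failures from being retried pointlessly.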

•Created and maintained scalable and fault-tolerant data architectures in AWS using services such as S3, Redshift, and RDS.

•Developed and deployed data models and schema designs for various applications and services, ensuring data accuracy and consistency.

•Optimized Amazon DynamoDB tables by utilizing on-demand capacity mode and implementing data retention policies, reducing operational costs while maintaining performance and data durability.
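A sketch of what such a table configuration can look like (table, key, and TTL attribute names are hypothetical): on-demand billing removes provisioned-capacity guesswork, and a TTL attribute lets expired items age out automatically instead of accumulating storage cost.

```python
# PAY_PER_REQUEST is DynamoDB's on-demand capacity mode.
TABLE_SPEC = {
    "TableName": "session-events",
    "BillingMode": "PAY_PER_REQUEST",
    "KeySchema": [{"AttributeName": "session_id", "KeyType": "HASH"}],
    "AttributeDefinitions": [
        {"AttributeName": "session_id", "AttributeType": "S"},
    ],
}

# TTL deletes items once the epoch timestamp in "expires_at" passes,
# implementing a retention policy with no batch cleanup jobs.
TTL_SPEC = {
    "TableName": "session-events",
    "TimeToLiveSpecification": {"Enabled": True, "AttributeName": "expires_at"},
}
```

With boto3 these would be applied via `client.create_table(**TABLE_SPEC)` followed by `client.update_time_to_live(**TTL_SPEC)`.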

•Developed and executed data migration strategies from on-premises systems to AWS cloud-based systems, resulting in improved data accessibility and reduced infrastructure costs.

•Utilized AWS machine learning services such as SageMaker and Rekognition to develop predictive models and image recognition solutions.

ETL Developer

Optum Global Solutions, India Apr 2016 – Mar 2020

•Developed and maintained data pipelines and ETL processes using tools such as Apache Spark, ensuring data accuracy and consistency across various systems.

•Collaborated with cross-functional teams to identify key performance indicators (KPIs) and develop data-driven strategies to improve business operations.

•Developed and maintained complex ETL pipelines on AWS using services such as AWS Glue, achieving efficient data processing, transformation, and loading from diverse sources into data lakes.

•Led a successful data migration project, transferring 5 years of historical customer data from an outdated legacy system to a modern CRM platform. The project resulted in zero data loss, minimal downtime, and improved data accessibility for the sales team.

•Developed and maintained a customer segmentation model using clustering techniques, resulting in more targeted marketing campaigns and increased customer retention.

•Conducted thorough code reviews focused on code quality, adherence to standards, reusability, and ease of maintenance; facilitated Operational Readiness Reviews, supporting gating processes and review sign-offs to validate solution designs.

•Analyzed and interpreted large datasets from multiple sources to provide insights into customer behavior and preferences, resulting in increased sales revenue.

•Designed and executed ETL pipelines using Apache Spark to process and transform over 10 terabytes of raw data, reducing data processing time and improving data quality.

•Designed and documented new processes and standard operating procedures (SOPs) using flowcharts and process diagrams, ensuring consistency and clarity across teams.

•Played a pivotal role in data migration by performing thorough data mapping and transformation activities; collaborated with cross-functional teams to analyze and address data discrepancies.


