Ravindra Reddy Gayam
*************@***********.*** +1-726-***-**** TX, USA LinkedIn
SUMMARY
Results-driven Data Engineer with 3+ years of experience architecting, developing, and optimizing large-scale data pipelines, analytics platforms, and cloud-native solutions across AWS, Azure, and GCP environments. Skilled in Python, SQL, Hadoop, and Spark, with proven expertise in automating ETL workflows, improving data quality, and enabling real-time analytics. Adept at delivering regulatory-compliant, high-performance data solutions for banking, manufacturing, and marketing domains. Strong collaborator with cross-functional teams, translating business requirements into actionable, data-driven insights that accelerate strategic decision-making and operational efficiency.
TECHNICAL SKILLS
Languages & Frameworks: Python, SQL, PL/SQL, Shell Scripting, Scala, R, Apache Kafka, Airflow, NiFi, Alteryx, Hadoop, Apache Spark, Hive, Pig, GraphQL, Go, TypeScript, C#, REST API, FastAPI, Flask, Django
Databases: MySQL, PostgreSQL, Aurora, DynamoDB, Oracle, Redis, Snowflake, Amazon Redshift, NoSQL (MongoDB, Cassandra), Azure SQL Data Warehouse, Google BigQuery, Elasticsearch, Snowpipe
Tools & Technology: Git, GitHub, Airflow, NiFi, Alteryx, Docker, Kubernetes, Kafka, CI/CD, Jenkins, Ansible, Terraform, Tableau, Power BI, Looker, JIRA, Microsoft Azure (Event Hubs, HDInsight, Synapse, Databricks, ADF, AML, Data Lake, Cosmos DB), AWS (S3, EC2, SageMaker, Glue, Lambda, Redshift), Google Cloud Platform (GCP), Apache Beam, dbt, Matillion
Development Methodologies: Agile, Waterfall, Test Driven Development (TDD), Scaled Agile (SAFe), End-to-End Testing (Regression), and Performance Testing
Others: Leadership, Communication, Decision-making, Cross-functional collaboration, Stakeholder management, Problem Solving, Critical Thinking, Time Management, Adaptability, Creativity, Presentation Skills
PROFESSIONAL EXPERIENCE
Data Engineer, DXC Technology Aug 2024 – Present Remote, USA
Engineered automated data ingestion pipelines using AWS Glue and Apache NiFi, integrating multiple APIs and third-party sources, enabling real-time reporting and cutting business insight generation time by 40%.
Optimized Databricks and Delta Lake workflows to unify streaming and batch data processing, reducing infrastructure complexity and increasing analytics reliability for marketing and audience segmentation initiatives.
Developed advanced Python-based data profiling frameworks to detect anomalies proactively, streamlining root cause analysis and improving data accuracy for executive dashboards across multiple industry verticals.
Executed complex data migrations between GCP and Azure using Terraform automation and custom CI/CD pipelines, enhancing interoperability and reducing critical application downtime by 25%.
Implemented event-driven architectures with Kafka Streams to process IoT sensor data, enabling predictive maintenance and reducing unplanned operational disruptions by 15% in manufacturing operations.
Led cross-functional Agile teams in creating Power BI and Looker dashboards, accelerating requirements gathering and delivering client-facing BI solutions 30% faster.
Data Engineer, TCS Nov 2020 – Dec 2022 Remote, India
Led the design and deployment of a banking data modernization project for a leading global financial institution, migrating legacy ETL systems to cloud-native architecture using Python and SQL, reducing processing latency by 35%.
Built and optimized big data workflows with Hadoop and Spark to consolidate multi-source customer, transaction, and compliance datasets, ensuring scalable, high-performance data transformation across business units.
Devised data quality governance protocols with Informatica and Talend, ensuring 99.8% accuracy for critical regulatory reports and enabling full Basel III and AML compliance.
Automated reconciliation, risk reporting, and operational dashboards using Apache Airflow and Tableau, reducing manual intervention by 40% and enabling near real-time monitoring of fraud patterns.
Developed a real-time transaction monitoring system using AWS CloudWatch and Apache Kafka, reducing fraud detection response time by 30% and enhancing customer trust.
Engineered a secure data warehouse on Snowflake integrated with Amazon Redshift for advanced analytics, enabling predictive customer behavior models that increased cross-sell opportunities by 18%.
Collaborated with DevOps teams to containerize data ingestion pipelines via Docker and orchestrate deployments with Kubernetes, improving deployment efficiency and scalability.
Delivered executive-level dashboards highlighting revenue trends, fraud risk hotspots, and regulatory compliance status, supporting data-driven decision-making at the C-suite level.
EDUCATION
Master of Science, Information Technology Management, Webster University Jan 2023 – Dec 2024 Missouri, USA