Shashidhar Reddy Gunreddy
Email: *************@*****.***
Mobile: +1-224-***-****
Senior Azure Data Engineer
PROFESSIONAL SUMMARY:
Senior Azure Data Engineer with a strong background in designing and operating cloud data pipelines. Proven ability to build and optimize ETL workflows with Azure Data Factory, Azure Databricks, and Spark, migrate on-premises databases to Azure, and deliver data warehousing solutions on Snowflake and Azure Synapse Analytics. Exceptional communication and collaboration skills, adept at coordinating cross-functional teams and managing stakeholder expectations.
TECHNICAL SKILLS:
Databases:
Oracle, MySQL, Hive, SQL Server, HBase, Cassandra, MongoDB
Big Data Technologies:
HDFS, Hive, PySpark, MapReduce, Pig, YARN, Sqoop, Oozie, Zookeeper, Flume
Programming Languages & Data Formats:
Python, Java, SQL, R, PL/SQL, Scala, JSON, XML, C#
Cloud Services:
Azure, Azure Cosmos DB, Azure Blob Storage, Kubernetes, Azure Synapse Analytics (DW), Azure Data Lake, Azure Databricks, Azure Data Factory, Snowflake
Data Warehousing:
Snowflake, Azure Synapse Analytics
Data Integration & ETL:
Informatica PowerCenter, Azure Data Factory, SSIS, Apache Airflow, Talend
Data Visualization & Reporting:
Tableau, Power BI, Looker, Reporting Services
Techniques:
Data Mining, Clustering, Data Visualization, Data Analytics
Methodologies:
Agile/Scrum, UML, Design Patterns, Waterfall
Container Platform:
Docker, Kubernetes, CI/CD, Jenkins
Tools & Utilities:
JIRA, GitHub, PowerShell, Control-M, Azure Monitor, Azure Log Analytics, Splunk, AWS CloudWatch
Project Management:
Project scheduling, resource allocation, risk management, stakeholder communication
Software & Applications:
Microsoft Office Suite (Outlook, Word, Excel, PowerPoint, Visio)
Other:
Data Governance, Data Security, Data Lineage, Metadata Management, Kafka, Apache Flink, Hadoop, BigQuery, Google Cloud Storage (GCS), Pub/Sub, AWS (EC2, S3, IAM, RDS, Glue), GraphQL
PROFESSIONAL EXPERIENCE:
Nomi Health Jan 2023 – Present
Sr. Azure Data Engineer
Responsibilities:
Designed and implemented data pipelines for efficient ETL processes using Azure Data Factory and other Azure services; planned and executed migrations of SQL Server, MySQL, Oracle, and PostgreSQL databases to cloud platforms (a representative sketch follows this list).
Developed and optimized ETL workflows using Spark and Hive on Azure Databricks; designed and implemented robust data models for warehousing solutions on Snowflake and Azure Synapse Analytics.
Applied big data technologies (Hadoop, Spark, Hive) and real-time data processing (Kafka, Apache Flink); migrated on-premises solutions to Azure, improving performance and scalability.
Contributed significantly to data quality, integrity, and security through implementing data governance measures, validation, and monitoring mechanisms (Azure Monitor, Azure Log Analytics).
Collaborated effectively with cross-functional teams, including data scientists and business stakeholders, to meet project requirements and deliver data-driven insights.
Created comprehensive stakeholder reports using data visualization tools (Tableau, Power BI) and reporting services.
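
A minimal PySpark sketch of the extract-transform-load pattern described above, as it would run on Azure Databricks. Storage paths, container names, and column names (claims, claim_id, amount) are hypothetical placeholders, not production code, and the cluster is assumed to already have Data Lake credentials configured.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("claims_etl").getOrCreate()

    # Extract: read raw records landed in the data lake (path is illustrative)
    raw = spark.read.option("header", True).csv(
        "abfss://raw@examplelake.dfs.core.windows.net/claims/")

    # Transform: enforce types, drop rows missing the business key, stamp a load date
    clean = (raw
             .withColumn("amount", F.col("amount").cast("double"))
             .filter(F.col("claim_id").isNotNull())
             .withColumn("load_date", F.current_date()))

    # Load: write the curated zone as partitioned Parquet for downstream queries
    clean.write.mode("overwrite").partitionBy("load_date").parquet(
        "abfss://curated@examplelake.dfs.core.windows.net/claims/")

In a pipeline of this kind, a notebook like the above is typically invoked by an Azure Data Factory activity on a schedule or event trigger rather than run by hand.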
Costco May 2021 – Jan 2023
Data Engineer
Responsibilities:
Designed and implemented scalable data ingestion pipelines using Azure Data Factory, efficiently ingesting data from diverse sources such as SQL databases and REST APIs.
Developed robust data processing workflows leveraging Azure Databricks and Spark for distributed data processing and transformation tasks.
Ensured data quality and integrity through comprehensive data validation, cleansing, and transformation operations performed using Azure Data Factory and Databricks (see the sketch following these bullets).
Leveraged Azure Synapse Analytics to seamlessly integrate big data processing and analytics capabilities, empowering data exploration and insights generation.
Automated data pipelines and workflows by configuring event-based triggers and scheduling mechanisms, streamlining data processing and delivery.
Implemented comprehensive data lineage and metadata management solutions, ensuring end-to-end visibility and governance over data flow and transformations.
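
An illustrative PySpark sketch of the validation-and-cleansing step described above: rows failing basic checks are routed to a quarantine path while clean rows pass downstream. Table, path, and column names (sales, order_id, amount, store) are hypothetical examples.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("sales_validation").getOrCreate()

    sales = spark.read.parquet(
        "abfss://staging@examplelake.dfs.core.windows.net/sales/")

    # Validate: keep rows with a present business key and a non-negative amount
    valid = sales.filter(F.col("order_id").isNotNull() & (F.col("amount") >= 0))
    rejected = sales.subtract(valid)

    # Cleanse: normalize strings and deduplicate on the business key
    valid = (valid
             .withColumn("store", F.trim(F.col("store")))
             .dropDuplicates(["order_id"]))

    # Route rejects to a quarantine path for review; pass clean data downstream
    rejected.write.mode("append").parquet(
        "abfss://quarantine@examplelake.dfs.core.windows.net/sales/")
    valid.write.mode("overwrite").parquet(
        "abfss://curated@examplelake.dfs.core.windows.net/sales/")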
TD Bank Jan 2020 – May 2021
Big Data Engineer
Responsibilities:
Successfully extracted and ingested large volumes of customer data from various sources into Azure Blob Storage for data lake storage.
Developed and optimized Spark applications in Java to improve data processing efficiency and reduce workload on SQL databases (the sketch after this list shows the same pattern in PySpark).
Leveraged Hive on Spark to enhance performance and efficiency in data processing and management.
Integrated Databricks with various data sources for seamless data ingestion and real-time updates.
Implemented robust monitoring and logging for Azure Synapse to proactively identify and resolve data issues, optimizing data pipeline performance.
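
A sketch of the offload pattern from the bullets above: reading a SQL table over JDBC in parallel partitions and landing it in Azure Blob Storage. The role used Java Spark; this PySpark version shows the same idea for brevity. Hostnames, credentials, and table names are placeholders, and the appropriate JDBC driver is assumed to be on the cluster.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("customer_offload").getOrCreate()

    # Read the source table over JDBC, partitioned so the load is spread
    # across executors instead of issuing one large scan against the database
    customers = (spark.read.format("jdbc")
                 .option("url", "jdbc:sqlserver://example-host:1433;databaseName=crm")
                 .option("dbtable", "dbo.customers")
                 .option("user", "etl_user")
                 .option("password", "<secret>")  # in practice, pulled from a secret store
                 .option("partitionColumn", "customer_id")
                 .option("lowerBound", 1)
                 .option("upperBound", 1000000)
                 .option("numPartitions", 8)
                 .load())

    # Land the extract as Parquet in Azure Blob Storage (WASB path is illustrative)
    customers.write.mode("overwrite").parquet(
        "wasbs://datalake@exampleaccount.blob.core.windows.net/customers/")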
EDUCATION:
Master's in Computer Science
Campbellsville University