Data Engineer

Location:
Indianapolis, IN
Posted:
September 08, 2023

PROFESSIONAL SUMMARY

Data Engineer with *+ years of experience in data integration and analytics, leveraging expertise in AWS Redshift and Tableau to improve data-processing efficiency by 30% and deliver dynamic business-intelligence dashboards. Demonstrated technical leadership and problem-solving on challenging architecture and scalability problems, with a strong commitment to high-quality work. Aiming to apply these skills and achievements to drive team success and elevate organizational performance.

EMPLOYMENT HISTORY

Data Engineer, Eli Lilly, Indianapolis, IN

03/2023 - Present

• Oversee and provide guidance for end-to-end data-related activities, primarily focused on integrating data from AWS S3 buckets using AWS Redshift.

• Apply data modeling and data engineering techniques in SQL and Redshift to optimize data processing and storage efficiency.

• Utilize Tableau and Power BI to create comprehensive and visually appealing business intelligence dashboards for end users.

• Leverage Power BI's natural-language (NLP) query capabilities to enable self-service data exploration.

• Govern and enforce data quality measures, ensuring the development team adheres to the client's implementation and coding standards.

• Provide technical leadership and mentorship to the development team, leveraging extensive technical acumen and industry best practices.

• Facilitate knowledge transfer and training sessions on the client's standards and technologies, promoting continuous learning within the team.

• Rapidly absorb and apply complex technical concepts, consistently demonstrating high learning agility in new environments.

• Analyze data-related issues, identify root causes, and present well-structured short- and long-term solutions to address challenges.

• Collaborate with cross-functional teams, including business analysts and stakeholders, to understand pharma industry needs and translate them into effective technical solutions.

• Act as a focal point of contact between technical teams and management, providing regular project updates and reports on progress and key performance indicators.

• Support the team to ensure consistent high-quality work, offering "hands-on" guidance and assistance when needed to maintain project timelines and deliverables.

Programmer Analyst, Cognizant (HUMANA), Chennai, India

12/2019 - 12/2021

• Leveraged Azure Data Factory to design and implement robust data pipelines, facilitating seamless data extraction, transformation, and loading processes.

• Utilized Azure Databricks to perform advanced data analytics, processing large-scale healthcare datasets and providing actionable insights for business stakeholders.

• Implemented Azure SQL Database and Azure Synapse Analytics (formerly Azure SQL Data Warehouse) for efficient and scalable data warehousing solutions, enabling faster data retrieval and analysis.

• Designed and deployed Azure Cosmos DB to manage unstructured healthcare data, ensuring high availability and low-latency access.

• Collaborated with cross-functional teams to identify data requirements and implement Azure-based solutions that aligned with business goals.

• Maintained and monitored Azure-based systems, proactively identifying and resolving performance bottlenecks and ensuring data integrity and security.

TARUN V

Data Engineer / Data Analyst

1937 Broadway St, Indianapolis, IN

adzkos@r.postjobfree.com

317-***-****

• Designed and developed comprehensive data models for complex business scenarios, ensuring optimal storage, retrieval, and query performance.

• Designed end-to-end data integration solutions using Azure Data Factory, orchestrating data movement and transformation across various sources and destinations.

• Developed and maintained data processing workflows using Databricks notebooks, applying data transformations, aggregations, and machine learning algorithms.

• Worked closely with data scientists, analysts, and healthcare professionals to understand data needs and translate them into effective data engineering solutions.

• Contributed to the adoption of Azure best practices within the organization, providing training and knowledge sharing to empower team members with Azure skills.

• Created pipelines, data flows, and complex data transformations and manipulations using Azure Data Factory and PySpark with Databricks.

• Created several Databricks Spark jobs with PySpark to perform table-to-table operations.

• Performed data transformations using the Data Flow activity in Azure Data Factory.

• Developed streaming pipelines using Apache Spark with Python.

• Created and provisioned multiple Databricks clusters for batch and continuous streaming data processing, and installed the required libraries on each cluster.

• Worked with complex SQL views, stored procedures, triggers, and packages in large databases across multiple servers.

• Experienced in both Agile and Waterfall methodologies in fast-paced environments.

• Extensively used SQL queries to verify and validate database updates.

• Suggested fixes to complex issues through thorough analysis of each defect's root cause and impact.

EDUCATION

Master's in Computer Science, University of Central Missouri, Lee's Summit, MO 01/2022 - 05/2023

Bachelor of Engineering in ECE, Anna University, Chennai, India 06/2015 - 06/2019

SKILLS

Programming Languages: Python, Java, SQL, PL/SQL, HTML, CSS

Databases: MongoDB, SQL Server, Oracle

Cloud Technologies: AWS, Azure

Data Visualization Tools: Power BI, Tableau, Matplotlib, Seaborn

ETL Tools: Informatica, AWS Glue

Big Data Technologies: Spark, Kafka

CERTIFICATION

Microsoft Certified Data Engineer

LINKS

LinkedIn

GitHub

INTERNSHIPS

Data Engineer Intern, Fragma Data Systems

08/2018 - 07/2019

• Participated in hands-on classroom activities and case studies to gain proficiency in Big Data and Hadoop technologies.

• Utilized a diverse set of technologies, including Java, Oracle DB, JDBC, Excel (ODBC), Hadoop, Data Transfer (Sqoop), Pig, and Hive, to prepare and process large datasets for analysis.

• Gained practical experience in extracting, scrubbing, and manipulating real-time and warehouse data using SQL and Python.

• Applied newfound knowledge to effectively define business problems and develop test designs that generated valid and actionable solutions.

• Gained exposure to Project Management methodologies such as Agile and Scrum, along with a thorough understanding of the Software Development Life Cycle (SDLC).


