
Data Analyst Engineering Manager

Location: Gaithersburg, MD


Maheswara Reddy advxvx@r.postjobfree.com

LinkedIn 301-***-****

Professional Summary

A seasoned IT professional with 14+ years of IT experience, including 8 years in the Big Data tech stack.

Good exposure to AWS, Snowflake, DevOps, Jenkins, Kubernetes, Docker, Oozie, Control-M, Airflow, GitHub, Splunk, Talend, Power BI, and Unix shell scripting.

Strong work experience with Big Data and streaming platforms: HDFS, MapReduce, HBase, Hive, Kafka, Spark, and Pig.

Worked in client-facing roles at client locations in the USA.

Played various roles: DevOps Lead, Tech Lead, Senior Developer, and Data Analyst.

Experience working on development and support projects across diverse cultures.

Expertise in designing data lakes and the mappings between raw data and the data warehouse.

Strong in handling structured and semi-structured data on Big Data platforms.

Strong expertise in the design and development of streaming applications.

Experience managing offshore/onsite teams.

Experience migrating Big Data applications from on-premises to the AWS cloud.

Ability to build new territories and expand opportunities toward stated targets, and to lead, motivate, and provide effective guidance to a team of professional and support staff.

Defined requirements for each customer and coordinated with internal and external resources to deliver services and projects to customers.

Good experience in Agile Methodology.

Technical Skills

Languages: Java, Python

Hadoop: HDFS, MapReduce, Hive, HBase, Pig, Spark, Sqoop, Kafka, Oozie, Airflow, Docker, Kubernetes, Control-M, Talend

Hadoop Distributions: Pivotal, MapR, and Cloudera

Query Languages: Hive SQL, Spark SQL, HBase, Pig

Build Tools: Jenkins

Databases: Oracle, MySQL, DB2, SQL Server, and HBase

Cloud Technologies: AWS, Snowflake

Data Visualization Tools: Power BI, Tableau, Grafana

Service Management: ServiceNow, Rally, HP

Experience Summary

Worked as an Engineering Manager at Ness from Dec 2021 to Feb 2023.

Worked as an Assoc. Software Engineering Manager at UHG from June 2014 to Nov 2021.

Worked as a Lead Product Developer at Symphony Teleca from Aug 2013 to June 2014.

Worked as a Software Engineer at IBM from Feb 2010 to Aug 2013.

Educational Qualification

Master of Computer Applications from Acharya Nagarjuna University in 2008.

Trainings and Certifications

Cloudera Certified Developer for Apache Hadoop (CCDH) CCD-410.

AHM-250 Certified.

Microsoft Programming in HTML5 with JavaScript and CSS3 Specialist.

Trained in Talend and MarkLogic.

Professional Experience

Company: Ness

Client: Franklin Templeton, Texas USA

Project: Data Lake (Dec 2021 to Mar 2023)

Role: Software Engineering Manager

Environment: AWS, Snowflake, GitHub, Power BI, and Python.

Led the FTT on-premises data to cloud migration project using AWS S3, Lambda, API, Secrets Manager, Snowflake, and Python.

Roles and Responsibilities:

•Led the migration of on-premises processes to AWS S3, Lambda, API, Glue, and Secrets Manager.

•Designed and developed Python connectors to pull data from on-premises sources into the cloud (see the sketch after this list).

•Designed and developed Snowflake pipes, tasks, and dashboards.

•Designed and developed Power BI dashboards.

•Implemented Agile Scrum using Jira; designed and tracked end-to-end project scope and implementation.
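The following is a minimal, hypothetical sketch of the kind of Python connector described above: it pulls source credentials from AWS Secrets Manager and lands an extracted file in an S3 raw zone. The secret name, bucket, and paths are illustrative assumptions rather than the project's actual names, and the Lambda wiring and Snowflake pipes are omitted.

# Hypothetical sketch: fetch credentials from Secrets Manager and land an
# extracted on-premises file in an S3 raw landing zone. All names are placeholders.
import json
import boto3

def get_source_credentials(secret_name: str) -> dict:
    """Read source-system credentials from AWS Secrets Manager."""
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_name)
    return json.loads(response["SecretString"])

def land_file_in_s3(local_path: str, bucket: str, key: str) -> None:
    """Upload an extracted file to the raw S3 landing zone."""
    s3 = boto3.client("s3")
    s3.upload_file(local_path, bucket, key)

if __name__ == "__main__":
    creds = get_source_credentials("ftt/onprem/source-db")   # assumed secret id
    # ... extract data from the on-premises source using `creds` ...
    land_file_in_s3("/tmp/extract.csv", "ftt-raw-landing", "daily/extract.csv")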

Client: UHG, Minnesota USA

Project: Data Lake, BDAaps and CDSM (Mar 2018 to Nov 2021)

Role: Assoc Software Engineering Manager

Environment: Azure, Databricks, Hive, Spark, Scala, HBase, Rally, ServiceNow, Kafka, Docker, Kubernetes, Talend, GitHub, Snowflake, Spring Boot, Microservices, Power BI, Splunk, Grafana.

Roles and Responsibilities:

•Led multiple Big Data development and DevOps application teams.

•Developed POCs and applications for migrating Big Data and Kafka streaming applications to Azure.

•Owned multiple applications in DevOps and enhanced them toward DevOps maturity.

•Developed ETL pipelines using the Big Data and streaming tech stack.

•Strong hands-on experience in the development of streaming applications (consuming data from canonicals) on Azure.

•Played a key role in gathering requirements, interacting with business leads and end users.

•Designed and developed Power BI and Grafana dashboards.

•Developed a data ingestion framework for over 60 sources.

•Accountable for collaborating with multiple stakeholders across all 28 applications: support issues, SLA monitoring, new development activities, and enhancements.

•Developed Kafka applications from scratch and loaded data into Snowflake (see the sketch after this list).

•Developed multiple automations to reduce manual effort.

•Developed self-service scripts and reusable components.

•Built release compliance automation for all applications.

•Worked on performance improvements in Spark- and Kafka-based applications.

•Led teams in implementing engineering practices and new ideas.

•Involved in analysis and estimation of low-level and detailed design.

•Hands-on coding of the key objects and interfaces that define the architectural boundaries of Hadoop components and services.

•Reorganized the Hadoop code base to create reusable components and simplify the build process.

•Held regular stakeholder interactions with US counterparts and business teams.

•One of the senior trainers in the Optum Guru community; delivered multiple tech sessions on Big Data and Kafka.
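As a rough illustration of the Kafka-to-Snowflake loading described above (not the project's actual code), the sketch below consumes JSON events with kafka-python and inserts them into a Snowflake table via the Snowflake Python connector. The topic, consumer group, account, and table names are assumed placeholders, and the library choice itself is an assumption.

# Hypothetical sketch: consume JSON events from Kafka and load them into a
# Snowflake raw table. Topic, account, and table names are placeholders.
import json
from kafka import KafkaConsumer          # kafka-python (assumed client library)
import snowflake.connector               # snowflake-connector-python

consumer = KafkaConsumer(
    "claims-events",                     # assumed topic
    bootstrap_servers=["broker1:9092"],
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    enable_auto_commit=False,
    group_id="claims-loader",
)

conn = snowflake.connector.connect(
    account="my_account", user="loader", password="***",
    warehouse="LOAD_WH", database="RAW", schema="EVENTS",
)
cur = conn.cursor()

batch = []
for message in consumer:
    batch.append(json.dumps(message.value))
    if len(batch) >= 500:                # flush in small batches
        for payload in batch:            # row-by-row for simplicity; a real loader
            cur.execute(                 # would stage files or use Snowpipe instead
                "INSERT INTO claims_raw (payload) SELECT PARSE_JSON(%s)", (payload,)
            )
        consumer.commit()                # commit offsets only after a successful load
        batch.clear()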

Client: UHG, Hyderabad India

Project: Data Fabric, Bedrock, Lowest Cost Alternative (Jun 2014 to Feb 2018)

Role: Sr. Hadoop Consultant

Environment: MapR distribution, Hive, Spark, Scala, HBase, Rally, ServiceNow, Kafka, Docker, Kubernetes, Talend, GitHub, Power BI, Splunk, Grafana.

Roles and Responsibilities:

•Involved in review of functional and non-functional requirements.

•Key member and technical lead of the operations team, ensuring operational effectiveness.

•Handled around 18 data sources for ingestion into the Data Fabric framework.

•Worked on a live 105-node Hadoop cluster running MapR 4.0.2.

•Loaded and transformed large sets of structured, semi-structured, and unstructured data.

•Developed MapReduce programs to parse raw data, populate staging tables, and store the refined data in partitioned tables (see the sketch after this list).

•Involved in the Pivotal to MapR 4 migration and led the team in the MapR 4 to MapR 4.0.2 upgrade.

•Developed automation to reduce manual effort.

•Provided UAT and production support.
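For illustration only, here is a minimal Hadoop Streaming mapper in Python that parses pipe-delimited raw records and emits a partition key plus the cleaned record. The actual Data Fabric jobs were MapReduce programs rather than this streaming sketch, and the field layout and partition column are assumptions.

#!/usr/bin/env python
# Hypothetical Hadoop Streaming mapper: parse pipe-delimited raw records and
# emit (partition_key, cleaned_record) so a downstream step can populate
# date-partitioned staging tables. Field positions and widths are assumed.
import sys

EXPECTED_FIELDS = 12                      # assumed record width

def main():
    for line in sys.stdin:
        fields = line.rstrip("\n").split("|")
        if len(fields) != EXPECTED_FIELDS:
            continue                      # drop malformed raw records
        service_date = fields[2]          # assumed partition column position
        cleaned = "|".join(f.strip() for f in fields)
        sys.stdout.write(service_date + "\t" + cleaned + "\n")

if __name__ == "__main__":
    main()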

Client: Symphony Teleca, Bangalore India

Project: Social Awareness Platform for GM (Aug 2013 to June 2014)

Role: Lead Product Developer

Environment: CDH 4.5, MapReduce, HBase, Nutch, Solr, Hive, Java.

Roles and Responsibilities:

•Provided architectural leadership, with broad knowledge of cloud/web services and the Hadoop distributed system.

•Set up a CDH 5.0 cluster with 5 nodes.

•Designed the HBase schema structure and integrated HBase with Solr.

•Hands-on coding of the key objects and interfaces that define the architectural boundaries of Hadoop components and services.

•Reorganized the Hadoop code base to create reusable components and simplify the build process.

•Identified critical performance bottlenecks in those systems and designed software that avoids them.

•Developed tools (Java/Python) to automate the deployment, administration, and performance monitoring of large Hadoop clusters at the application, operating system, server, and network levels.

•Applied application-level knowledge of back-end systems architecture and contributed to its overall design to monitor and sustain high standards of availability, security, and performance.

Organization: IBM Bangalore India

Project: Sales Metrics (Feb 2010 to Aug 2013)

Role: Senior Software Engineer

Environment: Hadoop v1.3, MapReduce, Hive, Java.

Roles and Responsibilities:

•Coded the key objects and interfaces that define the architectural boundaries of Hadoop components and services.

•Reorganized the Hadoop code base to create reusable components and simplify the build process.

•Built large distributed systems that scale to terabytes of data on 25 nodes.

•Identified critical performance bottlenecks in those systems and designed software that avoids them.

•Developed tools (Java/Python) to automate the deployment, administration, and performance monitoring of large Hadoop clusters at the application, operating system, and server levels (see the sketch below).
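As one illustration of the kind of monitoring automation described above (assuming a Hadoop 1.x client on the PATH), the sketch below shells out to `hadoop dfsadmin -report` and flags dead DataNodes or high HDFS usage. The report format, thresholds, and warning output are assumptions, not the project's actual tooling.

# Hypothetical monitoring helper: run `hadoop dfsadmin -report` and warn on
# dead DataNodes or high HDFS usage. The exact report format varies by Hadoop
# version, so the regexes and thresholds here are assumptions.
import re
import subprocess

def hdfs_report() -> str:
    """Capture the cluster report from the Hadoop admin CLI."""
    return subprocess.check_output(["hadoop", "dfsadmin", "-report"], text=True)

def check_cluster(report: str, max_used_pct: float = 85.0) -> list:
    """Return warning strings derived from the dfsadmin report."""
    warnings = []
    nodes = re.search(r"Datanodes available:\s*\d+\s*\((\d+) total, (\d+) dead\)", report)
    if nodes and int(nodes.group(2)) > 0:
        warnings.append(nodes.group(2) + " dead DataNode(s)")
    used = re.search(r"DFS Used%:\s*([\d.]+)\s*%", report)
    if used and float(used.group(1)) > max_used_pct:
        warnings.append("HDFS usage at " + used.group(1) + "% (threshold " + str(max_used_pct) + "%)")
    return warnings

if __name__ == "__main__":
    for warning in check_cluster(hdfs_report()):
        print("WARNING:", warning)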


