Data Software Developer

Location:
Houston, TX, 77063
Salary:
180000
Posted:
September 14, 2019

Resume:

Summary

Over ** years of strong experience in all aspects of software development methodology, including gathering system requirements, analyzing requirements, and designing and developing systems, with very good experience in all phases of the Software Development Life Cycle (SDLC).

5 years of experience as a Big Data/Hadoop developer, lead, and architect, with end-to-end experience developing applications across the Hadoop ecosystem.

Imported and exported data between RDBMS and HDFS/Hive using Sqoop.

Experienced in Spark Streaming with Java, Scala, Python, Kafka, and Unix shell scripting.

Experience understanding clients' big data business requirements and translating them into Hadoop-centric solutions.

Experience designing architecture for big data applications.

Created Hive and Impala scripts to analyze data efficiently.

Hands-on NoSQL database experience with HBase and Cassandra.

Experience with Apache Hadoop components such as HDFS, MapReduce, HiveQL, Impala, and Pig, as well as big data analytics.

Experience integrating Hadoop with Informatica and Talend and building pipelines for processing data.

Experience integrating Hadoop with Informatica BDM for data processing.

Experienced with AWS, GCP, and Azure cloud computing.

Good working knowledge of development tools such as Eclipse IDE, Toad, and Oracle SQL Developer.

Experience in Scrum, Agile and Waterfall models.

Experience in ETL, data warehousing, and data integration projects using Pentaho.

Strong experience in database design, writing complex SQL Queries and Stored Procedures.

Working knowledge of databases such as Oracle 8i/9i/10g, Microsoft SQL Server, and DB2.

Excellent communication, interpersonal, and analytical skills, along with a positive attitude.

Technical Skills

Big Data Ecosystems : Apache Hadoop, YARN, Hue, HDFS, Hive, Pig, Impala, Sqoop, Oozie, Flume, Kafka, Spark

Languages : Java, J2EE, C#.NET, Python, Scala

NoSQL : HBase, Cassandra

ETL Tools : Pentaho, Informatica BDM

RDBMS/Databases : SQL Server, Oracle 11g, MySQL

Version Controls : VSS, GitHub, SVN

Operating Systems : Windows 2003, Unix, Linux

Education

Bachelor of Engineering, Computer Science, VRS College of Engg. & Technology, University of Madras, Chennai, India, 2003.

Employment History

Client: SAIPSIT, TX FROM: Dec 2018 TO: Present

Role: Solution Architect / Big Data Lead Consultant

Responsibilities:

Gathering requirements from multiple business users and preparing mapping documents for business logic.

Contributing to design activities and reviewing complex designs for scalability, performance, and maintainability.

Handling team management and client interfacing for complex reviews; conducting requirement gathering, status, technical review, and design meetings.

Providing regular status reports to management on application status and other metrics.

Created Hive tables and ingested data into CDL.

Communicating with developers and business customers to ensure that customer requirements are correctly translated to appropriate data design specifications.

Performing design reviews, quality reviews and performance reviews.

Designing HBase tables to store all required details.

Developing Spark and Scala code to apply transformations and insert data into HBase tables (see the sketch at the end of this section).

Implemented partitioning, dynamic partitioning, and bucketing in Hive.

Interacting with infrastructure, release management, change management, QA, DBA, and application teams.
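
A minimal sketch of the kind of Spark-to-HBase load described in the Spark/Scala bullet above. The source table, HBase table, column family, and columns (staging.transactions, txn_details, the "d" family) are hypothetical placeholders, and the transformation is purely illustrative; this is not the project's actual code.

import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Put}
import org.apache.hadoop.hbase.util.Bytes
import org.apache.spark.sql.SparkSession

object HBaseLoadSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("hbase-load-sketch").enableHiveSupport().getOrCreate()
    // Hypothetical source table; apply the business transformation before the write.
    val df = spark.table("staging.transactions")
      .selectExpr("txn_id", "upper(status) AS status", "cast(amount AS string) AS amount")
    // Open one HBase connection per partition rather than per row.
    df.rdd.foreachPartition { rows =>
      val conn = ConnectionFactory.createConnection(HBaseConfiguration.create())
      val table = conn.getTable(TableName.valueOf("txn_details")) // hypothetical HBase table
      try {
        rows.foreach { r =>
          val put = new Put(Bytes.toBytes(r.getAs[String]("txn_id")))
          put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("status"), Bytes.toBytes(r.getAs[String]("status")))
          put.addColumn(Bytes.toBytes("d"), Bytes.toBytes("amount"), Bytes.toBytes(r.getAs[String]("amount")))
          table.put(put)
        }
      } finally {
        table.close()
        conn.close()
      }
    }
    spark.stop()
  }
}

Batching the Puts or using an HBase-Spark connector would be the usual next step for throughput; the per-row put keeps the sketch short.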

Technologies: Hadoop ecosystem, HDFS, Hive, Talend, Spark, Scala, Java, HBase, AWS, Azure

Client: UBS, NJ, USA FROM: Feb 2018 TO: Nov 2018

Role: Big Data Lead Consultant / Technology Architect

Responsibilities:

Developing Spark and Java code to apply transformations and insert data into HBase and Cassandra tables (see the sketch at the end of this section).

Involved in development of all modules and leading the offshore team.

Involved in code review and performance tuning activities.

Preparing mapping documents for business logic.

Contributing to design activities and reviewing complex designs for scalability, performance, and maintainability.

Handling team management and client interfacing for complex reviews.

Designing Cassandra tables to store all required details.

Created Hive tables and ingested data into BDR.

Conducting requirement gathering, status, technical review, and design meetings.

Interacting with infrastructure, release management, change management, QA, DBA, and application teams.

Communicating with developers and business customers to ensure that customer requirements are correctly translated to appropriate data design specifications.

Providing regular status reports to management on application status and other metrics.

Performing design reviews, quality reviews and performance reviews.

Using Informatica BDM to transform and move data from the ASIS layer through the Govern and Gold layers to the Provision layer.

Implemented partitioning, dynamic partitioning, and bucketing in Hive.
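
As a companion to the HBase/Cassandra bullet above, a minimal sketch of writing a transformed DataFrame to Cassandra through the DataStax spark-cassandra-connector. The resume mentions Spark and Java code; Scala is used here only to keep the sketches in one language, and the keyspace, table, and source names (risk, trades_by_id, bdr.trades_stage) are hypothetical.

import org.apache.spark.sql.SparkSession

object CassandraLoadSketch {
  def main(args: Array[String]): Unit = {
    // Assumes the spark-cassandra-connector package is on the classpath and
    // spark.cassandra.connection.host points at the cluster.
    val spark = SparkSession.builder()
      .appName("cassandra-load-sketch")
      .enableHiveSupport()
      .getOrCreate()
    val trades = spark.table("bdr.trades_stage") // hypothetical Hive source table
      .selectExpr("trade_id", "book", "cast(amount AS double) AS amount")
    trades.write
      .format("org.apache.spark.sql.cassandra")
      .options(Map("keyspace" -> "risk", "table" -> "trades_by_id"))
      .mode("append")
      .save()
    spark.stop()
  }
}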

Technologies: Hadoop, HDFS, Hive, Informatica BDM, Spark, KAFKA, Java, Autosys, Cassandra

Client: Citi, NJ, USA FROM: Feb 2017 TO: Jan 2018

Role: Big Data Lead Consultant / Technology Architect

Responsibilities:

Gathered requirements from multiple product processor teams.

Designed HBase tables to store all transaction details.

Developed Spark streaming code to consume Kafka topics (see the sketch at the end of this section).

Conducting requirement gathering, status, technical review, and design meetings.

Prepared mapping documents for multiple product processors.

Performing design reviews, quality reviews and performance reviews.

Communicating with developers and business customers to ensure that customer requirements are correctly translated to appropriate data design specifications.

Providing technical support for production cycle jobs and troubleshooting issues.

Interacting with infrastructure, release management, change management, QA, DBA, and application teams.

Providing regular status reports to management on application status and other metrics.

Developed Spark and Scala code to apply transformations and insert data into HBase tables.
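
A minimal sketch of consuming a Kafka topic with Spark Structured Streaming, as referenced in the streaming bullet above. The broker address, topic name, and message schema are hypothetical, and the console sink stands in for the real downstream HBase write just to keep the example self-contained.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, from_json}
import org.apache.spark.sql.types.{DoubleType, StringType, StructType}

object KafkaStreamSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("kafka-stream-sketch").getOrCreate()
    // Hypothetical message layout for a product-processor transaction feed.
    val schema = new StructType()
      .add("txn_id", StringType)
      .add("product", StringType)
      .add("amount", DoubleType)
    val txns = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker1:9092")
      .option("subscribe", "txn-events")
      .option("startingOffsets", "latest")
      .load()
      .select(from_json(col("value").cast("string"), schema).as("txn"))
      .select("txn.*")
    // Console sink used here in place of the production sink.
    val query = txns.writeStream
      .format("console")
      .option("checkpointLocation", "/tmp/kafka-stream-sketch/checkpoint")
      .start()
    query.awaitTermination()
  }
}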

Technologies: Hadoop, HDFS, HBase, Hive, Spark, KAFKA, Java, Autosys, WebServices

Client: Experian, CA, USA FROM: Aug 2016 TO: Jan 2017

Role: Big Data Lead Consultant / Technology Architect

Responsibilities:

Conducting requirement gathering, status, technical review, and design meetings.

Performing design reviews, quality reviews and performance reviews.

Communicating with developers and business customers to ensure that customer requirements are correctly translated to appropriate data design specifications.

Providing technical support for production cycle jobs and troubleshooting issues.

Interacting with infrastructure, release management, change management, QA, DBA, and application teams.

Providing regular status reports to management on application status and other metrics.

Implemented HBase tables to store trade statistics reports.

Developed Spark and Scala code to apply transformations and insert data into HBase tables.

Implemented partitioning, dynamic partitioning, and bucketing in Hive.

Developed a Java API and Sqoop jobs to handle special characters.

Implemented HiveQL for inserting data from the raw layer to the stage layer.

Implemented Spark SQL queries for processing data from the stage layer to the processed layer (see the sketch below).
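
A minimal sketch of the two layer moves described in the last two bullets: a HiveQL insert from a raw table into a stage table, then a Spark SQL aggregation from stage into a processed table. Database, table, and column names are hypothetical, and the cleansing and aggregation logic is purely illustrative.

import org.apache.spark.sql.SparkSession

object LayerLoadSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("layer-load-sketch")
      .enableHiveSupport()
      .getOrCreate()
    // Raw -> stage: HiveQL insert with light cleansing.
    spark.sql(
      """INSERT OVERWRITE TABLE stage.trades
        |SELECT trim(trade_id) AS trade_id,
        |       upper(trade_type) AS trade_type,
        |       cast(amount AS double) AS amount,
        |       load_date
        |FROM raw.trades
        |WHERE trade_id IS NOT NULL""".stripMargin)
    // Stage -> processed: Spark SQL aggregation feeding the processed layer.
    spark.sql(
      """INSERT OVERWRITE TABLE processed.trade_summary
        |SELECT trade_type, load_date, count(*) AS trade_count, sum(amount) AS total_amount
        |FROM stage.trades
        |GROUP BY trade_type, load_date""".stripMargin)
    spark.stop()
  }
}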

Technologies: Hadoop, HDFS, HiveQL, Spark, Scala, Eclipse, Sqoop, Java (JDK 1.6), Oozie, ControlM, DB2 Mainframe, JCL, FTP, Web service

Client: Fidelity, RI, USA FROM: Aug 2015 TO: June 2016

Role: Big Data Lead Consultant / Technology Architect

Responsibilities:

Conducting requirement gathering, status, technical review, and design meetings.

Performing design reviews, quality reviews and performance reviews.

Communicating with developers and business customers to ensure that customer requirements are correctly translated to appropriate data design specifications.

Providing technical support for production cycle jobs and troubleshooting issues.

Interacting with infrastructure, release management, change management, QA, DBA, and application teams.

Providing regular status reports to management on application status and other metrics.

Implemented partitioning, dynamic partitioning, and bucketing in Hive (see the sketch at the end of this section).

Developed Hive queries to process data and generate reports for the PMA team.

Developed Avro-format Sqoop jobs to handle special characters.
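
A minimal sketch of the Hive partitioning and dynamic-partitioning work mentioned above, expressed through spark.sql only to keep the sketches in one language; the bucketed (CLUSTERED BY) variant is omitted because bucketed Hive DDL is normally issued directly in Hive. Database, table, and column names are hypothetical.

import org.apache.spark.sql.SparkSession

object HivePartitionSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("hive-partition-sketch")
      .enableHiveSupport()
      .getOrCreate()
    // Date-partitioned reporting table.
    spark.sql(
      """CREATE TABLE IF NOT EXISTS reporting.positions (
        |  account_id STRING,
        |  symbol STRING,
        |  quantity DOUBLE
        |)
        |PARTITIONED BY (business_date STRING)
        |STORED AS ORC""".stripMargin)
    // Dynamic partitioning: each partition value is taken from the data itself.
    spark.sql("SET hive.exec.dynamic.partition=true")
    spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")
    spark.sql(
      """INSERT OVERWRITE TABLE reporting.positions PARTITION (business_date)
        |SELECT account_id, symbol, quantity, business_date
        |FROM staging.positions""".stripMargin)
    spark.stop()
  }
}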

Technologies: Hadoop, HDFS, HiveQL, Sqoop, Java, Oozie, ControlM, Spark, Impala, Informatica

Client: Kinder Morgan, TX, USA FROM: Oct 2014 TO: June 2015

Role: Lead consultant / Sr. Software developer

Responsibilities:

Developed data cleansing features such as schema validation, row counts, and data profiling using MapReduce jobs.

Used default MapReduce input and output formats.

Created Hive tables to store logs whenever a MapReduce job is executed.

Created a Hive aggregator to update the Hive table after running the data profiling job.

Analyzed data using Hive queries.

Implemented partitioning, dynamic partitioning, and bucketing in Hive.

Developed Hive queries to process data and generate data cubes for visualization (see the sketch below).
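
A minimal sketch of the kind of cube-style aggregation behind the visualization bullet above. This project was Hive and MapReduce based; the query is run through spark.sql here only to keep all sketches in one language, and the table, columns, and measure (analytics.shipments, region, product, volume) are hypothetical.

import org.apache.spark.sql.SparkSession

object CubeReportSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("cube-report-sketch")
      .enableHiveSupport()
      .getOrCreate()
    // WITH CUBE produces totals for every combination of region and product,
    // the shape a downstream visualization layer can consume directly.
    val cube = spark.sql(
      """SELECT region, product, sum(volume) AS total_volume, count(*) AS record_count
        |FROM analytics.shipments
        |GROUP BY region, product WITH CUBE""".stripMargin)
    cube.write.mode("overwrite").saveAsTable("analytics.shipment_cube")
    spark.stop()
  }
}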

Technologies: Hadoop ecosystem, HDFS, MapReduce, HiveQL, Sqoop, Java, Oozie

Client: JP Morgan Chase, OH, USA FROM: Jan 2013 TO: Sep 2014

Role: Associate / Application Developer

Responsibilities:

Involved in code development and code review.

Built flexible data models and seamless integration points.

Involved in performance tuning actions

Involved in deployment activities

Coordinated with other developers and software professionals.

Streamlined application deployment on test and production servers.

Performed complex analysis, designing and programming to meet business requirements.

Worked closely with the database team in developing stored procedures, functions, views, and triggers on SQL Server to deliver the desired functionality.

Environment: SQL server, Oracle, C#, Java, JavaScript, Web Services, VSS

Client: Citi FROM: Oct 2010 TO: Dec 2012

Role: Lead consultant

Responsibilities:

Integrated various components of various key modules of the project

Involved in development and support

Engaged in code review and testing

Involved in documentation activities

Involved in performance tuning actions

Involved in supporting other developers throughout the project phase

Involved in deployment activities

Environment: Java, Oracle, jQuery, C#.Net, SOAP UI, Web Services

Client: British Telecom FROM: Jan 2010 TO: Oct 2010

Role: Sr. Software developer

Responsibilities:

Engaged in coding, review and testing

Responsible for report generation

Integrated various components of various key modules of the project

Supported other developers throughout the project phase

Involved in deployment document creation

Environment: Java, C#.Net, jQuery and Oracle 10g

Project             Employer                From          To
Multiple Projects   ESOFT                   August 2005   December 2009
Multiple Projects   American IT Solutions   April 2003    July 2005


