
Data Java Developer

Location:
Sunnyvale, CA
Posted:
September 01, 2020


Resume:

Chandana Prabandham

+* - *** -***-****

Sunnyvale, CA

*********.**@*****.***

Senior Big Data and Java Engineer

Employment History:

• American Express Global Business Travel: 13 Jun 2016 – 28 Aug 2020

• CapGemini Financial Services Inc: 08 Dec 2014 – 09 Jun 2016

• ITC Infotech India Ltd, Bangalore: 25 Mar 2013 – 20 Oct 2014

• Deloitte Consulting, Hyderabad: 04 Oct 2010 – 18 Mar 2013

• Accenture Services Pvt Ltd, Mumbai: 04 Dec 2009 – 30 Jun 2010

• Global Travel Solutions, Mumbai: 10 Nov 2008 – 26 Nov 2009

• Cognizant Technology Solutions: 19 May 2007 – 12 Sep 2008

Education & Certifications:

• Bachelor of Technology in Electronics and Communications Engineering from J.N.T.U. Hyderabad, India – 2007

• Hortonworks Certified Hadoop Java Developer – 2014

• Cisco Certified Network Associate – 2009

• Sun Certified Java Programmer – 2007

Technical Skills:

Languages: Java, XML, SQL, JavaScript

Big Data Technologies: Hive, Sqoop, Oozie, Kafka, PySpark, NiFi, Java Spark Streaming, Pig, HBase, Phoenix, Bigtable, BigQuery, HDFS, Google Cloud Platform, Mahout, Informatica BDE, Elasticsearch

Java Technologies: Servlets, JSP, SAX/DOM, Web Services (JAX-WS and JAX-RS), SOAP, WSDL, JAXB, JavaMail, HTML, JSON, JNDI, JDBC

Servers: Apache Tomcat, JBoss 5, GlassFish, Google App Engine

Operating Systems: Linux, Windows 7, 10

Frameworks: Spring 3.1, Hibernate 3.3, Struts 2, Tiles, Google Web Toolkit, LDAP, JAXB, XML Beans, DOM4j, XML Schema, XSD, jsoup

Rule Engine: JBoss Rules (Drools 5.1)

Databases: Oracle, MySQL, MS SQL Server, Google Cloud SQL

IDEs: Eclipse 3.0+, NetBeans

Version Control Tools: SVN, Git, IBM

Tools: Maven, Ant, Log4j, JUnit


Experience Summary:

• 7+ years of experience providing business solutions and product development in Big Data technologies.

• 13+ years of software engineering expertise spanning the complete SDLC: design and development in Big Data technologies, web-based environments, Online Transaction Processing, N-tier architecture, and client/server architecture.

• 3+ years of experience leading teams of 5–10 through project delivery.

• 2+ years of experience as Scrum Master and project manager, responsible for sprint grooming, planning, sizing and estimation, delivery, and sprint retrospectives.

• 5+ years of experience communicating with product owners and other business stakeholders for requirement gathering and architecting solutions for various business needs.

• Represented Amex GBT at the Hortonworks Data Summit, San Jose 2017, as a Big Data Analyst.

• Hands-on experience designing and developing solutions on the full Hadoop stack: Hive, HBase, Kafka, NiFi, Pig, PySpark, Spark Streaming, MapReduce, and BigQuery.

• Hands-on experience in Spark using Java and Python; implemented machine learning and recommendation engines using PySpark 2.7.

• Won 2nd place in the February 2020 HACKATHON held at Amex GBT for building an end-to-end solution themed “Hyperautomation through hyper-personalization” using Java web services, PySpark, and Elasticsearch.

• Won 3rd place in the June 2018 HACKATHON held at Amex GBT for building a prototype product for sentiment analytics on email, voice, and chat messages.

• Mentored and trained the team on Hadoop to deliver an analytical sandbox solution that integrates with Informatica BDE, Hortonworks HDP 2.2, QlikView, and SAP BO.

Project Roles and Responsibilities:

American Express Global Business Travel

June 2016 to August 2020

Sr. Big Data Engineer

Recommendation Engine for Travelers for Hotel Reservations

Provide hotel recommendations to travelers during their air PNR bookings, to improve supplier and GBT sales through campaigns, rewards, and prompt recommendations.

Tech Stack: PySpark, Java, Hive, Pig, JanusGraph DB

Hotel Name Standardization and Card-Travel Matching Algorithm

The algorithm is designed to match travel invoices with credit card transactions for reconciliation, and to provide insights and reports to clients and the client management team on yearly spend.

Tech Stack: Hive, HBase, Pig, Java Spark, PySpark


Key Roles and Responsibilities at AMEX GBT:

• Responsible for leading a 10-member development team in the design, build, test, and deployment of projects, both as a tech lead and as Scrum Master.

• Worked both onsite (Phoenix, Arizona) and offshore as a senior big data engineer.

• Responsible for meeting with product owners, comprehending business requirements, and documenting and designing technical solutions.

• Responsible for monitoring and improving job performance and uplifting the technology and infrastructure based on the organization’s needs and vision.

• Key role in bridging the gap between product owners, the UAT team, and the offshore development team, by collaborating with members across the globe and through various time zones.

• Key role in mentoring team members and providing timely domain-specific training.

• Received many “Achiever” points as a “People” person and for showcasing “Passion” toward the role.

Real-time Ingestion of PNR Data through Kafka and Storm

The backbone of GBT is the travelers’ PNR data. Ingesting this data into the data lake helps build travel and traveler insights and reports.

Tech Stack: NiFi, Kafka, Storm, Java, XML

Reports and Feeds for Global Supplier Relations (GSR) team and Marketing teams

Reports and business insights are required by GSR for reconciliation of commissions paid to suppliers, travel agencies, travel agents, and third-party vendors. Weekly and monthly reports are generated from the real-time data ingested into the data lake.

Tech Stack: Hive, PIG, Java for Custom UDFs

AIR and HOTEL Spend Benchmarking

Provide various benchmarks, metrics, and KPIs based on the client’s spend, industry, and size and scale. Provides comparative analytics against peers in the same industry.

Tech Stack : Hive, Pyspark

Traveler Analytical Record and Integration with GBT Booking tools

Build an intelligent traveler record that enables automation through traveler personalization, helping GBT’s booking platforms and travel counselors provide solutions to end-users within a short SLA. Built a monetization-ready traveler-centric suite.

Tech Stack: Elasticsearch, Hive, HBase, Java

Data Ingestion Projects to the Data Lake

Ingestion of data from various sources in various formats into the data lake. Architected and designed fault-tolerant and secure ingestion pipelines, both real-time and batch.

Tech Stack: Kafka, NiFi, MFT, Web services, XML, JSON, Hive, Sqoop

Web Services and APIs for Data Consumption from the Data Lake

The data residing in the lake is exposed to various internal endpoints through Java REST APIs, a UI on the data lake, and Sqoop export, after standardization and summarization.

Tech Stack: AngularJS, Spring Boot web server, HBase, Phoenix, Elasticsearch, Hive, Sqoop


CapGemini Financial Services (Farmers New World Life): Jan 2015 to June 2016

Sr. Hadoop Java developer

Data Foundation and Data Lake

FNWL has a significant legacy footprint of data sources and data access, with associated management and integration challenges. The project is a multi-year program designed to deliver a future-state data foundation layer that significantly reduces dependencies on the legacy infrastructure, makes trusted data available for consumption, and positions FNWL on the journey to become a data-driven organization. The key objectives of the project are:

• Create a central repository for data related to Life Insurance value chain processing (historical data, agent activities, new business processing, underwriting, administrative transactions, and other sources)

• Establish a data governance process that ensures trusted data sources

• Provide for scheduled and on-demand reporting and data consumption

• Provide a toolset that supports a self-service model

Environment: Mainframes, SQL Server, Informatica BDE 9.6.1, Hortonworks HDP 2.3, MuleSoft ESB, Salesforce, SAP Lumira, SAP BO

ITC Infotech Ltd (Motorola Mobility India Ltd.)

October 2013 to October 2014

Hadoop Java developer

Sentiment Analysis on User Opinion Insights

Responsibilities:

• Key role in designing and developing web-based tools and applications for presenting the data as reports to targeted users, based on Google APIs, for debugging the data.

• Adapt and deliver in an agile fashion with frequently changing requirements, delivering with utmost quality.

• Key role in developing standalone Java applications to extract data from source websites using HTML scraping or third-party APIs, and classify it accordingly.

• Key role in UI design and development of the portal.

• Design and develop JAX-WS RESTful web services consumed by other module users.

• Key role in developing algorithms for text analysis and implementing in Java.

• Lead, guide and coordinate with team on the deliverables and delegate work accordingly.

• Perform unit testing, system integration testing and user acceptance testing.

• Involved in client meetings and requirement gathering.

Environment: Google Cloud Platform, JSPs, Servlet API 3.0, jQuery, Tika and Mahout, Google JSAPI

Other Projects as Java and Identity Management Developer

Content Management System

The project aims to revamp the client’s legacy web applications and e-commerce system by implementing content management tools.


Environment: Java SE 6, Java EE, EJB 3.0, JBoss 5.1, XML, XSLT, JDBC, JAXB, SOAP, REST, Adobe CQ5, Hybris, Maven, JUnit.

Identity and Access Management System for Liberty Mutual

The project targets the end-to-end implementation of CA Identity Manager in the client’s organization and its integration with CA SiteMinder.

Environment: Java 1.7, CA IdM r12.5 SP12, CA SiteMinder, Microsoft Active Directory, MS SQL Server, Sun LDAP Directory, RACF Mainframes

Identity and Access Management Systems for Caterpillar

A credit reporting and scoring system for consumer and commercial clients, built on top of the NextGen core system developed earlier, customized organically based on requirements gathered from law, data providers, domain knowledge experts, and clients. It was developed by a 7-member cross-functional agile team.

Environment: CA IdM r12.5 SP7, CA SiteMinder, Microsoft Active Directory.


