Mounika Yalamanchili
*******.***@*****.*** Phone: +1-913-***-****
PROFESSIONAL SUMMARY
●7+ years of IT experience, including Data Analysis and the Hadoop Ecosystem
●Good knowledge of Hadoop Architecture and various components such as HDFS, Job Tracker, Task Tracker, Name Node, Data Node and MapReduce concepts.
●Experience in working with MapReduce programs using Hadoop for working with Big Data.
●Experience in analyzing data using Hive QL, Pig Latin and custom MapReduce programs in Java.
●Experience in importing and exporting data using Sqoop from HDFS to Relational Database Systems and vice-versa.
●Working experience in designing and implementing complete end-to-end Hadoop infrastructure, including Hive, Sqoop, Oozie, Spark and ZooKeeper.
●Imported and exported data into HDFS and Hive using Sqoop.
●Experience in writing shell scripts to dump the shared data from MySQL servers to HDFS.
●Experience in designing both time driven and data driven automated workflows using Oozie.
●Experience in providing support to data analysts in running Hive queries.
●Good Knowledge on RDBMS concepts and writing complex queries to load and transform data.
●Worked in the complete Software Development Life Cycle (analysis, design, development, testing, implementation and support) across application domains, with technologies ranging from object-oriented programming to Internet programming on Windows, Linux and UNIX platforms, using RUP methodologies.
●Experience in developing client/server-side web applications, Web APIs and Windows Services in Microsoft Visual Studio.
●Experience in generating BI reports using Power BI.
●Sound understanding of and good experience with object-oriented programming concepts.
●Adept at gathering and analyzing requirements and documentation, with a proven ability to solve problems and work on multiple projects simultaneously.
●Excellent leadership, interpersonal, problem solving and time management skills.
●Excellent communication skills both written (documentation) and verbal (presentation).
●Responsible and a good team player; able to work independently with minimal supervision.
TECHNICAL STACK:
Programming Languages : C, C++, Java, Python, Scala, UNIX Shell Scripting, SQL, HQL
Scripting : HTML, JavaScript, CSS
Hadoop Ecosystem : MapReduce, HBase, Hive, Pig, Sqoop, ZooKeeper, Oozie, Flume, Hue, Kafka, Spark, SQL
Hadoop Distributions : Cloudera
Databases : MySQL, NoSQL, Oracle DB
Data Visualization : Power BI, Tableau
Tools/Applications : Attunity, CA7, GIT, Udeploy, MS-Excel, MS-office, SharePoint.
Methodologies : Agile, SDLC
PROFESSIONAL EXPERIENCE:
Client: PNC BANK, Cleveland, OH (June 2018 – Present)
Description: PNC is a multinational banking and financial services provider. This project captures customer details, including login device details and transaction details, and processes that data. We use the Hadoop Ecosystem to process this large volume of data; once all the information is gathered, we run strategies and jobs on the collected data and load the results into Hive tables.
Role: Application/Hadoop Developer
●Responsible for implementation and support of the Hadoop environment
●Interacting with business users/clients to understand requirements and participating in the functional study of the application
●Hadoop platform environment support, such as history reloading and moving data for end users per business requirements
●Understanding existing changes, upgrades and migrations, closing audit/exception tickets, and performing impact analysis for new/changed requirements
●Performed Sqoop imports/exports, converting drivers to secure the connections
●Moving and replacing data at different locations for end users on the ad-hoc cluster with the help of the platform team
●Updating the SLA for every release and monitoring data flow per SLA standards
●Validating applications by running sample programs after applying the Spark upgrade
●Managing and monitoring the HDFS file system
●Establishing connectivity from the Hadoop platform to data sources such as Oracle Database, mainframe systems and others (Excel, flat files) for business purposes
●Coordinating with offshore teams and updating clients on the status of work
●Defining responsibilities, reviewing code developed by the offshore team and tracking review comments to closure
●Performance monitoring of production jobs/Hue workflows
●Running Hive and Impala queries to monitor data flow
●Preparing CA7 scripts for jobs and incorporating new code into the job scheduling
●Used Git for version control of the code and uDeploy to migrate code from lower environments to production
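The Hive/Impala monitoring queries mentioned above might look like the following sketch. The table and column names (txn_events, load_dt) are hypothetical, used only to illustrate a daily row-count check on data flow:

```sql
-- Hypothetical daily row-count check to monitor data flow into a Hive table
-- (table and column names are illustrative, not from the actual project)
SELECT load_dt,
       COUNT(*) AS row_cnt
FROM   txn_events
WHERE  load_dt >= date_sub(current_date(), 7)
GROUP BY load_dt
ORDER BY load_dt;
```

A sudden drop in row_cnt for a given load date would flag a data-flow issue against the SLA.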
Environment: Hadoop Ecosystem, Jira, Linux, PuTTY, SecureFX, SQL, Python, Java, uDeploy, Scheduling, Attunity, MS-Excel, SharePoint.
Client: LONG ISLAND UNIVERSITY, New York (Sep 2017 – May 2018)
Description: Worked on a research project in Social Media Analytics. Performed data analytics in R and applied various techniques to obtain accurate results from Twitter data.
Role: Graduate Assistant
●Conducted research in Social Media Analytics
●Involved in collecting, processing, analyzing and reporting social media data of specific research topic
●Worked on Tracking Community Development from Social Media using R
●Explored Social Media Analysis on Community Development Practices based on the results from R
●Performed data mining, data cleaning and data visualization techniques on a variety of data stored in spreadsheets and text files using R, and plotted the results with R packages
●Hands-on statistical coding using R and Advanced Excel
Environment: R-Studio, RPubs, Java (v1.8), ShinyApps, Excel
Client: EVENTCY, Hyderabad - India (Oct 2011 – Aug 2015)
Role: Hadoop Developer
●Worked on Hadoop Ecosystem using different big data analytic tools including Hive, Pig.
●Involved in loading data from LINUX file system to HDFS.
●Importing and exporting data into HDFS and Hive using Sqoop.
●Implemented Partitioning, Bucketing in Hive.
●Worked on different file formats (ORCFILE, TEXTFILE) and different compression codecs such as GZIP, SNAPPY and LZO
●Worked with multiple Input Formats such as Text File, Key Value, and Sequence file input format.
●Experienced in running Hadoop Streaming jobs to process terabytes of JSON-format data.
●Involved in scheduling Oozie workflow engine to run multiple Hive and Pig jobs
●Executed Hive queries on parquet tables stored in Hive to perform data analysis to meet the business requirements.
●Created HBase tables to store various data formats of incoming data from different portfolios.
●Created Pig Latin scripts to sort, group, join and filter the enterprise-wide data.
●Developed the verification and control process for daily load.
●Provided daily production support to monitor and troubleshoot Hadoop/Hive jobs.
●Worked collaboratively with different teams to smoothly transition the project to production.
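A minimal sketch of the Hive partitioning and bucketing described above; the table and column names are hypothetical, chosen only to illustrate the technique:

```sql
-- Hypothetical Hive table stored as ORC, partitioned by load date
-- and bucketed by customer id (names are illustrative)
CREATE TABLE customer_txns (
  customer_id BIGINT,
  txn_amount  DECIMAL(12,2)
)
PARTITIONED BY (load_dt STRING)
CLUSTERED BY (customer_id) INTO 32 BUCKETS
STORED AS ORC;
```

Partitioning by load_dt lets daily queries prune to a single partition, while bucketing on customer_id supports efficient sampling and bucketed joins.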
Environment: HDFS, Pig, Hive, Sqoop, Shell Scripting, HBase, ZooKeeper, MySQL.
AWARDS AND RECOGNITION
●Membership in Sigma Beta Delta: Academic excellence and a commitment to high principles and superior achievement
●Excellence Award – Dean’s Recognition Ceremony, Class 2018
CERTIFICATIONS:
●Oracle Certified Professional, Java SE 6 Programmer
●ISTQB–BCS Certified Tester
PROFESSIONAL AFFILIATIONS
Member – Institute of Electrical and Electronics Engineers (IEEE)
Member – Cybernetic Study Association (CSA - LIU)
EDUCATION
Master’s in Computer Science Graduated – 2018
Long Island University
Bachelor’s in Production Engineering Graduated – 2011
Osmania University, India