
Solutions Consultant Business Development

Location:
Brooklyn, NY
Salary:
135k
Posted:
November 08, 2023


Resume:

Md. Abu Faruk Rana

Hadoop Administration

Email - ad0yeb@r.postjobfree.com

Cell- +1-646-***-****

OBJECTIVE:

With over 9 years of IT experience, I would like to work with a dynamic and progressive IT firm where I can apply my technical experience and interpersonal skills efficiently and effectively for the growth of the company.

SUMMARY:

As a Business Engagement Manager:

Support the business in achieving its strategic objectives. The Solutions Consultant is a consultative, sales-, technology-, and solution-focused specialist who builds strong customer confidence through technical expertise and an uncanny ability to drive desirable business outcomes.

In addition to being viewed as a trusted technical advisor to the customer, the Solutions Consultant also has regional responsibility for solution design, business development, and the transition of proposals from pre-sales to production, while remaining engaged with the customer post-production and serving as a thought leader for the next opportunity.

The Solutions Consultant works closely with the sales account manager throughout this process and also assists in the decision to pursue or pass on an opportunity. A strong vendor relationship is required to ensure a complementary understanding of the partner vendors’ ecosystem. The Solutions Consultant is expected to manage customer proof-of-concept (POC) initiatives, which requires engaging the appropriate resources and handling setup and delivery of the POC.

The Solutions Consultant also provides customer feedback to Product Management to champion the development and roll out of new software-as-a-service solutions desired by our customers.

As a Hadoop Admin:

9+ years of IT experience in administration, analysis, implementation, support, and testing of enterprise-wide applications, data warehouses, client-server technologies, and web-based applications.

5+ years’ experience with Big Data/Apache Hadoop environments (Cloudera CDP/CDH, Hortonworks HDP) and components such as HDFS, Data Lake, YARN, MapReduce, Hive, Sqoop, Spark, Impala, Oozie, Solr, ZooKeeper, Ranger, Sentry, Cloudera Navigator, Apache Atlas, etc.

Configure data lakes to be flexible and scalable. Include Big Data analytics components. Implement access control policies and provide data search mechanisms. Ensure data movement for any amount of data. Securely store, index, and catalog data.

Experience in dealing with monitoring tools like Cloudera Manager, Apache Ambari, Ganglia, Datadog and data visualization tools like Hue, DB Visualizer etc.

Experience in dealing with job scheduling tools like Oozie, Airflow and terminal accessing tools like Putty, Secure CRT etc.

In-depth understanding of Hadoop architecture and its various components such as HDFS, NameNode, DataNode, YARN, MapReduce, Resource Manager, Node Manager, ZooKeeper, Hive, HBase, and Spark.

Experience in setting up High Availability (HA) for NameNode, Resource Manager, HiveServer2, Hive Metastore, and Impala.

Experience in Cloudera cluster installation, configuration, management, monitoring and support of Hadoop clusters.

Assisting with OS patching on Hadoop cluster servers and participating in Cloudera major and minor version upgrades and patching.

Participating in adding new master/data nodes, decommissioning and commissioning data nodes, performing load balancing across data nodes, and reconfiguring the cluster.

Experience in configuring and managing resource pools/queues and the Fair/Capacity Scheduler.
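As a minimal sketch of this kind of scheduler administration (queue names are hypothetical), Fair Scheduler changes are typically applied and verified from the YARN CLI:

```shell
# Reload queue/pool definitions after editing fair-scheduler.xml
# (no Resource Manager restart required)
yarn rmadmin -refreshQueues

# Inspect the state and capacity of a specific queue
yarn queue -status root.analytics

# List running applications to verify queue placement
yarn application -list -appStates RUNNING
```

These commands assume a live YARN cluster with HADOOP_CONF_DIR pointing at its configuration.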

Experience in storing, managing, and analyzing data using HQL, SQL/MySQL, and MapReduce jobs.

Tuning Hive/YARN configuration for various use cases and resource availability in a cluster; monitoring batch jobs; analyzing and troubleshooting Hive/Spark/MapReduce jobs using batch logs and service logs; attending triage calls with project teams; and creating defect tickets for defects and critical issues/blockers.

Helping development teams to test their codes/scripts and troubleshooting Hadoop platform related issues and limitations.

Monitoring clusters and services, analyzing service/instance failures, working with Cloudera to resolve critical issues and bug fixes.

Worked with Sqoop to import/export relational data between RDBMS and HDFS/Hive warehouse using ad-hoc and incremental loads. Also knowledgeable about Data Lake architecture.
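The two load styles above can be sketched as follows (hostname, database, table, and column names are hypothetical):

```shell
# Ad-hoc full load of a table from MySQL into HDFS
sqoop import \
  --connect jdbc:mysql://dbhost:3306/sales \
  --username etl_user -P \
  --table orders \
  --target-dir /data/raw/orders

# Incremental append load: on each run, only rows with order_id
# greater than the last recorded value are imported
sqoop import \
  --connect jdbc:mysql://dbhost:3306/sales \
  --username etl_user -P \
  --table orders \
  --target-dir /data/raw/orders \
  --incremental append \
  --check-column order_id \
  --last-value 150000
```

With `--incremental append`, Sqoop reports the new `--last-value` at the end of each run, which a saved job can track automatically.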

Developed data pipelines using Talend, Spark that ingests data from multiple data sources and process them in Hive or Spark.

Experience in writing SQL queries to test out data/metadata availability for projects to pinpoint root cause of a malfunctioning implementation.

In-depth knowledge of security components such as Kerberos for authentication, Sentry/Ranger for authorization, HDFS quotas and FACLs for access control, and Cloudera Navigator (CDH)/Apache Atlas (CDP) for auditing and Big Data governance.
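A short sketch of the quota and FACL controls mentioned above (paths and user names are hypothetical):

```shell
# Cap the space a project directory may consume (10 TB, incl. replication)
hdfs dfsadmin -setSpaceQuota 10t /data/projectA

# Limit the number of files/directories under a path
hdfs dfsadmin -setQuota 1000000 /data/projectA

# Grant a single user read/execute access via an HDFS ACL
hdfs dfs -setfacl -m user:analyst1:r-x /data/projectA

# Verify quotas and ACLs
hdfs dfs -count -q /data/projectA
hdfs dfs -getfacl /data/projectA
```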

Providing insights on cluster performance, resource limitations, future enhancements, and critical blockers during stand-up calls.

Participating in POCs such as testing new Cloudera CDP/CDH releases and their compatibility with the company's current projects, evaluating new tools/components, and documenting steps and suggestions based on findings to help management plan future enhancements.

Supporting 24/7 on call rotations, assisting developers and project teams to deploy and test out new versions of the projects and collaborating with respective teams to resolve project related and Hadoop cluster related critical failures and issues.

TECHNICAL SKILLS

Big Data/Hadoop platform

Cloudera CDP 7.x, Cloudera CDH 5.x, Hortonworks HDP 2.x, AWS (EC2, S3, EMR), Azure

Hadoop related tools

HDFS, Data Lake, YARN, MapReduce, Sqoop, Hive, Impala, Spark, Oozie, AutoSys, ZooKeeper, Sentry, Ranger, Solr, HBase

Databases and Data warehouses

Oracle, MySQL, SQL Server, PostgreSQL, DB2, Hive, HBase, RDBMS

Version Control

Git, GitHub

Project Management Tool

Jira, AutoSys, Nexus

3rd Party Tools

Datadog, Airflow, Putty, Secure CRT, Jira, Splunk, Bitbucket, Git, OpenShift, Talend, MicroStrategy etc.

Methodologies

Agile, Waterfall

Operating Systems

MS Windows, Linux, Unix, CentOS

Build Tool

Maven

Languages

Python & Bash Scripting

IDE

Eclipse, IntelliJ, PyCharm, Jupyter Notebook

Continuous Integration Tool

Jenkins

BI services tools

Qlik Sense, Tableau

PROFESSIONAL EXPERIENCE:

Business Engagement Manager

Solution Engineering Department

Citibank

November 2022 to Present

Responsibilities:

Working with the following services:

1) EAP service (HDFS, Impala, Hive, Kudu, HBase, Hive MapReduce, Talend framework, Python, Microsoft R, etc.)

2) BI service (Tableau, Qlik Sense & Starburst)

3) Spark services

4) AI & ML services - AIML Python (single-user & multi-user), AIML Notebook, AIML Dataiku, AIML R, etc.

5) Client-facing skills: negotiating with clients, communicating effectively, proficiency in the Microsoft suite, solving problems quickly, and a positive, upbeat attitude.

Create data pipelines, or guide clients in creating their own.

Discuss the project description and expected benefits, then offer services.

Identify the project region, project sector, cluster, and line of business.

Take lead on understanding customer technical pains through discovery in order to design, demonstrate, and present innovative solutions that solve business challenges.

Be hands-on, high-energy, passionate, and a creative problem solver who knows how to get things done and can lead others to success.

Take the initiative to identify and solve problems - both for the customer and within the organization as we grow

Build highly interactive and engaging customer demonstrations while forming strong customer relationships

Partner with Account Executives to execute pre-sales activities including opportunity qualification, demonstrations, proof-of-concept, RFPs, design documentation, technical presentations, and enablement sessions.

Advise clients on data ingestion and processing.

Maintain FID details and ensure the right documents are uploaded to the cluster.

Discuss with clients their HDFS storage requirements (TB), default YARN compute units, default RAM/memory (GB relative to storage), and estimated Impala compute units.

Meet with clients, make suggestions, and finish problem solving as early as possible.

Discuss funding mechanisms.

Collaboration, Customer Service, Innovation, Technical and Communication Skills, Consulting Experience, Auto Delivery, Facilitation, Project Management

Provide face-to-face customer service, assist customers, and assist with architecture per client demand.

Review pipelines and give Solution Engineering approval.

Documentation, Analysis, Forecasting.

Identify the right person for finance approval using the Arcadia data platform.

Identify any discrepancies from the internal Quotes Financial Tables (QFT).

Create JIRA tickets and hold weekly meetings with the ops team.

Lead the weekly environment call (discussing all PIDs, data, architecture, JIRA updates, and other items).

Download the weekly data-temperature file and present to management on cold data and raw data by project, cluster, region, and project ID.

Document problems and solutions on the Citi SharePoint.

Maintain the pipeline tracker and update BD metrics on a daily basis.

Team member of the weekly EIO&T cluster and PBWM cluster meetings.

Provide pre-sales technical support and expertise in analyzing customer requirements, in conjunction with the customer’s current contact center capabilities, and ensuring technical solutions will accomplish the customer’s objectives.

Direct engagement with customers, which requires the ability to deliver business use-case demonstrations, scope of work creation, product pricing and RFP/RFI responses. Build strong vendor relationships. Act as a bridge between the customer and our solutions delivery and operations resources.

Lead the weekly meeting with NAM, EMEA, APAC & S Korea team.

Hadoop Administration (Production Support)

Orthonet,

White Plains, NY

August 2020 to October 2022

Responsibilities:

Participated in installing a fresh Hadoop cluster using Cloudera CDP 7.x and Hadoop components including Cloudera Manager, HDFS, Data Lake, YARN, Hive, Tez, Impala, Spark, Ranger, Solr, ZooKeeper, Sqoop, Oozie, Atlas, and DAS (Data Analytics Studio).

Configuring, managing, and tuning Hadoop components using Cloudera Manager and configuration files based on company use cases and environments.

Performance tuning of YARN, Hive, and Spark considering the nature of jobs and resource availability.

Worked on HDFS management such as node commissioning/decommissioning, load balancing, and quota allocation.

Ensuring cluster security by deploying and managing authentication using Kerberos; troubleshooting Kerberos-related access issues; creating and managing local keytabs for service accounts using the ktutil tool.
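A typical keytab workflow of this kind can be sketched as an interactive ktutil session (the principal, realm, and keytab path are hypothetical):

```shell
# Build a keytab for a service account inside ktutil
ktutil
ktutil:  addent -password -p svc_hive@EXAMPLE.COM -k 1 -e aes256-cts-hmac-sha1-96
ktutil:  wkt /etc/security/keytabs/svc_hive.keytab
ktutil:  quit

# Validate the keytab and obtain a ticket with it
klist -kt /etc/security/keytabs/svc_hive.keytab
kinit -kt /etc/security/keytabs/svc_hive.keytab svc_hive@EXAMPLE.COM
klist
```

The key version number (`-k`) and encryption type must match what the KDC holds for the principal, or `kinit` will fail with a pre-authentication error.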

Enabling, monitoring, and managing the comprehensive data security across the Hadoop platform using Apache Ranger.

Deploying data-in-transit encryption using TLS/SSL, managing certificates, and troubleshooting service/user access issues related to TLS/SSL.

Implementing cluster-level high availability (HA) using an Active/Standby NameNode, Quorum Journal Manager (JournalNodes, ZKFC), and ZooKeeper.
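Once HA is in place, state checks and manual failovers are done from the admin CLIs; a sketch (the service IDs nn1/nn2 and rm1 are hypothetical names defined in hdfs-site.xml / yarn-site.xml):

```shell
# Check which NameNode is currently active vs standby
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# Trigger a manual, fenced failover for planned maintenance
hdfs haadmin -failover nn1 nn2

# Same idea for the YARN Resource Manager pair
yarn rmadmin -getServiceState rm1
```

With automatic failover enabled, the ZKFC daemons normally handle failover; the manual command is used for controlled maintenance windows.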

Implementing service-level load balancing and high availability for services like Hive, YARN Resource Manager, Hive Metastore, Impala, and Hue using CM and HAProxy.

Monitoring service performance and jobs using Cloudera Manager, Ambari charts, dashboards, and email and Splunk alerts.

Analyzing log files using diagnostics, service logs, project logs and figuring out root causes and triaging the issues with respective teams to remove project blockers and thus ensuring better client experience.

Documenting deployment steps, defects/bugs specific to environments in respective Wiki pages and triaging with Cloudera teams to make the cluster more stable and optimized.

Involved in implementing applications in Dev and test environments ensuring access assets are being created.

Managing cluster resource using yarn components like resource manager, resource pool and scheduler.

Participated in CDP and CM major and minor version upgrades and patching.

Installing and managing Python packages and their versions depending on project requirements.

Created Hive external tables, loaded data into them, and queried the data using HQL, Impala, Hue, and third-party tools such as DB Visualizer.
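As a sketch of this pattern, an external table can be defined over files already in HDFS and queried through Beeline (the connection string, table name, columns, and path are hypothetical):

```shell
# Create an external Hive table over existing HDFS data and query it
beeline -u "jdbc:hive2://hs2host:10000/default" -e "
CREATE EXTERNAL TABLE IF NOT EXISTS orders_ext (
  order_id BIGINT,
  customer_id BIGINT,
  amount DECIMAL(10,2),
  order_date STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '/data/raw/orders';

SELECT COUNT(*) FROM orders_ext;
"
```

Because the table is EXTERNAL, dropping it removes only the metastore entry; the underlying HDFS files are left intact.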

Experience in importing and exporting data using Sqoop between HDFS/Hive and relational database systems.

Deployed workflows in Oozie, Airflow, and AutoSys; assisted BAs in running DAGs; and triaged DAG-related issues with the respective maintenance teams on a daily basis.

Engaged in L1 & L2 support calls for project-specific training, 24/7, on demand.

Environment:

Cloudera CDP 7.x, HDFS, YARN, Hive, Tez, Impala, Spark, Zookeeper, HBase, Cloudera Manager, Sqoop, Hue, Kerberos, Ranger, Talend.

Big Data Admin/Architect

Sterling National Bank

400 Rella Boulevard

Montebello, NY - 10901

USA

March 2018 to July 2020

Responsibilities:

●Worked on a multi-node clustered environment and set up Cloudera in the Hadoop Ecosystem.

●Performed basic Hadoop Administration responsibilities including software installation, configuration, software upgrades, backup and recovery, commissioning and decommissioning data nodes, cluster setup, cluster performance and monitoring on a daily basis.

●Involved in analyzing system failures, identifying the root causes, and recommending actions to be taken.

●Created user accounts and set users' access in the Hadoop cluster.

●Configuring Hadoop Ecosystem tools including Hive, HBase, Sqoop, Kafka, Oozie, Zookeeper and Spark in Cloudera Environment.

●Ensuring cluster security by deploying and managing authentication using AD Kerberos. Troubleshooting Kerberos related access issues, creating and managing local keytabs for service accounts using ktutil tools.

●Enabling, monitoring, and managing the comprehensive data security across the Hadoop platform using Sentry.

●Performed capacity planning, performance tuning, and cluster monitoring, as well as troubleshooting.

●Creating Hive tables, loading data, and writing Hive queries that run internally as MapReduce jobs.

●Working experience on importing and exporting data into HDFS and Hive using Sqoop.

●Creating and managing the database objects such as tables, indexes and views.

●Experience importing and exporting data using Sqoop between MySQL and Hive.

●Troubleshooting many cluster issues such as DataNode failures, network failures, and missing data blocks.

●Implementing Kerberos to authenticate all services in Hadoop cluster and manage security.

●Managed Hadoop cluster and various components such as Resource Manager, Node Manager, Name Node, Data Node and Map Reduce concepts.

●Developing Chef recipes to configure, deploy, and maintain software components of the existing infrastructure.

●Assigned permissions on topics to different consumers and groups; managed Spark RDDs; worked with DataFrames and Datasets; saved different data as Hive tables using the HCatalog server.

●Managed different file formats for Hive tables such as Text, RC, ORC, Sequence, Parquet, and Avro.

●Understanding of AWS cloud computing platform and related services.

Environment:

Cloudera CDH 5.x, CDP, Apache YARN, HDFS, Data Lake, Impala, HBase, Hive, Hue, Sqoop, Oozie, Talend, ZooKeeper, Spark, Kerberos, Sentry, LDAP, Jenkins, Git, Agile, Jira, MySQL, Java.

Hadoop Admin

Boss

Cranbury, NJ

September 2015 to February 2018

Responsibilities:

Installed Hadoop clusters, added nodes, and decommissioned/commissioned data nodes for application development and implementation teams.

Configured and tuned services like YARN, HDFS, Hive, Tez, Spark, Jupyter Notebook, ZooKeeper, HBase, Sqoop, Ranger, and Apache Atlas.

Monitoring and maintaining cluster services, and participating in cluster architecture design changes and blueprints for Big Data implementation.

Formulating procedures for planning and execution of system upgrades, major/minor version upgrades, patching and vulnerability mitigation.

Evaluated and documented use cases, proof of concepts, participated in learning new tools/components in Big data ecosystem.

Assisting development of projects, ETL frameworks and data migration from external sources to HDFS and Hive Warehouse.

Worked on data lake architecture to collate enterprise data into a single place for ease of correlation and data analysis, to find operational and functional issues in the enterprise workflow as part of the projects.

Assisting to schedule and monitor ETL pipelines to get data from various sources, transform for further processing and load in Hadoop/HDFS for easy access and analysis by various tools.

Ensuring cluster security by deploying and managing authentication using AD Kerberos. Troubleshooting Kerberos related access issues, creating and managing local keytabs for service accounts using ktutil tools.

Enabling, monitoring, and managing the comprehensive data security across the Hadoop platform using Apache Ranger.

Conducted Hadoop training sessions for newly onboarded developers, engineers, and project teams to increase awareness and product understanding.

Collaborated with the Hortonworks team for technical consultation on business problems and validated the proposed architecture/design.

Configuring high availability of NameNode using QJM and of Hive using HAProxy.

Supporting integration of third-party BI tools like Talend and Tableau with HDFS, Hive, and Spark.

Supporting technical teams for automation, installation, and configuration of various tasks.

Environment:

Hortonworks HDP 3.x, YARN, MapReduce, Hive, HDFS, Sqoop, Oozie, HBase, Spark, ZooKeeper, Jupyter Notebook, Ranger, Atlas, MySQL, Oracle, Linux, Jira, Git.

SQL DBA

DirecTV, New York, NY

September 2014 to August 2015

Responsibilities:

Installation, configuration, monitoring, and maintenance of a Hadoop cluster in a CDH5 environment.

Configuration of Cloudera Manager, Hive, Pig, Sqoop, Impala, Oozie, Spark, HBASE, Zookeeper.

Resource management among users and groups using different types of resource schedulers.

Assisting system team to upgrade RHEL from ver. 6.x to 7.x by cross checking dependencies with CDH, JDK and related components versions, planning and provisioning for server downtime, deploying updates, configuration changes needed for the project and keeping continuous communication between system team and Cloudera support team for successful completion of the project.

Creating new users and set roles according to the requirement.

Ensuring cluster security by deploying and managing authentication using AD Kerberos. Troubleshooting Kerberos related access issues, creating and managing local keytabs for service accounts using ktutil tools.

Enabling, monitoring, and managing the comprehensive data security across the Hadoop platform using Sentry.

Ensuring client authentication using Kerberos and authorization using Apache Sentry.

Monitoring and troubleshooting jobs and log files for seamless performance and prevention of future downtime.

Supporting the offshore development team in running queries and jobs using Hive and Spark to analyze data, bring out meaningful insights, and save results in HDFS, then importing them into SQL Server for future use.

Ensuring security and confidentiality of sensitive data by periodically auditing, tracking, and monitoring different clients' activity using Cloudera Navigator.

Monitoring health of HDFS, commissioning and decommissioning data nodes.

Keeping track of the health of the active and standby NameNodes to ensure HA.

Installation and upgrading of daemons and services.

Transferring data between clusters.

Monitoring and troubleshooting ETL jobs of Spark and Hive.

Loading data from RDBMSs such as SQL Server into HDFS or the Hive warehouse using Sqoop.

Working with other Hadoop administrators to upgrade CDH and related component and daemon versions, implement important configuration changes, and fix bugs.

Integrating BI tools like Tableau with Cloudera CDH for data analysis and report generation.

Providing 24/7 admin support for emergencies.

Environment: CDH 5.x, HDFS, MapReduce, YARN, Pig, Hive, Sqoop, Oozie, ZooKeeper, Impala, Cloudera Manager, Cloudera Navigator, Kerberos, Apache Sentry, JIRA.

Education:

Master's, Passing Year: 2000, Dhaka, Bangladesh


