Post Job Free

Data Analyst Senior

Location:
United States
Salary:
70
Posted:
June 03, 2025


Resume:

Name: Dinesh N

Senior Data Analyst ***********@*****.*** 901-***-**** www.linkedin.com/in/dinesh-n-427ab3309

PROFESSIONAL SUMMARY

10+ years of experience in data analysis, providing actionable insights to drive business decision-making across multiple industries.

Comprehensive data analysis expertise encompassing Python (Pandas, NumPy), R, SQL and advanced analytical tools like TensorFlow, SciPy, and Scikit-learn.

Hands-on experience with cloud platforms (AWS, Azure, GCP), specializing in AWS Redshift, AWS Glue, Azure Data Factory, Google BigQuery, and Google Cloud Dataflow for building and optimizing data pipelines.

Strong proficiency in database management and optimization across SQL Server, MySQL, PostgreSQL, and Oracle, ensuring high availability and performance of enterprise data systems.

Skilled in data modeling (Star Schema, Snowflake Schema) and working with tools like Erwin, Tableau, Power BI, QlikView, and Google Data Studio to create interactive, actionable reports and dashboards.

Expertise in Big Data technologies (Hadoop, Spark, Kafka, PySpark) with proven experience developing and deploying real-time data processing and analytics solutions.

Proven ability to streamline workflows using automation tools such as Terraform and AWS Lambda, ensuring efficient, cost-effective, and scalable data pipelines.

Solid experience in statistical analysis, hypothesis testing, and predictive analytics using tools such as SPSS, MATLAB, and advanced machine learning techniques.

Proven ability to collaborate with cross-functional teams, delivering actionable insights that drive business growth, optimize product strategies, and improve customer satisfaction.

Strong background in Agile, Scrum, and Kanban methodologies, contributing to timely project delivery and improved team collaboration.

Exceptional problem-solving skills, with a focus on identifying data-driven solutions that optimize business performance and increase operational efficiency.

Delivers clear, actionable data insights enabling effective collaboration with diverse stakeholders.

TECHNICAL SKILLS

Programming Languages: Python, R, SQL, VBA

Libraries & Frameworks: NumPy, Pandas, TensorFlow, SciPy, spaCy, Scikit-learn, ggplot2, dplyr, tidyr

Big Data & Distributed Processing: Hadoop, HDFS, Apache Spark, PySpark, MapReduce, Pig, Databricks, Apache Kafka, Google Cloud Dataflow

Cloud Platforms & Services: Amazon Web Services (AWS): AWS EMR, AWS Glue, AWS Kinesis, AWS Redshift, AWS CloudWatch, AWS EC2, AWS S3, AWS RDS, AWS Lambda, AWS IAM, AWS KMS; Microsoft Azure: Azure Functions, Logic Apps, Azure Data Factory, Azure DevOps; Google Cloud Platform: Google Bigtable, Google BigQuery, Google Cloud KMS, VPC Service Controls

Databases & Data Warehousing: SQL Server, Snowflake, MySQL, PostgreSQL, Oracle

Data Analytics, Visualization & Modeling Tools: SPSS, MATLAB, Tableau, Adobe Analytics, Power BI, OpenRefine, Google Data Studio, Google Analytics, QlikView, SAP BusinessObjects, Erwin, Snowflake Schema, Star Schema Modeling, Trifacta Wrangler, Microsoft Visio

Version Control & Collaboration: Git, GitHub

Infrastructure & Automation: Terraform

Other Tools & Technologies: Collibra, Scrapy, Trello, Lucidchart

Methodologies & Environments: Agile, Scrum, Kanban

EXPERIENCE

Client & Location: Wells Fargo, Charlotte, NC

Sr. Data Analyst

January 2024 - Present

Roles & Responsibilities:

Improved data processing efficiency through Python-driven automation of advanced data transformations.

Applied NumPy for high-performance arrays and computationally intensive numerical operations, enhancing data analysis.

Utilized Pandas for comprehensive data wrangling, cleansing, and manipulation, ensuring data integrity across large datasets.

Built and maintained predictive analytics models using TensorFlow, driving AI-driven decision-making for business solutions.

Managed distributed data processing with Hadoop and HDFS, optimizing data storage and retrieval efficiency.

Utilized Spark SQL to efficiently query and aggregate structured big data, ensuring performance and accuracy.

Developed and maintained ETL workloads using PySpark, transforming and processing large datasets to derive actionable business insights.

Conducted statistical modeling and hypothesis testing using tools like SPSS and MATLAB, ensuring reliable data insights.

Managed AWS EMR clusters for resource allocation and optimized execution time for big data processing.

Automated serverless ETL workflows using AWS Glue, reducing operational costs while ensuring data integrity.

Implemented real-time data ingestion using AWS Kinesis and Spark Streaming, enabling real-time analytics for faster decision-making.

Managed AWS Redshift clusters, ensuring optimal distribution, sort keys, and query optimization for large-scale data storage.

Applied MapReduce and Pig scripts for ETL processing and data transformation in the Hadoop ecosystem.

Integrated Salesforce data with internal systems to enhance customer relationship management and drive business insights.

Implemented metadata management solutions to track data lineage and improve dataset discoverability.

Streamlined data access and ensured regulatory compliance through data cataloging solutions.

Standardized and transformed data using robust logic to create reliable data models.

Designed and implemented star schema models to enhance analytical query performance for business intelligence.

Developed Tableau dashboards to visualize KPIs and business trends, providing stakeholders with clear insights.

Utilized AWS CloudWatch for monitoring and alerting on ETL jobs, identifying anomalies, and ensuring operational efficiency.

Led collaborative development using GitHub for version control, ensuring code integrity and managing distributed teams.

Ensured data security and compliance by managing AWS IAM roles and AWS KMS encryption for sensitive data.

Adhered to data governance policies, ensuring regulatory compliance and maintaining organizational data integrity.

Drove Agile and Scrum methodologies for cross-functional collaboration, ensuring timely delivery of data projects.

Utilized Adobe Analytics to provide actionable insights into web traffic and customer behaviors, improving digital banking experiences for customers.

Developed custom Adobe Analytics dashboards for real-time reporting on digital campaigns, customer journey metrics, and key performance indicators (KPIs).

Managed ETL workflows with Apache Airflow, creating DAGs to automate complex data processing pipelines.

Environment: Python, NumPy, Pandas, TensorFlow, Hadoop, HDFS, Spark SQL, PySpark, SPSS, AWS Kinesis, MATLAB, AWS EMR, AWS Glue, Spark Streaming, AWS Redshift, MapReduce, Pig, Salesforce, Tableau, AWS CloudWatch, GitHub, AWS IAM, AWS KMS, Apache Airflow, Agile, Scrum, JIRA.
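The Pandas wrangling and cleansing work described above might look like the minimal sketch below. The dataset and column names are illustrative only, not taken from any actual Wells Fargo system: the pattern is normalizing text fields, dropping duplicates, and imputing missing values before aggregation.

```python
import pandas as pd

# Hypothetical transactions extract; column names are illustrative only.
df = pd.DataFrame({
    "account_id": [101, 101, 102, 103, 103],
    "amount": [250.0, 250.0, None, 90.5, 40.0],
    "status": [" posted", "posted", "POSTED", "pending ", "posted"],
})

# Cleansing: normalize text fields, then drop exact duplicates.
df["status"] = df["status"].str.strip().str.lower()
df = df.drop_duplicates()

# Impute missing amounts with the median of the remaining values.
df["amount"] = df["amount"].fillna(df["amount"].median())

# Aggregate for reporting: total amount per status.
summary = df.groupby("status")["amount"].sum()
```

Normalizing before deduplicating matters here: the first two rows only become exact duplicates once the stray whitespace and casing in `status` are cleaned up.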

Client & Location: Baptist Health, Austin, TX

Data Analyst

July 2022 – December 2023

Roles & Responsibilities:

Processed big data using Apache Spark, PySpark, and Databricks for distributed data processing in cloud environments.

Automated data workflows and system integrations using Azure Functions, Logic Apps, and other workflow automation tools.

Optimized SQL Server queries and utilized Snowflake for efficient data warehousing, focusing on query optimization for large-scale data.

Implemented machine learning models using Python, SciPy, NumPy, and Pandas, providing predictive analytics for business insights.

Led data integration efforts by designing and implementing ETL workflows and ADF pipelines in Azure Data Factory, ensuring data consistency across systems.

Conducted NLP-based text analytics with spaCy for sentiment analysis, entity recognition, and language processing, deriving insights from unstructured data.

Ensured data security by enforcing data classification frameworks, maintaining regulatory compliance, and managing role-based access controls.

Utilized Apache Hadoop for distributed data processing, optimizing data throughput in batch processing workflows.

Leveraged real-time data streaming with Apache Kafka, ensuring low-latency ingestion and high-speed data processing.

Provided actionable customer insights via Adobe Analytics, driving effective digital marketing strategies.

Created cloud-based analytics workflows to ensure scalability and provisioning efficiency in cloud environments.

Applied Terraform for Infrastructure as Code (IaC), streamlining resource provisioning and improving scalability.

Used ELK Stack (Elasticsearch, Logstash, Kibana) for system monitoring, query performance optimization, and data visualization across cloud-based infrastructure.

Ensured data quality validation by managing automated checks, anomaly detection, and routine data accuracy assessments.

Applied forecasting models and resource optimization techniques to predict demand and improve batch processing workflows.

Managed data governance by ensuring metadata management and data cataloging with platforms like Collibra, while ensuring compliance with industry standards.

Enhanced business intelligence efforts by creating interactive data visualizations in Power BI dashboards, enabling stakeholders to make data-driven decisions.

Performed data cleansing using tools like OpenRefine, improving data accuracy and preparing datasets for analysis and reporting.

Developed CI/CD pipelines in Azure DevOps to automate deployment processes and optimize data workflows.

Built enterprise-grade data models using Erwin and Snowflake Schema, ensuring scalability and optimal performance across large datasets.

Used Adobe Analytics to track healthcare service usage patterns, enhancing the understanding of patient interactions and helping the hospital optimize online appointment bookings and telehealth offerings.

Created dashboards and customized reports in Adobe Analytics to monitor and improve patient experience on the website and mobile app platforms.

Environment: Apache Spark, PySpark, Databricks, Azure Functions, Logic Apps, SQL Server, Snowflake, Python, SciPy, NumPy, Pandas, ADF, spaCy, Apache Hadoop, Apache Kafka, Adobe Analytics, Terraform, ELK Stack, Collibra, Power BI, OpenRefine, Azure DevOps, Erwin.
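The automated data-quality checks and anomaly detection mentioned above could be sketched as follows. This is a simplified stand-in using standard-library statistics, not the production validation framework; field names and thresholds are hypothetical.

```python
from statistics import mean, stdev

def zscore_anomalies(values, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the mean."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) > threshold * sigma]

def validate_record(record, required_fields):
    """Return a list of data-quality issues for one record (missing/empty fields)."""
    issues = []
    for field in required_fields:
        if record.get(field) in (None, ""):
            issues.append(f"missing {field}")
    return issues
```

Usage: `zscore_anomalies([1]*9 + [50], threshold=2.0)` flags the outlier `50`, and `validate_record({"id": 1, "name": ""}, ["id", "name", "dob"])` reports the missing fields. A real pipeline would run checks like these on every batch and route failures to a quarantine table.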

Client & Location: JPMC, New York, NY

Data Analyst

November 2019 – June 2022

Roles & Responsibilities:

Optimized MySQL databases by implementing indexing strategies and performing query performance tuning, enhancing database optimization and data retrieval times.

Developed and maintained automated ETL pipelines, streamlining data ingestion, flow, and real-time data streaming using Google Cloud Dataflow.

Processed and transformed structured and unstructured data, ensuring accurate summaries and aggregations for actionable business insights.

Performed data wrangling and data cleansing to address missing values, duplicates, and inconsistencies, ensuring data reliability and high quality.

Managed large datasets and implemented scalable storage solutions, ensuring smooth data retrieval and high-speed querying in big data environments like Google Bigtable and BigQuery.

Provided strategic insights on KPIs, web traffic, and digital performance using interactive Google Data Studio dashboards.

Applied machine learning algorithms using Scikit-learn for regression, clustering, and classification, generating actionable insights for forecasting and trend prediction.

Reduced manual effort and ensured consistent results by automating data tasks with Python.

Designed and implemented automated checks and anomaly detection to validate data quality, improving consistency across data systems.

Leveraged NumPy and Pandas for numerical computations, matrix operations, and statistical analyses, supporting business intelligence efforts.

Utilized Apache Spark for distributed data pipelines, enabling parallel processing and improving computational efficiency in large-scale data operations.

Applied data transformations using tools like Trifacta Wrangler, preparing datasets for machine learning models and data analysis.

Provided predictive analytics and forecasting models, supporting business decision-making and improving operational efficiency.

Protected sensitive data by enforcing data governance policies, data privacy regulations, and security standards, including Google Cloud KMS encryption and VPC Service Controls.

Collaborated with teams to maintain data consistency and integrity in high-volume applications, ensuring seamless integration with analytical tools.

Performed data quality validation and ensured accurate, consistent data by resolving anomalies, missing values, and other inconsistencies.

Standardized and normalized datasets, reducing redundancy and ensuring interoperability across multiple data sources for downstream analytics.

Developed and maintained comprehensive Power BI dashboards for financial data analysis, monitoring key performance indicators (KPIs) related to trading, revenue, and operational performance, enabling leadership to make informed business decisions.

Created interactive data visualizations in Power BI to track and monitor financial transactions, profit margins, and client investment performance, helping to identify trends and actionable business insights.

Integrated Tableau with multiple data sources, such as SQL databases, Excel, and other internal financial systems, to create a centralized reporting platform that streamlined reporting processes across departments and enhanced data accuracy.

Leveraged Adobe Analytics for in-depth analysis of user behavior across JPMC’s digital banking and wealth management platforms, ensuring actionable insights for digital product development and optimization.

Designed and deployed Adobe Analytics dashboards that tracked financial product adoption, customer engagement, and website usage metrics.

Automated report generation using Tableau to reduce manual reporting efforts by 40%, providing stakeholders with timely and relevant data on trading performance, risk analysis, and market trends.

Informed business strategy with data reports based on Google Analytics, revealing user behavior and key trends.

Environment: MySQL, Google Cloud Dataflow, Google Bigtable, BigQuery, Google Data Studio, Scikit-learn, Python, NumPy, Pandas, Apache Spark, Power BI, Trifacta Wrangler, Google Cloud KMS, VPC Service Controls, Google Analytics.
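The indexing and query-tuning work described above follows a pattern that can be demonstrated with the standard-library `sqlite3` module (the actual engagement used MySQL; the table and index names here are illustrative): before an index exists, a filtered query requires a full scan, and after `CREATE INDEX` the engine can seek directly to matching rows.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, client_id INTEGER, amount REAL)")
cur.executemany("INSERT INTO trades (client_id, amount) VALUES (?, ?)",
                [(i % 100, float(i)) for i in range(1000)])

# Without an index, this filter forces a full table scan.
plan_before = cur.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM trades WHERE client_id = 42").fetchall()

# With an index on the filter column, the engine seeks via the index instead.
cur.execute("CREATE INDEX idx_trades_client ON trades (client_id)")
plan_after = cur.execute(
    "EXPLAIN QUERY PLAN SELECT SUM(amount) FROM trades WHERE client_id = 42").fetchall()
```

Inspecting `plan_before` versus `plan_after` (`SCAN` versus `SEARCH ... USING INDEX`) is the same workflow as reading `EXPLAIN` output in MySQL before and after adding an index.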

Client & Location: Belk Inc, Charlotte, NC

Data Analyst

May 2016 – October 2019

Roles & Responsibilities:

Managed cloud-based data storage solutions using AWS EC2, S3, and RDS, ensuring scalability, security, and cost efficiency across data pipelines.

Designed and documented data flowcharts and process documentation in Visio, supporting improved workflow and team transparency.

Created and maintained data visualizations, charts, and dashboards using tools like ggplot2 to present actionable insights and track key performance metrics (KPIs).

Conducted regular data transformation and preprocessing activities, aligning data structures with analytical needs.

Utilized R programming and libraries like dplyr and tidyr for data cleaning, preprocessing, and wrangling, ensuring data accuracy and consistency.

Applied machine learning techniques, including Scikit-Learn, to build predictive models for customer behavior, risk assessment, and operational forecasting.

Ensured accurate data reports and supported strategic decisions through collaborative process improvements.

Worked in Agile and Kanban environments, participating in iterative development cycles and providing ongoing insights and analysis for business teams.

Developed and optimized SQL queries and PostgreSQL databases to improve data retrieval and support operational decision-making.

Automated data pipelines using AWS Lambda to streamline data flow and enhance processing efficiency.

Ensured datasets' accuracy, consistency, and integrity through regular audits and validation of data processing workflows.

Delivered high-quality datasets by conducting thorough data quality checks and resolving inconsistencies.

Maintained Git version control for all analytical code and documentation, ensuring reproducibility and collaboration.

Enhanced system performance via query optimization.

Designed data reports and presented findings to stakeholders, ensuring the reports were aligned with business objectives and metrics.

Generated and maintained automated Adobe Analytics reports that helped track marketing campaign effectiveness and overall site performance for marketing teams.

Conducted in-depth Adobe Analytics segmentation analysis, identifying key customer segments and behaviors to inform targeted digital marketing strategies.

Provided actionable insights and recommendations to leadership by analyzing data trends, assisting in decision-making, and improving business processes.

Environment: AWS EC2, AWS S3, AWS RDS, Visio, ggplot2, R, dplyr, tidyr, Scikit-Learn, SQL, PostgreSQL, AWS Lambda, Git.
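The AWS Lambda automation mentioned above typically centers on a small handler function. The sketch below follows AWS's standard Python handler signature, but the event shape and field names are hypothetical; a real function would consume S3 or Kinesis event payloads as documented by AWS.

```python
import json

def lambda_handler(event, context):
    """Minimal Lambda handler sketch: filter and summarize incoming records.

    `event` here uses a made-up shape ({"records": [{"amount": ...}, ...]});
    only the (event, context) signature follows the real AWS convention.
    """
    records = event.get("records", [])
    clean = [r for r in records if r.get("amount") is not None]  # drop nulls
    total = sum(r["amount"] for r in clean)
    return {
        "statusCode": 200,
        "body": json.dumps({"processed": len(clean), "total": total}),
    }
```

Keeping the handler this small, with validation and aggregation in plain functions, makes the pipeline easy to unit-test locally before deployment.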

Client & Location: Wipro, Bangalore, India

Junior Data Analyst

July 2013 – March 2016

Roles & Responsibilities:

Provided strategic market intelligence, enhancing business performance through identified growth and optimization opportunities.

Implemented robust data pipelines, ensuring high platform reliability, security, and scalability.

Collaborated cross-functionally with teams to develop effective data visualizations, dashboards, and reports, driving data-driven decision-making and business intelligence.

Leveraged Python, VBA, and SQL to automate processes, clean data, and enhance reporting capabilities for better operational efficiency.

Successfully maintained data accuracy and regulatory compliance by performing detailed data validation and integrity checks.

Optimized SQL queries, performed indexing, and managed partitioning to improve database performance and query efficiency.

Created and maintained reports using Excel, Google Sheets, QlikView, and other business intelligence tools, supporting KPIs and providing actionable insights.

Developed and refined VBA macros for automating repetitive tasks and improving data processing workflows.

Maintained and enhanced SAP BusinessObjects and other reporting systems to ensure effective data governance, compliance, and integration with relational databases like Oracle.

Supported AWS infrastructure to ensure high availability, scalability, and performance tuning of the cloud environment.

Designed and implemented data transformation strategies to ensure seamless data integration, accuracy, and quality.

Applied Scrapy for efficient web scraping and gathering external data for competitive analysis and business intelligence.

Collaborated in project management using tools like Trello and Lucidchart, assisting in workflow visualization and process optimization across teams.

Drove cost reduction initiatives by optimizing data processes and ensuring the most efficient use of resources within the organization.

Regularly performed data cleaning and ensured data reliability to provide accurate and actionable insights for business operations.

Supported strategic decision-making through detailed analysis and reports, using data analytics and market insights to forecast trends and opportunities.

Environment: Python, VBA, SQL, Excel, Google Sheets, QlikView, SAP BusinessObjects, Oracle, AWS, Scrapy, Trello, Lucidchart.
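The Python report automation described above could be sketched as below, using only the standard library. The sales extract is a made-up example; in practice the data would come from Oracle or Excel exports rather than an inline string.

```python
import csv
import io

# Hypothetical sales extract; stands in for an Oracle/Excel export.
raw = io.StringIO("region,units\nSouth,10\nNorth,5\nSouth,7\n")

# Aggregate units per region.
totals = {}
for row in csv.DictReader(raw):
    totals[row["region"]] = totals.get(row["region"], 0) + int(row["units"])

# One KPI line per region, sorted for stable, repeatable output.
report = [f"{region}: {units}" for region, units in sorted(totals.items())]
```

Automating even a small rollup like this removes a recurring manual step and guarantees the same output on every run, which is the point of the VBA and Python automation the role involved.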


