Kranthi Kommineni
PROFESSIONAL SUMMARY:
Around 10 years of IT experience in business requirement analysis, design, and development with Azure Logic Apps, Azure Data Factory, SSIS, and Power BI.
Work on the Azure stack of technologies, including ADF (Azure Data Factory), Azure Active Directory, Azure Functions, Logic Apps, and Azure SQL Data Warehouse.
Designing and building complete integration processes for moving and transforming data for ODS, Staging, and Data Warehousing using Azure Data Factory.
Hands-on experience building ETL data pipelines using Databricks.
Design, build, and deploy effective Microsoft SSIS packages, implement stored procedures, and write efficient database queries.
Strong experience building ETL workflows using Databricks notebooks and Delta Live Tables for both batch and stream processing (an illustrative sketch follows at the end of this summary).
Collaborate with business users to develop data products that align with business domain expectations
Designed, developed, and deployed ETL solutions utilizing Microsoft Synapse and Data Factory to extract, transform, and load data from various sources to the data warehouse.
Implemented Azure Logic Apps and Functions to automate business processes and improve efficiency.
Developed integration solutions using Logic Apps and APIs to connect disparate systems and automate data flows.
Experience designing, building, and maintaining data architecture on Azure, including services like Azure Data Lake, Azure SQL Data Warehouse/Synapse Analytics, and Azure Databricks.
ETL (Informatica), MongoDB (and other NoSQL databases), Redis; source control tools: Git/SVN.
Proficiency in developing data pipelines using Azure Data Factory or similar ETL tools.
Strong understanding of data modeling, data partitioning, and data storage best practices specific to Azure cloud services.
Knowledge and implementation of data security and compliance measures on Azure, integrating Azure Active Directory, and using Azure Key Vault.
Demonstrated experience with Azure analytics tools such as Azure Analysis Services and Power BI for creating actionable insights from data.
Experience working with Azure PaaS services: Cloud Services, App Services, Functions, Logic Apps, IoT Hub, Device Provisioning Service, Service Bus, Event Hub, SQL Database, Load Tester, Data Factory, Key Vault, and Azure Machine Learning Studio.
Design and Build Pipelines: Develop, manage, and optimize data pipelines using Databricks to support analytics and data science workflows.
Data Integration: Integrate structured and unstructured data sources into Databricks from various on-premises and cloud platforms.
Performance Optimization: Monitor and optimize pipeline performance, implement caching mechanisms, and tune Databricks clusters for cost efficiency.
Data Lake and Delta Lake Management: Manage and maintain Databricks Delta Lake, ensuring data consistency, quality, and governance.
Working knowledge of RESTful and SOAP APIs; Azure Git repository management and deployment in Azure DevOps; understanding of automated testing and various cloud offerings.
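The Databricks batch/stream pattern described above can be illustrated with a minimal PySpark sketch; the storage paths, schema, and table names below are hypothetical placeholders rather than details from any specific engagement.

```python
# Minimal PySpark sketch of the batch + streaming Delta Lake pattern described above.
# Paths, schema, and table names are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("ingest-sketch").getOrCreate()

schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_ts", TimestampType()),
])

# Batch: land raw JSON from ADLS into a curated Delta table.
batch_df = (spark.read.schema(schema)
            .json("abfss://raw@<storage_account>.dfs.core.windows.net/orders/")
            .withColumn("ingest_date", F.current_date()))
batch_df.write.format("delta").mode("append").saveAsTable("curated.orders")

# Streaming: the same transformation applied incrementally with Structured Streaming.
stream_df = (spark.readStream.schema(schema)
             .json("abfss://raw@<storage_account>.dfs.core.windows.net/orders_stream/")
             .withColumn("ingest_date", F.current_date()))
(stream_df.writeStream.format("delta")
 .option("checkpointLocation", "/tmp/checkpoints/orders")
 .toTable("curated.orders_stream"))
```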
SKILLS & COMPETENCIES:
Azure/Cloud Platform experience (Azure Data Lake, Data Factory, Database, SQL Server)
Shell scripting
SQL, Python & PySpark
Databricks
Delta Lake
Data Pipelining
Quality Assurance/Data Accuracy
Data Integration, Profiling, and Validation
Database Security
Data Warehousing
ETL Process Optimization
AWS Glue
Snowflake
dbt (Data Build Tool)
Data Visualization & Power BI
API Building
TECHNICAL AND DOMAIN EXPERIENCE:
Process Expertise: Analysis, Requirements Management and Planning, Business Process Flows, Solution Assessment, Project Definition, User Acceptance Testing, Change Management, Waterfall and Agile.
QA Expertise: System Testing, Integration Testing, Regression Testing, E2E Testing.
Tools and Technologies: MS Visio, MS Office, SSIS, SSRS, SSAS, SSMS, ETL tools, MS SQL Server 2008 R2/2005.
ETL Tools: Microsoft BI tools
PROFESSIONAL EXPERIENCE:
Xylem Inc. May 2024 – Present
Role: Sr Data Engineer
Led a team of 4 data engineers in the migration of on-premises databases to Azure SQL Database, enhancing scalability and reducing infrastructure costs by 15%.
Led development team in creating scalable data pipelines using Python for enhanced data processing.
Developed efficient SQL queries to streamline data retrieval from Azure environments.
Implemented data orchestration workflows in DataBricks, ensuring high-quality transformation outcomes.
Managed ADF pipelines to facilitate data ingestion from multiple sources into Azure.
Hands-on experience using AWS Glue for ETL/ELT processes.
Designed data pipelines using dbt (Data Build Tool).
Designed data models for the Snowflake cloud database.
Utilized ADLS for secure data storage and integration with Azure Data Factory workflows.
Established CI/CD practices in a DevOps environment, leveraging GitHub Actions for automation.
Wrote Python scripts to automate data ingestion from various structured formats.
Design and implement data storage solutions using Azure services such as Azure SQL Database and Azure Data Lake Storage.
Develop and maintain data pipelines using Azure Data Factory and Azure Databricks.
Relevant experience in data engineering, especially ADF, Python, PySpark, Spark SQL, SQL, and Databricks programming, as well as cloud data ingestion pipelines handling structured and semi-structured data formats such as JSON and XML.
Experience building ETL/ELT data warehouses/lakes for real-time, event-driven scenarios.
Perform data modeling and schema design for efficient data storage and retrieval.
Optimize data processing and storage for performance and cost efficiency.
Designed and implemented ETL/ELT pipelines to extract, transform, and load data into Snowflake from various sources (e.g., APIs, databases, flat files); an illustrative Snowflake load sketch follows at the end of this role.
Develop scalable data pipelines using Databricks and Apache Spark technologies.
Collaborate with data scientists to optimize machine learning models on Databricks platform.
Implement data lake solutions leveraging Delta Lake for efficient data storage.
Automate ETL processes to enhance data processing using DLT for streaming tables.
Experience designing, building, and maintaining data architecture on Azure, including services like Azure Data Lake, Azure SQL Data Warehouse/Synapse Analytics, and Azure Databricks.
Proficiency in developing data pipelines using Azure Data Factory or similar ETL tools.
Strong understanding of data modeling, data partitioning, and data storage best practices specific to Azure cloud services.
Knowledge and implementation of data security and compliance measures on Azure, integrating Azure Active Directory, and using Azure Key Vault.
Demonstrated experience with Azure analytics tools such as Azure Analysis Services and Power BI for creating actionable insights from data.
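As an illustration of the Snowflake ELT loading described in this role, the following is a minimal sketch assuming the snowflake-connector-python package; the account, credentials, stage, and table names are placeholders.

```python
# Illustrative sketch of an ELT load into Snowflake: stage a flat file, then MERGE
# it into a target table. Connection details and object names are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>",
    user="<user>",
    password="<password>",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Load the day's extract from an internal stage into a staging table.
    cur.execute(
        "COPY INTO STAGING.ORDERS_STG FROM @ORDERS_STAGE "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )
    # Upsert into the curated table so reruns stay idempotent.
    cur.execute("""
        MERGE INTO ANALYTICS.CURATED.ORDERS t
        USING STAGING.ORDERS_STG s ON t.ORDER_ID = s.ORDER_ID
        WHEN MATCHED THEN UPDATE SET t.AMOUNT = s.AMOUNT, t.UPDATED_AT = CURRENT_TIMESTAMP()
        WHEN NOT MATCHED THEN INSERT (ORDER_ID, AMOUNT, UPDATED_AT)
                              VALUES (s.ORDER_ID, s.AMOUNT, CURRENT_TIMESTAMP())
    """)
finally:
    conn.close()
```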
Evoqua Water Technologies Jan 2022 – Apr 2024
Role: Azure Sr Data Engineer
Strategized, designed, and implemented data solutions using Azure services that improved data flow.
Led a cross-functional team to deploy Azure data factory solutions, improving data consistency by 30%
Develop and maintain data pipelines using Azure Data Factory, Azure Databricks, and Azure Logic Apps.
Experience designing, building, and maintaining data architecture on Azure, including services like Azure Data Lake, Azure SQL Data Warehouse/Synapse Analytics, and Azure Databricks.
Proficiency in developing data pipelines using Azure Data Factory or similar ETL tools.
Strong understanding of data modeling, data partitioning, and data storage best practices specific to Azure cloud services.
Employed PySpark to handle big data transformation tasks, ensuring accuracy in analytics.
Enhanced SQL performance for complex queries, integrating solutions within Azure architecture.
Utilized DataBricks notebooks for advanced analytics and collaborative data development.
Created and managed ADF pipelines for seamless data ingestion and movement between systems.
Integrated ADLS with existing data frameworks to support storage and access through Azure Data Factory.
Automated deployment workflows with GitHub Actions to streamline CI/CD in Azure projects.
Led efforts in PySpark scripting for efficient data transformation, enhancing data quality (an illustrative data-quality sketch follows at the end of this role).
Hands-on experience using AWS Glue for ETL/ELT processes.
Designed data pipelines using dbt (Data Build Tool).
Designed data models for the Snowflake cloud database.
Experience with data governance frameworks and data quality measures.
Proficiency in data modeling, data mapping, and integration with ERP/CRM systems.
Design and Build Pipelines: Develop, manage, and optimize data pipelines using Databricks to support analytics and data science workflows.
Data Integration: Integrate structured and unstructured data sources into Databricks from various on-premises and cloud platforms.
Performance Optimization: Monitor and optimize pipeline performance, implement caching mechanisms, and tune Databricks clusters for cost efficiency.
Data Lake and Delta Lake Management: Manage and maintain Databricks Delta Lake, ensuring data consistency, quality, and governance.
Developed and optimized ETL pipelines using Azure Data Factory, leading to a 20% increase in data processing efficiency and enabling real-time data insights.
Demonstrated experience with Azure analytics tools such as Azure Analysis Services and Power BI for creating actionable insights from data.
Developed a custom data monitoring and alerting system using Azure Monitor and Log Analytics, improving data reliability and system uptime.
Skilled in data modeling, including relational and dimensional modeling, and storage of structured, semi-structured, and unstructured data.
Experience with various data types (structured, semi-structured, unstructured) and analytics needs from reporting to machine learning.
Worked on Databricks, responsible for engineering, architecting, developing, and implementing unique and innovative enterprise-wide solutions.
Implemented Power BI dashboard to visualize key broker performance metrics, boosting efficiency by 25%
Optimized data manipulation queries, resulting in 35% faster executions and expedited analysis.
Strong knowledge of data structures, algorithms, and designing for performance, scalability, and availability.
5+ years of experience in XSLT and JavaScript frameworks. Experience with J2EE.
Experience with REST and SQL-Databases.
Understanding of architecture modeling and Design Patterns.
Experience with development processes such as code review, unit testing, continuous integration, build, and release.
Ability to produce clear and comprehensive technical documentation.
Experience working in Agile Scrum and DevOps-aligned delivery teams.
Experience working with Azure PaaS services: Cloud Services, App Services, Functions, Logic Apps, IoT Hub, Device Provisioning Service, Service Bus, Event Hub, SQL Database, Load Tester, Data Factory, Key Vault, and Azure Machine Learning Studio.
Experience working with SaaS services: DevOps, Power BI Embedded, Advisor, and Monitor.
Experience working with Azure IaaS offerings such as Storage, Private Link and various networking load balancers.
Thorough understanding of Hybrid Cloud Computing: virtualization technologies, Infrastructure as a Service, Platform as a Service and Software as a Service Cloud delivery models and the current competitive landscape
Experience with Azure cloud services to include but not limited to Data Lake Storage Gen2, Synapse Analytics, Data Factory, Databricks, Delta Lake
Leads and/or assists team members and customers with problem solving related to application performance, usage, and general support activities. Writes system documentation and user manuals, and develops/conducts user training.
Assemble and consolidate complex data sets from business and reporting systems such as SAP, Salesforce, Dynamics 365, WaterQuote (CPQ), Cloud for Service, Business Objects, and EDW.
Stay up to date with new Azure services and technologies and evaluate their potential for improving data storage and processing solutions.
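The PySpark transformation and data-quality work described in this role follows a pattern like the sketch below; the table names, business key, and rules are hypothetical.

```python
# Illustrative PySpark data-quality pass: deduplicate on a business key, drop rows
# missing required fields, and record rejects for review. Names are placeholders.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-sketch").getOrCreate()
src = spark.table("curated.orders")

required = ["order_id", "amount"]
clean = src.dropna(subset=required)
rejects = src.subtract(clean)

# Keep the latest record per order_id based on event timestamp.
w = Window.partitionBy("order_id").orderBy(F.col("event_ts").desc())
deduped = (clean.withColumn("rn", F.row_number().over(w))
           .filter(F.col("rn") == 1)
           .drop("rn"))

deduped.write.format("delta").mode("overwrite").saveAsTable("curated.orders_clean")
rejects.write.format("delta").mode("append").saveAsTable("audit.orders_rejects")
```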
Evoqua Water Technologies Jan 2020 – Dec 2021
Role: Azure Data Engineer
Define, design and document reference architecture and lead the implementation of BI and analytical solutions.
Drive the implementation of the analytical solution by working with Functional team to understand requirements, data sources and business rules; and leading the development team to build, test and deploy the solution.
Design and implement data storage solutions using Azure services such as Azure SQL Database, Azure Cosmos DB, and Azure Data Lake Storage.
Ensures the maintainability and quality of code. Analyzes existing code, devises logic procedures, prepares flowcharts, performs coding, and tests/debugs programs.
Write code that adheres to best practices in software development.
Collaborate with team members to ensure that the solutions meet both functional and non-functional requirements.
Contribute to the daily business operations and develop an in-depth understanding of the PIM technology.
Develop and maintain data pipelines using Azure Data Factory and Azure Databricks (an illustrative pipeline-trigger sketch follows at the end of this role).
Create and manage data processing jobs using Azure HDInsight and Azure Stream Analytics.
Perform data modeling and schema design for efficient data storage and retrieval.
Optimize data processing and storage for performance and cost efficiency.
Implement security and compliance measures for data storage and processing.
Collaborate with data scientists and analysts to provide data insights and support data-driven decision making.
Troubleshoot and resolve data processing and storage issues.
Develop and maintain documentation for data storage and processing solutions.
Stay up to date with new Azure services and technologies and evaluate their potential for improving data storage and processing solutions.
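The ADF pipelines described in this role are authored in the Data Factory service itself; as a hedged illustration of how such a pipeline can be triggered and monitored from Python, the sketch below assumes the azure-identity and azure-mgmt-datafactory packages, with placeholder subscription, resource group, factory, and pipeline names.

```python
# Illustrative sketch: start an Azure Data Factory pipeline run and poll its status.
# Assumes azure-identity and azure-mgmt-datafactory are installed; all names are placeholders.
import time
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

credential = DefaultAzureCredential()
adf = DataFactoryManagementClient(credential, "<subscription_id>")

run = adf.pipelines.create_run(
    resource_group_name="<resource_group>",
    factory_name="<data_factory>",
    pipeline_name="pl_ingest_orders",          # hypothetical pipeline name
    parameters={"load_date": "2024-01-01"},
)

# Poll until the run leaves the in-progress states.
while True:
    status = adf.pipeline_runs.get("<resource_group>", "<data_factory>", run.run_id).status
    if status not in ("Queued", "InProgress"):
        break
    time.sleep(30)
print(f"Pipeline finished with status: {status}")
```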
InBiz Concepts Inc., Norwood, MA Oct 2018 – Dec 2019
Client: Evoqua Water Technologies
Role: Azure Integration Developer
Work on the Azure stack of technologies, including ADF (Azure Data Factory), Azure Active Directory, Azure Functions, Logic Apps, and Azure SQL Data Warehouse.
Designing and building complete integration processes for moving and transforming data for ODS, Staging, and Data Warehousing using Azure Data Factory.
Design simple and complex extraction methodologies to retrieve and transfer data using Azure Logic Apps for different integrations (the extract-and-stage step these integrations orchestrate is sketched at the end of this role).
Build, enhance, and support data ingestion pipelines, delta lakes and data warehouses across a variety of infrastructure types, both on premises and cloud.
Collaborate with infrastructure and IT team to design, create and maintain optimal data pipeline architecture and data structures for Enterprise Data Platform.
Perform data integration in Apache Spark-based platform to ensure the technology solutions leverage cutting edge integration capabilities.
Support system availability (tier one support) on a rotating basis.
Collaborate with other teams to identify and analyze data requirements, design data models, and define data structures.
Ensure data quality, data consistency, and data accuracy in all data products.
Create and maintain documentation for all data products.
Support a variety of Business Solutions from ingestions, data models, and reporting.
Develop customized SQL queries for database solutions and business ad hoc requests; maintain SQL queries for data analysis and reporting purposes.
Ensure data integration and data integrity to improve data fidelity for real time analytics.
Navigate and work independently in a large enterprise and distributed set up.
Conduct meetings with technical teams to help them understand the requirements and avoid any gaps between the business and development teams.
Strong knowledge and understanding of SQL development techniques and complementary business layer and front-end technologies.
Perform in depth technical analysis, research, proof of concepts, and evaluations as required to validate that the recommendations Enterprise Architecture makes are sound and will not introduce risk into IT Execution.
Create the overnight batch job processing that follows the data flow from source systems to SAP and CRM using Azure Logic Apps as middleware.
Extract relevant data from appropriate data sources for the execution of the project.
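The Logic Apps integrations above are configured as JSON workflow definitions in Azure rather than in code; the sketch below is an illustrative Python equivalent of the extract-and-stage step such an integration orchestrates, assuming the requests, azure-identity, and azure-storage-blob packages with placeholder endpoint and storage names.

```python
# The Logic Apps integrations described above are configured as JSON workflows in Azure;
# this Python sketch illustrates the equivalent extract-and-stage step they orchestrate.
# Endpoint, container, and credential details are placeholders.
import json
from datetime import date

import requests
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

# Pull the day's records from a hypothetical source API.
resp = requests.get(
    "https://example.com/api/orders",
    params={"date": date.today().isoformat()},
    timeout=60,
)
resp.raise_for_status()
records = resp.json()

# Land the raw payload in a staging container for downstream ADF / warehouse loads.
blob_service = BlobServiceClient(
    account_url="https://<storage_account>.blob.core.windows.net",
    credential=DefaultAzureCredential(),
)
blob = blob_service.get_blob_client(
    container="staging", blob=f"orders/{date.today().isoformat()}.json"
)
blob.upload_blob(json.dumps(records), overwrite=True)
```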
InBiz Concepts Inc. Nov 2017 – Sep 2018
Role: ETL Azure Developer
Involved in requirements gathering and database design.
Write, analyze, review, and rewrite programs using workflow charts and diagrams, applying knowledge of computer capabilities, subject matter, and symbolic logic.
Create complex Stored Procedures, Triggers, Functions (UDF), Indexes, Tables, Views and other T-SQL code and SQL joins for applications following SQL code standards.
Write SQL queries for verifying data integrity in Dimension and Fact tables (an illustrative integrity-check sketch follows at the end of this role).
Expertise in writing and optimizing SQL queries, views, stored procedures and functions.
Apply expert-level understanding of SQL Server indexing techniques and optimization of locking operations, buffering, and caching waits.
Analyze data from multiple sources and create reports with interactive dashboards using Power BI.
Design, build, and deploy ETL/SSIS packages with different data sources (SQL Server, flat files, Excel source files, XML files, etc.), then load the data into destination tables by performing complex transformations using SSIS/DTS packages.
Experience designing and building complete ETL/SSIS processes moving and transforming data for ODS, Staging, and Data Warehousing.
Identify and test for bugs and bottlenecks in the ETL solutions and ensure the best possible performance and quality in the packages.
Provide support and fix issues for all scheduled ETL/SSIS jobs, maintain compliance, and develop and maintain standards for ETL code and an effective project life cycle across all ETL processes.
Work on configuring, tuning, and measuring performance at several levels, from optimizing individual SQL statements to entire applications.
Ability to analyze query execution plans, to identify performance bottlenecks and design improvement.
Provide a high level of service to our customers and adhere to strict SLAs for response and restoration times; perform problem determination, workaround resolution, root cause analysis, and major incident management within production support.
Working in production support carries heavy responsibility and requires considerable perseverance and patience due to the volume and nature of issues that can occur in the application system.
Work with SSAS, SSRS, SSIS, T-SQL, Reporting and Analytics.
Responsible for Deploying, Scheduling Jobs, Alerting and Maintaining ETL/SSIS packages and Updating and maintaining Visual Source Safe.
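As an illustration of the dimension/fact integrity checks and stored-procedure work described above, here is a minimal pyodbc sketch; the connection string, table names, and procedure name are hypothetical.

```python
# Illustrative pyodbc sketch of a dimension/fact integrity check: flag fact rows whose
# keys have no matching dimension row. Connection string and table names are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=<server>;DATABASE=<db>;Trusted_Connection=yes;"
)
cur = conn.cursor()

orphan_check = """
    SELECT COUNT(*)
    FROM dbo.FactSales f
    LEFT JOIN dbo.DimCustomer d ON f.CustomerKey = d.CustomerKey
    WHERE d.CustomerKey IS NULL;
"""
orphans = cur.execute(orphan_check).fetchone()[0]
print(f"Fact rows with missing customer dimension: {orphans}")

# Stored procedures can be invoked the same way, e.g. a nightly reconciliation proc.
cur.execute("EXEC dbo.usp_ReconcileDailyLoads @LoadDate = ?", "2018-01-01")
conn.commit()
conn.close()
```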
Client: Connecticare, Farmington, CT Aug 2016 – Oct 2017
Role: ETL/BI Developer
Description: Transition all Pharmacy interactions supporting the Medicare line of business from Emblem’s systems to Amisys. To increase the rate of Medicare Part D claim auto-adjudication, ConnectiCare will move Medicare off Emblem Health Systems to ConnectiCare Systems using AMISYS 7.1. The exchange of ConnectiCare Pharmacy data that currently takes place between Express Scripts (ESI) and Emblem Health’s Q-Care system needs to take place between ESI and ConnectiCare’s AMISYS 7.1.
Developed ETL process to extract group and division from Amisys or replicated data and send to a text file for ESI.
Developed ETL process to extract contract and member level membership from Amisys or replicated data and send to a text file for ESI.
Developed ETL process to extract accumulator data from Amisys or replicated data and send to a file for ESI.
Developed an ETL process to accept Pharmacy accumulator data from ESI and load it to Amisys.
Developed a process to securely transfer the medical (CCI and Optum) accumulator data from CCI to ESI.
Ensured ESI’s system displays CCI membership transactions correctly based on CCI data files.
Developed a process to accept Pharmacy claim data from ESI and load to the data warehouse and or custom Oracle data tables.
Created ETL packages with different data sources (SQL Server, Flat Files, Excel source files, text files etc.) and then loaded the data into destination tables by performing complex transformations using SSIS/DTS packages.
Created complex ETL packages using SSIS to extract data from staging tables to partitioned tables with incremental load, and created a script task to combine header, detail, and trailer files (the watermark pattern behind these incremental loads is sketched at the end of this role).
Created packages in SSIS using different OLEDB sources like Amisys and Market prominence according to the requirement and worked with different methods of pulling data in SSIS.
Created packages for Complex ETL Packages using SSIS to extract data from staging tables.
Created different SSIS packages for Full and Transactional files for CCI data files for NKYA and TKYA carriers for member Eligibility.
Developed, Maintained, and Monitored all import/export and Data Transformation into the Staging and Production environments.
Implemented Event Handlers and Error Handling in SSIS packages and notified process results.
Developed, deployed and monitored SSIS Packages for new ETL Processes and upgraded the existing DTS packages to SSIS for the on-going ETL Processes.
Monitored Full/Incremental/Daily loads and supported all scheduled ETL jobs for batch processing.
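The incremental loads above were built as SSIS packages; the sketch below illustrates the underlying watermark pattern in Python with pyodbc, using hypothetical staging, warehouse, and watermark table names.

```python
# The incremental loads described above were implemented as SSIS packages; this pyodbc
# sketch illustrates the underlying watermark pattern: read the last processed timestamp,
# pull only newer rows from staging, and advance the watermark. Names are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=<server>;DATABASE=<dwh>;Trusted_Connection=yes;"
)
cur = conn.cursor()

last_loaded = cur.execute(
    "SELECT LastLoadedAt FROM etl.Watermarks WHERE TableName = 'MemberEligibility'"
).fetchone()[0]

cur.execute(
    """
    INSERT INTO dw.MemberEligibility (MemberID, CarrierCode, EffectiveDate, SourceModifiedAt)
    SELECT MemberID, CarrierCode, EffectiveDate, ModifiedAt
    FROM staging.MemberEligibility
    WHERE ModifiedAt > ?;
    """,
    last_loaded,
)
cur.execute(
    "UPDATE etl.Watermarks SET LastLoadedAt = SYSDATETIME() WHERE TableName = 'MemberEligibility'"
)
conn.commit()
conn.close()
```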
Hexplora, Hartford, CT Apr 2015 – Jul 2016
Client: Infowave Systems
Role: SSIS/SSRS/SSAS Developer/BI Developer
Product: Hexplora is a comprehensive end-to-end Data Warehousing and Business Intelligence Solution for Health Plans, ACOs, and IPAs. Hexplora has been designed as a foundational platform for healthcare organizations for all their Reporting and Analytics requirements.
Enrollment Analytics: Member demographic summary, membership trend, enrollment trend, provider summary, ACO member assignment.
Claims Analytics: Claims Analytics helps in analyzing the claims data to answer the most complex questions on the business of healthcare. It provides healthcare organization an eagle view to visualize the lowest point of pain and gain and work accordingly based on the analysis from the data. This helps to enhance health care quality and optimize ROI.
Responsibilities:
Created batch files that execute the ETL scripts which invokes SSIS packages.
Created SSIS packages for data importing, cleansing, and parsing; extracted, cleaned, and validated data.
Performed data migrations from text files, DB2, and Excel files to SQL Server.
Successfully created database diagrams and physical models for the project.
Experience in creating Tabular reports, Matrix reports, List reports, Parameterized reports, Sub reports (SSRS).
Design and implement comprehensive Backup Plan and Disaster Recovery strategies.
Experience with SQL Server Reporting Services (SSRS) to author, manage, and deliver both paper-based and interactive Web-based reports.
Created SSIS packages to extract data from OLTP to OLAP system and created alerts for successful and unsuccessful completion of schedule jobs.
Experienced in resolving database issues raised by QA and UAT.
Supported different UIs with back-end logic and wrote SQL code to summarize data.
Created ETL metadata reports using SSRS, reports include like execution times for the SSIS packages, Failure reports with error description.
Tested, Cleaned and Standardized Data meeting the business standards using Fuzzy lookups.
Writing T-SQL scripts, dynamic SQL, complex stored procedures, functions, triggers, MySQL.
Created complex views to generate the numerator and denominator values for calculating the performance rate of each provider for SSRS reports (see the sketch at the end of this role).
Debugged and Deployed SSIS packages into file system server and MS SQL Server with the help of manifest file.
Created ad hoc and custom reports based on client requests using SSRS 2008.
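The provider performance-rate calculation above was implemented as T-SQL views feeding SSRS; the pandas sketch below shows the same numerator/denominator logic with hypothetical columns and sample values.

```python
# The performance-rate calculation described above was implemented as T-SQL views feeding
# SSRS; this pandas sketch shows the same numerator/denominator logic with hypothetical data.
import pandas as pd

measures = pd.DataFrame({
    "provider_id": ["P1", "P1", "P2", "P2", "P2"],
    "in_denominator": [1, 1, 1, 1, 1],
    "in_numerator": [1, 0, 1, 1, 0],
})

rates = (measures.groupby("provider_id")
         .agg(numerator=("in_numerator", "sum"),
              denominator=("in_denominator", "sum")))
rates["performance_rate"] = rates["numerator"] / rates["denominator"]
print(rates)
```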
EDUCATION:
Master's in Computer Information Technology, Sacred Heart University, Fairfield, CT, 2015
Bachelor's in Computer Science & Technology, JNTU, India
CERTIFICATIONS:
Databricks: Academy Accreditation – Databricks Lakehouse Fundamentals
URL: https://api.accredible.com/v1/auth/invite?code=7260ed97fc39318aeabe&credential_id=d4e3ac1d-97bd-4160-9586-7457a31e3b90&url=https%3A%2F%2Fcredentials.databricks.com%2Fd4e3ac1d-97bd-4160-9586-7457a31e3b90&ident=9f59a60d-5058-41d5-9827-d6146dbae12d