
Sql Server Software Development

Location:
Irving, TX, 75038
Salary:
95000
Posted:
October 15, 2025


Resume:

Suresh Paudel

214-***-****

***************@*****.***

Irving TX, 75038

SUMMARY

Microsoft professional with 6 years of IT experience in database design, development, and implementation of Microsoft technologies across development, test, and production environments.

Strong experience using the Microsoft BI stack (SQL, SSIS, SSAS, SSRS); worked with SQL Server 2022, SQL Server 2019, and SQL Server 2017.

Worked in multiple domains including insurance, banking, telecommunications, shipping, and healthcare.

Excellent backend skills in creating SQL objects such as tables, stored procedures, views, indexes, triggers, functions, user-defined datatypes, rules, and defaults for data consistency and manipulation.

Good experience and understanding of database design, relational integrity constraints, OLAP, OLTP, cubes, and normalization.

Experienced in developing web-based applications using Python, Django, Java, C++, XML, CSS, HTML, DHTML, JavaScript, and jQuery.

Experienced working in the Agile methodology using Scrum, which focuses on managing software development by dividing the work into short, one-week iterations called sprints.

Good knowledge of RDBMS and Data Warehouse concepts, OLTP & OLAP.

Experience configuring Azure SQL databases and automating tuning, vulnerability assessment, auditing, and threat detection.

Understanding of data modeling techniques, particularly star and snowflake schemas, which are common in data mart design.

Built data visualizations and reports from data warehouses and OLAP cubes using SSRS and Power BI. Handled Agile program management, change management, and user management in JIRA, including Scrum, bug cycle management, and SDLC design guidance.

In-depth knowledge of fundamental and advanced concepts for working with Amazon Web Services: EC2, S3, RDS, SQS, SNS, EBS, ELB, CloudFront, IAM, and Amazon VPC.

Experience in creating configuration files to deploy the SSIS packages across all environments.

Working experience on DTS, SSIS, SSRS, SSAS, Data Cleansing, Data Scrubbing and Data Migration.

Expert in handling parameterized reports in SSRS (a minimal T-SQL sketch appears at the end of this summary).

Experience in creating multidimensional cubes using SQL Server Analysis Services (SSAS).

Good knowledge of Teradata Database Architecture, Database Features and Teradata tools.

Worked with Hadoop ecosystem components such as Hive, HBase, Spark, and Kafka.

Experienced in configuring SSIS packages using package logging, Breakpoints, Checkpoints, and Event Handling to fix the errors. Expert in troubleshooting SSIS packages using a variety of techniques/tools.

Experienced in extracting, transforming, and loading (ETL) data from Excel, flat files, and Oracle into MS SQL Server using the BCP utility and SSIS.

Familiarity with cloud-based Big Data solutions such as AWS (EMR, Redshift), Google Cloud (BigQuery, Dataflow), and Azure.

Expertise in data modeling, Normalization, database design, data loading, data retrieval, modification, and archival techniques.

Worked on the data warehouse design and analyzed various approaches for maintaining different dimensions and facts in the process of building a data warehousing application.
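Illustrative sketch (T-SQL): a minimal example of the kind of objects and parameterized report procedure referenced above. The table, index, and procedure names are hypothetical placeholders, not taken from any specific project.

-- Hypothetical table, supporting index, and report procedure.
CREATE TABLE dbo.Orders (
    OrderID     INT IDENTITY(1,1) PRIMARY KEY,
    CustomerID  INT           NOT NULL,
    OrderDate   DATE          NOT NULL,
    Amount      DECIMAL(12,2) NOT NULL DEFAULT (0)
);

-- Nonclustered index to support date-range filtering in reports.
CREATE NONCLUSTERED INDEX IX_Orders_OrderDate
    ON dbo.Orders (OrderDate) INCLUDE (CustomerID, Amount);
GO

-- Parameterized procedure of the kind an SSRS report dataset might call.
CREATE PROCEDURE dbo.usp_GetOrdersByDateRange
    @StartDate DATE,
    @EndDate   DATE
AS
BEGIN
    SET NOCOUNT ON;

    SELECT OrderID, CustomerID, OrderDate, Amount
    FROM   dbo.Orders
    WHERE  OrderDate >= @StartDate
      AND  OrderDate <  DATEADD(DAY, 1, @EndDate);
END;
GO

An SSRS dataset would typically call such a procedure with report parameters mapped to @StartDate and @EndDate.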

TECHNICAL SKILLS

Databases / RDBMS / NoSQL: SQL Server 2022/2019/2017/2016/2008R2, Oracle 11g/10g/9i, MySQL, PostgreSQL, Teradata, DB2, MongoDB, Cassandra, Snowflake, MS Access, IntelliMatch

ETL / Data Integration Tools: SSIS, Informatica PowerCenter, Talend (Open Studio, TOS DP), IBM DataStage, Oracle Data Integrator (ODI), DTS, BCP, Dataflow, Autosys, Control-M, Azure Data Factory (ADF), AWS Glue

Reporting / Visualization: SSRS, Power BI, Tableau, OBIEE, MicroStrategy, Crystal Reports, QlikView

Cloud Platforms: Microsoft Azure (Azure SQL Database, Data Lake Storage, Data Factory, Databricks), Amazon Web Services (AWS – EC2, S3, RDS, Glue, IAM, SQS, SNS, ELB, EBS, CloudFront, VPC), Google Cloud Platform (BigQuery, Dataflow)

Programming / Scripting Languages: SQL, T-SQL, PL/SQL, Python, Java, C#, C++, VB.NET, VBScript, Shell Scripting, XML, HTML, DHTML, CSS, JavaScript, jQuery, REST APIs

Big Data / Data Processing: Hadoop, Apache Spark, Hive, HBase, Kafka, GCP BigQuery, AWS EMR, Azure Databricks

Data Modeling / Design Tools: Erwin Data Modeler (v7/4), MS Visio, Star Schema, Snowflake Schema, Data Vault

DevOps / Version Control / CI-CD: GitLab, TFS, JIRA, VersionOne, SQL Server Agent Jobs, DevOps methodologies

Web / Application Development: ASP.NET, ADO.NET, Visual Studio, Java (JDBC, Hibernate, JPA), Django

Analysis / SDLC Methodologies: Agile (Scrum, Kanban), Waterfall, UML, SWOT, Gap Analysis, Cost-Benefit Analysis, Bug Cycle Management

Operating Systems: Windows Server 2003/2008/2012, Windows 10/7/XP/98, UNIX, Linux

Education: Bachelor of Science

PROFESSIONAL EXPERIENCE

Client: FedEx Supply Chain

Location: Coppell, TX

Role: ETL Developer

Duration: February 2022 to Present

Responsibilities:

Involved in the complete SDLC (System Development Life Cycle), from requirements gathering to delivery of extracts covering all loan origination and servicing data.

Wrote several Teradata SQL queries using Teradata SQL Assistant for ad hoc data pull requests.

Extensively involved in data extraction, transformation, and loading (ETL) from XML to the staging area and from staging to the ODS using Informatica PowerCenter.

Designed and implemented complex SQL queries for QA testing and report/data validation.

Utilized SQL and Big Data tools to extract meaningful insights and generate reports for business stakeholders.

Designed and developed ETL processes using IBM DataStage to extract, transform, and load data from various source systems into data warehouses, data marts, or other target databases.

Contributed to programming complex T-SQL queries and Stored Procedures for generating reports and various business logic / scenarios. Worked on OBIEE.

Integrated and extracted source data using the SSIS ETL tool and stored procedures created in SSMS, and developed transformation logic and designed ETL packages (a minimal staging-merge sketch follows this responsibilities list).

Implemented a data warehouse solution on BigQuery, involving the migration of on-premises data to GCP using Dataflow for ETL processes. Optimized query performance for faster analytics.

Designed and developed comprehensive conceptual, logical, and physical data models using data modeling tools to accurately represent business requirements and database architecture.

Created complex DAX calculations to support advanced data analytics and reporting needs in Power BI and SSAS.

Designed, developed, and maintained ETL processes using Talend Open Studio or Talend Data Integration.

Used Informatica as the primary ETL tool, along with the Teradata interface and other tools, to work efficiently in a very large data warehouse.

Created and maintained dataflow processes in collaboration with ETL and data integration teams, ensuring accurate data transformation logic aligned with business rules and reporting requirements.

Developed custom templates and reusable components within the data modeling tool to standardize designs and accelerate the modeling process across projects.

Implemented Star Schema, Snowflake Schema models in Data Warehouses and Data Marts.

Worked on a daily basis with lead data warehouse developers to evaluate the impact on the current implementation and the redesign of all ETL logic.

Experienced in migrating data from PostgreSQL to MS SQL Server using the pg_dump utility and BCP.

Designed, developed, and optimized ETL jobs using AWS Glue to process structured and unstructured data.

Implemented ETL processes using SSIS and Azure Data Factory to integrate data from multiple sources.

Created and maintained documentation for data mart structures, ETL processes, data sources, and data definitions.

Extensively worked on the Extraction, Transformation and Load (ETL) process using PL/SQL to populate the tables in database.

Developed and maintained documentation for ETL processes, data models, and technical specifications using Oracle Data Integrator (ODI).

Designed and implemented data engineering solutions using Azure Data Lake Storage, Azure Data Factory, and Azure Databricks.

Created and maintained lookup and reference tables in the IntelliMatch database to support dynamic matching rules and enhance the flexibility of the reconciliation workflows.

Extensively worked with OLAP cubes to generate drill-through reports in SSRS; solid understanding of the Apache Kafka messaging system and its architecture.

Created MySQL databases and tables to store Hadoop processed data.

Developed tools using Python, Shell scripting, XML to automate some of the menial tasks.

Wrote Java code to interact with databases using JDBC, JPA, Hibernate, or other ORM frameworks.

Designed, implemented, and maintained NoSQL databases, primarily using MongoDB and Cassandra.

Wrote T-SQL and PL/SQL scripts to clean, standardize, and transform marketing data to ensure accuracy in lead-to-account mapping and attribution analysis within the ABM framework.

Wrote SQL queries and scripts to extract data from databases, transforming it as needed before sending it to other systems via REST APIs.

Experienced with one or more RDBMS platforms like Oracle, SQL Server, MySQL, PostgreSQL, or DB2.

Led the development and optimization of ETL workflows using Informatica PowerCenter, ensuring high data quality and consistency.

Created SQL datasets for Power BI, OBIEE, and SSRS ad hoc reports. Worked with SQL Server 2008R2/2014/2016.
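Illustrative sketch (T-SQL): one common way to express the staging-to-ODS merge pattern referenced above as a single statement. The stg_Customer and ods_Customer tables and their columns are hypothetical; the actual work described here was carried out with SSIS/Informatica packages rather than a one-off script.

-- Upsert changed and new rows from staging into the ODS (hypothetical tables).
MERGE dbo.ods_Customer AS tgt
USING dbo.stg_Customer AS src
    ON tgt.CustomerID = src.CustomerID
WHEN MATCHED AND (tgt.CustomerName <> src.CustomerName
               OR tgt.City         <> src.City) THEN
    UPDATE SET tgt.CustomerName = src.CustomerName,
               tgt.City         = src.City,
               tgt.UpdatedAt    = SYSUTCDATETIME()
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CustomerID, CustomerName, City, UpdatedAt)
    VALUES (src.CustomerID, src.CustomerName, src.City, SYSUTCDATETIME());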

Environment: SQL Server 2022/2019, PL/SQL, Oracle, Teradata, MySQL, Azure Data Factory, AWS (Glue, S3, EC2, RDS), GCP (BigQuery, Dataflow), SSIS, SSRS, SSAS, Power BI, Informatica, Talend, DataStage, OBIEE, Cassandra, MongoDB, IntelliMatch, Java, Python, Shell Scripting, Erwin, GitLab, Autosys, JIRA, Kafka, REST APIs, Data Vault, Data Mart

Client: Wellstar Health System

Location: Marietta, Georgia

Role: SQL Developer

Duration: June 2020 to January 2022

Responsibilities:

Created complex ETL packages to extract data from staging tables to partitioned tables with incremental load (a watermark-style sketch of this pattern appears at the end of this responsibilities list).

Created SSIS reusable packages to extract data from multi formatted Flat Files, Excel, XML files into UL Databases and DB2 Billing system.

Implemented ETL processes to move data between different systems and databases, using SQL and Java-based solutions.

Developed a Power BI sales analytics dashboard with complex DAX measures to track key performance indicators.

Developed ETL processes to ingest data from various sources into Big Data platforms.

Designed and implemented database structures to handle Medicaid data, including patient records, claims, provider information, and eligibility data.

Implemented source-to-target mappings to ensure accurate data movement and transformation into the Data Vault layers.

Designed Time based usage analytics using Azure Log Analytics and Power BI Services to derive usage pattern of Platform dashboards.

Used REST API endpoints to extract data from external sources and load it into the SQL database.

Experienced in database modeling up to 3NF and dimensional modeling with star and snowflake schemas.

Provided support for existing SQL and Talend processes, including troubleshooting and performance tuning.

Integrated MicroStrategy report access with a third-party application.

Designed and implemented ETL processes using Azure Data Factory to extract, transform, and load data from various sources into Azure SQL Database.

Created SSIS log Tracking database in SSMS containing multiple tables including Execution Status, Business Scope, Processing Log, Error Log, etc.

Developed, optimized, and maintained complex SQL queries, stored procedures, and scripts to extract, transform, and load (ETL) transactional data from multiple source systems into IntelliMatch staging tables, ensuring data accuracy and readiness for reconciliation processes.

Experienced working in Agile methodology and used VersionOne and GitLab to maintain project stories.

Executed reverse engineering on large-scale legacy databases to extract schemas and update data models, facilitating modernization efforts and reducing documentation gaps.

Created and executed test cases on Oracle Reports and OBIEE.

Implemented high-availability solutions using PostgreSQL and migrated data from other databases to PostgreSQL.

Created and managed metadata in the AWS Glue Data Catalog for better discoverability and integration with AWS services like Athena and Redshift.

Extensively involved in Oracle SQL, PL/SQL tuning in OLTP and DWH applications using STATSPACK, Explain Plan, TKPROF, Hints in Oracle 9i.

Designed normalized relational schemas and translated business rules into database objects by leveraging Erwin's visual modeling features, which improved communication between technical and non-technical teams.

Designed, monitored, and extended data warehouse and analytics architecture to meet company business analysis and reporting needs.

Worked in Power BI to collect data and update Excel reports per business requirements.

Created stored logic using SQL to support real-time analytics and data retrieval use cases, ensuring efficient query execution while adhering to Cassandra’s distributed architecture limitations.

Designed and implemented ETL workflows using Oracle Data Integrator (ODI) to extract, transform, and load data from various source systems into target databases.

Used the MicroStrategy Suite (Intelligence Server, Desktop, MDX Adapter/OLAP Provider, Web, Narrowcast Server, Visual Insight, Report Services).

Developed Python scripts and SQL queries to extract, transform, and load data into BigQuery.

Migrated data to/from staging table in SQL from multiple data sources including Oracle, Access, Excel, CSV flat files and other SQL servers.
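Illustrative sketch (T-SQL): a watermark-style incremental load of the kind referenced above for staging-to-partitioned-table processing. The staging, fact, and watermark tables below are hypothetical placeholders.

-- Load only rows modified since the last successful load (hypothetical tables).
DECLARE @LastLoaded DATETIME2;

SELECT @LastLoaded = LastLoadedAt
FROM   dbo.etl_Watermark
WHERE  TableName = 'Claims';

INSERT INTO dbo.fact_Claims (ClaimID, PatientID, ClaimAmount, ModifiedAt)
SELECT s.ClaimID, s.PatientID, s.ClaimAmount, s.ModifiedAt
FROM   dbo.stg_Claims AS s
WHERE  s.ModifiedAt > @LastLoaded;

-- Advance the watermark after a successful load.
UPDATE dbo.etl_Watermark
SET    LastLoadedAt = SYSUTCDATETIME()
WHERE  TableName = 'Claims';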

Environment: SQL Server 2019/2017, PL/SQL, Oracle, Azure SQL, Azure Data Factory, SSIS, SSRS, Power BI, Talend, OBIEE, Erwin, MicroStrategy, Python, Java, Autosys, GitLab, REST APIs, IntelliMatch, Data Vault, Data Mart, Agile, TOS DP

Client: Comerica Bank

Location: Austin, Texas

Role: Jr. SQL Developer

Duration: Feb 2019 to May 2020

Responsibilities:

Wrote different T-SQL scripts for automated execution of the tasks.

Managed and optimized Azure Data Lake Storage accounts and data lake storage hierarchies.

Documented database designs, dataflows, and Java codebase interactions with databases.

Implemented data quality checks within ETL processes to ensure data integrity in the Data Vault.

Created a comprehensive customer insights report in Power BI using DAX calculations to analyze customer behavior and trends.

Experienced with unit testing frameworks (e.g., JUnit) for Java and understanding of database testing methodologies.

Interacted with business analysts to develop modeling techniques.

Designed and implemented scalable, high-performance database schemas on GCP.

Performed data migration activities, including extracting data from legacy systems and loading it into new platforms using Talend.

Created automated processes for activities such as database backups and running SSIS packages sequentially using SQL Server Agent jobs.

Worked on error handling and logging mechanisms within DataStage to track and resolve data issues.

Developed Python scripts to parse XML documents and load the data into the database.

Developed DTS packages to extract, transform, and load data into the campaign database from the OLTP database using SQL Server 2005 Integration Services (SSIS).

Wrote T-SQL scripts, dynamic SQL, complex stored procedures, functions, triggers, SQLCMD, and SMO (a minimal dynamic SQL sketch follows this list).

Created stored procedures for maintaining SQL Server and wrote stored procedures for application developers.

Created Ad-Hoc Reports, Summary Reports, Sub Reports, and Drill-down Reports using SSRS.

Deployed and managed Snowflake on AWS, Azure, and GCP.

Wrote complex stored procedures and functions and integrated them with the front-end application.

Created multi-dimensional cubes with different dimensions and measures in it using SSAS.

Diagnosed and tuned the server for optimal performance using SQL Server 2008 Profiler and the Database Engine Tuning Advisor.

Prepared user and technical support manuals.
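Illustrative sketch (T-SQL): parameterized dynamic SQL executed with sp_executesql, of the kind referenced in the dynamic SQL bullet above. The table and column names are hypothetical.

-- Build the statement dynamically but pass values as typed parameters.
DECLARE @TableName SYSNAME        = N'dbo.Transactions',
        @MinAmount DECIMAL(12,2)  = 1000.00,
        @Sql       NVARCHAR(MAX);

-- QUOTENAME guards the object name; the filter value stays parameterized.
SET @Sql = N'SELECT TransactionID, Amount, PostedDate
             FROM ' + QUOTENAME(PARSENAME(@TableName, 2)) + N'.'
                    + QUOTENAME(PARSENAME(@TableName, 1)) + N'
             WHERE Amount >= @MinAmount;';

EXEC sys.sp_executesql @Sql,
                       N'@MinAmount DECIMAL(12,2)',
                       @MinAmount = @MinAmount;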

Environment: SQL Server 2017/2016, SSIS, SSRS, SSAS, DTS, PL/SQL, Azure Data Lake, GCP (BigQuery), Snowflake, Talend, Python, Java, XML, Power BI, Cassandra, Erwin, Git, JIRA, Data Vault, RDBMS, Data Mart


