Data Engineer

Location:
Kansas City, MO, 64118
Posted:
January 12, 2024


PROFILE

As a Data Engineer with around * years of experience, I have a proven track record in designing, implementing, and maintaining robust data infrastructure and pipelines. Proficient in Python, SQL, and Scala, I specialize in ETL processes, leveraging tools such as Apache Spark and Hadoop for distributed data processing. Collaborating with cross-functional teams, I have developed data models, ensured data integrity, and implemented data security measures.

SKILLS

Programming Languages & Libraries (Python, SQL, NoSQL, NumPy, Pandas, Scikit-learn, Matplotlib, Seaborn, HTML, CSS),

Data Processing & Streaming (PySpark, Apache Spark, Apache Kafka, Apache Airflow, Snowflake, ETL, Apache Cassandra, SSIS, SSRS, Talend, Hive, Sqoop),

Cloud Technologies (AWS (EC2, Lambda, Redshift, EMR, DynamoDB, S3, Security, Glue), Azure (Databricks, Blob Storage, Active Directory, Data Factory, VM, Azure Synapse, ADLS), GCP Storage, GCP VM), Frameworks & Tools (Flask, Django, MS SQL, MySQL, PostgreSQL, MongoDB, Git, GitHub, Docker, CI/CD pipelines, Jenkins, Statistics, Power BI, REST API, Agile, JIRA, Tableau)

PROFESSIONAL EXPERIENCE

Data Engineer, Intelliswift Software Inc., Jan 2023 – Present, MO, USA

•Implement Azure Data Factory (ADF) pipelines to automate the ingestion of data from various sources such as databases, APIs, and files.

•Create end-to-end data pipelines to extract data from on-premises datasets via an SFTP server and load it into the data warehouse.

•Develop, construct, test, and maintain databases for large-scale data processing systems.

•Build, maintain, and optimize ETL processes and data pipelines to improve data flow and collection efficiency.

•Monitor scheduled Azure Data Factory pipelines and configure alerts to receive notifications of pipeline failures.

•Ensure adherence to data privacy regulations and practices, and implement proper security measures in data handling and storage.

•Manage Big Data solutions such as ADF, Azure Data Lake, Azure Synapse, and HDInsight; extract and transform data from multiple sources and load it into ADLS.

•Establish Azure environments, integrate source control with Azure DevOps and Git, and automate Spark data pipeline processes using Azure Pipelines.

Data Engineer, Genpact Solutions, Jun 2018 – Jul 2021, Telangana, India

•Designed end-to-end orchestration and developed an Azure Data Factory pipeline to load data from on-premises Teradata to Azure Synapse through different layers of Azure ADLS, following an ETL approach.

•Worked on data movement activities, transformations, and control activities such as Copy Data, Data Flow, Get Metadata, Lookup, Stored Procedure, and Execute Pipeline.

•Extracted data from relational databases (MySQL) and flat files, and processed the raw data using complex MapReduce jobs and HQL queries based on business requirements.

•Performed dimensional data modeling for the data warehouse, identifying facts and dimensions and developing fact and dimension tables using Slowly Changing Dimensions (SCD).

•Developed AWS Glue jobs to load data from on-premises Teradata to Amazon Redshift through different layers of Amazon storage.

•Created complex ETL (Extract, Transform, Load) jobs that transform data visually using data flows and Amazon EMR.

EDUCATION

Master of Science, Computer Science
University of Missouri - Kansas City, Aug 2021 – Dec 2022, Missouri, USA

Bachelor of Engineering, Civil Engineering

VNR Vignana Jyothi Institute of Engineering and Technology, Aug 2014 – May 2018, Telangana, India

AADIL MOHAMMED, Data Engineer

ad2pyi@r.postjobfree.com | 816-***-**** | Kansas City, MO | LinkedIn
