
Power BI Real-Time

Location: Overland Park, KS
Salary: $100,000
Posted: October 15, 2025


Resume:

PROFILE SUMMARY

4+ years of experience designing, developing, and maintaining scalable data pipelines and architectures for large-scale systems.

Skilled in multi-cloud platforms: AWS (S3, Lambda, Redshift, Glue, EC2), Azure (Data Lake, Synapse, Databricks), GCP (BigQuery, Dataflow, Cloud Storage).

Strong in SQL, Python, and big data frameworks (Apache Spark, Hive, Hadoop) for high-volume data processing.

Experienced in real-time streaming with Kafka, Kinesis, Spark Streaming, and Flink.

Proficient in workflow orchestration and automation using Apache Airflow, Kubernetes, and Jenkins.

Expertise in data warehousing and analytics with Snowflake, Redshift, and Synapse Analytics.

Skilled in advanced analytics and ML integration using Pandas, NumPy, Scikit-learn, TensorFlow, and PyTorch.

Experienced in containerization and IaC with Docker, Kubernetes, Terraform, and CloudFormation.

Proficient in building dashboards and reports using Tableau, Power BI, and QuickSight.

Adept at optimizing pipelines for performance, high availability, and real-time decision-making.

TECHNICAL SKILLS

Programming & Scripting: Python, Java, Scala, SQL, Bash, R

Big Data Technologies: Hadoop, Apache Spark, Hive, Parquet, Avro, ORC, HDFS, Presto, Pig, Flink

ETL/ELT Tools: Informatica, Talend, Apache NiFi, dbt, SSIS

Data Warehousing: Snowflake, Redshift, Azure Synapse, BigQuery, Teradata, Vertica

Data Integration & Orchestration: Apache Airflow, LangChain, AWS Step Functions, Azure Logic Apps, GCP Data Fusion

Streaming & Real-Time Processing: Apache Kafka, AWS Kinesis, Azure Event Hubs, Google Pub/Sub, Spark Streaming

Database Systems: PostgreSQL, PL/SQL, SQL Server, MongoDB, Cassandra, DynamoDB, Cosmos DB, HBase, FAISS (vector similarity search)

Business Intelligence Tools: Tableau, Power BI, Looker, Amazon QuickSight, Google Data Studio

DevOps & CI/CD: Jenkins, GitLab CI/CD, Azure DevOps, AWS CodePipeline, Docker, Kubernetes

Machine Learning Integration: AWS SageMaker, Azure ML, GCP Vertex AI, Generative AI (GenAI), Retrieval-Augmented Generation (RAG)

Infrastructure as Code & Automation: Terraform, CloudFormation, Ansible

Version Control & Collaboration: Git, GitHub, GitLab, Bitbucket

Cloud Platforms: AWS (Redshift, S3, Glue, Lambda, EMR, Athena, DynamoDB, Kinesis, CloudFormation), Azure (Azure Data Factory, Synapse Analytics, Azure Storage, Databricks, Cosmos DB, Logic Apps, Azure Functions), Google Cloud (BigQuery, Dataflow, Pub/Sub, Cloud Storage, Cloud Composer, Cloud Functions)

Methodologies & Other: Agile, End-to-End Solutions, Talent Development, Data Modeling, Data Structures, Data Privacy Compliance (CCPA)

WORK EXPERIENCE

Garmin Ltd., Olathe, Kansas, USA | Azure Data Engineer | June 2024 – Present

Key Responsibilities and Achievements:

• Architected Azure Data Lake & Synapse pipelines processing 5TB+ geospatial/wearable data daily.

• Developed real-time data flows via Azure Stream Analytics, cutting GPS-insight latency from 15 min to under 1 min (ingestion sketch after this list).

• Created 12+ Power BI dashboards, enabling device performance monitoring for 500k+ active users.

• Optimized Parquet storage in Azure Data Lake, reducing query runtime by 35% and storage costs by 20% (PySpark sketch after this list).

• Modeled GPS & fitness data in PySpark/Scala on Databricks, improving wearable performance analysis.

• Integrated Cosmos DB for globally distributed, low-latency device data access.

• Automated workflows with Airflow & LangChain, improving pipeline reliability by 25%.

• Redesigned legacy ETL pipelines into cloud-native frameworks, increasing throughput by 30%.

• Built ETL flows in ADF & NiFi for centralized cloud warehouse ingestion.

• Integrated Sqoop for bulk on-prem-to-cloud transfers; tuned Impala for high-speed queries.

• Implemented CI/CD pipelines in Jenkins, reducing deployment time by 50%.
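An illustrative Python sketch of the ingestion edge for the real-time flows above, assuming Azure Event Hubs feeds Stream Analytics; the connection string and hub name are hypothetical placeholders, not the production configuration.

```python
# Hypothetical hub name and connection string; illustrates consuming
# device telemetry from Azure Event Hubs ahead of Stream Analytics.
from azure.eventhub import EventHubConsumerClient

def on_event(partition_context, event):
    # Each event carries one GPS/wearable reading as a JSON payload.
    print(partition_context.partition_id, event.body_as_str())

client = EventHubConsumerClient.from_connection_string(
    conn_str="Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<name>;SharedAccessKey=<key>",
    consumer_group="$Default",
    eventhub_name="device-telemetry",  # hypothetical hub name
)

with client:
    # Blocks and invokes on_event for each record from the latest offset.
    client.receive(on_event=on_event, starting_position="@latest")
```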
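And a minimal PySpark sketch of the partitioned-Parquet layout behind the query-runtime gains above; the abfss paths and column names (event_ts, device_id) are hypothetical stand-ins, not Garmin's actual schema.

```python
# Hypothetical paths and columns; shows date-partitioned Parquet output
# so downstream queries can prune files by partition.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("gps-parquet-layout").getOrCreate()

# Read raw device events (JSON here; the real source may differ).
raw = spark.read.json("abfss://raw@datalake.dfs.core.windows.net/devices/")

# Derive a date partition column and drop obviously bad rows.
events = (
    raw.withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("device_id").isNotNull())
)

# Write compressed Parquet partitioned by date.
(
    events.repartition("event_date")
          .write.mode("overwrite")
          .partitionBy("event_date")
          .parquet("abfss://curated@datalake.dfs.core.windows.net/devices/")
)
```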

Seaboard Corporation, Kansas, USA | AWS Data Engineer | July 2023 – May 2024

Key Responsibilities and Achievements:

• Designed AWS Glue & Lambda pipelines integrating agricultural, energy, and logistics data from 20+ sources.

• Implemented real-time streaming with Kinesis & Kafka, improving event processing speed by 60% (Lambda decode sketch after this list).

• Built Amazon QuickSight dashboards enabling executives to monitor KPIs across 3 global divisions.

• Reduced query runtime in Redshift by 40% using schema optimization & Parquet storage in S3 (COPY sketch after this list).

• Orchestrated CI/CD workflows with AWS CodePipeline and Jenkins, slashing deployment timelines from 48 hrs to 4 hrs.

• Standardized structured & semi-structured streams from Kinesis into Redshift using custom serialization.

• Processed large datasets on EMR with Spark & Hive, driving operational efficiency in analytics.

• Deployed ML models in SageMaker/TensorFlow, improving decision-making accuracy by 15%.

• Containerized applications with Docker & orchestrated via Kubernetes, boosting scalability.
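An illustrative sketch of the Kinesis-triggered Lambda decode step referenced above; the payload field names are hypothetical.

```python
# Hypothetical field names; sketches the decode/normalize step that
# prepared semi-structured Kinesis events for loading into Redshift.
import base64
import json

def handler(event, context):
    """AWS Lambda entry point for a Kinesis-triggered function."""
    rows = []
    for record in event["Records"]:
        # Kinesis payloads arrive base64-encoded.
        payload = base64.b64decode(record["kinesis"]["data"])
        doc = json.loads(payload)
        # Flatten the event into a Redshift-friendly row.
        rows.append({
            "source": doc.get("source"),
            "event_ts": doc.get("timestamp"),
            "metric": doc.get("metric"),
            "value": doc.get("value"),
        })
    # Downstream: batch rows to S3 and COPY into Redshift (omitted here).
    return {"processed": len(rows)}
```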
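And a hedged sketch of loading S3 Parquet into Redshift with a COPY statement via psycopg2, one piece of the runtime reduction above; the cluster endpoint, IAM role, table, and bucket are placeholders.

```python
# Hypothetical endpoint, role, table, and bucket; illustrates a Parquet
# COPY from S3 into Redshift (pairs with sort/dist key tuning).
import psycopg2

COPY_SQL = """
    COPY analytics.daily_metrics
    FROM 's3://example-bucket/curated/daily_metrics/'
    IAM_ROLE 'arn:aws:iam::<account-id>:role/redshift-copy-role'
    FORMAT AS PARQUET;
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="prod",
    user="etl_user",
    password="***",
)
try:
    with conn.cursor() as cur:
        cur.execute(COPY_SQL)
    conn.commit()
finally:
    conn.close()
```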

Kotak Mahindra Bank, Mumbai, India | GCP Data Engineer | Jan 2022 – Dec 2022

Key Responsibilities and Achievements:

• Launched Dataflow & Pub/Sub pipelines processing 2M+ transactions/day for fraud detection & compliance (Beam sketch after this list).

• Established ML pipelines in TensorFlow for credit risk scoring, improving approval accuracy by 18%.

• Automated Terraform and Google Cloud Build deployments, reducing infrastructure provisioning time by 70%.

• Integrated Cloud SQL & BigQuery for analytics, reducing regulatory report generation from 3 hrs to 20 min.

• Enhanced data governance compliance (PCI-DSS, RBI) with structured data validation workflows.

• Implemented GenAI-based risk assessment and automated loan approval decisioning.

• Streamlined Dataproc/Spark jobs, improving customer segmentation efficiency by 25%.

• Transferred large relational datasets into Hadoop HDFS via Sqoop and optimized querying with Impala.
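An illustrative Apache Beam (Python) skeleton of the Pub/Sub-to-BigQuery shape described above; the project, subscription, and table names are hypothetical, and the simple amount filter stands in for the bank's actual fraud logic.

```python
# Hypothetical names throughout; a streaming skeleton, not production code.
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)  # Dataflow runner flags omitted

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadTxns" >> beam.io.ReadFromPubSub(
            subscription="projects/example-proj/subscriptions/txns-sub")
        | "Parse" >> beam.Map(json.loads)
        | "FlagLarge" >> beam.Filter(lambda t: t.get("amount", 0) > 10_000)
        | "WriteBQ" >> beam.io.WriteToBigQuery(
            "example-proj:risk.flagged_txns",
            schema="txn_id:STRING,amount:FLOAT,event_ts:TIMESTAMP",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```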

Federal Bank, Mumbai, India | Data Engineer | June 2020 – Dec 2021

Key Responsibilities and Achievements:

• Constructed Spark & Hadoop pipelines processing 4TB+ banking data/day, reducing processing time by 30%.

• Assembled real-time analytics with Kafka & Azure Stream Analytics, surfacing fraud alerts within 2 seconds of detection (consumer sketch after this list).

• Devised ETL processes in Python & Airflow, improving data freshness from daily to hourly updates (DAG sketch after this list).

• Generated Tableau & Power BI dashboards for credit & risk teams, reducing manual reporting by 80%.

• Applied ML models in Azure ML Studio for credit scoring & fraud detection.

• Migrated relational data into Hadoop via Sqoop; ingested streams with Flume and optimized analytics with Impala.

• Containerized workloads with Docker & orchestrated via Kubernetes for high availability.
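An illustrative kafka-python consumer loop for the fraud-alert path above; the topic, broker, and fixed threshold are placeholders for the real model-based scoring.

```python
# Hypothetical topic/broker/threshold; sketches a consume-score-alert loop.
import json

from kafka import KafkaConsumer  # kafka-python

consumer = KafkaConsumer(
    "card-transactions",
    bootstrap_servers=["broker1:9092"],
    group_id="fraud-alerts",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="latest",
)

for msg in consumer:
    txn = msg.value
    # Placeholder rule; production scoring used a trained model.
    if txn.get("amount", 0) > 50_000:
        print(f"ALERT txn={txn.get('txn_id')} amount={txn['amount']}")
```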
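And a minimal Airflow 2.x DAG sketch showing the hourly ETL cadence mentioned above; the task bodies are stubs and the DAG id is hypothetical.

```python
# Hypothetical DAG id and stub tasks; demonstrates the hourly schedule
# with retries, the general shape of the ETL described above.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    ...  # pull the latest hour of records from the source systems

def transform_load(**_):
    ...  # clean, conform, and load into the warehouse

with DAG(
    dag_id="hourly_banking_etl",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@hourly",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="transform_load", python_callable=transform_load)
    extract_task >> load_task
```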

EDUCATION

University of Central Missouri, USA | Master of Science in Computer Science | 2023 – 2024

Nikhil Yallabandi Veera Venkatanagasaidurga (Data Engineer)

Data Engineer with 4+ years of experience designing and scaling cloud-native data pipelines for high-volume, low-latency environments. Proficient across AWS, Azure, and GCP with deep expertise in ETL/ELT, streaming (Kafka, Kinesis, Spark Streaming), and distributed computing (PySpark, Hive, Hadoop). Adept at building data solutions that support AI/ML workloads and accelerate experimentation in agile, product-driven teams.

+1-913-***-****

***********@*****.***


