Job Description
We're Hiring an AWS Data Engineer to build the future of data!
Are you a data-driven engineer with a passion for cloud technologies? Do you want to help shape the way modern businesses collect, process, and gain insights from data? We're looking for a talented AWS Data Engineer to join our team and be a key player in building scalable, cloud-native data solutions.
In this role, you'll work at the heart of our data infrastructure, designing, developing, and optimizing pipelines that move and transform data securely and efficiently across the AWS ecosystem. You'll collaborate with software engineers, architects, and analysts to power everything from real-time analytics to machine learning initiatives.
At our company, we believe in combining deep technical expertise with a strong culture of innovation, teamwork, and growth. This is your chance to work on exciting projects, leverage the full power of AWS, and help clients make smarter, faster, data-informed decisions.
Duties and Responsibilities:
Analyze existing on-premises data sources (databases, data warehouses, and data lakes) to assess cloud readiness.
Define migration strategies, including rehosting, replatforming, and modernizing data workloads for AWS.
Migrate structured and unstructured data to AWS using tools like AWS Database Migration Service (DMS), AWS Glue, and Snowball.
Implement ETL pipelines to transform and load data into AWS data stores such as S3, Redshift, DynamoDB, or RDS.
Validate data integrity, consistency, and quality after migration.
Optimize migrated data pipelines and storage solutions for performance and cost efficiency.
Configure and manage data storage solutions like S3, Redshift, Aurora, DynamoDB, and Athena.
Implement real-time data streaming solutions using AWS Kinesis or Kafka.
Apply AWS best practices for data security, including encryption, access controls, and compliance with data governance policies.
Collaborate with the security team to ensure adherence to regulatory requirements like GDPR or HIPAA.
Set up monitoring and alerts for data pipelines using Amazon CloudWatch, Datadog, or similar tools.
Troubleshoot data processing issues and ensure high availability of data solutions.
Work closely with architects, application engineers, and QA teams to support end-to-end data migration activities.
Actively participate in Scrum ceremonies, such as sprint planning, stand-ups, and retrospectives.
Document data migration processes, pipeline configurations, and validation outcomes.
Share best practices and lessons learned to enhance team knowledge.
Required skills:
Minimum of 3 years of experience in a data engineering role, with hands-on experience building and operating data pipelines and cloud-based data solutions.
Bachelor's degree in a related area and/or equivalent experience/training.
Expertise in AWS data services like S3, Redshift, Glue, RDS, DynamoDB, and Athena.
Experience with data migration tools such as AWS DMS and Snowball.
Strong skills in designing, building, and managing ETL/ELT pipelines.
Proficiency in data transformation and processing using tools like Apache Spark or AWS Glue.
Proficiency in Python, SQL, and Shell scripting for data engineering tasks.
Knowledge of Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation.
Experience in designing data models for OLAP and OLTP workloads.
Strong understanding of data warehousing principles and tools like Redshift and Snowflake.
Experience working in Agile or Scrum teams, delivering iterative and incremental solutions.
Strong analytical and problem-solving abilities.
Excellent communication and collaboration skills.
Ability to prioritize tasks in a fast-paced environment.
Excellent computer proficiency, including experience with project-tracking tools such as JIRA.
Nice to have skills:
Familiarity with advanced AWS services like AWS Lake Formation, EMR, and QuickSight.
Understanding of serverless architectures and services like Lambda and Step Functions.
Experience with big data tools and frameworks like Hadoop, Spark, or Kafka on AWS.
Knowledge of machine learning workflows and integration with SageMaker or similar tools.
Expertise in optimizing queries and storage for cost and performance in AWS data stores.
Experience in caching mechanisms using AWS ElastiCache or similar tools.
Familiarity with AWS Glue Data Catalog for metadata management and data lineage tracking.
Experience implementing data classification and lifecycle policies in AWS.
AWS Certified Data Analytics Specialty or AWS Certified Solutions Architect Associate.
Ability to mentor junior data engineers.
Proactive and detail-oriented approach to problem-solving.
What sets INVID apart is our collaborative and flexible work environment. We encourage our team to raise the bar in everything they do while maintaining a healthy work-life balance. With our hybrid work model, team members thrive both in the office and remotely. We foster a culture of mutual respect, autonomy, and accountability, where your voice matters and your growth is supported. From structured career paths and paid professional development to access to industry events, we're committed to your success.
Join us at INVID, where innovation meets support, and together we deliver excellence.
Must be a US citizen or US resident.
Fully bilingual (English and Spanish)
Location: San Juan, Puerto Rico
Background Check Required: Final candidates must be willing to complete a background check as a condition of employment.
EEO
Full-time
Hybrid remote