Data Engineer, MyHealthTeam
Posted 2025-08-23
Remote, USA
Full Time
Immediate Start
About the role

We're looking for a mid- to senior-level Data Engineer to join our growing Data Team, reporting to the Senior Director of Data Engineering. You'll design and maintain the data pipelines, models, and infrastructure that power analytics, insights, and personalization for millions. Our modern stack includes AWS, Redshift, Databricks, Airflow, and Spark, with Python and SQL as core languages. This is a key role in delivering scalable, reliable data solutions across the organization.

What you'll do

- Develop scalable, cloud-based data pipelines integrated with data warehouses such as Redshift or Snowflake.
- Orchestrate data workflows with tools such as Airflow, Prefect, or Dagster to ensure reliable pipeline execution.
- Design, optimize, and maintain data models, SQL queries, and Spark-based data processing workflows.
- Work across data lakehouse and warehouse systems to deliver clean, reliable, analytics-ready data.
- Ensure high standards of data quality, governance, and security.
- Collaborate cross-functionally with the Data Analytics, Product, and Marketing teams to gather and improve data that enables actionable insights.
- Monitor and optimize the performance of data systems and troubleshoot issues as needed.

This is an individual contributor role reporting to the Senior Director of Data Engineering.

Qualifications

- At least five years of experience in data engineering, with a strong focus on ETL/ELT pipelines, cloud platforms, and data modeling
- Hands-on experience designing and managing scalable, cloud-native data pipelines using AWS services such as S3, Redshift, Glue, and Lambda
- Strong background using Databricks and Apache Spark for large-scale data transformation, processing, and analytics
- Strong programming skills in Python and SQL
- Solid understanding of data modeling, warehousing, and pipeline performance tuning
- Experience delivering production-ready ETL/ELT pipelines
- Infrastructure-as-code experience (Terraform, CloudFormation) desired
- Familiarity with containerization (Docker, Kubernetes) desired
- Analytical, detail-oriented, and adaptable, with a strong sense of ownership
- Skilled in cross-functional collaboration, stakeholder alignment, and clear communication
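The day-to-day work described above centers on ELT pipelines built in Python and SQL. As a minimal sketch of that pattern (using the standard-library `sqlite3` as a stand-in for a warehouse like Redshift or Snowflake; the table and column names are purely illustrative):

```python
import sqlite3

def run_pipeline(raw_events):
    """ELT sketch: load raw records first, then transform with SQL in the warehouse."""
    conn = sqlite3.connect(":memory:")  # stand-in for a warehouse connection
    # Load: land the raw data unmodified in a staging table.
    conn.execute("CREATE TABLE stg_events (user_id TEXT, event_date TEXT)")
    conn.executemany("INSERT INTO stg_events VALUES (?, ?)", raw_events)
    # Transform: derive an analytics-ready daily-active-users table with SQL.
    conn.execute("""
        CREATE TABLE daily_active_users AS
        SELECT event_date, COUNT(DISTINCT user_id) AS dau
        FROM stg_events
        GROUP BY event_date
        ORDER BY event_date
    """)
    return conn.execute("SELECT * FROM daily_active_users").fetchall()

raw = [("u1", "2025-08-01"), ("u2", "2025-08-01"), ("u1", "2025-08-02")]
print(run_pipeline(raw))  # [('2025-08-01', 2), ('2025-08-02', 1)]
```

In production, an orchestrator such as Airflow would schedule the load and transform as separate dependent tasks, with Spark or warehouse SQL doing the heavy lifting at scale.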