UnitedHealth Group • Posted 3 days ago
$132,200 - $226,600/Yr
Full-time • Principal
Remote • Eden Prairie, MN
Insurance Carriers and Related Activities

About the position

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

We are looking for a Principal Engineer with deep expertise in data platforms and in building and operating scalable data services. You will be responsible for architecting, designing, and building highly scalable, resilient data platforms and self-service offerings that power analytical and AI/ML workloads across the enterprise. This role requires hands-on experience with Databricks, Snowflake, and open data lakehouse technologies (Apache Iceberg, Delta Lake, Hudi). As a technical leader, you will collaborate with cross-functional teams to align with the enterprise data strategy and solve complex data problems. You will play a key role in defining best practices, guiding engineering teams, and ensuring the robustness of our modern data platforms. You'll enjoy the flexibility to work remotely from anywhere within the U.S. as you take on some tough challenges.

Responsibilities

  • Architect & Develop Scalable Data Platforms: Design and build high-performance, cloud-native data architectures for AI/ML, analytics, and business intelligence
  • AI Agents: Design and build AI agents in multiple cloud environments; review AI models and designs and propose solutions to related problems
  • Reusable Frameworks: Build reusable offerings and services that are scalable, resilient, and secure for enterprise users, using technologies such as Kubernetes, Docker, and Azure Functions
  • Data Lakehouse & Open Standards: Drive the adoption of open table formats (Apache Iceberg, Delta Lake, Hudi) to create a scalable, vendor-agnostic data lakehouse architecture
  • Cloud & Distributed Systems: Build resilient and scalable data solutions on AWS, Azure, or GCP, leveraging Databricks, Snowflake, and Kubernetes
  • Data Governance & Security: Implement best practices for data governance, lineage, observability, and security in regulated environments (e.g., healthcare, finance)
  • Performance Optimization: Continuously improve the efficiency, reliability, and scalability of data pipelines, query engines, and AI workloads
  • Technical Leadership & Mentorship: Provide technical guidance to engineers, review architecture designs, and contribute to open-source initiatives when applicable

Requirements

  • 10+ years of experience in data engineering, data platforms, or streaming architectures, including 3+ years in a principal or lead engineering role
  • 5+ years of experience with Python, Scala, Java, or Go, and familiarity with SQL, Spark, and Flink
  • 3+ years of experience with data pipelines, data management, and data security
  • 2+ years of experience with Databricks, Snowflake, and open data lake formats (Iceberg, Delta, Hudi)
  • 2+ years of experience with distributed systems, data modeling, and cloud computing (AWS, Azure, GCP)

Nice-to-haves

  • Bachelor's degree in Software Engineering, Computer Science or related field
  • 2+ years of experience with data governance practices
  • 2+ years of experience with tools such as Jupyter, PyTorch, and GitHub Copilot, and familiarity with LLMs such as GPT, Gemini, Llama, and Claude
  • 2+ years of experience with LangChain, AutoGen, and Crew AI, and with frameworks such as Hugging Face

Benefits

  • Comprehensive benefits package
  • Incentive and recognition programs
  • Equity stock purchase
  • 401(k) contribution