Hewlett Packard Enterprise • Posted 4 days ago
$148,000 - $340,500/Yr
Full-time • Senior
San Jose, CA

This role has been designed as ‘Hybrid’ with an expectation that you will work on average 2 days per week from an HPE office.

Hewlett Packard Enterprise is the global edge-to-cloud company advancing the way people live and work. We help companies connect, protect, analyze, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today’s complex world. Our culture thrives on finding new and better ways to accelerate what’s next. Varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. If you are looking to stretch and grow your career, our culture will embrace you. Open up opportunities with HPE.

Responsibilities:
  • Develop software for highly scalable and fault-tolerant cloud-scale distributed applications.
  • Develop microservices using Python and/or Go (Golang).
  • Develop event-driven systems using Python and Java.
  • Develop software for AIDE's real-time data pipeline and batch processing.
  • Develop ETL pipelines that support training and inference of various ML models, using big-data frameworks such as Apache Spark (a rough sketch follows this list).
  • Build metrics, monitoring, and structured logging into the product, enabling fast detection and recovery during service degradation.
  • Write unit, integration, and functional tests that make your code safe for refactoring and continuous delivery.
  • Participate in collaborative, DevOps-style, lean practices with the rest of the team.
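
To make the pipeline work above concrete, here is a minimal, hypothetical PySpark ETL sketch; the application name, storage paths, column names, and derived feature are illustrative assumptions, not details from this posting.

    # Minimal, hypothetical PySpark ETL sketch; paths, columns, and the
    # derived feature are illustrative assumptions only.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("example-etl").getOrCreate()

    # Extract: read raw device events (hypothetical location and schema).
    raw = spark.read.json("s3://example-bucket/raw-events/")

    # Transform: drop incomplete records and compute a simple hourly feature.
    features = (
        raw.dropna(subset=["device_id", "timestamp"])
           .withColumn("hour", F.hour(F.to_timestamp("timestamp")))
           .groupBy("device_id", "hour")
           .agg(F.count("*").alias("event_count"))
    )

    # Load: write a partitioned Parquet dataset for downstream training jobs.
    features.write.mode("overwrite").partitionBy("hour").parquet(
        "s3://example-bucket/features/device-hourly/"
    )

    spark.stop()

In practice a job like this would typically be submitted via spark-submit and parameterized by the surrounding orchestration layer rather than hard-coding paths.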
Qualifications:
  • Bachelor's or Master's degree in Computer Science, Computer Engineering, or a related field.
  • 10+ years of experience in software engineering with a focus on Python, Go, or Java.
  • Strong understanding of RESTful API design and development.
  • 2+ years of experience working with large-scale distributed systems built on cloud technologies or Kubernetes.
  • 2+ years of experience with event-driven technologies such as Kafka and Apache Storm/Flink.
  • 2+ years of experience with big-data technologies such as Apache Spark/Databricks.
  • Proficiency with Redis and databases such as Cassandra/DataStax.
  • Knowledge of enterprise networking features, Wi-Fi protocols, and their implementations.
  • Knowledge of microservices architecture and gRPC.
  • Experience with distributed systems and large-scale data processing.
  • Knowledge of DevOps principles and practices.
  • Knowledge of ETL pipelines.
  • Knowledge of ML training and inference.
  • Knowledge of Postgres and Pandas/DuckDB (a brief sketch follows this list).
  • Knowledge of Linux.
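
As a small illustration of the Pandas/DuckDB item above, the snippet below queries an in-memory DataFrame with DuckDB SQL; the table contents and query are purely hypothetical.

    # Hypothetical example of querying a Pandas DataFrame with DuckDB SQL.
    import duckdb
    import pandas as pd

    # Made-up per-device latency samples.
    metrics = pd.DataFrame({
        "device_id": ["ap-1", "ap-1", "ap-2"],
        "latency_ms": [12.5, 48.0, 9.3],
    })

    # DuckDB can scan the DataFrame directly by its variable name and
    # return the result as a new DataFrame.
    summary = duckdb.sql(
        "SELECT device_id, AVG(latency_ms) AS avg_latency_ms "
        "FROM metrics GROUP BY device_id ORDER BY avg_latency_ms DESC"
    ).df()

    print(summary)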
  • Health & Wellbeing: Comprehensive suite of benefits that supports physical, financial and emotional wellbeing.
  • Personal & Professional Development: Programs catered to helping you reach any career goals.
  • Unconditional Inclusion: A commitment to inclusivity and flexibility in managing work and personal needs.