
Job Board

Data Solutions Engineer – Databricks
San Francisco, CA

Date posted: February 23, 2015

The field engineering team at Databricks leads the adoption of Apache Spark and Databricks Cloud. Our team engages with the developer community to train and evangelize Spark, meets with customers to suggest solutions they can build with the technology, and sees customers through implementing and troubleshooting production systems. Every member of our team is expected to become an Apache Spark expert and to excel at interacting with Spark users.

At Databricks we work on some of the most complex distributed processing systems, and our customers challenge us with interesting new big data processing requirements. As this complexity grows for both us and our customers, we look to our solutions architects to fully understand the architecture that powers our product and how it can be implemented. Our solutions architects not only work with technologies such as Apache Spark, Hadoop, and Kafka, but also recommend product features and contribute to the open source community. The solutions architect role spans pre-sales and post-sales: you will work with customers to identify use cases for Databricks Cloud and see them through to using the product. Our customers are highly technical, so you must be too.

Responsibilities

  • Provide technical leadership in a team that helps customers design and architect large-scale data processing systems
  • Advise clients and partners on technology implementation
  • Bootstrap and/or implement strategic customer projects
  • Build reference architectures and demo applications
  • Contribute to the Apache Spark open source project
  • Lead a growing technical field organization

Requirements

  • Outstanding verbal and written communication skills
  • Excellent presentation and whiteboarding skills
  • 3-5 years in a customer-facing architecture or consulting role
  • Ability to design and architect distributed data systems
  • Desired: Experience working with open source
  • Comfortable writing code in Python, Scala, or Java
  • Experience with big data technologies such as Spark, Hadoop, Kafka, and Storm

APPLY HERE