Technical Solution Engineer, Spark - São Paulo, Brasil - Databricks

    Databricks São Paulo, Brasil

    1 week ago

    Description

    P-993

    As a Spark Technical Solutions Engineer, you will provide technical and consulting solutions for challenging customer-reported issues across Spark, ML/AI, Delta, Streaming, and the Lakehouse, and resolve problems involving the Databricks unified analytics platform using your comprehensive technical and client-facing skills. You will assist our customers on their Databricks journey and provide the guidance and expertise they need to realize value and achieve their strategic goals with our products.

    The impact you will have:

  • Perform initial-level analysis and troubleshoot Spark issues using Spark UI metrics, DAGs, and event logs for customer-reported job slowness.
  • Troubleshoot, resolve, and perform deep code-level analysis of Spark to address customer issues related to Spark core internals, Spark SQL, Structured Streaming, Delta, the Lakehouse, and other Databricks Runtime features.
  • Assist customers in building reproducible cases for Spark problems, and provide solutions in the areas of Spark SQL, Delta, memory management, performance tuning, streaming, data science, and data integration.
  • Participate in the Designated Solutions Engineer program and guide one or two strategic customers through their daily Spark and cloud issues.
  • Coordinate with Account Executives, Customer Success Engineers, and Resident Solution Architects on customer issues and best-practice guidelines.
  • Participate in screen-sharing meetings, answer Slack conversations with team members and customers, and help drive major Spark issues to resolution as an individual contributor.
  • Build an internal wiki and knowledge base with technical documentation and manuals for the support team and for customers, and help create company documentation and knowledge-base articles.
  • Coordinate with Engineering and Backline Support teams to help report product defects.
  • Participate in the weekend and weekday on-call rotation, run escalations during Databricks Runtime outages and incidents, plan day-to-day activities, and provide an escalated level of support for important customer operational issues.
    What we look for:

  • 3 years of hands-on experience developing production-scale industry use cases in two or more of the following areas: Big Data, Hadoop, Spark, Machine Learning, Artificial Intelligence, Streaming, Kafka, Data Science, or ElasticSearch. Spark experience is mandatory.
  • Experience in the performance tuning/troubleshooting of Hive and Spark-based applications at production scale.
  • Hands-on experience with JVM and memory-management techniques such as garbage collection and heap/thread dump analysis.
  • Experience with SQL-based databases and Data Warehousing/ETL technologies such as Informatica, DataStage, Oracle, Teradata, SQL Server, and MySQL, including SCD-type use cases.
  • Experience with AWS, Azure, or GCP.
    Benefits:

  • Private Medical and Dental
  • Life Insurance
  • Meal Allowance
  • Equity Awards
  • Paid Parental Leave
  • Fitness Reimbursement
  • Annual Career Development Fund
  • Home Office/Work Headphones Reimbursement
  • Childcare Reimbursement
  • Business Travel Accident Insurance
  • Mental Wellness Resources
  • #LI-DC
    #LI-Hybrid