
Data Infrastructure Engineer

Bay Area or Remote - Engineering - Full-time
We are at the forefront of creating a new FinTech category, and we are rapidly expanding our team. We’re looking for a Data Infrastructure Engineer to join our Data Science team.

Company Overview

Tesorio is a high-growth, early-stage startup backed by some of the Bay Area’s most prominent Venture Capital firms (First Round, Floodgate, Y Combinator) and the world’s top finance execs (e.g. the ex-CFO of Oracle, the ex-CFO of Yahoo, and the founder of Adaptive Insights).

We build software that applies proprietary machine learning models to a core problem every mid-market business faces: managing, predicting, and collecting cash. As we’ve taken this solution to market over the last 18 months, we’ve brought on great clients like Veeva Systems, Box, WP Engine, Rainforest QA, and many more.

Tesorio’s Cash Flow Performance platform is a sought-after solution for the modern-day CFO’s toughest problems. Companies such as Anaplan have successfully tackled forecasting revenues and expenses; however, no other system has been built from the ground up to help companies understand the complexities around cash flow and take action to optimize the core lifeblood of their business.
What’s in it for you?
  • Remote OK (Western Hemisphere) or work out of an awesome office with all the perks.
  • Almost all of Engineering and Data Science works fully remote, and we work hard to make sure remote employees feel like part of the team.
  • This role is for a fast-paced, high-impact project that adds a new stream of revenue and strategic value to the company.
  • Work with some of the best and brightest (but also very humble).
  • Fast-growing startup backed by top-tier investors: Y Combinator, First Round Capital, Floodgate, Fathom.

Responsibilities
  • You will be responsible for creating and maintaining machine learning infrastructure on Kubernetes
  • Build and own workflow management systems such as Airflow, Kubeflow, or Argo (see the sketch after this list).
  • Advise data and ML engineers on how to package and deploy their workflows
  • Implement logging, metrics, and monitoring services for your infrastructure and container logs
  • Create Helm charts for versioned deployments of the system on client premises
  • Continuously strive to abstract infrastructure, high-availability, and identity and access management concerns away from Machine Learning and Software Engineers
  • Understand the product requirements, bring your own opinions, and document best practices for leveraging Kubernetes
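
To make the workflow-packaging items above concrete, here is a minimal sketch of the kind of Airflow DAG a data or ML engineer might hand off to run as a container on Kubernetes. It assumes Airflow 2.x with the cncf.kubernetes provider installed; the DAG id, namespace, and image are illustrative placeholders (not Tesorio's actual setup), and the exact import path varies by provider version.

```python
# Minimal sketch: an Airflow DAG that runs one containerized ML step on
# Kubernetes via KubernetesPodOperator. Names and images are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.kubernetes_pod import (
    KubernetesPodOperator,
)

with DAG(
    dag_id="ml_feature_build",          # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    build_features = KubernetesPodOperator(
        task_id="build_features",
        name="build-features",
        namespace="data-science",        # hypothetical namespace
        image="registry.example.com/ml/feature-builder:1.0.0",  # versioned image
        cmds=["python", "-m", "features.build"],
        get_logs=True,                   # stream container logs back to Airflow
        is_delete_operator_pod=True,     # clean up pods after the task finishes
    )
```

Keeping each step in a pinned, versioned image and letting the operator delete pods after completion keeps on-cluster state small, which also simplifies versioned Helm-based deployments on client premises.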

Required Skills
  • 6+ years of experience creating and maintaining data and machine learning platforms in production
  • Expert-level knowledge of Kubernetes: operators, deployments, certificate management, security, binding users to cluster and IAM roles, etc.
  • Experience dealing with persistence pitfalls on Kubernetes, and with creating and owning a workflow management system (Airflow, Kubeflow, Argo, etc.) on Kubernetes
  • Experience creating Helm charts for versioned deployments on client premises
  • Experience securing the system with proper identity and access management for people and applications (see the sketch after this list).
  • Ability to work in a fast-paced, always-changing environment
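
As an illustration of the identity and access management point above, here is a minimal sketch, using the official Kubernetes Python client, of binding a group (for example, one mapped from a cloud IAM role via OIDC) to the built-in read-only "view" ClusterRole within a single namespace. The group, namespace, and binding names are hypothetical placeholders.

```python
# Minimal sketch: grant a group read-only access to one namespace via RBAC.
# Assumes the official `kubernetes` Python client and a reachable cluster.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() inside the cluster

role_binding = {
    "apiVersion": "rbac.authorization.k8s.io/v1",
    "kind": "RoleBinding",
    "metadata": {"name": "ml-engineers-view", "namespace": "data-science"},
    "subjects": [
        {
            "kind": "Group",
            "name": "ml-engineers",  # hypothetical group, e.g. mapped from an IAM role
            "apiGroup": "rbac.authorization.k8s.io",
        }
    ],
    "roleRef": {
        "kind": "ClusterRole",
        "name": "view",  # built-in read-only ClusterRole
        "apiGroup": "rbac.authorization.k8s.io",
    },
}

client.RbacAuthorizationV1Api().create_namespaced_role_binding(
    namespace="data-science", body=role_binding
)
```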

Nice to Haves
  • Experience spinning up infrastructure using Terraform and Ansible
  • Experience working with data engineers running workflow management tools on your infrastructure