Cloud Data Engineer - 13031
Remote, United States
We are seeking an experienced Cloud Data Engineer to develop new cloud-native tools and services, with experience in infrastructure and data pipelines as code. In this role, you will architect, design, build, and manage physical data structures designed for flexibility, scalability, and resiliency to support the current and future business needs of all Blackline products and initiatives in an automated, repeatable way. Tools and services built by Cloud Engineering will provide the cloud-native foundation for future Blackline products and services. You will also design, build, and maintain the processes and components of a data pipeline that supports analytics, focusing on data quality and governance, pipeline performance, and best practices for democratized data access. Finally, you will collaborate with and influence partner teams on design and architecture in line with our IaC principles, and make recommendations for improvements.
If you’re an expert in DevOps principles, containers, event-driven technology, traditional data warehousing, ETL, and/or big data pipelines and processing, you’re exactly who we’re seeking. Technical capabilities aside, if you’re a self-starter who’s comfortable with ambiguity, able to think big without overlooking minute details, and thrives in a fast-paced environment, you’re a perfect fit for our team.
Roles and Responsibilities (in order of importance)
- Design and build Blackline's cloud infrastructure platforms using infrastructure-as-code methods to accelerate software engineering and data science teams' ability to deliver new products and services
- Design and build cloud tools that democratize access to the cloud in order to accelerate software engineering and data science teams' ability to deliver new products and services
- Consume Blackline's centrally agreed hardened base OS images (Linux and Windows)
- Ensure compliance with centrally defined security and operational risk standards (e.g., network, firewall, OS, logging, monitoring, availability, resiliency)
- Build and support continuous integration (CI), continuous delivery (CD) and continuous testing activities
- Support non-functional requirements such as serviceability, supportability, logging, monitoring, and alerting
- Ensure good change management practice is implemented as specified by central standards
- Provide impact assessments, where requested, for changes proposed to the GCP core platform
Years of Experience in Related Field: Minimum 5 Years
Education: Master's degree preferred
Technical/Specialized Knowledge, Skills, and Abilities:
- Expert understanding of data principles and of platform and infrastructure-as-code concepts and techniques
- Strong understanding of Containers, Git, Jenkins, CI/CD and available tools
- Security and compliance experience, e.g., IAM and cloud compliance/auditing/monitoring tools
- Proficient with Terraform and a modern scripting language (preferably Python) for automation of build tasks.
- Experience with big data on GCP: BigQuery, Pub/Sub, Dataproc, Dataflow (nice to have)
- Experience with Relational Databases, NoSQL Databases and/or Big Data technologies (Nice to have)
- Experience with container orchestration technologies (Kubernetes, Mesos, Swarm, etc.) and deployment methodologies, configuration management tools (CI/CD, Chef, Puppet, Ansible, etc.), logging stacks (ELK), and monitoring tools (AppDynamics, etc.)
- Prototype, develop and apply software integrations based on user feedback.
- Implement automation tools and frameworks (Jenkins, CI/CD pipelines).
- Experience building a range of services with a cloud service provider (ideally GCP)
- Able to apply approaches such as risk management, clustering, load balancing, and failover
- Conduct system tests for security, performance, and availability.
- Good interpersonal and communication skills, Ability to build strong relationships with Application teams, cross functional IT and global/local IT teams
- A track record of constantly looking for ways to do things better, and an excellent understanding of the mechanisms necessary to successfully implement change
- A history of setting and achieving challenging short-, medium-, and long-term goals that exceed the standards in the field
- Excellent written and spoken communication skills; an ability to communicate with impact, ensuring complex information is articulated in a meaningful way to wide and varied audiences
- Working knowledge of IP and storage networking, including SDN, Linux, application networking, DNS, SAN, and hybrid technologies
- Knowledge of networking principles and protocols such as IP subnetting, routing, firewall rules, Virtual Private Cloud, load balancers, Cloud DNS, Cloud CDN, etc.
- Motivation to enable and help others within the company be data-driven
- Demonstrable cloud service provider experience (ideally GCP): infrastructure builds and configuration of a variety of services, including Compute, Storage, and SDN (VPC and XPN)
- Experience working with continuous integration (CI), continuous delivery (CD), and continuous testing tools
- We run in Google Cloud and rely heavily on BigQuery, Cloud Storage, and our internal ETL frameworks to automate tasks; experience with these technologies is a plus
- Experience working within an Agile environment
- Automation scripting and configuration (using tools such as Terraform, Ansible, etc.)
- Server administration (either Linux or Windows)
- Ability to quickly acquire new skills and tools