OnlyDataJobs.com

OverDrive Inc.
  • Garfield Heights, OH

The Data Integration team at OverDrive provides data for other teams to analyze and build their systems upon. We are plumbers, building a company-wide pipeline of clean, usable data for others to use. We typically don’t analyze the data, but instead we make the data available to others. Your job, if you choose to join us, is to help us build a real-time data platform that connects our applications and makes available a data stream of potentially anything happening in our business.


Why Apply:


We are looking for someone who can help us wire up the next step and build something from the ground up (almost a green field): someone who can help us move large amounts of data from one team to the next and come up with ideas and solutions for how we look at data, using technologies like Kafka, Scala, Clojure, and F#.


About You:



  • You always keep up with the latest in distributed systems. You're extremely depressed each summer when the guy who runs highscalability.com hangs out the "Gone Fishin" sign.

  • You’re humble. Frankly, you’re in a supporting role. You help build infrastructure to deliver and transform data for others. (E.g., someone else gets the glory because of your effort, but you don’t care.)

  • You’re patient. Because nothing works the first time, when it comes to moving data around.

  • You hate batch. Real-time is your thing.

  • Scaling services is easy. You realize that the hardest part is scaling your data, and you want to help with that.

  • You think microservices should be event-driven. You prefer autonomous systems over tightly-coupled, time-bound synchronous ones with long chains of dependencies.


 Problems You Could Help Solve:



  • Help us come up with solutions around speeding up our process

  • Help us come up with ideas around making our indexing better

  • Help us create better ways to track all our data

  • If you like to solve problems and use cutting-edge technology, keep reading


 Responsibilities:



  • Implement near real-time ETL-like processes from hundreds of applications and data sources using the Apache Kafka ecosystem of technologies.

  • Design, develop, test, and tune a large-scale ‘stream data platform’ for connecting systems across our business in a decoupled manner.

  • Deliver data in near real-time from transactional data stores into analytical data stores.

  • R&D ways to acquire data and suggest new uses for that data.

  • “Stream processing.” Enable applications to react to, process and transform streams of data between business domains.

  • “Data Integration.” Capture application events and data store changes and pipe them to other interested systems (see the sketch below).
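
To make the stream-processing and data-integration bullets above concrete, here is a minimal sketch in Python (the team's stack is Scala on the JVM, so treat this purely as an illustration of the shape) of one hop in such a platform: consume events from one domain's Kafka topic, transform them, and publish them to another. The topic names, event fields, and broker address are hypothetical.

    # Minimal sketch of a stream-processing hop between two business domains.
    # Assumes a local Kafka broker and the kafka-python client; the topic names
    # and event fields ("orders.raw", "orders.enriched", "title_id") are hypothetical.
    import json
    from kafka import KafkaConsumer, KafkaProducer

    consumer = KafkaConsumer(
        "orders.raw",                          # hypothetical source topic
        bootstrap_servers="localhost:9092",
        group_id="order-enricher",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda d: json.dumps(d).encode("utf-8"),
    )

    for message in consumer:
        event = message.value
        # Augment the event for downstream consumers.
        event["normalized_title_id"] = str(event.get("title_id", "")).strip().lower()
        producer.send("orders.enriched", event)   # hypothetical destination topic

In the real platform this logic would more likely live in Kafka Streams or a Scala service, but the shape is the same: consume, transform, publish, and let downstream systems subscribe on their own schedule.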


 Experience/Skills: 



  • Comfortable with functional programming concepts. While we're not writing strictly functional code, experience with languages like Scala, Haskell, or Clojure will make working with streaming data easier.

  • Familiarity with the JVM.  We’re using Scala with a little bit of Java and need to occasionally tweak the performance settings of the JVM itself.

  • Familiarity with C# and the .Net framework is helpful. While we don’t use it day to day, most of our systems run on Windows and .Net.

  • Comfortable working in both Linux and Windows environments. Our systems all run on Linux, but we interact with many systems running on Windows servers.

  • Shell scripting & common Linux tool skills.

  • Experience with build tools such as Maven, sbt, or rake.

  • Knowledge of distributed systems.

  • Knowledge of, or experience with, Kafka a plus.

  • Knowledge of Event-Driven/Reactive systems.

  • Experience with DevOps practices like Continuous Integration, Continuous Deployment, Build Automation, Server automation and Test Driven Development.


 Things You Dig: 



  • Stream processing tools (Kafka Streams, Storm, Spark, Flink, Google Cloud DataFlow etc.)

  • SQL-based technologies (SQL Server, MySQL, PostgreSQL, etc.)

  • NoSQL technologies (Cassandra, MongoDB, Redis, HBase, etc.)

  • Server automation tools (Ansible, Chef, Puppet, Vagrant, etc.)

  • Distributed Source Control (Mercurial, Git)

  • The Cloud (Azure, Amazon AWS)

  • The ELK Stack (Elasticsearch, Logstash, Kibana)


What’s Next:


As you’ve probably guessed, OverDrive is a place that values individuality and variety. We don’t want you to be like everyone else, we don’t even want you to be like us—we want you to be like you! So, if you're interested in joining the OverDrive team, apply below, and tell us what inspires you about OverDrive and why you think you are perfect for our team.



OverDrive values diversity and is proud to be an equal opportunity employer.

Amiseq Inc.
  • Atlanta, GA

Amiseq is looking for an experienced Google Cloud professional to be part of its Cloud practice. You will work on the industry-leading Google Cloud Platform (GCP) to design, develop, and implement next-generation marketing solutions leveraging cloud-native and commercial/open-source Big Data technologies.


    • Certified Google Cloud Data Engineer
    • 5+ years in a technical management and thought leadership role, delivering success in complex data analytics environments and managing stakeholder relationships to reach consensus on solutions
    • 5+ years of design and/or implementation of highly distributed applications
    • 2+ years' experience migrating on-premise workloads to one or more industry-leading public clouds
    • Demonstrated experience of designing and building Big Data solutions on GCP stack leveraging BigQuery, Dataproc, Dataflow, BigTable, Data Prep, etc.
    • 5+ years of experience using Python for building data pipelines
    • Experience in AGILE development, SCRUM and Application Lifecycle Management (ALM) with programming experience in one or more of the following areas: Java, Python, PHP, Perl
    • Technical architectural and development experience on Massively Parallel Processing technologies, such as Hadoop, Spark, Teradata, and Netezza
    • Technical architectural and development experience on one or more Data Integration technologies, such as Ab Initio, Talend, Pentaho, Informatica, Data Stage, Map Reduce
    • Technical architectural experience on Data Visualization technologies, such as Tableau, Qlik, etc.
    • Technical architectural experience on data modeling, designing data structures for business reporting
    • Deep understanding of Advanced Analytics, such as predictive, prescriptive, etc.
    • Working knowledge of cloud components: Software design and development, Systems Operations / Management, Database architecture, Virtualization, IP Networking, Storage, IT Security
    • Technical prowess and passion, especially for public cloud and modern application design practices and principles. Certifications on Cloud Platform preferred.
    • Oversight experience on major transformation projects and successful transitions to operations support teams
    • Presentation skills, with a high degree of comfort presenting to both large and small audiences
FreshBooks
  • Toronto, ON, Canada

FreshBooks has a big vision. We launched in 2003 but we’re just getting started and there’s a lot left to do. We're a high performing team working towards a common goal: building an extraordinary online accounting application to help small businesses better handle their finances. Known for extraordinary customer service and based in Toronto, Canada, FreshBooks serves paying customers in over 120 countries.


The Opportunity:


FreshBooks is seeking a data engineer to join our team. You will help build new features and update existing ones in our current data pipeline infrastructure. If you’re committed to great work and are constantly looking for ways to improve the systems you’re responsible for, we’d love to chat with you!


What you'll do:



  • Collaborate with data engineers and full-stack developers on cross-functional agile teams working on features for our stakeholders.

  • Work closely with our analytics, data science, and product teams to ensure their data needs are met.

  • Participate and share your ideas in technical design and architecture discussions.

  • Ship your code with our continuous integration process.

  • Provide coaching to junior data engineers and share and learn from your peers.

  • Develop your craft and build your expertise in data engineering.

What you bring:

  • Enthusiasm for data engineering!

  • Experience creating and/or maintaining data pipelines, from batch job to real time implementations.

  • Experience with at least one main development language. You’ll almost certainly work closely with Python code. If you have a stronger track record in a different language -- or, better yet, multiple different languages -- that’s great, and we’ll expect you to demonstrate to us that you can learn to produce well-factored idiomatic Python code.

  • Experience with MySQL, Postgres, or similar storage technologies.

  • A strong understanding of test-driven (and behavioural test-driven) development, and of building substantially complete test code around the code that you write, not just for the happy path (see the test sketch after this list).

  • Experience working with large codebases and writing robust and testable code.

  • Familiarity with continuous integration and automated build pipelines.

  • Experience using GitHub: reviewing code and PRs, and merging branches.

  • Experience working in an Agile environment.

  • The ability to balance the desire to ship code quickly to our internal customers with the responsibility of making good technical decisions.
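
As a flavour of the testing style described above, here is a minimal pytest sketch that covers failure paths as well as the happy path. The parse_amount helper and its contract are hypothetical and exist only for illustration.

    # test_transforms.py -- a minimal pytest sketch of testing beyond the happy path.
    # The parse_amount helper and its contract are hypothetical.
    import pytest

    def parse_amount(raw: str) -> int:
        """Convert a string dollar amount like '12.50' to integer cents."""
        value = float(raw)          # raises ValueError on non-numeric input
        if value < 0:
            raise ValueError("amounts must be non-negative")
        return round(value * 100)

    def test_parse_amount_happy_path():
        assert parse_amount("12.50") == 1250

    def test_parse_amount_rejects_negative_values():
        with pytest.raises(ValueError):
            parse_amount("-1.00")

    def test_parse_amount_rejects_garbage():
        with pytest.raises(ValueError):
            parse_amount("not a number")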


What you might bring:



  • A track record of staying on the forefront of data engineering technology.

  • Experience with AWS, or another major cloud provider such as Google Cloud Platform.

  • Experience developing and/or managing real time data pipelines and fast data architectures.

  • Experience with BI tools: Periscope, Looker.

  • Experience with Redshift, Big Query, or similar cloud storage technologies.

  • Experience with other modern storage technologies such as Cassandra, MongoDB, and others.

  • Experience with Spark, Kafka, Flink, Gearpump, Dataflow, or other streaming technologies.

  • Experience with Docker, Kubernetes, Ansible, and Terraform, and other DevOps and infrastructure as code technologies.

  • Experience with machine learning technologies, techniques, and algorithms, especially operationalizing machine learning models.

  • A limitless imagination for where data could go and what we can do with it to make our customers and our people awesome!


Why Join Us?


Have we got your attention? Submit your application today and a member of our recruitment team will be in touch with you shortly!


FreshBooks is an equal opportunity employer that embraces the differences in all of our employees. We celebrate diversity and are committed to creating an inclusive environment for all FreshBookers. All applicants are evaluated based on their experience and qualifications in relation to this position.


FreshBooks provides employment accommodation during the recruitment process. Should you require any accommodation, please indicate this on your application and we will work with you to meet your accessibility needs. For any questions, suggestions or required documents regarding accessibility in a different format, please contact us at phone 416-780-2700 and/or accessibility@freshbooks.com

Colaberry Data Analytics
  • Boston, MA
  • Salary: $125k - 140k

Work Overview 
Ag --> AgTech [An industry transformed well beyond molecules and chemicals] 
Building solutions for a sustainable future that will include a global population projected to eclipse 9.6 billion by 2050. We approach agriculture holistically, looking across a broad range of solutions from using biotechnology and plant breeding to produce the best possible seeds, to advanced predictive and prescriptive analytics designed to select the best possible crop system for every acre. 


To make this possible, the company collects terabytes of data across all aspects of its operations, from genome sequencing, crop field trials, manufacturing, supply chain, and financial transactions to everything in between. There is an enormous need and potential here to do something that has never been done before. We need great people to help transform these complex scientific datasets into innovative software that is deployed across the pipeline, accelerating the pace and quality of all crop system development decisions to unbelievable levels.

Role Overview:
Automate the flow of scientific data from legacy systems, augment the data, and serve it via a RESTful API. A Kafka client consumes a change-event topic published from the legacy systems, so new scientific data is processed within minutes, keeping these comprehensive services up to date in near real time and putting more than 12 billion marker calls at your fingertips (see the sketch after the tech overview below).


Tech Overview: 
- Go [Golang]
- Protocol buffers and gRPC
- Experience with Google Cloud Platform, Apache Beam and/or Google Cloud Dataflow, protocol buffers and gRPC, and Google Kubernetes Engine [GKE] or Kubernetes
- Experience working with scientific datasets, or a background in applying quantitative science to business problems
- Experience building and maintaining data-intensive APIs using a RESTful approach
- Experience with stream processing using Apache Kafka
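
As an illustration only (the production stack described above is Go with gRPC), here is a minimal Python sketch of the flow: a Kafka consumer reads the change-event topic published by the legacy system, augments each record, and a small REST endpoint serves the result. The topic, field names, and augmentation step are hypothetical, and the in-memory store stands in for a real database.

    # Minimal sketch: consume legacy change events and serve augmented records over REST.
    # Topic, field names, and the augmentation are hypothetical; the production code here is Go.
    import json
    import threading
    from flask import Flask, jsonify, abort
    from kafka import KafkaConsumer

    app = Flask(__name__)
    markers = {}  # in-memory store; a real service would use a proper database

    def consume_changes():
        consumer = KafkaConsumer(
            "legacy.marker-changes",               # hypothetical change-event topic
            bootstrap_servers="localhost:9092",
            value_deserializer=lambda b: json.loads(b.decode("utf-8")),
        )
        for message in consumer:
            event = message.value
            event["source"] = "legacy"             # hypothetical augmentation
            markers[event["marker_id"]] = event

    @app.route("/markers/<marker_id>")
    def get_marker(marker_id):
        record = markers.get(marker_id)
        if record is None:
            abort(404)
        return jsonify(record)

    if __name__ == "__main__":
        threading.Thread(target=consume_changes, daemon=True).start()
        app.run(port=8080)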

ITV
  • London, UK

We are looking for a Java Engineer to join our rapidly growing team within ITV Technology. Our strategy is to continue to grow our successful Data Engineering capability, as we understand its significance and importance to our business and to the experience of millions of customers every day.


The ideal candidate will have a strong background in software engineering with a keen interest in learning how to build large scale data processing systems. The role will focus on designing, building, and running data handling systems across ITV Online products.


What You’ll Do


Working within the Data Engineering team you will:



  • Work with stakeholders to understand their problems, and build solutions using cloud services and custom development

  • Build large scale data processing systems using cutting edge technologies

  • Work with huge datasets and very high volumes of data from our various apps and sites

  • Be a part of cross-functional product teams and contribute to their strategy and direction

  • Practice and demonstrate high-quality software engineering methods

  • Solve challenging business and engineering problems

  • Promote enhanced ways of working within the team and across the company

  • Maintain high data quality standards through automated testing

  • Support running systems in production

  • Champion new products and technologies


What You’ll Bring



  • Experience of Java software engineering

  • A good understanding of Agile software engineering principles

  • Experience working with public cloud services - GCP, AWS, etc

  • Exposure to streaming data processing tools, e.g. Dataflow, Spark, Storm, etc.

  • A basic understanding of data analysis

  • Familiarity with Unix based operating systems and common command line tools

  • Experience running production systems at large scale

  • A desire to learn new things and share with the team


About Us


ITV Online operates the award-winning ITV Hub, one of the UK’s leading VOD (video on demand) platforms, with over 27m registered users. We also operate a range of other programme apps and websites. We deliver the most popular television from the UK’s largest commercial broadcaster online, with a focus on the user experience. We’re all about pushing the boundaries and being innovative.


What we can offer


We offer a competitive salary and 5 weeks’ holiday on top of public holidays, plus the option to buy more. Other benefits include an annual bonus plan, flexible working, life assurance cover, interest-free season-ticket loans, an opportunity to buy ITV shares, and the chance to join pension, health insurance, and cycle-to-work schemes.


We continually invest in our staff and offer a range of in-house training, external courses and attendance at conferences and events.


The role is based in our Gray’s Inn Road office in London.

Nihaki
  • Philadelphia, PA

1. Hortonworks Data Platform (HDP) based on Apache Hadoop, Apache Hive, Apache Spark

2. Hortonworks DataFlow (HDF) based on Apache NiFi, Apache Storm, Apache Kafka

3. Experience with high-scale, distributed development (hands-on experience with REST/JSON or XML/SOAP based APIs / web services)

4. Apache Phoenix, MariaDB

  • 8+ years of relevant work experience as a Hadoop Big Data Engineer / Developer
  • Strong understanding of Java development, debugging, and profiling
  • Experience configuring, administering, and working with Hortonworks Data Platform (HDP) based on Apache Hadoop, Apache Hive, Apache Spark
  • Experience configuring, administering, and working with Hortonworks DataFlow (HDF) based on Apache NiFi, Apache Storm, Apache Kafka
  • Experience designing and deploying large-scale production Hadoop solutions
  • Ability to understand and translate customer requirements into technical requirements
  • Experience installing and administering multi-node Hadoop clusters
  • Strong experience implementing software and/or solutions in enterprise Linux or Unix environments
  • Experience designing data queries against data in a Hadoop environment using tools such as Apache Hive, Apache Phoenix, MariaDB, or others
  • Experience with DevOps processes using Git and Maven, and job scheduling using Control-M or Oozie
  • Ability to develop architecture standards, best practices, and design patterns
  • Significant previous work writing to network-based APIs, preferably REST/JSON or XML/SOAP
  • Solid background in database administration and design, along with data modeling with star schemas, slowly changing dimensions, and/or data capture
  • Demonstrated experience implementing big data use cases and understanding of standard design patterns commonly used in Hadoop-based deployments
  • Excellent verbal and written communication
  • Ability to drive projects with customers to successful completion
  • Skillful in writing and producing technical documentation and knowledge base articles
  • Experience contributing to the pre- and post-sales process, helping sales and product teams to interpret customers' requirements
  • Keep current with the Hadoop Big Data ecosystem technologies

Coolblue
  • Rotterdam, Netherlands
You support our Business Analysis and Data Science teams through your technical expertise and passion for data.

What you tell people at parties

'I am a data hacker: I make Coolblue a little bit more data-driven every day.'

What you really do

  • You actively coach the people around you and help them build scalable Data Pipelines.
  • You provide technical support to our Business Analysts, who transform the data your team provides into actionable insights for the whole company. Your technical expertise will make Coolblue smarter by supporting a data-driven decision-making process.

How you do it

  • You help develop data pipelines in Python
  • You are an all-rounder in a Scrum team. You will have your own specialization, but you will also be able to perform all other tasks within the team.
  • You prioritize your own work together with your team and Product Owner.
  • You will receive immediate feedback from Business Analysts and you will have a lot of opportunities to experiment.
  • You coach and provide feedback to your team members.
  • You use the right tools for whatever job is thrown at you. Airflow, Spark, and BigQuery are technologies we are already using (see the sketch below).
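
As a flavour of the tooling mentioned in the last bullet, here is a minimal Airflow sketch of a daily pipeline, assuming Airflow 2.x. The DAG id, task functions, and the "sales_orders" dataset are hypothetical.

    # Minimal Airflow DAG sketch of a daily extract-and-load pipeline.
    # The DAG id, task functions, and dataset are hypothetical.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract_orders(**context):
        # Pull the day's orders from the source system (placeholder).
        print("extracting orders for", context["ds"])

    def load_to_bigquery(**context):
        # Load the extracted data into a BigQuery table (placeholder).
        print("loading sales_orders for", context["ds"])

    with DAG(
        dag_id="daily_sales_orders",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
        load = PythonOperator(task_id="load_to_bigquery", python_callable=load_to_bigquery)
        extract >> load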

Team

You'll work in a Scrum team of four to six members, formed mostly of Python developers with a passion for data. The team is managed by a Team Lead. The team's backlog is curated by Product Owner Jessica and Scrum Master Sonja. Your team works closely with Business Analysts and a team of Data Scientists, providing them with data structures they can easily use to support company decisions.

A day at the office

Natalia sends you a pull request to speed up the transfer of data from the ERP system using a new messaging system. You review it together and then merge the code. After that, you join Jessica, the Product Owner, who wants to discuss the requirements for the new process that will provide new insights about our customer journey. Just before lunch, you pass by Matthias, a data scientist who is working on a new machine learning algorithm that can forecast warehouse stock. You help him move some data to BigQuery and invite him to have lunch with your team. On the way back, you share your concerns about the Google AdWords ingest process with Rafael: it's losing records, so you want to refactor some of the Python code. You add a few automated tests, send the pull request to your teammate Soumya for review, and at 16:30 the improvements are automatically deployed via our continuous integration pipeline. By 17:30, you're done for the day and, together with your team members, head to the bar for a drink.

What we're asking

  • You have a minimum of five years' experience as a software developer.
  • You are familiar with databases, ETL and data processing frameworks (Spark, Dataflow, Dataproc etc.) and Big Data on Google Cloud Platform.
  • You are familiar with continuous integration, logging and monitoring.
  • You think about the process and implementation from end to end.
  • Agile and flexibility come naturally to you.
  • You're willing to investigate what users want and why.
  • You have experience with helping others improve their craftsmanship and technical skills.
  • You're familiar with continuous integration systems that automate your code deployments.
  • You're willing to relocate to Rotterdam, or nearby.
MATRIX Resources
  • Houston, TX
Our client in Houston, TX is seeking a Data Engineer of EDW to join their team. The Data Engineer of EDW is a hands-on individual contributor who is equally comfortable working side-by-side with team members to deliver solutions and discussing strategy with executive leadership. This person actively participates in thought leadership around the EDW, data architecture, and integration, and will collaborate with the team as we move toward new and emerging data stores to support real-time analytics and machine learning.
    • Take ownership of the design, development, and support of smaller product modules required for the organization's cloud migration strategy.
    • Implement a real-time streaming data ingestion and processing pipeline using Google Dataflow / Apache Kafka / Google Pub/Sub (Apache Beam) or a related technology (see the sketch after this list)
    • Interface with business intelligence analysts and others in IT (i.e. data engineers, architects, WebOps) in frequent whiteboard sessions to discuss the design, implementation, and testing of data pipelines
    • Maintain data architecture standards and ETL/ELT best practices consistent with a column oriented data store in an analytic use case
    • Architect, plan, and implement a highly available, scalable, and adaptable end-to-end data environment that addresses longer-term business needs.
    • Design, develop, and re-engineer processes to improve performance, organizational effectiveness, and system/service quality.
    • Keep company leaders proactively informed of important developments and potential problems.
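
As a sketch of the streaming pipeline mentioned in the first bullets above, here is a minimal Apache Beam (Python SDK) pipeline that reads events from Pub/Sub, windows them, and writes to BigQuery. The subscription, table, schema, and field names are hypothetical, and the runner would be chosen via pipeline options (e.g. Dataflow).

    # Minimal Apache Beam (Python SDK) streaming sketch: Pub/Sub -> window -> BigQuery.
    # The subscription, table, and schema are hypothetical.
    import json
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions
    from apache_beam.transforms import window

    options = PipelineOptions(streaming=True)   # add --runner=DataflowRunner etc. as needed

    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
                  subscription="projects/my-project/subscriptions/events-sub")
            | "Parse" >> beam.Map(lambda b: json.loads(b.decode("utf-8")))
            | "Window" >> beam.WindowInto(window.FixedWindows(60))
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                  "my-project:analytics.events",
                  schema="event_id:STRING,payload:STRING,ts:TIMESTAMP",
                  write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND)
        )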

Requirements
    • Experience building real-time streaming data ingestion and processing pipelines using Apache Beam (running on Google Dataflow or on Apache Apex, Flink, or Spark) or Kafka in an analytics or data science use case
    • Experience with data processing tools (e.g. Hadoop, Spark, Dataflow, etc.)
    • Experience building ETL/ELT pipelines
    • Experience with column-oriented databases (e.g., Redshift, BigQuery, Vertica)
    • Rock-star ability to write effective, performance-tuned, optimized SQL on different platforms.
    • Strong programming ability in Python, PySpark, Apache Beam, Pig, etc.
    • Excellent Bash or shell scripting experience; able to automate processes for Google Cloud using command-line tools wrapped in shell scripts.
    • Understanding of Oozie, ZooKeeper, Apache Airflow, or Splunk for scheduling and log processing.
    • Willingness to explore and implement new ideas and technologies
    • 6+ years of experience working directly with subject matter experts in both business and technology domains
    • 6+ years of experience with a modern programming language, e.g., Python
    • 2+ years of experience with Apache Beam executed on Apex, Flink, Spark, or Google Dataflow
    • Master's degree in Computer Science, Management Information Systems, Statistics, or a related field, or equivalent work experience.
    • Strong knowledge of end-to-end data warehouse development life cycle (data integration, logical and physical modeling, and data delivery) supporting enterprise analytics and BI solutions.
    • Hands-on knowledge of enterprise repository tools, data modeling tools, data mapping tools, and data profiling tools.
    • Ability to manage data and metadata migration. Understanding of Web services (SOAP, XML, REST, UDDI, WSDL) and integrating with our existing data environment.

Saicon Technologies
  • Austin, TX

Client Name: Home Depot

Job Title: Data Engineer

Location: Austin, TX

Duration: 12+ Months

#No of Position: 6

Required:

  • 6 or more years of Development Experience. 2+ Years in Hadoop Environment.
  • Ability to code data pipelines, preferably in Hive; also open to data pipelines in Java, Python, Spark, or Scala
  • At least 50% of time spent on data pipeline development, including job development and orchestration
  • Not so much a Hadoop engineer as a data engineer: someone who uses either Python or Java to build data pipelines
  • Moves data in and out of a big data environment for marketing analytics

Nice to Haves:

  • Familiarity with cloud environments, preferably Google Cloud
  • Knowledge of Google data products a big plus (Dataflow, Dataproc, BigQuery, Pub/Sub)
  • Other similar technologies that are good to know: Apache NiFi, Apache Airflow, Apache Beam, Hadoop
MailChimp
  • Atlanta, GA
Mailchimp is a leading marketing platform for small business. We empower millions of customers around the world to build their brands and grow their companies with a suite of marketing automation, multichannel campaign, CRM, and analytics tools.
At Mailchimp, Software Data Engineers design, build, and maintain systems that process the streams of data created by Mailchimp's millions of users. We work closely with Mailchimp's Development and Data System teams to help them access and manage this high-volume, high-velocity data.
We're looking for a Data Engineer / Software Engineer to join our ranks. This role sits at the nexus of MailChimp's Development, Operations, and Data Science teams, bridging the gap between data storage and data analysis by creating systems to facilitate engineers' access to the data they need. Are you a self-directed, experienced engineer who enjoys the challenge of balancing performance and complexity inherent in distributed systems and data processing at large scale? If so, read on!
Our ideal candidate will combine deep technical skills with a strong desire to support the rest of the organization with a data infrastructure that meets their needs. We are looking for an enthusiastic and effective collaborator, eager to work across the organization.
What You'll Do
    • Design, build, deploy and maintain high quality data pipelines, large scale highly parallelized batch processing jobs.
    • Build and deploy the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and big data technologies (see the sketch after this list)
    • Identify, design, and implement internal process improvements: automating manual processes, process monitoring, optimizing data delivery, building deployment pipelines, re-designing infrastructure for greater scalability, etc.
    • Create data tools that assist the data science team members in optimizing our products and services
    • Champion appropriate data governance, security, privacy, quality, and retention policies within the organization
    • Help scope, estimate, and prioritize work in an environment of open discussion
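
As a small illustration of the extract-and-load work described above, here is a hedged sketch using the google-cloud-bigquery Python client to load newline-delimited JSON from Cloud Storage into a table. The bucket, project, dataset, and table names are hypothetical.

    # Minimal sketch: load newline-delimited JSON from GCS into BigQuery.
    # The bucket, project, dataset, and table names are hypothetical.
    from google.cloud import bigquery

    client = bigquery.Client()

    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON,
        autodetect=True,                      # infer the schema from the data
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    )

    load_job = client.load_table_from_uri(
        "gs://example-bucket/events/2024-01-01/*.json",   # hypothetical source files
        "example-project.analytics.events",               # hypothetical destination table
        job_config=job_config,
    )
    load_job.result()   # block until the load job completes
    print("Loaded", load_job.output_rows, "rows")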

We're open to a variety of experience levels for this role, from mid-level to quite senior. You will, though, need a minimum of 2 years of experience in data software engineering to find success in the role.
We'd love to hear from you if...
    • You enjoy working independently with appropriate support
    • Are passionate about building and deploying systems at scale and eager to track developments in your field
    • Have strong operational and development experience using Google Cloud Platform, in particular BigQuery, Airflow, Dataflow/Beam, and Dataproc (Spark). Bonus points for experience with Apache Kafka, Storm, or similar streaming platforms
    • Are proficient in Python scripting and Web frameworks (e.g., Flask). Experience with JVM.
    • Have experience scaling and optimizing schemas and performance tuning SQL and ETL pipelines. Have strong operational and development experience of at least one RDBMS (MySQL, PostgreSQL, etc.)
    • Have experience working with other data search and processing systems, such as Elasticsearch, Hadoop, etc.
    • Have experience building deployment pipelines with Jenkins, Kubernetes.

Mailchimp is a founder-owned and highly profitable company headquartered in the heart of Atlanta. Our purpose is to empower the underdog, and our mission is to democratize cutting-edge marketing technology for small business. We offer our employees an exceptional workplace, extremely competitive compensation, fully paid benefits (for employees and their families), and generous profit sharing. We hire humble, collaborative, and ambitious people, and give them endless opportunities to grow and succeed.
We love our hometown and support sustainable urban renewal. Our headquarters is in the historic Ponce City Market, right on the Atlanta BeltLine. If you'd like to be considered for this position, please apply below. We look forward to meeting you!
Mailchimp is an equal opportunity employer, and we value diversity at our company. We don't discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
travel audience GmbH
  • Berlin, Germany
  • Salary: €60k - 75k

We are searching for a communicative, experienced and proactive Senior DevOps Engineer to join our fast-growing and diverse team of Engineers.


By joining us, you will combine your passion for technology with a direct impact on the lives of millions of travellers, while also helping travel audience become the global leader in data-driven advertising for the entire travel industry.


What you will do:



  • Drive the company-wide adoption of the cloud platform products and work closely on their implementation with the delivery teams;

  • Provide guidance on performance optimisation, help with the analysis and engage as the subject matter expert;

  • Participate in the analysis of new requirements and develop solutions and services to support the development teams.

  • Help to shape and execute the technical roadmap and strategy for the next generation of application features and cloud infrastructure in Google Cloud.

  • Develop systems automation and provisioning frameworks for multiple applications and environments.

  • Mentor, support and coach colleagues on tools, concepts and best practices.



You are who we are looking for if: 



  • You are familiar with distributed systems, their complexity and benefits, and also the trade-offs involved;

  • You like to code and automate as much as possible in an environment with Terraform, Kubernetes, Helm Charts and Golang applications;

  • You understand different database technologies and message queue patterns, and you know your way around BigQuery, Postgres, Redis, Aerospike, Kafka and Google Pub/Sub;

  • You like to monitor everything, and Prometheus and Grafana are your best pals (see the short sketch after this list). If you have used them in federated setups, please let us know;

  • You have worked in a production Kubernetes environment and you understand the concepts around overlay networks and Kubernetes Operators. Multi-region setup is our next big challenge!

  • You are on the front line of technology innovation and you love to transfer your knowledge, experience and best practices to other Engineers;

  • You have experience or interest in working with Google data solutions like Dataflow, Airflow or Apache Beam.
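
To give a minimal, hedged illustration of that "monitor everything" mindset, the sketch below exposes a counter with the prometheus_client Python library so a Prometheus server can scrape it; the metric name, label and port are invented for this example.

  import time

  from prometheus_client import Counter, start_http_server

  # Invented metric: records processed by a (hypothetical) ingest service.
  RECORDS = Counter("ingest_records_total",
                    "Records processed by the ingest service",
                    ["source"])

  if __name__ == "__main__":
      start_http_server(8000)   # metrics exposed at http://localhost:8000/metrics
      while True:
          RECORDS.labels(source="adwords").inc()   # count one processed record
          time.sleep(1)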


Why join us?


As part of our team, you will work in a highly motivated environment with flat hierarchies and short decision-making processes. You’ll have a lot of freedom to contribute your own ideas, implement them and work with a modern tech stack. We offer you:



  • The opportunity to lead and develop a team while truly having an impact on the business;

  • A fast-paced industry where you handle new problems every day;

  • An environment where you are encouraged to research, explore and try new ways of doing things;

  • The opportunity to work with large amounts of data;

  • An open and dynamic start-up culture that supports great work-life balance.


For any queries, you can reach out to us at jobs@travelaudience.com
We are awaiting your application and looking forward to starting our journey together!
