OnlyDataJobs.com

Cloudreach
  • Atlanta, GA

Big dreams often start small. From an idea in a London pub, we have grown into a global cloud enabler operating across 7 countries and speaking over 30 languages.


Our purpose at Cloudreach is to enable innovation. We do this by helping enterprise customers adopt and harness the power of cloud computing. We believe that the growth of a great business can only be fuelled by great people, so join us in our partnership with AWS, Microsoft and Google and help us build one of the most disruptive companies in the cloud industry. It's not your average job, because Cloudreach is not your average company.


What does the Cloud Enablement team do?

Our Cloud Enablement team provides consultative, architectural, program and engineering support for our customers' journeys to the cloud. The word 'Enablement' was chosen carefully, to encompass the idea that we support and encourage a collaborative approach to Cloud adoption: sharing best practices, helping change team culture and providing strategic support to ensure success.


How will you spend your days?

    • Build technical solutions required for optimal ingestion, transformation, and loading of data from a wide variety of data sources using open source, AWS, Azure or GCP big data frameworks and services.
    • Work with the product and software team to provide feedback surrounding data-related technical issues and support for data infrastructure needs uncovered during customer engagements / testing.
    • Understand and formulate processing pipelines of large, complex data sets that meet functional / non-functional business requirements.
    • Create and maintain optimal data pipeline architecture.
    • Work alongside the Cloud Architect and Cloud Enablement Manager to implement Data Engineering solutions.
    • Collaborate with the customer's data scientists and data stewards/governors during workshop sessions to uncover more detailed business requirements related to data engineering.
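The ingestion, transformation and loading duties above can be sketched in miniature. The following is a hypothetical, illustrative pipeline in plain Python (the function names, field names and data are invented for illustration; a real engagement would use Spark, Glue, Dataflow or similar cloud services):

```python
# Illustrative ETL sketch: extract raw records, transform them to meet
# business requirements, and load them into a sink. All names and data
# are hypothetical.

def extract(source):
    """Pull raw records from a source (here, an in-memory list)."""
    return iter(source)

def transform(records):
    """Normalize amounts and drop incomplete records."""
    for rec in records:
        if rec.get("amount") is None:
            continue  # drop records that fail validation
        yield {"id": rec["id"], "amount": round(float(rec["amount"]), 2)}

def load(records, sink):
    """Write transformed records to a sink (a list standing in for a warehouse)."""
    for rec in records:
        sink.append(rec)
    return sink

raw = [{"id": 1, "amount": "19.991"}, {"id": 2, "amount": None}, {"id": 3, "amount": "5"}]
warehouse = []
load(transform(extract(raw)), warehouse)
print(warehouse)  # the incomplete record (id 2) is dropped
```

The same extract → transform → load shape carries over when the in-memory list is replaced by S3, Kafka or a relational source and the sink by a warehouse such as Redshift or BigQuery.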


What kind of career progression can you expect?

    • You can grow into a Cloud Data Engineer Lead or a Cloud Data Architect
    • There are opportunities for relocation to our other cloudy hubs

How to stand out?

    • Experience in building scalable end-to-end data ingestion and processing solutions
    • Good understanding of data infrastructure and distributed computing principles
    • Proficient at implementing data processing workflows using Hadoop and frameworks such as Spark and Flink
    • Good understanding of data governance and how regulations such as GDPR and PCI can impact data storage and processing solutions
    • Ability to identify and select the right tools for a given problem, such as knowing when to use a relational or non-relational database
    • Working knowledge of non-relational and row/columnar based relational databases
    • Experience with Machine Learning toolkits
    • Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala
    • This position requires travel of up to 70% (M-F) in any given week, with an average of 50% per year


What are our cloudy perks?

      • A MacBook Pro and iPhone or Google Pixel (your pick!)
      • Unique cloudy culture -- we work hard and play hard
      • Uncapped holidays and your birthday off
      • World-class training and career development opportunities through our own Cloudy University
      • Centrally-located offices
      • Fully stocked kitchen and team cloudy lunches, taking over a restaurant or two every Friday
      • Office amenities like pool tables and Xbox on the big screen TV
      • Working with a collaborative, social team, and growing your skills faster than you will anywhere else
      • Quarterly kick-off events with the opportunity to travel abroad
      • Full benefits and 401k match


    If you want to learn more, check us out on Glassdoor. The question is not if, but when you will join Cloudreach.

Cloudreach
  • Atlanta, GA



Mission:

The purpose of a Cloud Data Architect is to design solutions that enable data scientists and analysts to gain insights into data using data-driven, cloud-based services and infrastructure. At Cloudreach, they are subject matter experts, responsible for stakeholder management and technical leadership on data ingestion and processing engagements. A good understanding of cloud platforms and prior experience working with big data tooling and frameworks are required.


What will you do at Cloudreach?

  • Build technical solutions required for optimal ingestion, transformation, and loading of data from a wide variety of data sources using open source, AWS, Azure or GCP big data frameworks and services.
  • Work with the product and software team to provide feedback surrounding data-related technical issues and support for data infrastructure needs uncovered during customer engagements / testing.
  • Understand and formulate processing pipelines of large, complex data sets that meet functional / non-functional business requirements.
  • Create and maintain optimal data pipeline architecture
  • Work alongside Cloud Data Engineers, Cloud System Developers and the Cloud Enablement Manager to implement Data Engineering solutions
  • Collaborate with the customer's data scientists and data stewards during workshop sessions to uncover more detailed business requirements related to data engineering
  • This position requires travel of up to 70% (M-F) in any given week, with an average of 50% per year


What do we look for?

The Cloud Data Architect has extensive experience working with big data tools and supporting cloud services, a pragmatic mindset focused on translating functional and non-functional requirements into viable architectures, and ideally a consultancy background leading a highly skilled team on engagements that implement complex and innovative data solutions for clients.


In addition, the Cloud Data Architect thrives in a collaborative and agile environment with an ability to learn new concepts easily.


  • Technical skills:
    • Experience in building scalable end-to-end data ingestion and processing solutions
    • Good understanding of data infrastructure and distributed computing principles
    • Proficient at implementing data processing workflows using Hadoop and frameworks such as Spark and Flink
    • Good understanding of data governance and how regulations such as GDPR and PCI can impact data storage and processing solutions
    • Ability to identify and select the right tools for a given problem, such as knowing when to use a relational or non-relational database
    • Working knowledge of non-relational and row/columnar based relational databases
    • Experience with Machine Learning toolkits
    • Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala
    • Demonstrable working experience
      • A successful history of manipulating, processing and extracting value from large disconnected datasets
      • Delivering production scale data engineering solutions leveraging one or more cloud services
      • Confidently taking responsibility for the technical output of a project
      • Ability to quickly pick up new skills and learn on the job
      • Comfortably working with various stakeholders such as data scientists, architects and other developers
    • Solid communication skills: You can clearly articulate the vision and confidently communicate with all stakeholder levels: Cloudreach, Customer, 3rd Parties and Partners - both verbal and written. You are able to identify core messages and act quickly and appropriately.


    What are our cloudy perks?

    • A MacBook Pro and smartphone.
    • Unique cloudy culture -- we work hard and play hard.
    • Uncapped holidays and your birthday off.
    • World-class training and career development opportunities through our own Cloudy University.
    • Centrally-located offices.
    • Fully stocked kitchen and team cloudy lunches, taking over a restaurant or two every Friday.
    • Office amenities like pool tables and Xbox on the big screen TV.
    • Working with a collaborative, social team, and growing your skills faster than you will anywhere else.
    • Full benefits and 401k match.
    • Kick-off events at cool locations throughout the country.
Expedia Group - Europe
  • London, UK

Expedia needs YOU!


At Hotels.com (part of the Expedia Group), ensuring that our customers find the perfect hotel is an exciting mission for our technology teams.

We are looking for a Software Development Engineer to join our awesome team of bright minds. We are building Hotels.com's data streaming and ingestion platform, reliably handling many thousands of messages every second.

Our platform enables data producers and consumers to easily share and process information. We have built and continue to improve our platform using open source projects such as: Kafka, Kubernetes and Java Spring, all running in the cloud, and we love contributing our work back to the open source community.
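As an illustration of the producer/consumer decoupling such a platform provides, here is a minimal sketch in plain Python, with an in-memory queue standing in for a Kafka topic (all names and events here are hypothetical; the actual platform described above is built on Kafka, Kubernetes and Java Spring):

```python
# Illustrative sketch of producer/consumer decoupling on a streaming
# platform. A thread-safe queue stands in for a Kafka topic; the
# producer and consumer run independently and only share the "topic".
import queue
import threading

topic = queue.Queue()  # stands in for a Kafka topic

def producer(events):
    """Publish events to the topic, then a sentinel marking end of stream."""
    for event in events:
        topic.put(event)
    topic.put(None)  # sentinel (a real stream has no end)

def consumer(out):
    """Consume and process messages until the sentinel arrives."""
    while True:
        msg = topic.get()
        if msg is None:
            break
        out.append(msg.upper())  # per-message processing step

events = ["booking", "search", "click"]
processed = []
p = threading.Thread(target=producer, args=(events,))
c = threading.Thread(target=consumer, args=(processed,))
p.start(); c.start()
p.join(); c.join()
print(processed)  # messages arrive in publish order on a single partition
```

In a real deployment the queue becomes a partitioned, replicated Kafka topic, which is what lets many producers and consumers share data reliably at thousands of messages per second.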


Who we're looking for:


Are you interested in building a world-class, fast-growing data platform in the cloud?



  • Are you keen to work with and contribute to open source projects?

  • Do you have a real passion for clean code and finding elegant solutions to problems?

  • Are you eager to learn streaming, cloud and big data technologies and keep your skills up to date?

  • If any of those are true... Hotels.com is looking for you!

  • We seek an enthusiastic, experienced and reliable engineer who enjoys getting things done.

  • You should have good communication skills and be equally comfortable clearly explaining technical problems and solutions with analysts, engineers and product managers.

  • We welcome your fresh ideas and approaches as we constantly aim to improve our development methodologies.

  • Our team has experience using a wide range of cutting edge technologies and years of streaming, cloud and big data experience.

  • We are always learning and growing, so we guarantee that you won't be bored with us!


We don’t believe in skill matching against a list of buzzwords…

However we do believe in hiring smart, friendly and creative people, with excellent programming abilities, who are on a journey to mastery through craftsmanship. We believe great developers can learn new technologies quickly and effectively but it wouldn't hurt if you have experience with some of the following (or a passion to learn them):

Technologies:
Kafka, Kubernetes, Spring, AWS, Spark Streaming, Hive, Flink, Docker.

Experience:
Modern core and server side Java (concurrency, streams, reactive, lambdas).
Microservice architecture, design, and standard methodologies with an eye towards scale.
Building and debugging highly scalable performant systems.
Actively contributing to Open Source Software.


What you’ll do:



  • Write clean, efficient, thoroughly tested code, backed-up with pair programming and code reviews.

  • Much of our code is Java, but we use all kinds of languages and frameworks.

  • Be part of an agile team that is continuously learning and improving.

  • Develop scalable and highly-performant distributed systems with everything this entails (availability, monitoring, resiliency).

  • Work with our business partners to flesh out and deliver on requirements in an agile manner.

  • Evolve development standards and practices.

  • Take architectural ownership for various critical components and systems.

  • Proactive problem solving at the organization level.

  • Communicate and document solutions and design decisions.

  • Build bridges between technical teams to enable valuable collaborations.


As a team we love to:



  • Favor clean code, and simple, robust architecture.

  • Openly share knowledge in order to grow and develop our team members.

  • Handle massive petabyte-scale data sets.

  • Host and attend meetups and conferences to present our work. This year we've presented at the Dataworks Summit in Berlin and the Devoxx Conference in London.

  • Contribute to Open Source. In recent years our team created Circus Train, Waggle Dance, BeeJU, CORC, Plunger, Jasvorno and contributed to Confluent, aws-alb-ingress-controller, S3Proxy, Cascading, Hive, HiveRunner, Kafka and Terraform. In addition, we are currently working towards open sourcing our streaming platform.

  • Create an inclusive and fun workplace.


Do all of this in a comfortable, modern office with a massive roof terrace in a great location in central London!


We’ll take your career on a journey that’s flexible and right for you; recognizing and rewarding your achievements:

A conversation around flexible working and what will best fit you is encouraged at Hotels.com.
Competitive salaries and many growth opportunities within the wider global Expedia Group.
Option to attend conferences globally and enrich the technology skills you are passionate about.
Cash and Stock rewards for outstanding performance.
Extensive travel rewards and discounts for all employees, perfect for ticking some destinations off your bucket list!


We believe that a diverse and inclusive workforce is the most awesome workforce…
We believe in being Different. We seek new ideas, different ways of thinking, diverse backgrounds and approaches, because averages can lie and conformity is dangerous.
Expedia is committed to crafting an inclusive work environment with a diverse workforce. You will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.

Why join us:
Expedia Group recognizes our success is dependent on the success of our people.  We are the world's travel platform, made up of the most knowledgeable, passionate, and creative people in our business.  Our brands recognize the power of travel to break down barriers and make people's lives better – that responsibility inspires us to be the place where exceptional people want to do their best work, and to provide them the tools to do so. 


Whether you're applying to work in engineering or customer support, marketing or lodging supply, at Expedia Group we act as one team, working towards a common goal; to bring the world within reach.  We relentlessly strive for better, but not at the cost of the customer.  We act with humility and optimism, respecting ideas big and small.  We value diversity and voices of all volumes. We are a global organization but keep our feet on the ground so we can act fast and stay simple.  Our teams also have the chance to give back on a local level and make a difference through our corporate social responsibility program, Expedia Cares.


If you have a hunger to make a difference with one of the most loved consumer brands in the world and to work in the dynamic travel industry, this is the job for you.


Our family of travel brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Egencia®, trivago®, HomeAway®, Orbitz®, Travelocity®, Wotif®, lastminute.com.au®, ebookers®, CheapTickets®, Hotwire®, Classic Vacations®, Expedia® Media Solutions, CarRentals.com™, Expedia Local Expert®, Expedia® CruiseShipCenters®, SilverRail Technologies, Inc., ALICE and Traveldoo®.


We’re excited for you to make Expedia an even more awesome place to work!
So what are you waiting for? Apply now and join us on our journey to become the #1 travel company in the world!



Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.

KI labs GmbH
  • München, Germany
  • Salary: €70k - 95k

About Us


At KI labs we design and build state-of-the-art software and data products and solutions for the major brands of Germany and Europe. We aim to push the status quo of technology in corporations, with special focus on software, data and culture. Inside, we are a team of software developers, designers, product managers and data scientists who are passionate about building the products of the future, today. We believe in open source and independent teams, follow Agile practices and the lean startup method, and aim to share this culture with our clients. We are mainly located in Munich and, more recently, Lisbon.


Your Responsibilities



  • Lead and manage a team of talented Data Engineers and Data Scientists in a diverse environment.

  • Ensure success of your team members and foster their career growth.

  • Ensure that the technology stacks, infrastructure, software architecture and development methods we use provide for an efficient project delivery and sustainable company growth.

  • Use your extensive practical expertise to help teams design, build and scale efficient and elegant data products on cloud computing platforms such as AWS, Azure, or Google Cloud Platform.

  • Enable and facilitate a data-driven culture for internal and external clients, and use advanced data pipelines to generate insights;


Skills and qualifications



  • Advanced degree in Computer Science, Engineering, Data Science, Applied Mathematics, Statistics or related fields;

  • Substantial practical experience in software and data engineering.

  • You are an authentic leader, keen on leading by example rather than by direct top-down management;

  • You are committed to operational excellence and to facilitating team communication and collaboration on a daily basis.

  • You like to see the big picture and you have a deep understanding of the building blocks of large-scale data architectures and pipelines.

  • You have a passion for data, be it crunching, transforming or visualising large data streams;

  • You have working knowledge of programming languages such as Python, Java, Scala, or Go

  • You have working knowledge of modern big data tooling and frameworks (Hadoop, Spark, Flink, Kafka, etc.), data storage systems, analytics tools, and ideally machine learning platforms.


Why work with us



  • You will have an opportunity to be at the frontline of innovation together with our prominent clients, influencing the car you drive in five years, the services you have on your flight, and the way you pay for your morning coffee.

  • Working closely with our leadership team, you will have a real chance to influence what our quickly growing company looks like in 3 years.

  • You get a challenging working environment located in the center of Munich and Lisbon and an ambitious team of individuals with unique backgrounds and expertise.

  • You get the chance to work on various interesting projects in short time frames; we do not work on maintenance or linear projects.

  • We have an open-door work culture where ideas and initiatives are encouraged.

  • We offer a performance-based competitive salary.

Findify
  • Stockholm, Sweden

As part of our engineering team you will be responsible for building our product, an advanced machine learning algorithm within search personalization for e-commerce. As demand for our product continues to increase, we are on a journey to grow the team substantially in 2019. We’d love for you to join us.



About the role:


You will be part of a small team that moves fast and iterates. We do weekly sprints, code reviews, testing, and put a lot of emphasis on code style, cleanliness and robustness. You will get to work with amazing engineers specializing in machine learning and distributed systems.



Your responsibilities include:



  • Managing and improving Findify’s data pipelines - a crucial responsibility for us as Findify collects millions of data points every day to feed our machine learning algorithms

  • Enhancing some of the critical components of our system to successfully integrate our customers’ products and improve our search capabilities

  • Actively contributing to the overall design of our infrastructure and the application of our product vision



About you:


You are a creative problem-solver with passion for programming and building scalable architectures.



You are:



  • Initiative-taking; you are self-motivated, a doer, and can drive projects from start to finish

  • A team-player; you are comfortable working with different styles and believe (like us) that together we achieve much more than alone

  • Driven; you work hard to achieve goals you care about, and you are no stranger to running several projects in parallel

  • A great communicator; you are comfortable in communicating in English both written and oral, including explaining your technical development to internal and external partners

  • Located between time zones GMT and GMT+3



You have:



  • A BSc or MSc in Computer Science or related technical discipline

  • At least 3 years of Scala work experience

  • Experience with relational database systems such as PostgreSQL

  • Familiarity with Akka Stream, Akka HTTP or Flink

  • Experience with Git

  • Working knowledge with Linux/Unix



We’d be extra impressed if you also have:



  • Experience with AWS / key-value databases such as Cassandra / data-mining / machine learning / search frameworks such as Lucene / e-commerce platforms

  • Dev-ops skills

  • Experience in working in/with remote teams

  • Experience in working in agile/lean methodologies

  • A side project or blog that showcases your passion



We believe that the more inclusive we are, the better products we build and the better we are able to serve our customers. Women and other minorities under-represented in tech are strongly encouraged to apply.

FlixBus
  • Berlin, Germany

Your Tasks – Paint the world green



  • Holistic cloud-based infrastructure automation

  • Distributed data processing clusters as well as data streaming platforms based on Kafka, Flink and Spark

  • Microservice platforms based on Docker

  • Development infrastructure and QA automation

  • Continuous Integration/Delivery/Deployment


Your Profile – Ready to hop on board



  • Experience in building and operating complex infrastructure

  • Expert-level: Linux, System Administration

  • Experience with Cloud Services, Expert-Level with either AWS or GCP  

  • Experience with server and operating-system-level virtualization is a strong plus, in particular practical experience with Docker and cluster technologies like Kubernetes, AWS ECS and OpenShift

  • Mindset: "Automate Everything", "Infrastructure as Code", "Pipelines as Code", "Everything as Code"

  • Hands-on experience with "Infrastructure as Code" tools: Terraform, CloudFormation, Packer

  • Experience with provisioning / configuration management tools (Ansible, Chef, Puppet, Salt)

  • Experience designing, building and integrating systems for instrumentation, metrics/log collection, and monitoring: CloudWatch, Prometheus, Grafana, DataDog, ELK

  • At least basic knowledge in designing and implementing Service Level Agreements

  • Solid knowledge of Network and general Security Engineering

  • At least basic experience with systems and approaches for Test, Build and Deployment automation (CI/CD): Jenkins, TravisCI, Bamboo

  • At least basic hands-on DBA experience, experience with data backup and recovery

  • Experience with JVM-based build automation is a plus: Maven, Gradle, Nexus, JFrog Artifactory

American Express
  • Phoenix, AZ

Our Software Engineers not only understand how technology works, but how that technology intersects with the people who count on it every single day. Today, creative ideas, insight and new points of view are at the core of how we craft a more powerful, personal and fulfilling experience for all our customers. So if you're passionate about a career building breakthrough software and making an impact on an audience of millions, look no further.

There are hundreds of chances for you to make your mark on Technology and life at American Express. Here's just some of what you'll be doing:

    • Take your place as a core member of an Agile team driving the latest application development practices.
    • Find your opportunity to execute new technologies, write code and perform unit tests, as well as work with data science, algorithms and automation processing.
    • Engage your collaborative spirit by collaborating with fellow engineers to craft and deliver recommendations to Finance, Business, and Technical users on Finance Data Management.


Qualifications:

  

Are you up for the challenge?


    • 4+ years of Software Development experience.
    • BS or MS Degree in Computer Science, Computer Engineering, or other Technical discipline including practical experience effectively interpreting Technical and Business objectives and challenges and designing solutions.
    • Ability to effectively collaborate with Finance SMEs and partners at all levels to understand their business processes and take overall ownership of analysis, design, estimation and delivery of technical solutions for Finance business requirements and roadmaps, including a deep understanding of Finance and other LOB products and processes. Experience with regulatory reporting frameworks is preferred.
    • Hands-on expertise with application design and software development across multiple platforms, languages, and tools: Java, Hadoop, Python, Streaming, Flink, Spark, HIVE, MapReduce, Unix, NoSQL and SQL Databases is preferred.
    • Working SQL knowledge and experience with relational databases and query authoring (SQL), including working familiarity with a variety of databases (DB2, Oracle, SQL Server, Teradata, MySQL, HBase, Couchbase, MemSQL).
    • Experience in architecting, designing, and building customer dashboards with data visualization tools such as Tableau using the Jethro accelerator database.
    • Extensive experience in application, integration, system and regression testing, including demonstration of automation and other CI/CD efforts.
    • Experience with version control software such as Git and SVN, and CI/CD testing/automation experience.
    • Proficient with Scaled Agile application development methods.
    • Deals well with ambiguous/under-defined problems; Ability to think abstractly.
    • Willingness to learn new technologies and exploit them to their optimal potential, including substantiated ability to innovate and take pride in quickly deploying working software.
    • Ability to enable business capabilities through innovation is a plus.
    • Ability to get results with an emphasis on reducing time to insights and increased efficiency in delivering new Finance product capabilities into the hands of Finance constituents.
    • Focuses on the Customer and Client with effective consultative skills across a multi-functional environment.
    • Ability to communicate effectively verbally and in writing, including effective presentation skills. Strong analytical skills, problem identification and resolution.
    • Delivering business value using creative and effective approaches
    • Possesses strong business knowledge about the Finance organization, including industry standard methodologies.
    • Demonstrates a strategic/enterprise viewpoint and business insights with the ability to identify and resolve key business impediments.


Employment eligibility to work with American Express in the U.S. is required as the company will not pursue visa sponsorship for these positions.

SafetyCulture
  • Surry Hills, Australia
  • Salary: A$120k - 140k

The Role



  • Be an integral member of the team responsible for designing, implementing and maintaining a distributed, big-data-capable system with high-quality components (Kafka, EMR + Spark, Akka, etc.).

  • Embrace the challenge of dealing with big data on a daily basis (Kafka, RDS, Redshift, S3, Athena, Hadoop/HBase), perform data ETL, and build tools for proper data ingestion from multiple data sources.

  • Collaborate closely with data infrastructure engineers and data analysts across different teams to find bottlenecks and solve problems

  • Design, implement and maintain the heterogeneous data processing platform to automate the execution and management of data-related jobs and pipelines

  • Implement automated data workflows in collaboration with data analysts, and continue to maintain and improve the system in line with growth

  • Collaborate with Software Engineers on application events, ensuring the right data can be extracted

  • Contribute to resource management for computation and capacity planning

  • Dive deep into code and constantly innovate


Requirements



  • Experience with AWS data technologies (EC2, EMR, S3, Redshift, ECS, Data Pipeline, etc) and infrastructure.

  • Working knowledge of big data frameworks such as Apache Spark, Kafka, Zookeeper, Hadoop, Flink, Storm, etc.

  • Rich experience with Linux and database systems

  • Experience with relational and NoSQL databases, query optimization, and data modelling

  • Familiar with one or more of the following: Scala/Java, SQL, Python, Shell, Golang, R, etc

  • Experience with container technologies (Docker, k8s), Agile development, DevOps and CI tools.

  • Excellent problem-solving skills

  • Excellent verbal and written communication skills 

Mix.com
  • Phoenix, AZ

Are you interested in scalability & distributed systems? Do you want to help shape a discovery engine powered by cutting-edge technologies and machine learning at scale? If you answered yes to the above questions, Mix's Research and Development team is for you!


In this role, you'll be part of a small and innovative team comprised of engineers and data scientists working together to understand content by leveraging machine learning and NLP technologies. You will have the opportunity to work on core problems like detection of low quality content or spam, text semantic analysis, video and image processing, content quality assessment and monitoring. Our code operates at massive scale, ingesting, processing and indexing millions of URLs.



Responsibilities

  • Write code to build infrastructure capable of scaling with load
  • Collaborate with researchers and data scientists to integrate innovative Machine Learning and NLP techniques with our serving, cloud and data infrastructure
  • Automate build and deployment process, and setup monitoring and alerting systems
  • Participate in the engineering life-cycle, including writing documentation and conducting code reviews


Required Qualifications

  • Strong knowledge of algorithms, data structures, object oriented programming and distributed systems
  • Fluency in an OO programming language such as Scala (preferred), Java, C or C++
  • 3+ years demonstrated expertise in stream processing platforms like Apache Flink, Apache Storm and Apache Kafka
  • 2+ years experience with a cloud platform like Amazon Web Services (AWS) or Microsoft Azure
  • 2+ years experience with monitoring frameworks, and analyzing production platforms, UNIX servers and mission critical systems with alerting and self-healing systems
  • Creative thinker and self-starter
  • Strong communication skills


Desired Qualifications

  • Experience with Hadoop, Hive, Spark or other MapReduce solutions
  • Knowledge of statistics or machine learning
Ripple
  • San Francisco, CA
  • Salary: $135k - 185k

Ripple is the world’s only enterprise blockchain solution for global payments. Today the world sends more than $155 trillion* across borders. Yet, the underlying infrastructure is dated and flawed. Ripple connects banks, payment providers, corporates and digital asset exchanges via RippleNet to provide one frictionless experience to send money globally.


Ripple is growing rapidly and we are looking for a results-oriented and passionate Senior Software Engineer, Data to help build and maintain infrastructure and empower the data-driven culture of the company. Ripple’s distributed financial technology outperforms today’s banking infrastructure by driving down costs, increasing processing speeds and delivering end-to-end visibility into payment fees, timing, and delivery.


WHAT YOU’LL DO:



  • Support our externally-facing data APIs and applications built on top of them

  • Build systems and services that abstract the underlying engines, allowing users to focus on business and application logic via higher-level programming models

  • Build data pipelines and tools to keep pace with the growth of our data and its consumers

  • Identify and analyze requirements and use cases from multiple internal teams (including finance, compliance, analytics, data science, and engineering); work with other technical leads to design solutions for the requirements
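A "higher-level programming model" of the kind described above can be sketched in a few lines of Python. This is a hypothetical illustration, not Ripple's actual API; the `Pipeline` class and its methods are invented for the example:

```python
class Pipeline:
    """Users chain transformations; the underlying 'engine'
    (here, plain Python iteration) stays hidden behind this interface."""

    def __init__(self, source):
        self.source = source
        self.steps = []  # deferred transformation steps

    def map(self, fn):
        self.steps.append(lambda items: (fn(x) for x in items))
        return self

    def filter(self, pred):
        self.steps.append(lambda items: (x for x in items if pred(x)))
        return self

    def run(self):
        # Only here does anything execute; a real engine could swap in
        # distributed execution without changing user code.
        items = iter(self.source)
        for step in self.steps:
            items = step(items)
        return list(items)

# Users write business logic only; no engine details leak through.
fees = Pipeline([10, 250, 42]).filter(lambda v: v > 20).map(lambda v: v * 2).run()
# -> [500, 84]
```

Spark's RDDs and Beam's PTransforms follow the same pattern: lazy composition up front, engine-specific execution behind one `run`-style call.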


WHAT WE’RE LOOKING FOR:



  • Deep experience with distributed systems, distributed data stores, data pipelines, and other tools in cloud service environments (e.g. AWS, GCP)

  • Experience with distributed processing compute engines like Hadoop, Spark, and/or GCP data ecosystems (BigTable, BigQuery, Pub/Sub)

  • Experience with stream processing frameworks such as Kafka, Beam, Storm, Flink, Spark streaming

  • Experience building scalable backend services and data pipelines

  • Proficient in Python, Java, or Go

  • Able to support Node.js in production

  • Familiarity with Unix-like operating systems

  • Experience with database internals, database design, SQL and database programming

  • Familiarity with distributed ledger technology concepts and financial transaction/trading data

  • You have a passion for working with great peers and motivating teams to reach their potential

mbr targeting GmbH
  • Berlin, Germany
  • Salary: €60k - 75k

Join our team as Senior Data Engineer!


Why?



  • We have lots of the data you love

  • We have a big and shiny cluster

  • We're a small team: great power and great responsibility for everyone!

  • Bullshit free: no QA, no in-house sales, no scrum, no finger pointing, not that many managers

  • We're smart and wanna become smarter


YOU will …



  • do some of that typical Data Engineering ETL stuff

  • manage our warehouse infrastructure (Kafka, Hadoop, HBase)

  • work with a lot of different languages and technologies (Scala, Python, Java, Spark, Flink, Hive)

  • help our Data Scientists with their fancy Machine Learning Magic

  • improve our legacy code and make it 10x faster

  • come up with great ideas and introduce new frameworks and languages to our stack

  • learn a lot from your colleagues
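The "typical Data Engineering ETL stuff" mentioned above boils down to three stages. A toy sketch in plain Python (the field names and the in-memory "warehouse" are invented for illustration; the real thing would read from Kafka and write to Hadoop/HBase):

```python
def extract(rows):
    # Extract: parse raw CSV-like strings into fields.
    return [line.split(",") for line in rows]

def transform(records):
    # Transform: strip whitespace, normalise the country code to upper case.
    return [{"user": r[0].strip(), "country": r[1].strip().upper()}
            for r in records]

def load(records, warehouse):
    # Load: append to an in-memory list standing in for the warehouse.
    warehouse.extend(records)
    return len(records)

warehouse = []
loaded = load(transform(extract(["alice, de", "bob, us"])), warehouse)
# -> loaded == 2
```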

inovex GmbH
  • München, Germany

As a Linux Systems Engineer focusing on Hadoop and Search, you will be responsible for designing, installing and configuring Linux-based big data clusters for our customers. Your tasks also include assessing existing big data systems and extending existing environments in a future-proof way.

You look after these systems holistically, supporting them from the Linux operating system all the way up to the big data stack. To automate the often complex big data clusters, you preferably use configuration management tools.

In our interdisciplinary project teams you play a formative role, and you often have the freedom to decide which tools to use.


To fill this position, we are looking for experts with the following skills and qualities:



  • A successfully completed degree in a computer science-related subject, or a comparable qualification such as vocational training as an IT specialist (Fachinformatiker), plus relevant professional experience

  • Passion and enthusiasm for new technologies and topics around Linux and big data

  • Hands-on experience with Hadoop and common Hadoop ecosystem tools, as well as initial experience with Hadoop security

  • Ideally, you have already gained practical experience with one or more of the following technologies or products:

    • Flume, Kafka

    • Flink, Hive, Spark

    • Cassandra, Elasticsearch, HBase, MongoDB, CouchDB

    • Amazon EMR, Cloudera, Hortonworks, MapR

    • Java



  • Good knowledge of networking and storage

  • Knowledge of a configuration management tool (e.g. Puppet, Chef or Salt) is a plus

  • Good communication skills and very good written and spoken German and English

  • High motivation to achieve excellent project results together with your fellow "inovexperts"

  • Mobility and flexibility for on-site project work at our customers

Accenture
  • Atlanta, GA
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation. You can create the best customer experiences and be a catalyst for first-to-market digital products and solutions. Using advanced analytics concepts like machine learning, AI, big data, deep analytics, cloud, mobility, robotics and IoT, your contribution will redefine the way entire industries work in every corner of the globe.
Accenture Digital's Applied Intelligence delivers insight-driven outcomes at scale to help organizations improve performance. You will be part of Accenture's pivot into the New as a Data Scientist with the Applied Intelligence Human Resources Centre of Excellence. You will work with a team that identifies and develops advanced analytics, statistical models, and machine learning methods and solutions for Accenture Human Resources to improve various business outcome indicators. You will also support project-based analytics planning and implementation in areas such as Predictive Analytics, Program Evaluation, Digital Analytics, Scheduling, Demand Forecasting and Fulfilment, HR Transformation, Talent Acquisition and Talent Supply Chain.
Role Description
A professional in this position, at this level within Accenture, has the following responsibilities:
    • Is well versed and experienced in Advanced Analytics and Program Delivery
    • Identifies, assesses and solves complex business problems for area of responsibility, where analysis of situations or data requires an in-depth evaluation of variable factors
    • Closely follows the strategic direction set by executive management when establishing near term goals
    • Interacts with senior and executive management at a client and/or within Accenture on matters where they may need to gain acceptance on an alternate approach
    • Acts independently to determine methods and procedures on new assignments.
    • Responsible for decisions related to the day to day impact on area of responsibility
    • Manages large - medium sized teams and/or work efforts at a client or within Accenture
    • Understands and helps influence the client strategic direction and helps architect complex solutions
    • Leads a center of excellence in applied intelligence focused on Human Resources within Accenture.
    • Successfully develop, conceptualize, test and scale various statistical and machine learning models
    • Follow multiple approaches for project execution from adapting existing assets to analytics use cases, exploring third-party and open source solutions for speed to execution and for specific use cases, and engaging in fundamental research to develop novel solutions
    • Leverage the vast global network of Accenture to collaborate with Accenture Tech Labs, Accenture Open Innovation and Accenture Operations for creating solutions
    • Collaborate with other data scientists, subject matter experts, sales, and delivery teams from Accenture locations around the globe to deliver strategic advanced analytics projects from design to execution
Basic Qualifications
These are the minimum requirements for a candidate to be considered for this position:
    • Bachelor's degree in data science, mathematics, economics, statistics, engineering, information management or a related field of study
    • Minimum 7 years of experience in data science and use of statistical methodologies
    • Minimum 5 years of experience developing machine learning methods, including familiarity with techniques in clustering, regression, optimization, recommendation, neural networks, and others
    • 7 years of experience in at least one of the following: Supervised and Unsupervised Learning, Classification Models, Cluster Analysis, Neural Networks, Non-parametric Methods, Multivariate Statistics, Reliability Models, Markov Models, Stochastic Models, Bayesian Models, Deep Learning, Genetic Algorithms, Fuzzy Logic, Inference Systems
    • 7 years working and conceptual knowledge and experience with data science tools, including Python, R, Scala, Julia, or SAS
    • 7 years building and maintaining a large-scale analytics infrastructure used across the business, including research, design, implementation and validation of cutting-edge algorithms to analyze diverse data sources
    • 7 years technical project management of data science driven projects and data science professionals developing and delivering machine learning models that work in a production setting
Preferred Qualifications
    • Preferred Masters or Ph.D. (Computer Science, Statistics, Engineering, Physics, Mathematics, Economics, Industrial/Organizational Psychology or Social Science)
    • Working and conceptual knowledge in relevant domains (Human Resources, Talent Acquisition, Talent Development, Talent Supply Chain), including hands-on experience supporting data-driven decisions
    • HR Certifications
    • Familiarity with relational databases and intermediate level knowledge of SQL
    • Knowledge of UNIX or Linux environments
    • Experience working with large data sets and tools like MapReduce, Hadoop, Hive, etc.
    • Experience working with large data streaming technologies like Spark, Flink, etc.
    • Proficient verbal, written and presentation skills
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture.
Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration.
Accenture is a federal contractor and an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law.
Job candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.
Accenture
  • Houston, TX
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation. You can create the best customer experiences and be a catalyst for first-to-market digital products and solutions. Using advanced analytics concepts like machine learning, AI, big data, deep analytics, cloud, mobility, robotics and IoT, your contribution will redefine the way entire industries work in every corner of the globe.
Accenture Digital's Applied Intelligence delivers insight-driven outcomes at scale to help organizations improve performance. You will be part of Accenture's pivot into the New as a Data Scientist with the Applied Intelligence Human Resources Centre of Excellence. You will work with a team that identifies and develops advanced analytics, statistical models, and machine learning methods and solutions for Accenture Human Resources to improve various business outcome indicators. You will also support project-based analytics planning and implementation in areas such as Predictive Analytics, Program Evaluation, Digital Analytics, Scheduling, Demand Forecasting and Fulfilment, HR Transformation, Talent Acquisition and Talent Supply Chain.
Role Description
A professional in this position, at this level within Accenture, has the following responsibilities:
    • Is well versed and experienced in Advanced Analytics and Program Delivery
    • Identifies, assesses and solves complex business problems for area of responsibility, where analysis of situations or data requires an in-depth evaluation of variable factors
    • Closely follows the strategic direction set by executive management when establishing near term goals
    • Interacts with senior and executive management at a client and/or within Accenture on matters where they may need to gain acceptance on an alternate approach
    • Acts independently to determine methods and procedures on new assignments.
    • Responsible for decisions related to the day to day impact on area of responsibility
    • Manages large - medium sized teams and/or work efforts at a client or within Accenture
    • Understands and helps influence the client strategic direction and helps architect complex solutions
    • Leads a center of excellence in applied intelligence focused on Human Resources within Accenture.
    • Successfully develop, conceptualize, test and scale various statistical and machine learning models
    • Follow multiple approaches for project execution from adapting existing assets to analytics use cases, exploring third-party and open source solutions for speed to execution and for specific use cases, and engaging in fundamental research to develop novel solutions
    • Leverage the vast global network of Accenture to collaborate with Accenture Tech Labs, Accenture Open Innovation and Accenture Operations for creating solutions
    • Collaborate with other data scientists, subject matter experts, sales, and delivery teams from Accenture locations around the globe to deliver strategic advanced analytics projects from design to execution
Basic Qualifications
These are the minimum requirements for a candidate to be considered for this position:
    • Bachelor's degree in data science, mathematics, economics, statistics, engineering, information management or a related field of study
    • Minimum 7 years of experience in data science and use of statistical methodologies
    • Minimum 5 years of experience developing machine learning methods, including familiarity with techniques in clustering, regression, optimization, recommendation, neural networks, and others
    • 7 years of experience in at least one of the following: Supervised and Unsupervised Learning, Classification Models, Cluster Analysis, Neural Networks, Non-parametric Methods, Multivariate Statistics, Reliability Models, Markov Models, Stochastic Models, Bayesian Models, Deep Learning, Genetic Algorithms, Fuzzy Logic, Inference Systems
    • 7 years working and conceptual knowledge and experience with data science tools, including Python, R, Scala, Julia, or SAS
    • 7 years building and maintaining a large-scale analytics infrastructure used across the business, including research, design, implementation and validation of cutting-edge algorithms to analyze diverse data sources
    • 7 years technical project management of data science driven projects and data science professionals developing and delivering machine learning models that work in a production setting
Preferred Qualifications
    • Preferred Masters or Ph.D. (Computer Science, Statistics, Engineering, Physics, Mathematics, Economics, Industrial/Organizational Psychology or Social Science)
    • Working and conceptual knowledge in relevant domains (Human Resources, Talent Acquisition, Talent Development, Talent Supply Chain), including hands-on experience supporting data-driven decisions
    • HR Certifications
    • Familiarity with relational databases and intermediate level knowledge of SQL
    • Knowledge of UNIX or Linux environments
    • Experience working with large data sets and tools like MapReduce, Hadoop, Hive, etc.
    • Experience working with large data streaming technologies like Spark, Flink, etc.
    • Proficient verbal, written and presentation skills
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture.
Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration.
Accenture is a federal contractor and an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law.
Job candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.
Accenture
  • Dallas, TX
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation. You can create the best customer experiences and be a catalyst for first-to-market digital products and solutions. Using advanced analytics concepts like machine learning, AI, big data, deep analytics, cloud, mobility, robotics and IoT, your contribution will redefine the way entire industries work in every corner of the globe.
Accenture Digital's Applied Intelligence delivers insight-driven outcomes at scale to help organizations improve performance. You will be part of Accenture's pivot into the New as a Data Scientist with the Applied Intelligence Human Resources Centre of Excellence. You will work with a team that identifies and develops advanced analytics, statistical models, and machine learning methods and solutions for Accenture Human Resources to improve various business outcome indicators. You will also support project-based analytics planning and implementation in areas such as Predictive Analytics, Program Evaluation, Digital Analytics, Scheduling, Demand Forecasting and Fulfilment, HR Transformation, Talent Acquisition and Talent Supply Chain.
Role Description
A professional in this position, at this level within Accenture, has the following responsibilities:
    • Is well versed and experienced in Advanced Analytics and Program Delivery
    • Identifies, assesses and solves complex business problems for area of responsibility, where analysis of situations or data requires an in-depth evaluation of variable factors
    • Closely follows the strategic direction set by executive management when establishing near term goals
    • Interacts with senior and executive management at a client and/or within Accenture on matters where they may need to gain acceptance on an alternate approach
    • Acts independently to determine methods and procedures on new assignments.
    • Responsible for decisions related to the day to day impact on area of responsibility
    • Manages large - medium sized teams and/or work efforts at a client or within Accenture
    • Understands and helps influence the client strategic direction and helps architect complex solutions
    • Leads a center of excellence in applied intelligence focused on Human Resources within Accenture.
    • Successfully develop, conceptualize, test and scale various statistical and machine learning models
    • Follow multiple approaches for project execution from adapting existing assets to analytics use cases, exploring third-party and open source solutions for speed to execution and for specific use cases, and engaging in fundamental research to develop novel solutions
    • Leverage the vast global network of Accenture to collaborate with Accenture Tech Labs, Accenture Open Innovation and Accenture Operations for creating solutions
    • Collaborate with other data scientists, subject matter experts, sales, and delivery teams from Accenture locations around the globe to deliver strategic advanced analytics projects from design to execution
Basic Qualifications
These are the minimum requirements for a candidate to be considered for this position:
    • Bachelor's degree in data science, mathematics, economics, statistics, engineering, information management or a related field of study
    • Minimum 7 years of experience in data science and use of statistical methodologies
    • Minimum 5 years of experience developing machine learning methods, including familiarity with techniques in clustering, regression, optimization, recommendation, neural networks, and others
    • 7 years of experience in at least one of the following: Supervised and Unsupervised Learning, Classification Models, Cluster Analysis, Neural Networks, Non-parametric Methods, Multivariate Statistics, Reliability Models, Markov Models, Stochastic Models, Bayesian Models, Deep Learning, Genetic Algorithms, Fuzzy Logic, Inference Systems
    • 7 years working and conceptual knowledge and experience with data science tools, including Python, R, Scala, Julia, or SAS
    • 7 years building and maintaining a large-scale analytics infrastructure used across the business, including research, design, implementation and validation of cutting-edge algorithms to analyze diverse data sources
    • 7 years technical project management of data science driven projects and data science professionals developing and delivering machine learning models that work in a production setting
Preferred Qualifications
    • Preferred Masters or Ph.D. (Computer Science, Statistics, Engineering, Physics, Mathematics, Economics, Industrial/Organizational Psychology or Social Science)
    • Working and conceptual knowledge in relevant domains (Human Resources, Talent Acquisition, Talent Development, Talent Supply Chain), including hands-on experience supporting data-driven decisions
    • HR Certifications
    • Familiarity with relational databases and intermediate level knowledge of SQL
    • Knowledge of UNIX or Linux environments
    • Experience working with large data sets and tools like MapReduce, Hadoop, Hive, etc.
    • Experience working with large data streaming technologies like Spark, Flink, etc.
    • Proficient verbal, written and presentation skills
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture.
Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration.
Accenture is a federal contractor and an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law.
Job candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.
InnoGames
  • Hamburg, Deutschland

Software Engineer - Big Data


Development Hamburg, Germany Full-Time


Join our agile, cross-functional development team to push forward our Big Data ambitions, using the Hadoop ecosystem and technologies like MapReduce, Flink, Hive and Kafka. You will design and implement systems in a distributed environment to prepare, provide and analyze huge amounts of data.


Your mission:



  • Real-time and near-real-time processing of our more than 1,500,000,000 daily game events

  • Preparing very large amounts of data (ETL) as a basis for further processing in other systems (e.g. Business Intelligence, CRM) using a range of technologies, including SQL (Hive), Flink and Kafka

  • Providing highly available, centralized systems and services (REST APIs) for our games, as well as solutions for subject-specific analysis requirements (e.g. multivariate testing), using technologies like Dropwizard or the Play framework.
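As a rough illustration of the near-real-time aggregation such event processing involves (a plain-Python sketch with invented event data, not the actual Flink/Kafka implementation):

```python
from collections import defaultdict

def aggregate_by_window(events, window_seconds=60):
    """Group game events into fixed, non-overlapping time windows and
    count events per (window, event_type) - the core of the tumbling-window
    aggregation that a framework like Flink performs at scale."""
    counts = defaultdict(int)
    for ts, event_type in events:
        window_start = ts - (ts % window_seconds)  # floor to window boundary
        counts[(window_start, event_type)] += 1
    return dict(counts)

# Hypothetical (timestamp-in-seconds, event_type) pairs:
events = [(3, "login"), (45, "purchase"), (61, "login")]
stats = aggregate_by_window(events)
# -> {(0, 'login'): 1, (0, 'purchase'): 1, (60, 'login'): 1}
```

At a billion and a half events per day the same logic runs continuously and distributed, but the windowing idea is unchanged.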



Your profile:



  • You have completed training as an IT specialist, hold a university degree in computer science, or have an equivalent qualification

  • Experience in object-oriented programming in Java and at least one other scripting or programming language (e.g. Bash, PHP, Scala, Go), ideally in an agile environment

  • You enjoy working with large amounts of data in distributed systems and you have very good knowledge of SQL

  • You know your way around the command line of UNIX operating systems

  • Ideally, you develop in a test-driven manner and use the advantages of a continuous integration server (e.g. Jenkins) for deployment

  • As the perfect candidate, you have already gained some experience with Big Data technologies, especially with the Hadoop ecosystem

  • You like to share a good laugh with your colleagues

  • Good English language skills complete your profile



Why join us?



  • Shape the success story of InnoGames with a great team of driven experts in an international culture

  • You will work in a multicultural Kanban-based team with daily stand ups and regular retrospectives

  • Work in a relaxed but solution-oriented and productive environment with clear goals and a focus on professional development

  • A great company culture: we strive for long-term, sustainable success. We give and accept feedback to improve ourselves and each other

  • Professional education through developer conferences, unlimited free textbooks and frequent specialist presentations from your colleagues

  • We have regular team events (e.g. curling, cooking, paintball), BBQ together on our roof terrace and have a team lunch each Wednesday

  • You can set up your workspace as you wish: Mac or Linux, two or more monitors, a free choice of IDE as well as other software

  • We pay a share of your local public transportation ticket and even contribute to your company retirement plan

  • Exceptional benefits ranging from flawless relocation support to company gym, smartphone or tablet of your own choice for personal use, roof terrace with BBQ and much more

  • We have fun at work!



Excited to start your journey with InnoGames and join our dynamic team as a Software Engineer – Big Data? We look forward to receiving your application as well as your salary expectations and earliest possible start date through our online application form. Isabella Dettlaff would be happy to answer any questions you may have.

InnoGames, based in Hamburg, is one of the leading developers and publishers of online games with more than 200 million registered players around the world. Currently, more than 400 people from 30 nations are working in the Hamburg-based headquarters. We have been characterized by dynamic growth ever since the company was founded in 2007. In order to further expand our success and to realize new projects, we are constantly looking for young talents, experienced professionals, and creative thinkers.

DIVERSANT, LLC
  • Atlanta, GA

Required Skills:

    • Experience designing and building highly scalable, distributed systems
    • Experience building Microservices and Event-Driven/Reactive Architectures
    • Extensive experience with JVM based applications
    • Experience using Spring framework and libraries
    • Deep understanding of REST based web services
    • Experience with at least one SQL Database (MySQL, Oracle, Postgres, etc) and at least one NoSQL Database (Mongo, Redis, Cassandra, etc)
    • Experience working in cloud environments (AWS preferred), preferably with cloud-native architectures
    • Experience working in an Agile environment


Nice to Have:

    • Experience maintaining legacy J2EE-based applications
    • Experience with functional (Haskell, Clojure) or hybrid OOP/Functional (Scala, Kotlin) languages
    • Knowledge of big data and/or stream processing technologies (Spark, Hadoop, Kafka Streams, Flink, etc)
Impetus
  • Phoenix, AZ
    • Experience writing and deploying production Cloud services. Most successful candidates will have 4+ years of Java or Scala experience.
    • API design and development for RESTful services
    • Knowledge/Experience of Big Data / Hadoop Technologies is a big plus
    • Ability to research and recommend the next generation of architectures and advancements. 
    • Built on AWS: a successful candidate will have experience building products on top of the AWS ecosystem, not just deploying a package to AWS.
    • Experience with S3, Dynamo, EC2, RDS, or Lambda.
    • Familiarity with data processing and analytics architectures/components such as Spark, Flink, Hadoop, Kafka, Kinesis or Elasticsearch is a plus.
Numbrs
  • Zürich, Switzerland

At Numbrs, our engineers don’t just develop things – we have an impact. We change how people manage their finances by building the best products and services for our users.

Numbrs engineers are innovators, problem-solvers, and hard-workers who are building solutions in big data, mobile technology and much more. We look for professional, highly skilled engineers who evolve, adapt to change and thrive in a fast-paced, value-driven environment.

Join our dedicated technology team that builds massively scalable systems, designs low latency architecture solutions and leverages machine learning technology to turn financial data into action. Want to push the limit of personal finance management? Join Numbrs.


Job Description


You will be a part of a team that is responsible for developing, releasing, monitoring and troubleshooting large scale micro-service based distributed systems with high transaction volume. You enjoy learning new things and are passionate about developing new features, maintaining existing code, fixing bugs, and contributing to overall system design. You are a great teammate who thrives in a dynamic environment with rapidly changing priorities.


All candidates will have



  • a Bachelor's or higher degree in technical field of study or equivalent practical experience

  • hands-on experience with highly concurrent production grade systems

  • knowledge of at least one modern programming language, such as Go, Java, C++ or Scala

  • excellent troubleshooting and creative problem-solving abilities

  • excellent written and oral communication and interpersonal skills


Ideally, candidates will also have



  • experience with systems for automating deployment, scaling and management of containerised applications, such as Kubernetes and Mesos

  • experience with big data technologies, such as Kafka, Spark, Storm, Flink and Cassandra

  • experience with encryption and cryptography standards


Location: Zurich, Switzerland

hipages
  • Sydney, Australia

We are on a mission to make home improvement effortlessly efficient. Our aim is to create a seamless experience for tradies and homeowners, in place of the current unreliable and unproductive process that makes it a feat of endurance.


We build technology that solves the frictions of an industry ready for optimisation, by redesigning the tradie/client relationship and transforming the way trade businesses operate.


To date, over two million Australians have changed the way they find, hire and manage trusted tradies to get a job done around their home.


Your opportunity:


As a Data Engineer you’ll be responsible for collecting, transforming, storing and processing a wide variety of data sets. You will work closely with our data architect, data scientists and data analysts, as well as key business stakeholders, to devise strategies, prioritise and deliver business value from our data assets.


You will be influential in defining the direction of the hipages marketplace as we continue to evolve, applying your experience to transform data into information, govern data quality and advance data-informed decision-making across the business. But most importantly, you’ll be joining a top-notch data science team!


If working in a fun, high-growth, fast-paced company with the opportunity to develop your career appeals to you, then read on.


What you'll be doing



  • Create, maintain, evolve and monitor hipages’ data pipeline.

  • Build the infrastructure required for optimal extraction, transformation and loading of data from a wide variety of sources and implement ETL processes.

  • Identify, design and implement internal process improvements such as automating manual tasks, optimizing data delivery and governance as well as evolving infrastructure for improved scalability.

  • Work with stakeholders including the Executive, Product and Data teams to assist with data-related technical issues and support their data infrastructure needs.

  • Collaborate with the analytics and data scientist team members to strive for greater functionality in our data systems and assist them in building and optimizing our product into an innovative industry leader.
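The extract, transform and load responsibilities above can be sketched as a minimal pipeline. Everything here is hypothetical: the record fields, the sample data and the `extract`/`transform`/`load` helpers are illustrations, not hipages' actual schema or stack.

```python
# Minimal ETL pipeline sketch. All field names and sample records are
# hypothetical illustrations of the extract -> transform -> load flow.

def extract():
    # In practice this would read from a queue, an API, or object storage.
    return [
        {"job_id": "1", "category": "Plumbing ", "quote": "150.00"},
        {"job_id": "2", "category": "electrical", "quote": "bad-data"},
    ]

def transform(records):
    # Clean and type-cast each record, dropping rows that fail validation.
    cleaned = []
    for rec in records:
        try:
            cleaned.append({
                "job_id": int(rec["job_id"]),
                "category": rec["category"].strip().lower(),
                "quote": float(rec["quote"]),
            })
        except (KeyError, ValueError):
            continue  # a real pipeline would route these to a dead-letter store
    return cleaned

def load(records, warehouse):
    # Append validated rows to the target store (here, just a list).
    warehouse.extend(records)

warehouse = []
load(transform(extract()), warehouse)
```

In production the same three stages would typically be wired up as tasks in a workflow scheduler such as Airflow (named later in this posting), with monitoring and retries around each stage.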


To be successful…


In addition to a down-to-earth attitude, a desire to continually master your craft and a good sense of humour, you will also bring:



  • 5+ years’ experience in Data Warehousing, Business Intelligence and Big Data processing

  • Experience in writing ETL transformations in Python and Scala and familiarity with modern workflow systems like Airflow.

  • Experience designing and implementing pipelines on large distributed data systems (Kinesis/Kafka, Spark/Beam/Flink, S3/HDFS, etc.)

  • Fluency in programming languages like Java / Scala / Python / Node.js

  • Experience in collecting, analysing, and synthesizing results from various data sources

  • Familiarity with scaling and deploying machine learning models

  • Strong sense of accountability and understanding of privacy around user data

  • Degree qualified in Computer Science, Information systems or similar

  • Experience supporting and working with cross-functional teams in a dynamic environment

  • Strong project management and organizational skills

  • Proven history of manipulating, processing and extracting value from large disconnected datasets using distributed systems

  • Expertise in build processes supporting data transformation, data structures, metadata, dependency and workload management


Why work for us?


We believe great companies come from great people, and we empower our team members to have a voice, to help drive hipages forward and to ensure we continue to be a great place to work. A huge focus on your career development (with associated investment), a competitive salary package, a multitude of best-practice employee benefits and the opportunity to become an owner in the business via our Employee Share Program are just a few of the reasons to work here, not to mention the free brekkie, snacks and fruit, and the opportunity to work in an amazing office space in the heart of the CBD!


Our diverse and inclusive culture drives our success and helps make us a great place to work - we celebrate the individual! We encourage our team members to be themselves so they can unleash their maximum potential. We are a team of down-to-earth people who genuinely work together to "make it happen". This is hipages' DNA.