OnlyDataJobs.com

R1 RCM
  • Salt Lake City, UT

Healthcare is at an inflection point. Businesses are quickly evolving and new technologies are reshaping the healthcare experience. We are R1, a revenue cycle management company that is passionate about simplifying the patient experience, removing the paperwork hassle and demystifying financial obligations. Our success enables our healthcare clients to focus on what matters most: providing excellent clinical care.


Great people make great companies, and we are looking for a great Application Architect to join our team in Murray, UT. Our approach to building software is disciplined and quality-focused, with an emphasis on creativity, craftsmanship and commitment. We are looking for smart, quality-minded individuals who want to be a part of a high-functioning, dynamic team. We believe in treating people fairly, and your compensation should reflect that. Bring your passion for software engineering and help us disrupt ourselves as we build the next generation of healthcare revenue cycle management products and platforms. Now is the right time to join R1!


R1 is a leading provider of technology-enabled revenue cycle management services that transform and solve challenges across health systems, hospitals and physician practices. Headquartered in Chicago, R1 is a publicly traded organization with employees throughout the US and in international locations.


Our mission is to be the one trusted partner to manage revenue, so providers and patients can focus on what matters most. Our priority is to always do what is best for our clients, patients and each other. With our proven and scalable operating model, we complement a healthcare organization's infrastructure, quickly driving sustainable improvements to net patient revenue and cash flows while reducing operating costs and enhancing the patient experience.


As an Application Architect, you will apply problem solving, critical thinking and creative design to architect and build software products that achieve technical, business and customer-experience goals.


Responsibilities:


  • Plans software architecture through the whole technology stack, from customer-facing features to algorithmic innovation, down through APIs and datasets.
  • Ensures that software patterns and SOLID principles are applied across the organization to system architectures and implementations.
  • Works with product management, business stakeholders and architecture leadership to understand software requirements and helps shape, estimate and plan product roadmaps.
  • Plans and implements proof of concept prototypes.
  • Directly contributes to the test-driven development of product features and functionality, identifying risks and authoring integration tests.
  • Manages and organizes build steps, continuous integration systems and staging environments.
  • Mentors other members of the development team.
  • Evaluates, understands and recommends new technologies, languages or development practices that offer implementation benefits.


Required Qualifications:


    • 8+ years experience programming enterprise web products with Visual Studio, C# and the .NET Framework.
    • Robust knowledge of software architecture principles, including message and service buses, object-oriented programming, continuous integration / continuous delivery, SOLID principles, SaaS, microservices and master data management (MDM), plus a deep understanding of design patterns and domain-driven design (DDD).
    • Significant experience working with most of the following technologies/languages: C#, .NET/Core, WCF, Entity Framework, UML, LINQ, JavaScript, Angular, Vue.js, HTML, CSS, Lucene, REST, WebApi, XML, TSQL, NoSQL, MS SQL Server, ElasticSearch, MongoDB, Node.js, Jenkins, Docker, Kubernetes, NUnit, NuGet, SpecFlow, GIT.
    • Working knowledge of progressive development processes like scrum, XP, kanban, TDD, BDD and continuous delivery.
    • Strong sense of ownership and accountability for delivering well-designed, high-quality enterprise software on schedule.
    • Prolific learner, willing to refactor your understanding of emerging patterns, practices and processes as much as you refactor your code.
    • Ability to articulate and illustrate software complexities to others (both technical and non-technical audiences).
    • Friendly attitude and availability to mentor others, communicating what you know in an encouraging and humble way.
    • Experience working with globally distributed teams.
    • Knowledge of the healthcare revenue cycle, EMRs, practice management systems, FHIR, HL7 or HIPAA is a major plus.


Working in an evolving healthcare setting, we use our shared expertise to deliver innovative solutions.  Our fast-growing team has opportunities to learn and grow through rewarding interactions, collaboration and the freedom to explore professional interests.


Our associates are given valuable opportunities to contribute, innovate and create meaningful work that makes an impact in the communities we serve around the world. We also offer a culture of excellence that drives customer success and improves patient care. We believe in giving back to the community and offer a competitive benefits package. To learn more, visit: r1rcm.com

Comcast
  • Philadelphia, PA

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.

As a Data Science Engineer in Comcast dx, you will research, model, develop and support data pipelines and deliver insights for key strategic initiatives. You will develop or utilize complex programmatic and quantitative methods to find patterns and relationships in data sets; lead statistical modeling or other data-driven problem-solving analyses to address novel or abstract business operation questions; and incorporate insights and findings into a range of products.

Assist in the design and development of collection and enrichment components focused on quality, timeliness, scale and reliability. Work on real-time data stores and a massive historical data store using best-of-breed, industry-leading technology.

Responsibilities:

-Develop and support data pipelines

-Analyze massive amounts of data in both real-time and batch processing utilizing Spark, Kafka, and AWS technologies such as Kinesis, S3, ElasticSearch, and Lambda (see the sketch after this list)

-Create detailed write-ups of the processes, logic, and methodologies used for creation, validation, analysis, and visualizations. Write-ups are due within a week of a process's creation and must be updated in writing when changes occur.

-Prototype ideas for new ML/AI tools, products and services

-Centralize data collection and synthesis, including survey data, enabling strategic and predictive analytics to guide business decisions

-Provide expert and professional data analysis to implement effective and innovative solutions meshing disparate data types to discover insights and trends.

-Employ rigorous continuous delivery practices managed under an agile software development approach

-Support DevOps practices to deploy and operate our systems

-Automate and streamline our operations and processes

-Troubleshoot and resolve issues in our development, test and production environments
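
For illustration only (not Comcast's actual pipeline), a minimal PySpark Structured Streaming job that counts events per minute from a Kafka topic might look like the sketch below; the broker address and topic name are hypothetical, and the job assumes the spark-sql-kafka connector package is available:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, window

    # Hypothetical broker and topic names, for illustration only.
    spark = SparkSession.builder.appName("event-counts").getOrCreate()

    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")
              .option("subscribe", "events")
              .load())

    # Kafka values arrive as bytes; cast to string and count per one-minute window.
    counts = (events
              .selectExpr("CAST(value AS STRING) AS body", "timestamp")
              .groupBy(window(col("timestamp"), "1 minute"))
              .count())

    # Print running counts; a production job would sink to S3, ElasticSearch, etc.
    query = counts.writeStream.outputMode("complete").format("console").start()
    query.awaitTermination()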

Here are some of the specific technologies and concepts we use:

-Spark Core and Spark Streaming

-Machine learning techniques and algorithms

-Java, Scala, Python, R

-Artificial Intelligence

-AWS services including EMR, S3, Lambda, ElasticSearch

-Predictive Analytics

-Tableau, Kibana

-Git, Maven, Jenkins

-Linux

-Kafka

-Hadoop (HDFS, YARN)

Skills & Requirements:

-5-8 years of Java experience; Scala and Python experience a plus

-3+ years of experience as an analyst, data scientist, or related quantitative role.

-3+ years of relevant quantitative and qualitative research and analytics experience. Solid knowledge of statistical techniques.

-Bachelor's degree in Statistics, Math, Engineering, Computer Science or a related discipline. Master's degree preferred.

-Experience in software development of large-scale distributed systems including proven track record of delivering backend systems that participate in a complex ecosystem

-Experience with more advanced modeling techniques (e.g., machine learning)

-Distinctive problem solving and analysis skills and impeccable business judgement.

-Experience working with imperfect data sets that, at times, will require improvements to process, definition and collection

-Experience with real-time data pipelines and components including Kafka, Spark Streaming

-Proficient in Unix/Linux environments

-Test-driven development/test automation, continuous integration, and deployment automation

-Excellent communicator, able to analyze and clearly articulate complex issues and technologies understandably and engagingly

-Team player is a must

-Great design and problem-solving skills

-Adaptable, proactive and willing to take ownership

-Attention to detail and high level of commitment

-Thrives in a fast-paced agile environment

About Comcast dx:

Comcast dx is a results-driven big data engineering team responsible for delivering the multi-tenant data infrastructure and platforms necessary to support our data-driven culture and organization. dx has an overarching objective to gather, organize, and make sense of Comcast data with the intention of revealing business and operational insight, discovering actionable intelligence, enabling experimentation, empowering users, and delighting our stakeholders. Members of the dx team define and leverage industry best practices, work on large-scale data problems, and design and develop resilient and highly robust distributed data organizing and processing systems and pipelines, as well as research, engineer, and apply data science and machine intelligence disciplines.

Comcast is an EOE/Veterans/Disabled/LGBT employer

Comcast
  • Philadelphia, PA

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.

As a Data Science Engineer in Comcast dx, you will research, model, develop and support data pipelines and deliver insights for key strategic initiatives. You will develop or utilize complex programmatic and quantitative methods to find patterns and relationships in data sets; lead statistical modeling or other data-driven problem-solving analyses to address novel or abstract business operation questions; and incorporate insights and findings into a range of products.

Assist in the design and development of collection and enrichment components focused on quality, timeliness, scale and reliability. Work on real-time data stores and a massive historical data store using best-of-breed, industry-leading technology.

Responsibilities:

-Develop and support data pipelines

-Analyze massive amounts of data in both real-time and batch processing utilizing Spark, Kafka, and AWS technologies such as Kinesis, S3, ElasticSearch, and Lambda

-Create detailed write-ups of the processes, logic, and methodologies used for creation, validation, analysis, and visualizations. Write-ups are due within a week of a process's creation and must be updated in writing when changes occur.

-Prototype ideas for new ML/AI tools, products and services

-Centralize data collection and synthesis, including survey data, enabling strategic and predictive analytics to guide business decisions

-Provide expert and professional data analysis to implement effective and innovative solutions meshing disparate data types to discover insights and trends.

-Employ rigorous continuous delivery practices managed under an agile software development approach

-Support DevOps practices to deploy and operate our systems

-Automate and streamline our operations and processes

-Troubleshoot and resolve issues in our development, test and production environments

Here are some of the specific technologies and concepts we use:

-Spark Core and Spark Streaming

-Machine learning techniques and algorithms

-Java, Scala, Python, R

-Artificial Intelligence

-AWS services including EMR, S3, Lambda, ElasticSearch

-Predictive Analytics

-Tableau, Kibana

-Git, Maven, Jenkins

-Linux

-Kafka

-Hadoop (HDFS, YARN)

Skills & Requirements:

-3-5 years of Java experience; Scala and Python experience a plus

-2+ years of experience as an analyst, data scientist, or related quantitative role.

-2+ years of relevant quantitative and qualitative research and analytics experience. Solid knowledge of statistical techniques.

-Bachelor's degree in Statistics, Math, Engineering, Computer Science or a related discipline. Master's degree preferred.

-Experience in software development of large-scale distributed systems including proven track record of delivering backend systems that participate in a complex ecosystem

-Experience with more advanced modeling techniques (e.g., machine learning)

-Distinctive problem solving and analysis skills and impeccable business judgement.

-Experience working with imperfect data sets that, at times, will require improvements to process, definition and collection

-Experience with real-time data pipelines and components including Kafka, Spark Streaming

-Proficient in Unix/Linux environments

-Test-driven development/test automation, continuous integration, and deployment automation

-Excellent communicator, able to analyze and clearly articulate complex issues and technologies understandably and engagingly

-Team player is a must

-Great design and problem-solving skills

-Adaptable, proactive and willing to take ownership

-Attention to detail and high level of commitment

-Thrives in a fast-paced agile environment

About Comcast dx:

Comcast dx is a results-driven big data engineering team responsible for delivering the multi-tenant data infrastructure and platforms necessary to support our data-driven culture and organization. dx has an overarching objective to gather, organize, and make sense of Comcast data with the intention of revealing business and operational insight, discovering actionable intelligence, enabling experimentation, empowering users, and delighting our stakeholders. Members of the dx team define and leverage industry best practices, work on large-scale data problems, and design and develop resilient and highly robust distributed data organizing and processing systems and pipelines, as well as research, engineer, and apply data science and machine intelligence disciplines.

Comcast is an EOE/Veterans/Disabled/LGBT employer

BIZX, LLC / Slashdot Media / SourceForge.net
  • San Diego, CA

Job Description (your role):


The Senior Data Engineer position is a challenging role that bridges the gap between data management and software development. This role reports directly to, and works closely with, the Director of Data Management while teaming with our software development group. You will work with the team that is designing and implementing the next generation of our internal systems, replacing legacy technical debt with state-of-the-art design to enable faster product and feature creation in our big data environment.


Our Industry and Company Environment:

The candidate must have the desire to work and collaborate in a fast-paced entrepreneurial environment in the B2B technology marketing and big data space, working with highly motivated co-workers in our downtown San Diego office.


Responsibilities


  • Design interfaces allowing the operations department to fully utilize large data sets
  • Implement machine learning algorithms to sort and organize large data sets
  • Participate in the research, design, and development of software tools
  • Identify, design, and implement process improvements: automating manual processes
  • Optimize data delivery, re-designing infrastructure for greater scalability
  • Analyze and interpret large data sets
  • Build reliable services for gathering & ingesting data from a wide variety of sources
  • Work with peers and stakeholders to plan approach and define success
  • Create efficient methods to clean and curate large data sets


Qualifications

    • Have a B.S., M.S. or Ph.D. in Computer Science or equivalent degree and work experience

    • Deep understanding of developing high efficiency data processing systems

    • Experience with development of applications in mission-critical environments
    • Experience with our stack:
    •      3+ years experience developing in JavaScript, PHP, Symfony
    •      3+ years experience developing and implementing machine learning algorithms
    •      4+ years experience with data science tool sets
    •      3+ years MySQL
    •      Experience with ElasticSearch a plus
    •      Experience with Ceph a plus
 

About  BIZX, LLC / Slashdot Media / SourceForge.net


BIZX, including its Slashdot Media division, is a global leader in online professional technology communities such as SourceForge.net, serving over 40M website visitors and over 150M page views each month to an enthusiastic and engaged audience of IT professionals, decision makers, developers and enthusiasts around the world. Our Passport demand generation platform leverages our huge B2B database and is considered best in class by our list of Fortune 1000 customers. Our impressive growth in the demand generation space is fueled by our use of AI, big data technologies, sophisticated systems automation - and great people.


Location - 101 W Broadway, San Diego, CA

Delivery Hero SE
  • Berlin, Germany

We are now looking for a tech geek who will grow with our renowned engineering department as a Senior Engineering Manager - Python/Scala (f/m/d). Join our inquisitive team in the center of Berlin and start to reinvent food delivery.



  • Lead and empower an experienced team of engineers, focused on building innovative customer-facing solutions such as customer reviews and ratings, surveys, insights intelligence for restaurants and delivery riders



  • Develop and continuously improve microservices and scalable systems in Python and Scala in our global cloud platform running in multiple regions



  • Work closely with business teams, communicate solutions with non-technical stakeholders and solve challenges



  • Ensure continued service reliability and provide 24/7 technical support for global services



  • Design and implement cutting-edge insights and customer-facing services



  • Practice modern software development methodologies such as continuous delivery, TDD, scrum and collaborate with product managers



  • Participate in code reviews and application debugging and diagnosis.



Your heroic skills:



  • 3 years of hands-on technical leadership and people management experience



  • Excellent knowledge of, and hands-on programming experience in, developing Python and/or Scala applications.



  • A completed technical degree in Computer Science or a related field.



  • Profound knowledge of and working experience with Unix and systems engineering



  • Several years of experience in the design and implementation of large-scale software systems



  • Experience working with relational databases and NoSQL technologies, plus an interest in Elasticsearch, Google Cloud and microservices architectures.



  • Development and co-ownership of applications used by over 100,000 daily users.



  • Curiosity, creative outside-the-box problem solving abilities and an eye for detail.



We offer you:



  • Develop your skills with your educational budget for conferences and external training.

  • Exchange ideas and meet fellow developers at regular meetups and in our active guilds.

  • Get to know your colleagues during company parties, hackathons, cultural and sports events.

  • English is our working language, and our colleagues at Delivery Hero come from every corner of the globe, working in diverse, cross-cultural teams.

  • Flexible working hours.

  • Save responsibly with our corporate pension scheme.

  • Enjoy fresh fruits, cereals, beverages, tea and coffee in our lounges. 

  • Take a break with Kicker or table tennis.

  • Take a timeout in our nap room.

  • Learn German with free classes, access our e-learning platform and participate in our inhouse trainings.

  • Enjoy massages or get your hair cut in the office.



Are you the missing ingredient? Send us your CV!



Read about the latest updates from our Tech & Product teams on our blog.


Find our stack here.

inovex GmbH
  • München, Germany

As a Linux Systems Engineer with a focus on Hadoop and Search, you will be responsible for the design, installation and configuration of Linux-based big data clusters for our customers. Your responsibilities also include evaluating existing big data systems and extending existing environments in a future-proof way.

You look after these systems holistically, supporting them from the Linux operating system up through the big data stack. To automate the often complex big data clusters, you preferably use configuration management tools.

You play a formative role in our interdisciplinary project teams and often have the freedom to decide when it comes to the choice of tools.


To fill this position, we are looking for experts who bring the following skills and qualities:



  • A successfully completed degree related to computer science, or a comparable qualification such as vocational training as an IT specialist, plus relevant professional experience

  • Passion and enthusiasm for new technologies and topics around Linux and big data

  • Practical experience with Hadoop and common Hadoop ecosystem tools, as well as initial experience with Hadoop security

  • Ideally, you have already gained practical experience with one or more of the following technologies or products:

    • Flume, Kafka

    • Flink, Hive, Spark

    • Cassandra, Elasticsearch, HBase, MongoDB, CouchDB

    • Amazon EMR, Cloudera, Hortonworks, MapR

    • Java



  • Good knowledge of networking and storage

  • Knowledge of a configuration management tool (e.g. Puppet, Chef or Salt) is an advantage

  • Good communication skills and very good written and spoken German and English

  • High motivation to achieve excellent project results together with other "inovexperts"

  • Mobility and flexibility for on-site project work at our customers

BTI360
  • Ashburn, VA

Our customers are inundated with information from news articles, video feeds, social media and more. We're looking to help them parse through it faster and focus on the information that matters most, so they can make better decisions. We're in the process of building a next-generation analysis and exploitation platform for video, audio, documents, and social media data. This platform will help users identify, discover and triage information via a UI that leverages best-in-class speech-to-text, machine translation, image recognition, OCR, and entity extraction services.


We're looking for data engineers to develop the infrastructure and systems behind our platform.  The ideal contributor should have experience building and maintaining data and ETL pipelines. They will be expected to work in a collaborative environment, able to communicate well with their teammates and customers. This is a great opportunity to work with a high-performing team in a fun environment.


At BTI360, we’re passionate about building great software and developing our people. Software doesn't build itself; teams of people do. That's why our primary focus is on developing better engineers, better teammates, and better leaders. By putting people first, we give our teammates more opportunities to grow and raise the bar of the software we develop.


Interested in learning more? Apply today!


Required Skills/Experience:



  • U.S. Citizenship - Must be able to obtain a security clearance

  • Bachelor's degree in Computer Science, Computer Engineering, Electrical Engineering or a related field

  • Experience with Java, Kotlin, or Scala

  • Experience with scripting languages (Python, Bash, etc.)

  • Experience with object-oriented software development

  • Experience working within a UNIX/Linux environment

  • Experience working with a message-driven architecture (JMS, Kafka, Kinesis, SNS/SQS, etc.)

  • Ability to determine the right tool or technology for the task at hand

  • Works well in a team environment

  • Strong communication skills


Desired Skills:



  • Experience with massively parallel processing systems like Spark or Hadoop

  • Familiarity with data pipeline orchestration tools (Apache Airflow, Apache NiFi, Apache Oozie, etc.); a minimal sketch follows this list

  • Familiarity in the AWS ecosystem of services (EMR, EKS, RDS, Kinesis, EC2, Lambda, CloudWatch)

  • Experience working with recommendation engines (Spark MLlib, Apache Mahout, etc.)

  • Experience building custom machine learning models with TensorFlow

  • Experience with natural language processing tools and techniques

  • Experience with Kubernetes and/or Docker container environment

  • Ability to identify external data specifications for common data representations

  • Experience building monitoring and alerting mechanisms for data pipelines

  • Experience with search technologies (Solr, ElasticSearch, Lucene)
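
As a hedged illustration of the orchestration experience mentioned above (see the Apache Airflow bullet), a minimal Airflow DAG wiring three ETL steps could look like this; the DAG id, task names and placeholder callables are hypothetical, not BTI360's actual pipeline:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # Placeholder steps; a real pipeline would pull, enrich and index documents.
    def extract():
        print("pull raw documents from the source system")

    def transform():
        print("run speech-to-text / OCR / entity extraction services")

    def load():
        print("index enriched records for discovery and triage")

    with DAG(
        dag_id="media_enrichment",          # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@hourly",
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_transform = PythonOperator(task_id="transform", python_callable=transform)
        t_load = PythonOperator(task_id="load", python_callable=load)

        t_extract >> t_transform >> t_load  # linear dependency chain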


BTI360 is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status, or any other protected class. 

RentPath
  • Atlanta, GA

Join a winning team!  Become a part of something meaningful!


Looking to join a company in the midst of a digital transformation where the consumer is king and talent, technology and data are our greatest resources? Keep reading to see if this opportunity is of interest to you!


RentPath is looking for a Sr. Analytics Engineer to support the Consumer Product organization and leadership teams as we look for ways to optimize our site experience and troubleshoot changes in conversion, site performance, KPIs and more; this person will help build solutions that drive proactive analytics within the organization. You will be a problem solver with a Swiss Army knife set of skills that can be leveraged to break down barriers between business stakeholders and our data. You will be responsible for a wide variety of data acquisition, manipulation and cleansing tasks, and should be comfortable building solutions that enable analytics teams to perform complex deep-dive analyses.


A Day in the Life.....


  • You conduct thorough analyses to develop/validate consumer segmentations, develop and compute consumer value metrics and support ad-hoc requests for data extraction, modeling and analysis cross-functionally and within a disparate data ecosystem.
  • You will be expected to manage competing priorities and complexity in the face of ambiguity and build streamlined solutions that enable internal partners to effectively obtain and interpret data and platforms used in the operations of our business.
  • You leverage open-source programming languages such as Python to acquire, manipulate, and cleanse data from a variety of sources.
  • You prepare clear and concise data visualizations and presentations to enhance business decision making; experience leveraging Power BI, Tableau or similar software.
  • You are responsible for the development, deployment, and maintenance of dashboards for the consumer product team including product owners.
  • You work across the organization and leverage cross-functional teams to provide expert analytical service; experience traversing multiple databases to uncover trends.
  • You collect requirements, manage personal project intake, validate and update analyses and dashboards periodically, and provide reports as needed.


What we need from you.....


    • Demonstrated success, experience or proficiency in/with the following: A/B, multivariate, and other statistical testing methods (see the sketch after this list)
    • Experience with web services like REST & SOAP APIs (connecting, gathering data, automation)
    • Experience working in support of product development, collecting requirements, and delivering insight for product improvement
    • Experience supporting the development of analytics solutions leveraging tools like Power BI, Tableau Desktop, and Tableau Online
    • Open-source programming languages for data acquisition and manipulation, with a strong preference for object-oriented languages (e.g., Python), and experience with cloud technologies (e.g., BigQuery and Redshift)
    • Familiarity with agile and sprint methodologies
    • Demonstrated experience with the acquisition of and interpretation of business, data and process requirements, with appropriate usage of data visualization tools
    • 4+ years experience working with SQL; exposure to NoSQL a plus!
    • Experience leveraging web traffic and consumer analytics tools (e.g., Adobe Analytics, Google Analytics, Mixpanel, Heap, etc.)
    • Familiarity with Optimizely, Google Optimize, or similar A/B testing platforms
    • Demonstrated understanding of data warehousing and data modeling
    • Comfortable narrating data-driven insights and translating technical concepts into simple terminology for a variety of technical and non-technical stakeholders at various levels.
    • Excellent problem-solving skills, including the ability to analyze situations, identify existing or potential problems and recommend sound conclusions and implementation strategies
    • Strong project management skills; ability to prioritize and manage client expectations across multiple projects
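
For illustration, here is a minimal sketch of the kind of A/B significance check the first requirement above refers to, using a two-proportion z-test from statsmodels; the conversion counts are made up:

    from statsmodels.stats.proportion import proportions_ztest

    # Hypothetical results: conversions and visitors for control (A) and variant (B).
    conversions = [420, 480]
    visitors = [10000, 10000]

    stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
    print(f"z = {stat:.3f}, p = {p_value:.4f}")

    # At a 0.05 significance threshold, p < 0.05 suggests the variant's
    # conversion rate differs from control beyond random noise.
    if p_value < 0.05:
        print("Difference is statistically significant.")
    else:
        print("No significant difference detected.")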

Nice to Have:

    • Master's degree in a quantitative field
    • Experience with data mining, ETL, in a Hadoop environment is a plus
    • Experience working with ElasticSearch and Apache Kafka highly preferred but not required




Why Choose RentPath?

We're a place where you can make an important difference, from day one. You'll have the opportunity to grow and build, both professionally and in the communities we serve. You'll work with smart, diverse, and unpretentious people, as we help renters find and enjoy their ideal home. In fact, we consider ourselves a very well-funded start-up that also has more than 40 years in the industry and strong financial performance. The challenge of leading our digital transformation has attracted talent from leading companies like Google, Microsoft, HomeAway, and Expedia. Will you be next?


  
Farfetch UK
  • London, UK

About the team:



We are a multidisciplinary team of Data Scientists and Software Engineers with a culture of empowerment, teamwork and fun. Our team is responsible for large-scale and complex machine learning projects, directly providing business-critical functionality to other teams and using the latest technologies in the field.



Working collaboratively as a team and with our business colleagues, both here in London and across our other locations, you’ll be shaping the technical direction of a critically important part of Farfetch. We are a team that surrounds ourselves with talented colleagues and we are looking for brilliant Software Engineers who are open to taking on plenty of new challenges.



What you’ll do:



Our team works with vast quantities of messy data, such as unstructured text and images collected from the internet, applying machine learning techniques, such as deep learning, natural language processing and computer vision, to transform it into a format that can be readily used within the business. As an Engineer within our team you will help to shape and deliver the engineering components of the services that our team provides to the business. This includes the following:




  • Work with Project Lead to help design and implement new or existing parts of the system architecture.

  • Work on surfacing the team’s output through the construction of ETLs, APIs and web interfaces (see the sketch below).

  • Work closely with the Data Scientists within the team to enable them to produce clean production quality code for their machine learning solutions.
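
As a hedged sketch of the API work in the second bullet above (illustrative only; the index name, document field and local cluster URL are assumptions), a minimal Flask endpoint querying an ElasticSearch index could look like this:

    from elasticsearch import Elasticsearch
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    es = Elasticsearch("http://localhost:9200")  # assumed local cluster

    @app.route("/items/search")
    def search_items():
        # Full-text match against a hypothetical "description" field.
        # elasticsearch-py 8.x style; 7.x would pass body={"query": ...} instead.
        q = request.args.get("q", "")
        resp = es.search(index="items", query={"match": {"description": q}})
        return jsonify([hit["_source"] for hit in resp["hits"]["hits"]])

    if __name__ == "__main__":
        app.run(port=5000)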



Who you are:



First and foremost, you’re passionate about solving complex, challenging and interesting business problems. You have solid professional experience with Python and its ecosystem, with a  thorough approach to testing.



To be successful in this role you have strong experience with:



  • Python 3

  • Web frameworks, such as Flask or Django.

  • Celery, Airflow, PySpark or other processing frameworks.

  • Docker

  • ElasticSearch, Solr or a similar technology.



Bonus points if you have experience with:



  • Web scraping frameworks, such as Scrapy.

  • Terraform, Packer

  • Google Cloud Platform, such as Google BigQuery or Google Cloud Storage.



About the department:



We are the beating heart of Farfetch, supporting the running of the business and exploring new and exciting technologies across web, mobile and instore to help us transform the industry. Split across three main offices - London, Porto and Lisbon - we are the fastest growing teams in the business. We're committed to turning the company into the leading multi-channel platform and are constantly looking for brilliant people who can help us shape tomorrow's customer experience.





We are committed to equality of opportunity for all employees. Applications from individuals are encouraged regardless of age, disability, sex, gender reassignment, sexual orientation, pregnancy and maternity, race, religion or belief and marriage and civil partnerships.

Giesecke+Devrient Currency Technology GmbH
  • München, Deutschland

Help shape a future that moves. For our Currency Management Solutions division, we are looking for you as a



Java Software Architect (m/f/d)



Your responsibilities:




  • You design and are responsible for the implementation of complex IIoT machine operations, data analytics or digital solutions for us and for our customers' cash centers. Cash centers are fully automated, turnkey facilities developed by us. They offer services ranging from production through sorting and fitness checking to the destruction of banknotes

  • You develop scalable, forward-looking architectures suitable for cloud-based as well as on-premise deployment

  • You are thoroughly familiar with the necessary technologies and methods, and you continuously develop them further within the team

  • With your experience in interdisciplinary, distributed collaboration and in agile practices such as Scrum and Lean Startup, you ensure product success across the entire lifecycle (you build it, you run it)

  • You communicate and present your solution and the technical roadmap, including its vision, in an understandable way

  • Together with a team of architects and product owners, you advance the overall solution, securing our innovation leadership and our customers' success



Your profile:




  • A degree (Master's, from a university or university of applied sciences) in computer science or a comparable field

  • Proven experience as a software architect (m/f/d) of scalable solutions, with profound knowledge of modern design patterns (such as microservices, REST, web APIs, IIoT, digital twins, cloud, data analytics)

  • Practical experience with data storage technologies (NoSQL, SQL, caching, Hadoop) and initial experience with cloud environments (such as Microsoft Azure or AWS)

  • Experience with technologies such as Linux, Java, Python, Spring Boot, Traefik, Docker, Kubernetes, Hazelcast, Redis, Kafka, MQTT, PostgreSQL, MongoDB, ElasticSearch, Prometheus, Kibana, Angular, Vue, OAuth, DevOps

  • Very good written and spoken German and English






We look forward to receiving your online application at www.gi-de.com/karriere.




Giesecke+Devrient Currency Technology GmbH · Prinzregentenstraße 159 · 81677 München

Impetus
  • Phoenix, AZ
    • Experience writing and deploying production Cloud services. Most successful candidates will have 4+ years of Java or Scala experience.
    • API design and development for RESTful services
    • Knowledge/Experience of Big Data / Hadoop Technologies is a big plus
    • Ability to research and recommend the next generation of architectures and advancements. 
    • Built on AWS: a successful candidate will have experience building products on top of the AWS ecosystem, not just deploying a package to AWS.
    • Experience with S3, Dynamo, EC2, RDS, or Lambda.
    • Familiarity with data processing and analytics architectures/components is a plus, such as Spark, Flink, Hadoop, Kafka, Kinesis and Elasticsearch.
VirtusaPolaris
  • Dallas, TX

Position: Kafka Architect.

Required Skills: 

    • Very good understanding of event-processing pipelines using Kafka, Java, ZooKeeper, Hadoop, Spark, Scala, S3 and Spark Streaming.

    • Hands-on experience writing code for producers, consumers and event processing within Kafka and Spark Streaming (see the sketch at the end of this list). Good hands-on experience building applications using an event-driven framework with Kafka.

    • Able to install new Kafka clusters and troubleshoot Kafka-related issues in a production environment within given SLAs.

    • Work in a big data environment; familiarity with big data tools like Spark, Hive, HBase, etc.

    • Familiar with cloud deployments and AWS tools like S3, Kinesis Streams, Kinesis Firehose, AWS Connect, etc.

    • 8+ years of experience building large-scale enterprise integration implementations, web services, microservices, etc.
    • Experience using and developing event-driven frameworks and REST services backed by databases, both RDBMS and NoSQL (e.g. Hadoop, MongoDB, etc.). Development experience with Java.
    • Hands-on experience in Java, Rest API, Kafka, Elasticsearch, SQL, AWS.
    • Hands-on, customer-facing experience supporting external developers in complex partner integration projects.

    • Work in a fast-paced agile development environment

    • Fine-tune Hadoop applications for high performance and throughput

    • Troubleshoot and debug any Hadoop ecosystem runtime issues

    • Understanding of the enterprise Hadoop environment

    • Ability to use JSON and/or XML formats for messages. Able to work with Avro and Parquet formats.
    • Ability to work as part of a Scrum team, following SAFe agile practices
    • Strong communication and collaboration skills. Should be comfortable working in a rapidly transforming organization.
    • Knowledge of version control such as Git / Bitbucket and Jenkins for builds. Understanding of how to integrate code into automated deployment pipelines with CI/CD.
    • Ability to lead, and to work on multiple projects across multiple business units.
    • Ability to identify risks and provide timely feedback to mitigate them.
    • Strong database concepts. Experience or exposure to Salesforce CRM.
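
For context on the producer/consumer work described above, here is a minimal sketch using the kafka-python client; the topic, broker and consumer-group names are hypothetical:

    import json

    from kafka import KafkaConsumer, KafkaProducer

    # Producer: publish JSON-encoded events to a hypothetical "orders" topic.
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )
    producer.send("orders", {"order_id": 1, "status": "created"})
    producer.flush()

    # Consumer: read the same topic as part of a consumer group.
    consumer = KafkaConsumer(
        "orders",
        bootstrap_servers="localhost:9092",
        group_id="order-processors",
        auto_offset_reset="earliest",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )
    for message in consumer:
        print(message.value)  # event processing would happen here
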
Curalate
  • Philadelphia, PA

Curalate is looking for talented full-stack, frontend, and backend developers to join our team.  Our engineering team is deep in the trenches -- tackling some of the gnarliest problems out there at the intersection of computer vision and big data -- and we're looking for a few good coders to join us.


Check out our Engineering Blog to get a window into what we are building at Curalate and what we're excited about!


Responsibilities:

You'll be expected to dive into our stack and start shipping code on day one. We're not hiring code monkeys; you'll be given substantial feature ownership, and we'll expect you to contribute product ideas as well as code.

We're not language zealots; we believe in using the right tool for the job, and are comfortable with a polyglot codebase. Production experience with these technologies is not required - we're looking for people who are eager, fast learners. Some of the technologies we use:

  • Primary Languages: Scala, Javascript
  • Front end frameworks: React/Redux, Typescript, AngularJS
  • Back end/micro-services stack: Finatra/Finagle, Docker ECS, Terraform
  • Data stores: ElasticSearch, MySQL (RDS), DynamoDB, Redis, Redshift, S3
  • Data processing: AWS Lambda, Kinesis, SQS, Spark
  • Machine Learning: MxNet, Caffe, Scala, Python
  • Computer Vision: OpenCV, Scala, C++
About You:

Although experience with our existing technology stack is great, we're much more interested in hiring developers with exceptional problem-solving skills, creative out-of-the-box thinking, and comfort with quickly learning, evaluating, and deploying new technologies. While we're not looking for any specific industry experience, you should have a CS degree and/or at least two years of experience, and come prepared to join a fast-moving, high-performing team.

Oliver Wyman Labs
  • Boston, MA

Team description


A little bit about us


Oliver Wyman’s Data Science and Engineering team works to solve our clients’ toughest analytical problems, continually pushing forward the state of the art in quantitative problem solving and raising the capabilities of the firm globally. Our team works hand-in-hand with strategy consulting teams, adding expertise where a good solution requires wrangling with a wide variety of data sources including high volume, high velocity and unstructured data; applying specialized data science and machine learning techniques; and developing reusable codebases to accelerate delivery.


Our work is fast paced and expansive. We build models, coalesce data sources, interpret results, and build services and occasionally products that enhance our clients’ ability to derive value from data and upgrade their decision-making capabilities. Our solutions feature the latest in data science tools, machine learning algorithms, AI approaches, software engineering disciplines, and analytical techniques to make an extraordinary impact on clients and societies. We operate at the intersection of exciting, progressive tech and real-world problems faced by some of the world's leading companies. We hire smart, driven people and equip them with the tools and support that they need to get their jobs done.


Our Values and Our Proposition


We believe that our culture is a key pillar of our success and our identity. We take our work seriously, but not ourselves.  We believe happiness, health, and a life outside of work are more important than work itself and are essential ingredients in professional success – no matter what the profession. Ours is a team whose members teach and take care of each other. We want not simply to continue learning and growing but to fundamentally redefine what it means to do consulting and to stretch the boundaries of what we, as a firm, are capable of doing.


Our proposition is simple:



  • You will work with people as passionate and awesome as yourself.

  • You will encounter a variety of technology, industries, projects, and clients.

  • You will deliver work that has real impact in how our clients do business.

  • We will invest in you.

  • We will help you grow your career while remaining hands-on and technical.

  • You will work in smaller, more agile, flatter teams than is the norm elsewhere.

  • You will be empowered and have more autonomy and responsibilities than almost anywhere else.

  • You will help recruit your future colleagues.

  • We offer competitive compensation and benefits.

  • You will work with peers who can learn from you and from whom you can learn.

  • You will work with people who leave egos at the door and encourage an environment of collaboration, fun, and bringing new ideas to the group.


Data Engineer


The Data Engineer is the universal translator between IT, business, software engineers, and data scientists, working directly with clients and project teams. You will work to understand the business problem being solved and provide the data required to do so, delivering at the pace of the consulting teams and iterating data to ensure quality as understandings crystallize.


Our historical focus has been on high-performance SQL data marts for batch analytics, but we are now driving toward new data stores and cluster-based architectures to enable streaming analytics and scaling beyond our current terabyte-level capabilities. Your ability to tune high-performance data pipelines will help us to rapidly deploy some of the latest machine learning algorithms/frameworks and other advanced analytical techniques at scale.


You will serve as a keystone on our larger projects, enabling us to deliver solutions hand-in-hand with consultants, data scientists, and software engineers.


A good candidate will have:



  • Excellent communication skills (verbal and written)

  • Empathy for their colleagues and their clients

  • Signs of initiative and ability to drive things forward

  • Understanding of the overall problem being solved and what flows into it

  • Ability to create and implement data engineering solutions using modern software engineering practices

  • Ability to scale up from “laptop-scale” to “cluster scale” problems, in terms of both infrastructure and problem structure and technique

  • Ability to deliver tangible value very rapidly, working with diverse teams of varying backgrounds

  • Ability to codify best practices for future reuse in the form of accessible, reusable patterns, templates, and code bases

  • A pragmatic approach to software and technology decisions as well as prioritization and delivery

  • Ability to handle multiple workstreams and prioritize accordingly

  • Commitment to delivering value and helping clients succeed

  • Comfort working with both collocated and distributed team members across time zones

  • Comfort working with and developing coding standards


  • Willingness to travel as required for cases (0 up to 40%)


Some things that make our Data Engineers effective:



  • A technical background in computer science, data science, machine learning, artificial intelligence, statistics or other quantitative and computational science

  • A compelling track record of designing and deploying large scale technical solutions, which deliver tangible, ongoing value

    • Direct experience having built and deployed complex production systems that implement modern data science methods at scale and do so robustly

    • Comfort in environments where large projects are time-boxed and therefore consequential design decisions may need to be made and acted upon rapidly

    • Fluency with cluster computing environments and their associated technologies, and a deep understanding of how to balance computational considerations with theoretical properties of potential solutions

    • Ability to context-switch, to provide support to dispersed teams which may need an “expert hacker” to unblock an especially challenging technical obstacle

    • Demonstrated ability to deliver technical projects with a team, often working under tight time constraints to deliver value

    • An ‘engineering’ mindset, willing to make rapid, pragmatic decisions to improve performance, accelerate progress or magnify impact; recognizing that the ‘good’ is not the enemy of the ‘perfect’

    • Comfort with working with distributed teams on code-based deliverables, using version control systems and code reviews



  • Demonstrated expertise working with and maintaining open source data analysis platforms, including but not limited to:

    • Pandas, Scikit-Learn, Matplotlib, TensorFlow, Jupyter and other Python data tools (a minimal example follows this list)

    • Spark (Scala and PySpark), HDFS, Hive, Kafka and other high-volume data tools

    • Relational databases such as SQL Server, Oracle, Postgres

    • NoSQL storage tools, such as MongoDB, Cassandra, ElasticSearch, and Neo4j



  • Demonstrated fluency in modern programming languages for data science, covering a wide gamut from data storage and engineering frameworks through to machine learning libraries
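
As a small, self-contained illustration of the Python data tools named above (not a representation of any client work), a scikit-learn pipeline that chains preprocessing and a classifier looks like this:

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Chaining scaling and model in one Pipeline keeps the workflow reproducible.
    pipe = Pipeline([
        ("scale", StandardScaler()),
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    pipe.fit(X_train, y_train)
    print(f"held-out accuracy: {pipe.score(X_test, y_test):.3f}")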

Exonar
  • Reading, UK
  • Salary: £40k - 75k

Senior Java Developer Key Responsibilities



  • Working within the small engineering team, you will be a Java developer with 3 to 5 years' experience, responsible for all aspects of day-to-day development



  • Writing modular, well tested code that remains easy to maintain as the codebase and business continues to scale



  • Understanding and applying industry best practices and ensuring your code can scale to processing billions of documents



  • Ensuring code quality via code review, automated testing and pair programming as required



  • Prototyping new solutions and exploring new technologies that improve feature quality and accelerate development


Requirements




  • Technical degree or similar




  • 3 to 5 years' Java experience writing high-quality code, preferably in an enterprise environment




  • You like clean code. The number of WTFs in your code is low.




  • Comfortable in a Linux environment




  • Familiarity with automation and build tools (Jenkins/Maven etc)




  • Most importantly - a “can-do” startup attitude




Nice to Have




  • Experience working with one or more of the following technologies: HBase, ElasticSearch, RabbitMQ, Postgres, Scala, Python, Javascript/node.js



  • Exposure to API design, service development, enterprise development patterns and messaging technologies


  • Familiarity with git




  • Familiarity with container based deployment




  • Open source contributions




Background


Exonar is a small software company with a product which crawls and indexes the content of large unstructured data stores to make information reportable and searchable. The product is deployed in large programmes from Cyber Breach and privacy to Cloud Migration. Demand for the product is increasing dramatically in light of European Data Protection Regulation and therefore the pace of change is fast and flexible.


It’s a small team so you may do everything from coding key developments to improving user experience and fixing minor bugs and so much more.


The Culture at Exonar




  • Regular company BBQ’s and social events




  • Fortnightly hackathons




  • Espresso Machine/Beer Fridge/Soft drinks




  • Unlimited holiday!




  • Your choice of hardware



eBay
  • Richmond, UK

Gumtree, part of eBay Classifieds Group, is the UK’s leading classifieds site with over 14.5m unique visitors every month and over 9.5m app downloads. Founded in London in 2000, on Gumtree you can buy and sell everything from cars to home items and find jobs, local services, community events and even somewhere to live.


Based in beautiful Richmond, London, just by the riverside, you will join a diverse team of over 20 engineers, delivering value to millions of people every day. We work in an Agile environment in cross-functional squads, building features and constantly testing with our users.


What you can expect from us:



  • A collaborative, informal, international and playful work environment with self-organised, multi-disciplinary Agile teams

  • Managers and teammates who are invested in your growth as an engineer and as a person

  • Working with modern technologies such as Java 8, Scala, Elasticsearch, NodeJS

  • Access to tools and resources to be the best at your job (MacBook Pro, IDE of your choice, free drinks machine and a variety of other beverages, free breakfast and fruit throughout the day)

  • Loads of other perks


As a senior backend engineer you’ll be relied on to independently develop and deliver high-quality features and finish tasks to a high standard. You’ll be part of an Agile team, together with QA, UX and front-end engineers, focussing on extending and improving our high traffic platform, while mentoring engineers around you, assisting in code reviews, hiring, maintaining and advocating strong development standards.


We’re looking for:



  • Experience in Java development

  • Experience in Scala or NodeJS is desirable

  • Extensive experience in web development, delivering high performing, clean code

  • Comfortable providing and receiving constructive criticism, particularly while participating in code reviews

  • Familiarity with working in Agile teams and delivering software using Agile methodologies.

  • Be a technical sparring partner for the organisation to help the business create efficient and effective requirements 


Benefits:



  • Flexible working patterns and occasional work-from-home supported.

  • Full medical, dental and vision healthcare cover.

  • Pension scheme.

  • Life and disability insurance.

  • Parental leave policy and Cyclescheme available.

  • Networking, learning and global travel opportunities across eBay Classifieds Group.

  • Regular Tech Talks, Hackathons and workshops.

  • Phenomenal working environment with height-adjustable desks and Aeron chairs.

  • Free breakfast, fruit, snacks, soft drinks, coffee and tea.

  • Free on-site massages, yoga, pilates and fitness bootcamps.


We offer an exciting and meaningful role where you will have a phenomenal opportunity to influence your everyday life and work, and where you are part of an international company with colleagues all over the world.

Adobe
  • Austin, TX

Who are we?

When it comes to engineering, our teams solve problems of scale and work on cutting-edge and open-source technologies. As a high-growth, e-commerce company, we are searching for talented and passionate software engineers. Our culture is one that thrives on solving difficult problems, focusing on product engineering based on hypothesis testing to empower people to come up with ideas.

We address a wide array of computer science problems including advanced web applications, cutting edge user interfaces, scalability and performance of applications. You will work alongside a highly experienced team of engineers from various academic and industry backgrounds.

Our product and engineering team practices iterative development and continuous deployment. We work in small teams, deploy often, and keep our projects short and focused. Engineers rotate between projects and areas of the product to learn and take on new challenges.

What you'll do:

·       Define, design, implement and deliver high quality features that meet customer needs.

·       Build and maintain features in our platform ranging from distributed data processing systems to in-browser data visualization tools.

·       Ensure high quality by following coding best practices, code reviews, and providing automated tests.

·       Work closely with product management and quality engineering to ensure we deliver great compelling features.

·       Be passionate and help improve the availability, performance, scalability and security of the product.

·       Ensure strong emphasis on monitoring and metrics for analyzing health and usage of features.

·       Lead and participate in production deployment activities of your features and troubleshoot and resolve issues escalated from the production environment.

·       Evaluate new technologies, and help incorporate them into the technology stack

If you:

·       Want to design, develop and test (including automated, continuous integration) key product features/components of our platform.

·       Love to develop systems that are highly reliable, scalable, but easy to maintain.

·       Live to design features/components and conduct effective peer code reviews where needed.

·       Actively participate in architecture and design discussions.

Is this you?

·       6+ years of industry software development experience.

·       Strong core Java, design patterns and object-oriented language skills.

·       Strong knowledge of SQL queries and database concepts.

·       Experience with testing frameworks such as JUnit.

·       Experience with agile development methodologies.

·       Positive work attitude, self-motivator, quick learner, and a standout colleague.

It is great if you know:

·       ElasticSearch, Hadoop, HBase, Spark, Kafka

·       Yarn, Oozie, Zookeeper

·       MySQL, Postgres

·       PHP

·       Linux

·       Scala

·       Cassandra, Redis, Mongo

Adarga
  • London, UK

NLP Data Scientist

We are looking for a talented NLP Data Scientist to join our team working across a modern, web-focused technology stack. We work in a fast-paced environment, utilising cloud-based technologies to deploy our products to customers.

As an NLP Data Scientist, you will be joining Adarga’s expanding AI engineering team, which combines strong technical capability in data science with the ability to productionise its work.

In this role, you will be building and training NLP models to extract information that we use within a range of client-facing analytical products. You will interact with our users and engineers, gathering the information needed to train models and articulating their performance, in pursuit of industry-leading results. You will use your linguistic background to prepare and curate NLP datasets, as well as to design and manage annotation tasks for NLP.
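
By way of illustration, here is a minimal sketch of the information-extraction step this kind of role involves. It uses spaCy's pretrained English pipeline; the posting does not name specific libraries, so the tooling, function name and sample text below are illustrative assumptions only.

    # Minimal information-extraction sketch (the tooling is an assumption,
    # not necessarily Adarga's actual stack).
    import spacy

    # Requires: python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")

    def extract_entities(text):
        """Return (entity text, label) pairs found in `text`."""
        doc = nlp(text)
        return [(ent.text, ent.label_) for ent in doc.ents]

    sample = "Adarga is an AI software company based in London."
    for entity, label in extract_entities(sample):
        print(entity, "->", label)

In practice, the interesting work lies in training and evaluating custom models on curated, annotated datasets rather than running a pretrained pipeline, but the input/output shape is the same.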

Role Specification



  • Responsibility for the scalability, performance, security and delivery of our analytics platform.

  • Support our Data Science and Engineering teams to deliver product against our roadmap.

  • Work in a fast-paced environment, using cloud technologies to deploy products to customers.

  • Research, test and build the NLP techniques that underpin our data analytics framework.

  • Be a part of the team which transitions research from our in-house data science team into a client facing product.



Required Experience



  • Strong communication skills in English.

  • Ability to quickly adopt and leverage new technologies.

  • Knowledge of information extraction, relation extraction, and linked data.

  • Knowledge of machine learning / deep learning.

  • Experience curating datasets for NLP tasks.

  • Education in linguistics and NLP.

  • Ability to self-direct project work to develop new NLP capabilities and enhance existing ones.

  • Ability to apply NLP techniques to real world problems.

  • Knowledge of at least one programming language (we use Python and Java).



Beneficial Experience



  • Database technologies (MongoDB, Neo4J, ElasticSearch, SQL, etc.).

  • Experience in another programming language.

  • Agile software development methodologies.

  • Experience of working in/collaborating with software teams.



Interested?
Send your application via the application form.

No recruitment agencies, please.

Brooksource
  • Atlanta, GA

We are currently seeking a SQL Developer (Python, SQL) for a media conglomerate. Our client is a leader in their industry, bringing trends and advertisements from audience science to customers while working on a compressed design cycle and releasing 100+ styles each month.

As a Software Engineer, you would report directly to the Lead Engineer in New York. You'll help define technical standards and drive the architectural and engineering vision in a very dynamic environment. You'll translate business needs into live features and partner directly with business owners who recognize the value of technology. You will have a strong voice in product and ownership over coding, QA, and production support.

Our ideal candidate has experience developing and supporting a wide range of software systems and is self-sufficient, a quick learner, and a strong individual contributor with solid quantitative, analytical, strategic-thinking and problem-solving skills.

REQUIRED QUALIFICATIONS

·        Bachelor's Degree in Computer Science or a related field.

·        Hands-on development in SQL.

·        Solid understanding of internet technologies and protocols.

·        Experience developing APIs and web services in a Unix environment.

·        Analytical thinking and problem solving.

·        Proficient with back-end database work, specifically with MySQL.

·        Professional ETL and Python experience.

RESPONSIBILITIES

·        Design, develop, test, and maintain software and database systems that support the Ad-Sales platform.

·        Design solutions and technical specifications, including automated frameworks.

·        Work with business users and the marketing department to ensure understanding of business processes and their proper implementation in software.

·        Ensure software follows best practices from a coding, testing and performance perspective.

·        Create management methods and systems to efficiently access information stored in the database.
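
To make the ETL expectation above concrete, here is a minimal Python sketch of a daily roll-up job against MySQL. The connection details, table names and columns are hypothetical; the posting specifies Python, SQL and MySQL but not this particular pipeline.

    # Hypothetical daily ETL roll-up (table and column names invented for
    # illustration; not the client's actual schema).
    import pymysql

    def run_daily_rollup(conn):
        with conn.cursor() as cur:
            # Extract + transform: aggregate raw ad events per campaign and day.
            cur.execute(
                """
                SELECT campaign_id, DATE(event_time) AS day, COUNT(*) AS impressions
                FROM raw_impressions
                GROUP BY campaign_id, DATE(event_time)
                """
            )
            rows = cur.fetchall()
            # Load: upsert the aggregates into the reporting table.
            cur.executemany(
                """
                INSERT INTO daily_impressions (campaign_id, day, impressions)
                VALUES (%s, %s, %s)
                ON DUPLICATE KEY UPDATE impressions = VALUES(impressions)
                """,
                rows,
            )
        conn.commit()

    conn = pymysql.connect(host="localhost", user="etl", password="...", database="adsales")
    run_daily_rollup(conn)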

NICE TO HAVE QUALIFICATIONS

·        Amazon Elasticsearch

·        JavaScript & Node.js

·        Data Analysis: SciPy, R, Tableau

Expedia, Inc.
  • Chicago, IL

Are you fascinated by data and building robust data pipelines which process massive amounts of data at scale and speed to provide crucial insights to the end customer?  Are you passionate about making sure customers have the information they need to get the most out of every product they use? Are you ready to help people go places?  This is exactly what we, the Lodging Data Tech (LDT) group in Expedia, do. Our mission is “transforming Expedia’s lodging data assets into Data Products that deliver intelligence and real-time insights for our customers”. We work on creating data assets and products to support a variety of applications which are used by 1000+ market managers, analysts, and external hotel partners.


Our work spans a wide range of datasets, such as lodging booking, clickstream, and web-scrape data, across a diverse technology stack ranging from Teradata and MS SQL Server to Hadoop, Spark, Qubole and AWS. We are looking for passionate, creative and innately curious data engineers to join a new team in Chicago to build a unified data service that will power the data needs of all partner-facing applications in the lodging line of business.


As a Software Dev Engineer II, you are involved in all aspects of software development, including participating in technical designs, implementation, functional analysis, and releases for mid-to-large-sized projects.


What you’ll do with us:

  • Develop, design, debug, and modify components of software applications and tools.

  • Understand business requirements; perform source-to-target data mapping, and design and implement ETL workflows and data pipelines on the cloud using Big Data frameworks and/or RDBMS/ETL tools (see the sketch after this list).

  • Support and solve data and/or system issues as needed.

  • Prototype creative solutions quickly by developing minimum viable products, and work with seniors and peers in crafting and implementing the technical vision of the team.

  • Communicate and work effectively with geographically distributed, multi-functional teams.

  • Participate in code reviews to assess overall code quality and flexibility.

  • Resolve problems and roadblocks as they occur with peers and unblock junior members of the group; follow through on details and drive issues to closure.

  • Define, develop and maintain artifacts like technical design or partner documentation.

  • Drive continuous improvement in software and the development process within an agile development team.

  • Participate in user story creation in collaboration with the team.
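
Since the posting names Spark and cloud deployment, a minimal PySpark sketch of the source-to-target mapping step might look like the following. The paths, columns and target schema are hypothetical, invented purely for illustration.

    # Hypothetical source-to-target ETL step in PySpark (paths and column
    # names are invented, not an actual Expedia schema).
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("lodging-bookings-etl").getOrCreate()

    # Extract: raw booking events landed on cloud storage.
    raw = spark.read.json("s3://example-bucket/raw/bookings/")

    # Transform: map source fields onto the target schema and aggregate.
    daily = (
        raw.select(
            F.col("hotel_id"),
            F.to_date("checkin_ts").alias("checkin_date"),
            F.col("gross_booking_value").cast("double").alias("gbv"),
        )
        .groupBy("hotel_id", "checkin_date")
        .agg(F.sum("gbv").alias("daily_gbv"), F.count("*").alias("bookings"))
    )

    # Load: write a partitioned table for downstream partner-facing apps.
    daily.write.mode("overwrite").partitionBy("checkin_date").parquet(
        "s3://example-bucket/curated/daily_bookings/"
    )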


Who you are:

  • Bachelor’s or master’s degree in computer science or a related major, and/or equivalent work experience.

  • Experience using code-versioning tools, e.g., Git.

  • Experience in Agile/Scrum software development practices.

  • Effective verbal and written communication skills, with the ability to present complex technical information clearly and concisely.

  • 3-7+ years’ experience in software engineering, specifically on databases, Big Data or data-warehouse projects.

  • Proficient knowledge of SQL and database development (T-SQL/PL-SQL), with some experience in data modelling.

  • Experience working with a Big Data framework like Hadoop or Spark.

  • Experience with at least one MPP database system such as Teradata, Redshift, DB2, Azure SQL Data Warehouse or Greenplum.

  • Proficient in at least one programming language such as Python, Java or Scala in a Unix/Linux environment.

  • Knowledge of or experience working with AWS and AWS services like Redshift, EMR and AWS Lambda is a plus.

  • Prior experience working with NoSQL stores (HBase, ElasticSearch, Cassandra, MongoDB) is a plus.

  • Familiarity with the e-commerce or travel industry.


Why join us:


Expedia Group recognizes our success is dependent on the success of our people. We are the world's travel platform, made up of the most knowledgeable, passionate, and creative people in our business. Our brands recognize the power of travel to break down barriers and make people's lives better – that responsibility inspires us to be the place where exceptional people want to do their best work, and to provide them the tools to do so.
Whether you're applying to work in engineering or customer support, marketing or lodging supply, at Expedia Group we act as one team, working towards a common goal; to bring the world within reach. We relentlessly strive for better, but not at the cost of the customer. We act with humility and optimism, respecting ideas big and small. We value diversity and voices of all volumes. We are a global organization but keep our feet on the ground so we can act fast and stay simple. Our teams also have the chance to give back on a local level and make a difference through our corporate social responsibility program, Expedia Cares.


Our family of travel brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Egencia®, trivago®, HomeAway®, Orbitz®, Travelocity®, Wotif®, lastminute.com.au®, ebookers®, CheapTickets®, Hotwire®, Classic Vacations®, Expedia® Media Solutions, CarRentals.com™, Expedia Local Expert®, Expedia® CruiseShipCenters®, SilverRail Technologies, Inc., ALICE and Traveldoo®.


Expedia is committed to creating an inclusive work environment with a diverse workforce.   All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.  This employer participates in E-Verify. The employer will provide the Social Security Administration (SSA) and, if necessary, the Department of Homeland Security (DHS) with information from each new employee's I-9 to confirm work authorization.