OnlyDataJobs.com

Cloudreach
  • Atlanta, GA

Big dreams often start small. From an idea in a London pub, we have grown into a global cloud enabler that operates across 7 countries and speaks over 30 languages.


Our purpose at Cloudreach is to enable innovation. We do this by helping enterprise customers adopt and harness the power of cloud computing. We believe that the growth of a great business can only be fuelled by great people, so join us in our partnership with AWS, Microsoft and Google and help us build one of the most disruptive companies in the cloud industry. It's not your average job, because Cloudreach is not your average company.


What does the Cloud Enablement team do?

Our Cloud Enablement team provides consultative, architectural, program and engineering support for our customers' journeys to the cloud. The word 'Enablement' was chosen carefully, to encompass the idea that we support and encourage a collaborative approach to Cloud adoption: sharing best practices, helping change team culture and providing strategic support to ensure success.


How will you spend your days?

    • Build technical solutions required for optimal ingestion, transformation, and loading of data from a wide variety of data sources using open source, AWS, Azure or GCP big data frameworks and services.
    • Work with the product and software team to provide feedback surrounding data-related technical issues and support for data infrastructure needs uncovered during customer engagements / testing.
    • Understand and formulate processing pipelines of large, complex data sets that meet functional / non-functional business requirements.
    • Create and maintain optimal data pipeline architecture
    • Work alongside the Cloud Architect and Cloud Enablement Manager to implement Data Engineering solutions
    • Collaborate with the customer's data scientists and data stewards/governors during workshop sessions to uncover more detailed business requirements related to data engineering
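The ingestion/transformation/loading work described above can be sketched in a few lines. This is a toy, framework-agnostic illustration in plain Python; real engagements would use Spark, AWS Glue, Azure Data Factory or similar, and all names in the example are invented:

```python
# Toy extract-transform-load pipeline (illustrative only; production
# work would use Spark, Glue, Dataflow or similar frameworks).

def extract(rows):
    """Ingest raw records from a source, dropping unreadable ones."""
    return [r for r in rows if r is not None]

def transform(rows):
    """Normalize types and drop records missing the key field."""
    cleaned = []
    for r in rows:
        if "id" not in r:
            continue  # non-conforming record; skip it
        cleaned.append({"id": int(r["id"]), "amount": float(r.get("amount", 0))})
    return cleaned

def load(rows, sink):
    """Append cleaned records to a destination store."""
    sink.extend(rows)
    return len(rows)

warehouse = []
raw = [{"id": "1", "amount": "9.5"}, None, {"amount": "3"}, {"id": "2"}]
loaded = load(transform(extract(raw)), warehouse)  # 2 records land in the sink
```

In a real engagement each stage would be a distributed job and the sink a data warehouse or lake, but the extract/transform/load separation is the same.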


What kind of career progression can you expect?

    • You can grow into a Cloud Data Engineer Lead or a Cloud Data Architect
    • There are opportunities for relocation to our other cloudy hubs

How to stand out?

    • Experience in building scalable end-to-end data ingestion and processing solutions
    • Good understanding of data infrastructure and distributed computing principles
    • Proficient at implementing data processing workflows using Hadoop and frameworks such as Spark and Flink
    • Good understanding of data governance and how regulations such as GDPR and PCI can impact data storage and processing solutions
    • Ability to identify and select the right tools for a given problem, such as knowing when to use a relational or non-relational database
    • Working knowledge of non-relational and row/columnar-based relational databases
    • Experience with Machine Learning toolkits
    • Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala
    • This position requires up to 70% travel (M-F) in any given week, averaging 50% per year


What are our cloudy perks?

      • A MacBook Pro and iPhone or Google Pixel (your pick!)
      • Unique cloudy culture -- we work hard and play hard
      • Uncapped holidays and your birthday off
      • World-class training and career development opportunities through our own Cloudy University
      • Centrally-located offices
      • Fully stocked kitchen and team cloudy lunches, taking over a restaurant or two every Friday
      • Office amenities like pool tables and Xbox on the big screen TV
      • Working with a collaborative, social team, and growing your skills faster than you will anywhere else
      • Quarterly kick-off events with the opportunity to travel abroad
      • Full benefits and 401k match


    If you want to learn more, check us out on Glassdoor. Not if. When will you join Cloudreach?

Cloudreach
  • Atlanta, GA



Mission:

The purpose of a Cloud Data Architect is to design solutions that enable data scientists and analysts to gain insights from data using data-driven, cloud-based services and infrastructures. At Cloudreach, they are subject matter experts, responsible for stakeholder management and technical leadership on data ingestion and processing engagements. A good understanding of cloud platforms and prior experience working with big data tooling and frameworks is required.


What will you do at Cloudreach?

  • Build technical solutions required for optimal ingestion, transformation, and loading of data from a wide variety of data sources using open source, AWS, Azure or GCP big data frameworks and services.
  • Work with the product and software team to provide feedback surrounding data-related technical issues and support for data infrastructure needs uncovered during customer engagements / testing.
  • Understand and formulate processing pipelines of large, complex data sets that meet functional / non-functional business requirements.
  • Create and maintain optimal data pipeline architecture
  • Work alongside Cloud Data Engineers, Cloud System Developers and the Cloud Enablement Manager to implement Data Engineering solutions
  • Collaborate with the customer's data scientists and data stewards during workshop sessions to uncover more detailed business requirements related to data engineering
  • This position requires up to 70% travel (M-F) in any given week, averaging 50% per year


What do we look for?

The Cloud Data Architect has extensive experience working with big data tools and supporting cloud services, a pragmatic mindset focused on translating functional and non-functional requirements into viable architectures, and ideally a consultancy background, leading a highly skilled team on engagements that implement complex and innovative data solutions for clients.


In addition, the Cloud Data Architect thrives in a collaborative and agile environment with an ability to learn new concepts easily.


  • Technical skills:
    • Experience in building scalable end-to-end data ingestion and processing solutions
    • Good understanding of data infrastructure and distributed computing principles
    • Proficient at implementing data processing workflows using Hadoop and frameworks such as Spark and Flink
    • Good understanding of data governance and how regulations such as GDPR and PCI can impact data storage and processing solutions
    • Ability to identify and select the right tools for a given problem, such as knowing when to use a relational or non-relational database
    • Working knowledge of non-relational and row/columnar-based relational databases
    • Experience with Machine Learning toolkits
    • Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala
    • Demonstrable working experience
      • A successful history of manipulating, processing and extracting value from large disconnected datasets
      • Delivering production scale data engineering solutions leveraging one or more cloud services
      • Confidently taking responsibility for the technical output of a project
      • Ability to quickly pick up new skills and learn on the job
      • Comfortably working with various stakeholders such as data scientists, architects and other developers
    • Solid communication skills: You can clearly articulate the vision and confidently communicate with all stakeholder levels (Cloudreach, Customer, 3rd Parties and Partners), both verbally and in writing. You are able to identify core messages and act quickly and appropriately.


    What are our cloudy perks?

    • A MacBook Pro and smartphone.
    • Unique cloudy culture -- we work hard and play hard.
    • Uncapped holidays and your birthday off.
    • World-class training and career development opportunities through our own Cloudy University.
    • Centrally-located offices.
    • Fully stocked kitchen and team cloudy lunches, taking over a restaurant or two every Friday.
    • Office amenities like pool tables and Xbox on the big screen TV.
    • Working with a collaborative, social team, and growing your skills faster than you will anywhere else.
    • Full benefits and 401k match.
    • Kick-off events at cool locations throughout the country.
R1 RCM
  • Salt Lake City, UT

Healthcare is at an inflection point. Businesses are quickly evolving, and new technologies are reshaping the healthcare experience. We are R1 - a revenue cycle management company that is passionate about simplifying the patient experience, removing the paperwork hassle and demystifying financial obligations. Our success enables our healthcare clients to focus on what matters most - providing excellent clinical care.

Great people make great companies and we are looking for a great Lead Software Engineer to join our team in Murray, UT. Our approach to building software is disciplined and quality-focused with an emphasis on creativity, craftsmanship and commitment. We are looking for smart, quality-minded individuals who want to be a part of a high functioning, dynamic team. We believe in treating people fairly and your compensation should reflect that. Bring your passion for software engineering and help us disrupt ourselves as we build next-generation healthcare revenue cycle management products and platforms. Now is the right time to join R1!

We are seeking a highly experienced Lead Platform Engineer to join our team. The lead platform engineer will be responsible for building and maintaining a real-time, scalable and resilient platform for product teams and developers. This role will be responsible for performing and supervising the design, development and implementation of platform services, tools and frameworks. You will work with software architects, software engineers, quality engineers and other team members to design and build platform services. You will also provide technical mentorship to software engineers/developers and related groups.


Responsibilities:


  • Design and develop software solutions with an engineering mindset
  • Ensure SOLID principles and standard design patterns are applied across the organization to system architectures and implementations
  • Act as a technical subject matter expert: helping fellow engineers, demonstrating technical expertise and engaging in problem solving
  • Collaborate with stakeholders to help set and document technical standards
  • Evaluate, understand and recommend new technologies, languages or development practices that would benefit the organization
  • Participate in and/or lead technical design sessions to formulate designs that minimize maintenance, maximize code reuse and minimize testing time

Required Qualifications:


  • 8+ years' experience in building scalable, highly available, distributed solutions and services
  • 4+ years' experience in middleware technologies: Enterprise Service Bus (ESB), Message Queuing (MQ), Routing, Service Orchestration, Integration, Security, API Management, Gateways
  • Significant experience in RESTful API architectures, specifications and implementations
  • Working knowledge of progressive development processes like scrum, XP, Kanban, TDD, BDD and continuous delivery using Jenkins
  • Significant experience working with most of the following technologies/languages: Java, C#, SQL, Python, Ruby, PowerShell, .NET/Core, WebAPI, Web Sockets, Swagger, JSON, REST, GIT
  • Hands-on experience in microservices architecture, Kubernetes, Docker
  • Familiarity with Middleware platform Software AG WebMethods is a plus
  • Conceptual understanding of cloud platforms, big data and machine learning is a major plus
  • Knowledge of the healthcare revenue cycle, EMRs, practice management systems, FHIR, HL7 and HIPAA is a major plus


Desired Qualifications:


  • Strong sense of ownership and accountability for delivering well designed, high quality enterprise software on schedule
  • Prolific learner, willing to refactor your understanding of emerging patterns, practices and processes as much as you refactor your code
  • Ability to articulate and illustrate software complexities to others (both technical and non-technical audiences)
  • Friendly attitude and available to mentor others, communicating what you know in an encouraging and humble way
  • Continuous Learner


Working in an evolving healthcare setting, we use our shared expertise to deliver innovative solutions.  Our fast-growing team has opportunities to learn and grow through rewarding interactions, collaboration and the freedom to explore professional interests.


Our associates are given valuable opportunities to contribute, to innovate and create meaningful work that makes an impact in the communities we serve around the world. We also offer a culture of excellence that drives customer success and improves patient care. We believe in giving back to the community and offer a competitive benefits package. To learn more, visit: r1rcm.com

R1 RCM
  • Salt Lake City, UT

Healthcare is at an inflection point. Businesses are quickly evolving, and new technologies are reshaping the healthcare experience. We are R1 - a revenue cycle management company that is passionate about simplifying the patient experience, removing the paperwork hassle and demystifying financial obligations. Our success enables our healthcare clients to focus on what matters most - providing excellent clinical care.


Great people make great companies and we are looking for a great Application Architect to join our team in Murray, UT. Our approach to building software is disciplined and quality-focused with an emphasis on creativity, craftsmanship and commitment. We are looking for smart, quality-minded individuals who want to be a part of a high functioning, dynamic team. We believe in treating people fairly and your compensation should reflect that. Bring your passion for software engineering and help us disrupt ourselves as we build next-generation healthcare revenue cycle management products and platforms. Now is the right time to join R1!


R1 is a leading provider of technology-enabled revenue cycle management services which transform and solve challenges across health systems, hospitals and physician practices. Headquartered in Chicago, R1 is a publicly traded organization with employees throughout the US and international locations.


Our mission is to be the one trusted partner to manage revenue, so providers and patients can focus on what matters most. Our priority is to always do what is best for our clients, patients and each other. With our proven and scalable operating model, we complement a healthcare organization's infrastructure, quickly driving sustainable improvements to net patient revenue and cash flows while reducing operating costs and enhancing the patient experience.


As an Application Architect, you apply your problem-solving, critical thinking and creative design skills to architect and build software products that achieve technical, business and customer experience goals.


Responsibilities:


  • Plans software architecture through the whole technology stack, from customer-facing features, through algorithmic innovation, down to APIs and datasets.
  • Ensures that software patterns and SOLID principles are applied across the organization to system architectures and implementations.
  • Works with product management, business stakeholders and architecture leadership to understand software requirements and helps shape, estimate and plan product roadmaps.
  • Plans and implements proof of concept prototypes.
  • Directly contributes to the test-driven development of product features and functionality, identifying risks and authoring integration tests.
  • Manages and organizes build steps, continuous integration systems and staging environments.
  • Mentors other members of the development team.
  • Evaluates, understands and recommends new technologies, languages or development practices that would benefit the organization.


Required Qualifications:


    • 8+ years' experience programming enterprise web products with Visual Studio, C# and the .NET Framework.
    • Robust knowledge of software architecture principles including message and service buses, object-oriented programming, continuous integration / continuous delivery, SOLID principles, SaaS, microservices, master data management (MDM) and a deep understanding of design patterns and domain-driven design (DDD).
    • Significant experience working with most of the following technologies/languages: C#, .NET/Core, WCF, Entity Framework, UML, LINQ, JavaScript, Angular, Vue.js, HTML, CSS, Lucene, REST, WebApi, XML, TSQL, NoSQL, MS SQL Server, ElasticSearch, MongoDB, Node.js, Jenkins, Docker, Kubernetes, NUnit, NuGet, SpecFlow, GIT.
    • Working knowledge of progressive development processes like scrum, XP, kanban, TDD, BDD and continuous delivery.
    • Strong sense of ownership and accountability for delivering well designed, high quality enterprise software on schedule.
    • Prolific learner, willing to refactor your understanding of emerging patterns, practices and processes as much as you refactor your code.
    • Ability to articulate and illustrate software complexities to others (both technical and non-technical audiences).
    • Friendly attitude and available to mentor others, communicating what you know in an encouraging and humble way.
    • Experience working with globally distributed teams.
    • Knowledge of the healthcare revenue cycle, EMRs, practice management systems, FHIR, HL7 or HIPAA is a major plus.



Impetus
  • Dallas, TX

    Job description:
    As a Big Data Engineer, you will have the opportunity to make a significant impact, both to our organization and those of our Fortune 500 clients. You will work directly with clients at the intersection of business and technology. You will leverage your experience with Hadoop and software engineering to help our clients use insights gleaned from their data to drive value.

    You will also be given substantial opportunities to develop and expand your technical skillset with emerging Big Data technologies so you can continually innovate, learn, and hit the gas pedal on your career.



    Responsibilities (but not limited to):
    Implementation of various solutions arising out of large-scale data processing (GBs/PBs) over various NoSQL, Hadoop and MPP-based products
    Active participation in various architecture and design calls with Big Data customers
    Working with Sr. Architects and providing implementation details to offshore teams
    Responsible for timely, quality deliveries
    Practical understanding of Hadoop, Hive and Spark

    1+ years of Spark experience.

    Experience: 6 to 12 Years

    Desired Skills and Experience:
    Spark, Hadoop, Java

Glocomms
  • Dallas, TX

Data Scientist
Dallas, TX
Investment Bank
Compensation: $160,000-$190,000

(Unlimited PTO, Remote Work Options, Daily Catered Lunches)

Do you enjoy solving challenging puzzles? Protecting critical networks from cyber-attacks? Our Engineers are innovators and problem-solvers, building solutions in risk management, big data, mobile and more. We look for creative collaborators who evolve, adapt to change and thrive in a fast-paced global environment. We are leading threat and risk analysis and data science initiatives that are helping to protect the firm and our clients from information and cyber security risks. Our team equips the firm with the knowledge and tools to measure risk, identify and mitigate threats and protect against unauthorized disclosure of confidential information for our clients, internal business functions, and our extended supply chain.

You will be responsible for:
Designing and integrating state-of-the-art technical solutions as a Security Analyst in our Threat Management Center
Applying statistical methodology, machine learning, and Big Data analytics for network modelling, anomaly detection, forensics, and risk management
Creating innovative methodologies for extracting key parameters from big data coming from various sensors
Utilizing machine learning, statistical data analytics, and predictive analytics to implement analytics tied to cyber security and hunting methodologies and applications
Designing, developing, testing, and delivering complex analytics in a range of programming environments on large data sets
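One of the simplest instances of the statistical anomaly-detection work listed above is z-score flagging over a traffic metric. A toy sketch (the data and threshold are invented for illustration, not the firm's actual methodology):

```python
import statistics

def zscore_anomalies(values, threshold=2.5):
    """Return indices of points whose z-score exceeds the threshold."""
    mean = statistics.fmean(values)
    stdev = statistics.pstdev(values)
    if stdev == 0:
        return []  # constant series: nothing stands out
    return [i for i, v in enumerate(values)
            if abs(v - mean) / stdev > threshold]

# e.g. bytes-per-minute on a network link, with one obvious spike
traffic = [100, 102, 98, 101, 99, 100, 103, 97, 100, 1000]
spikes = zscore_anomalies(traffic)  # flags index 9, the spike
```

Production cyber analytics would of course use richer features and models, but the shape is the same: model the baseline, flag the outliers.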

Minimum Qualifications:
6+ years industry experience focused on Data Science
Hands-on experience in Cyber Security Analytics
Proficient coding capability in at least one of the major programming languages such as Python, R, Java
Proficient in data analysis, model building and data mining
Solid foundation in natural language processing and machine learning
Significant experience with deep learning

Preferred Qualifications
Strong interpersonal and communication skills
Ability to communicate complex quantitative analysis in a clear, precise, and actionable manner
Experience with knowledge management best practices
A penchant for learning about new technologies, and creativity in applying them to business problems

Turnberry Solutions
  • Philadelphia, PA

TITLE:  Data Scientist

Years of Experience: 7+
Education Required: Bachelor's Degree or Equivalent Work Experience

Interview Details:
- Phone Interview
- Face to Face Mandatory - Video Call is not an option


Purpose:
Looking for a Data Scientist who will support operations and data architecture teams with insights gained from analyzing company data. You must be someone who can create value out of data. The ideal candidate is adept at using large data sets to find opportunities for product and process optimization, and at using models to test the effectiveness of different courses of action. Such a person proactively fetches information from various sources and analyzes it to better understand how the business performs. Additionally, they can utilize AI tools to automate certain processes.
 
This person must have strong experience using a variety of data mining/data analysis methods, using a variety of data tools, building and implementing models, using/creating algorithms and creating/running simulations. They must have a proven ability to drive business results with their data-based insights. They must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion for discovering solutions hidden in large data sets and working with stakeholders to improve business outcomes. This Data Scientist must possess the skill-sets necessary to hit the ground running and must be willing to learn about the mobile phone business while solving problems quickly and efficiently.
 
See Yourself:

  • Work with stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions.
  • Mine and analyze data from company databases to drive optimization and improvement of product development, marketing techniques and business strategies.
  • Assess the effectiveness and accuracy of new data sources and data gathering techniques.
  • Develop custom data models and algorithms to apply to data sets.
  • Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting and other business outcomes.
  • Coordinate with different functional teams to implement models and monitor outcomes.
  • Develop processes and tools to monitor and analyze model performance and data accuracy.

Position Requirements

Minimum Requirements:
  • Four-year degree in Computer Science, Math, Statistics or a related field of study.
  • 7+ continuous years of professional experience as a Data Scientist.
  • Strong problem-solving skills with an emphasis on product development.
  • Experience using statistical computer languages (R, Python, SQL, etc.) to manipulate data and draw insights from large data sets.
  • Experience working with and creating data architectures.
  • Knowledge of a variety of machine learning techniques (clustering, decision tree learning, artificial neural networks, etc.) and their real-world advantages/drawbacks.
  • Knowledge of advanced statistical techniques and concepts (regression, properties of distributions, statistical tests and proper usage, etc.) and experience with applications.
  • A drive to learn and master new technologies and techniques.
  • Coding knowledge and experience with several languages: C, C++, Java, JavaScript, etc.
  • Superb communication skills, with an emphasis on writing and interpreting abilities.
  • Excellent presentation skills. Must have the ability to convert complex data into digestible formats for non-technical business teams.
Extended Requirements:
  • Knowledge and experience in statistical and data mining techniques: GLM/Regression, Random Forest, Boosting, Trees, text mining, social network analysis, etc.
  • Experience using web services: Redshift, S3, Spark, etc.
  • Experience creating and using advanced machine learning algorithms and statistics: regression, simulation, scenario analysis, modeling, clustering, decision trees, neural networks, etc.
  • Experience analyzing data from 3rd party providers.
  • Experience with distributed data/computing tools: Map/Reduce, Hadoop, Hive, Spark, Gurobi, MySQL, etc.
  • Experience visualizing/presenting data for stakeholders.
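As a toy illustration of the regression techniques listed above, ordinary least squares for a single predictor can be written by hand (purely illustrative; in practice the role would lean on R/Python statistical libraries rather than hand-rolled code):

```python
def fit_line(xs, ys):
    """Ordinary least squares fit for y = slope * x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Perfectly linear toy data: y = 2x + 1
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])  # -> (2.0, 1.0)
```

The same covariance-over-variance idea underlies the GLM/regression tooling named above; library implementations add the diagnostics and regularization you would use on real data.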
Physical Environment and Working Conditions:
Agile open floor plan
Onsite 5 days a week

Expedia Group - Europe
  • London, UK

Expedia needs YOU!


At Hotels.com (part of the Expedia Group), ensuring that our customers find the perfect hotel is an exciting mission for our technology teams.

We are looking for a Software Development Engineer to join our awesome team of bright minds. We are building Hotels.com's data streaming and ingestion platform, reliably handling many thousands of messages every second.

Our platform enables data producers and consumers to easily share and process information. We have built and continue to improve our platform using open source projects such as: Kafka, Kubernetes and Java Spring, all running in the cloud, and we love contributing our work back to the open source community.


Who we're looking for:


Are you interested in building a world-class, fast-growing data platform in the cloud?



  • Are you keen to work with and contribute to open source projects?

  • Do you have a real passion for clean code and finding elegant solutions to problems?

  • Are you eager to learn streaming, cloud and big data technologies and keep your skills up to date?

  • If any of those are true... Hotels.com is looking for you!

  • We seek an enthusiastic, experienced and reliable engineer who enjoys getting things done.

  • You should have good communication skills and be equally comfortable clearly explaining technical problems and solutions to analysts, engineers and product managers.

  • We welcome your fresh ideas and approaches as we constantly aim to improve our development methodologies.

  • Our team has experience using a wide range of cutting edge technologies and years of streaming, cloud and big data experience.

  • We are always learning and growing, so we guarantee that you won't be bored with us!


We don’t believe in skill matching against a list of buzzwords…

However, we do believe in hiring smart, friendly and creative people, with excellent programming abilities, who are on a journey to mastery through craftsmanship. We believe great developers can learn new technologies quickly and effectively, but it wouldn't hurt if you have experience with some of the following (or a passion to learn them):

Technologies:
Kafka, Kubernetes, Spring, AWS, Spark Streaming, Hive, Flink, Docker.

Experience:
Modern core and server side Java (concurrency, streams, reactive, lambdas).
Microservice architecture, design, and standard methodologies with an eye towards scale.
Building and debugging highly scalable performant systems.
Actively contributing to Open Source Software.


What you’ll do:



  • Write clean, efficient, thoroughly tested code, backed-up with pair programming and code reviews.

  • Much of our code is Java, but we use all kinds of languages and frameworks.

  • Be part of an agile team that is continuously learning and improving.

  • Develop scalable and highly-performant distributed systems with everything this entails (availability, monitoring, resiliency).

  • Work with our business partners to flesh out and deliver on requirements in an agile manner.

  • Evolve development standards and practices.

  • Take architectural ownership for various critical components and systems.

  • Proactive problem solving at the organization level.

  • Communicate and document solutions and design decisions.

  • Build bridges between technical teams to enable valuable collaborations.


As a team we love to:



  • Favor clean code, and simple, robust architecture.

  • Openly share knowledge in order to grow and develop our team members.

  • Handle massive petabyte-scale data sets.

  • Host and attend meetups and conferences to present our work. This year we've presented at the Dataworks Summit in Berlin and the Devoxx Conference in London.

  • Contribute to Open Source. In recent years our team created Circus Train, Waggle Dance, BeeJU, CORC, Plunger, Jasvorno and contributed to Confluent, aws-alb-ingress-controller, S3Proxy, Cascading, Hive, HiveRunner, Kafka and Terraform. In addition, we are currently working towards open sourcing our streaming platform.

  • Create an inclusive and fun workplace.


Do all of this in a comfortable, modern office, with a massive roof terrace in a great location within central London!


We’ll take your career on a journey that’s flexible and right for you; recognizing and rewarding your achievements:

A conversation around flexible working and what will best fit you is encouraged at Hotels.com.
Competitive salaries and many growth opportunities within the wider global Expedia Group.
Option to attend conferences globally and enrich the technology skills you are passionate about.
Cash and Stock rewards for outstanding performance.
Extensive travel rewards and discounts for all employees, perfect for ticking some destinations off your bucket list!


We believe that a diverse and inclusive workforce is the most awesome workforce…
We believe in being Different. We seek new ideas, different ways of thinking, diverse backgrounds and approaches, because averages can lie and conformity is dangerous.
Expedia is committed to crafting an inclusive work environment with a diverse workforce. You will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.

Why join us:
Expedia Group recognizes our success is dependent on the success of our people.  We are the world's travel platform, made up of the most knowledgeable, passionate, and creative people in our business.  Our brands recognize the power of travel to break down barriers and make people's lives better – that responsibility inspires us to be the place where exceptional people want to do their best work, and to provide them the tools to do so. 


Whether you're applying to work in engineering or customer support, marketing or lodging supply, at Expedia Group we act as one team, working towards a common goal; to bring the world within reach.  We relentlessly strive for better, but not at the cost of the customer.  We act with humility and optimism, respecting ideas big and small.  We value diversity and voices of all volumes. We are a global organization but keep our feet on the ground so we can act fast and stay simple.  Our teams also have the chance to give back on a local level and make a difference through our corporate social responsibility program, Expedia Cares.


If you have a hunger to make a difference with one of the most loved consumer brands in the world and to work in the dynamic travel industry, this is the job for you.


Our family of travel brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Egencia®, trivago®, HomeAway®, Orbitz®, Travelocity®, Wotif®, lastminute.com.au®, ebookers®, CheapTickets®, Hotwire®, Classic Vacations®, Expedia® Media Solutions, CarRentals.com™, Expedia Local Expert®, Expedia® CruiseShipCenters®, SilverRail Technologies, Inc., ALICE and Traveldoo®.


We’re excited for you to make Expedia an even more awesome place to work!
So what are you waiting for? Apply now and join us on our journey to become the #1 travel company in the world!



Expedia is committed to creating an inclusive work environment with a diverse workforce. All qualified applicants will receive consideration for employment without regard to race, religion, gender, sexual orientation, national origin, disability or age.

StubHub
  • San Francisco, CA
JOB ID:  R0028748

POSITION:  Senior Machine Learning Engineer

ORGANIZATION: StubHub, www.stubhub.com (an eBay company NASDAQ: EBAY)

StubHub is the world's largest ticket marketplace, enabling fans to buy and sell tickets to tens of thousands of sports, concert, theater and other live entertainment events. StubHub reinvented the ticket resale market in 2000 and continues to lead it through innovation. The company's unique online marketplace, dedicated solely to tickets, provides all fans the choice to buy or sell their tickets in a safe, convenient and highly reliable environment. 



The Opportunity:

We are seeking a creative Senior Machine Learning Engineer with expertise in predictive models, optimization algorithms, software development and cloud computing platforms. You will lead the production and deployment of models that will delight our fans by helping them buy and sell the best tickets at the right price at the right time.

Your models will have a direct impact on StubHub’s bottom line and on the efficiency of the markets on its platform. You will have access to the rich data set of the world’s largest dynamically priced ticketing marketplace. You will work with data scientists to build models that describe customer behavior, predict market trends, quantify risk and provide customers with tools that help them see their favorite performers at the best price and sell their tickets effortlessly when they can’t go. You will optimize the implementation of those models by crafting well-engineered software and by finding ways to alter or speed up the algorithms being used. You will drive impact by building frameworks to rapidly deliver data and predictions to customer-facing products.

The data science and machine learning group is a small team of extraordinary scientists and engineers based in StubHub's San Francisco headquarters. We work closely with other teams in the company to establish business goals, to incorporate existing insights into our algorithms, to uncover new opportunities for optimizing the business and to establish rigorous methods for gauging performance.



You Will


  • Lead the delivery of algorithms that predict market dynamics and power tools that allow our customers to effortlessly find the tickets they want or sell the tickets they can't use.

  • Work with data scientists to mine our rich data set to discover customer behavior trends and market dynamics.

  • Collaborate with teams throughout the business to deploy our models effectively.

  • Inspire your teammates with your knowledge of dynamic systems modeling, algorithms and modern software development techniques.

  • Use state of the art big data and statistical modeling tools in a cloud computing environment.

  • Frame complicated problems in simple terms and enjoy clarifying sophisticated solutions.

  • Bring a passion for discovery, problem solving and groundbreaking technology.


You Have


  • A degree in Computer Science or equivalent quantitative field.

  • Over 5 years of experience personally developing statistical models, algorithms and custom software.

  • A mastery of programming languages like Python, Java and C++.

  • An expertise in building performant software through optimal use of computer memory, efficient I/O techniques and algorithm choice.

  • Authoritative knowledge of modern ML libraries like TensorFlow, Keras, PyTorch and Spark.

  • A cherished memory of an amazing concert or sporting event and a dedication to help others live such experiences.

Viome
  • Santa Clara, CA

Viome is a wellness as a service company that applies artificial intelligence and machine learning to biological data – e.g., microbiome, transcriptome and metabolome data – to provide personalized recommendations for healthy living. We are an interdisciplinary and passionate team of experts in biochemistry, microbiology, medicine, artificial intelligence and machine learning.

We are seeking an experienced and energetic individual for a Senior Backend Software Engineer position who can work onsite at our Santa Clara, CA, office.

Responsibilities:



  • Collaborate with AI/ML and Bioinformatics engineers to gather specific requirements to design innovative solutions

  • Support the entire application lifecycle (concept, design, test, release, and support)

  • Produce fully functional web applications

  • Develop REST APIs to support queries from web clients

  • Design database schemas to support applications

  • Develop web applications for internal scientists and operations teams

  • Architect distributed solutions to help applications scale

  • Write unit and UI tests to identify malfunctions

  • Troubleshoot and debug to optimize performance
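The schema-design and API-query responsibilities above can be sketched in a few lines. The tables and columns below are invented for illustration only; Viome's listed stack uses PostgreSQL, with the stdlib sqlite3 module standing in here:

```python
import sqlite3

# Hypothetical schema sketch: users and their test-kit samples, as an internal
# results application might model them (all names are invented).
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        id INTEGER PRIMARY KEY,
        email TEXT UNIQUE NOT NULL
    )
""")
conn.execute("""
    CREATE TABLE samples (
        id INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL REFERENCES users(id),
        kit_type TEXT NOT NULL,
        received_at TEXT NOT NULL
    )
""")
conn.execute("INSERT INTO users (id, email) VALUES (1, 'user@example.com')")
conn.execute(
    "INSERT INTO samples (user_id, kit_type, received_at) "
    "VALUES (1, 'microbiome', '2019-01-15')"
)

# The kind of query a REST endpoint might serve: all samples for one user.
rows = conn.execute(
    "SELECT kit_type, received_at FROM samples WHERE user_id = ?", (1,)
).fetchall()
print(rows)
```

The foreign-key relationship mirrors the "design database schemas to support applications" bullet; a real service would wrap the query in a REST handler and return JSON.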



Qualifications:



  • BS/MS degree in Computer Science or related field with 7+ years of relevant experience

  • Strong skills in Scala, Core Java, Spring Boot, microservices, Akka/Play Framework, HTML, CSS, JavaScript, React, Redux and PostgreSQL

  • Advanced knowledge of build scripts and tools such as Ansible, Jenkins, CircleCI, Apache Maven, Gradle, AWS CodeDeploy and similar

  • Proven experience with relational database design, object-oriented programming principles, event-driven design principles, and distributed processing design principles

  • Solid understanding of information security standards and methodologies

  • Familiarity with cloud technologies such as AWS and distributed processing technologies such as MapReduce, Hadoop and Spark is a big plus

  • Previous experience scaling large backends (1M+ users) is a big plus



Viome is a 130+ person start-up offering a successful commercial product that has generated high demand. With offices in California, New Mexico, New York, and Washington, we are looking to hire team members capable of working in dynamic environments, who have a positive attitude and enjoy collaboration. If you have the skills and are excited about Viome’s mission, we’d love to hear from you.

KI labs GmbH
  • München, Germany
  • Salary: €70k - 95k

About Us


At KI labs we design and build state-of-the-art software and data products and solutions for the major brands of Germany and Europe. We aim to push the status quo of technology in corporations, with special focus areas of software, data and culture. Inside, we are a team of software developers, designers, product managers and data scientists who are passionate about building the products of the future, today. We believe in open source and independent teams, follow Agile practices and the lean startup method, and aim to share this culture with our clients. We are mainly located in Munich and, recently, Lisbon.


Your Responsibilities



  • Lead and manage a team of talented Data Engineers and Data Scientists in a diverse environment.

  • Ensure success of your team members and foster their career growth.

  • Ensure that the technology stacks, infrastructure, software architecture and development methods we use provide for an efficient project delivery and sustainable company growth.

  • Use your extensive practical expertise to help teams design, build and scale efficient and elegant data products on cloud computing platforms such as AWS, Azure, or Google Cloud Platform.

  • Enable and facilitate a data-driven culture for internal and external clients, and use advanced data pipelines to generate insights;


Skills and qualifications



  • Advanced degree in Computer Science, Engineering, Data Science, Applied Mathematics, Statistics or related fields;

  • Substantial practical experience in software and data engineering.

  • You are an authentic leader, keen on leading by example rather than by direct top-down management;

  • You are committed to operational excellence and to facilitating team communication and collaboration on a daily basis.

  • You like to see the big picture and you have a deep understanding of the building blocks of large-scale data architectures and pipelines.

  • You have a passion for data, be it crunching, transforming or visualising large data streams;

  • You have working knowledge of programming languages such as Python, Java, Scala, or Go

  • You have working knowledge of modern big data tooling and frameworks (Hadoop, Spark, Flink, Kafka, etc.), data storage systems, analytics tools, and ideally machine learning platforms.


Why work with us



  • You will have an opportunity to be at the frontline of innovation together with our prominent clients, influencing the car you drive in five years, the services you have on your flight, and the way you pay for your morning coffee.

  • Working closely with our leadership team, you will have a real chance to influence what our quickly growing company looks like in 3 years.

  • You get a challenging working environment located in the center of Munich and Lisbon and an ambitious team of individuals with unique backgrounds and expertise.

  • You get the chance to work on various interesting projects in short time frames; we do not work on maintenance or linear projects.

  • We have an open-door work culture where ideas and initiatives are encouraged.

  • We offer a performance-based competitive salary.

Accenture
  • Detroit, MI
Position: Full time
Travel: 100% travel
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation. Where you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions using machine-learning, AI, big data and analytics, cloud, mobility, robotics and the industrial internet of things. Your work will redefine the way entire industries work in every corner of the globe.
You'll be part of a team with incredible end-to-end digital transformation capabilities that shares your passion for digital technology and takes pride in making a tangible difference. If you want to contribute on an incredible array of the biggest and most complex projects in the digital space, consider a career with Accenture Digital.
Basic Qualifications
    • Bachelor's degree in Computer Science, Engineering, Technical Science or 3 years of IT/Programming experience.
    • Minimum 2 years of experience designing and implementing large-scale data pipelines for data curation and analysis, operating in production environments, using Spark, PySpark and Spark SQL with Java, Scala or Python, on premise or in the cloud (AWS, Google or Azure)
    • Minimum 1 year designing and building performant data models at scale using Hadoop, NoSQL, Graph or cloud-native data stores and services.
    • Minimum 1 year designing and building secured Big Data ETL pipelines using Talend or Informatica Big Data Editions, for data curation and analysis in large-scale production-deployed solutions.
    • Minimum 6 months of experience designing and building data models to support large scale BI, Analytics and AI solutions for Big Data.
Preferred Skills
    • Minimum 6 months of expertise in implementation with Databricks.
    • Experience in machine learning using Python (sklearn), SparkML, H2O and/or SageMaker.
    • Knowledge of Deep Learning (CNN, RNN, ANN) using TensorFlow.
    • Knowledge of automated machine learning tools (H2O, DataRobot, Google AutoML).
    • Minimum 2 years designing and implementing large-scale data warehousing and analytics solutions working with RDBMSs (e.g. Oracle, Teradata, DB2, Netezza, SAS) and understanding the challenges and limitations of these traditional solutions.
    • Minimum 1 year of experience implementing SQL on Hadoop solutions using tools like Presto, AtScale, Jethro and others.
    • Minimum 1 year of experience building data management (metadata, lineage, tracking etc.) and governance solutions for big data platforms on premise or on AWS, Google and Azure cloud.
    • Minimum 1 year re-architecting and rationalizing traditional data warehouses with Hadoop, Spark or NoSQL technologies on premise, or transitioning them to AWS or Google clouds.
    • Experience implementing data preparation technologies such as Paxata, Trifacta, Tamr for enabling self-service solutions.
    • Minimum 1 year building Business Data Catalogs or Data Marketplaces on top of a hybrid data platform containing Big Data technologies (e.g. Alation, Informatica or custom portals).
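As a rough illustration of the data-curation pipeline work listed above, here is a pure-Python stand-in for a single curation pass. In practice this logic would be expressed as PySpark DataFrame transformations; the record fields are invented for the sketch:

```python
# A minimal, pure-Python stand-in for the kind of Spark curation step the role
# describes: drop malformed rows and normalize fields before analysis.
raw_records = [
    {"id": "1", "amount": "19.99", "region": "NA "},
    {"id": "2", "amount": "bad",   "region": "emea"},   # malformed: dropped
    {"id": "3", "amount": "5.00",  "region": "APAC"},
]

def curate(records):
    """Drop rows that fail type checks and normalize the rest."""
    clean = []
    for r in records:
        try:
            amount = float(r["amount"])
        except ValueError:
            continue  # in a real pipeline, quarantine the bad row instead
        clean.append({"id": int(r["id"]),
                      "amount": amount,
                      "region": r["region"].strip().upper()})
    return clean

curated = curate(raw_records)
print(curated)
```

In a PySpark version, the same pass would be a `filter` on a type-cast column followed by column-wise normalization, so it scales to the production data volumes the posting mentions.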
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H1-B visa, F-1 visa (OPT), TN visa or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our servicemen and women.
Acxiom
  • Austin, TX
As a Senior Hadoop Administrator, you will assist leadership on projects related to Big Data technologies and provide software development support for client research projects. You will analyze the latest Big Data analytic technologies and their innovative applications in both business intelligence analysis and new service offerings, and bring these insights and best practices to Acxiom's Big Data projects. You are able to benchmark systems, analyze system bottlenecks and propose solutions to eliminate them. You will develop a highly scalable and extensible Big Data platform which enables collection, storage, modeling, and analysis of massive data sets from numerous channels. You are also a self-starter able to continuously evaluate new technologies, innovate and deliver solutions for business-critical applications.


What you will do:


  • Responsible for implementation and ongoing administration of Hadoop infrastructure
  • Provide technical leadership and collaboration with the engineering organization; develop key deliverables for Data Platform Strategy: scalability, optimization, operations, availability, roadmap.
  • Lead the platform architecture and drive it to the next level of effectiveness to support current and future requirements
  • Cluster maintenance as well as creation and removal of nodes using tools like Cloudera Manager Enterprise, etc.
  • Performance tuning of Hadoop clusters and Hadoop MapReduce routines
  • Screen Hadoop cluster job performances and capacity planning
  • Help optimize and integrate new infrastructure via continuous integration methodologies (DevOps, Chef)
  • Lead and review Hadoop log files with the help of log management technologies (ELK)
  • Provide top-level technical help desk support for the application developers
  • Diligently teaming with the infrastructure, network, database, application and business intelligence teams to guarantee high data quality, availability and security
  • Collaborating with application teams to perform Hadoop updates, patches, version upgrades when required
  • Mentor Hadoop engineers and administrators
  • Work with vendor support teams on support tasks
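One of the bullets above mentions reviewing Hadoop log files with log management technologies (ELK). As a hypothetical first-pass sketch, here is a severity tally over log4j-style lines; in a real deployment this would be an ELK query, and the log lines below are invented:

```python
import re
from collections import Counter

# Invented sample lines in the common Hadoop log4j layout: date, time, level, class, message.
sample_logs = """\
2019-03-01 10:00:01 INFO  org.apache.hadoop.hdfs.server.datanode.DataNode: heartbeat ok
2019-03-01 10:00:05 WARN  org.apache.hadoop.hdfs.server.datanode.DataNode: slow BlockReceiver
2019-03-01 10:00:09 ERROR org.apache.hadoop.yarn.server.nodemanager.NodeManager: container exited 137
"""

LEVEL_RE = re.compile(r"^\S+ \S+ (INFO|WARN|ERROR)\b")

def level_counts(text):
    """Tally log lines by severity: a quick first pass when screening cluster health."""
    counts = Counter()
    for line in text.splitlines():
        m = LEVEL_RE.match(line)
        if m:
            counts[m.group(1)] += 1
    return counts

counts = level_counts(sample_logs)
print(counts)
```

A spike in the ERROR bucket is the cue to drill into the offending component's logs, which is where the ELK stack's search and dashboards take over.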


Do you have?


  • Bachelor's degree in related field of study, or equivalent experience
  • 6+ years of Big Data Administration Experience
  • Extensive knowledge and hands-on experience of Hadoop-based data manipulation/storage technologies like HDFS, MapReduce, YARN, Spark/Kafka, HBase, Hive, Pig, Impala, R and Sentry/Ranger/Knox
  • Experience in capacity planning, cluster designing and deployment, troubleshooting and performance tuning
  • Experience supporting Data Science teams and Analytics teams on complex code deployment, debugging and performance optimization problems
  • Great operational expertise such as excellent troubleshooting skills, understanding of system's capacity, bottlenecks, core resource utilizations (CPU, OS, Storage, and Networks)
  • Experience in Hadoop cluster migrations or upgrades
  • Strong scripting skills in Perl, Python, shell scripting, and/or Ruby on Rails
  • Linux/SAN administration skills and RDBMS/ETL knowledge
  • Good experience with Cloudera, Hortonworks, and/or MapR distributions, along with monitoring/alerting tools (Nagios, Ganglia, Zenoss, Cloudera Manager)
  • Strong problem solving and critical thinking skills
  • Excellent verbal and written communication skills


What will set you apart:


  • Solid understanding and hands-on experience of Big Data on private/public cloud technologies (AWS/GCP/Azure)
  • DevOps experience (Chef, Puppet and Ansible)
  • Strong knowledge of Java/J2EE and other web technologies

 
Impetus
  • Phoenix, AZ

      Multiple Positions | Multiple Locations: Phoenix, AZ / Tampa, FL

      Employment Type: Full time or Contract


      As a Big Data Engineer, you will have the opportunity to make a significant impact, both to our organization and those of our Fortune 500 clients. You will work directly with clients at the intersection of business and technology. You will leverage your experience with Hadoop and software engineering to help our clients use insights gleaned from their data to drive value.


      You will also be given substantial opportunities to develop and expand your technical skillset with emerging Big Data technologies so you can continually innovate, learn, and hit the gas pedal on your career.



Required:
  • 4+ years of IT experience
  • Very good experience with Hadoop, Hive and Spark batch processing (streaming experience is good to have)
  • Experience with at least one NoSQL store (HBase or Cassandra) is good to have
  • Experience with Java/J2EE & Web Services, Scala
  • Writing utilities/programs to enhance product capabilities and fulfill specific customer requirements
  • Learning new technologies/solutions to solve customer problems
  • Providing feedback/learnings to the product team
ASML
  • Veldhoven, Netherlands

Introduction



When people think ‘software’, they often think of companies like Google or Microsoft. Even though ASML is classified as a hardware company, we in fact have one of the world's largest and most pioneering Java communities. The ASML Java environment is extremely attractive for prospective Java engineers because it combines big data with extreme complexity. From Hadoop retrieval to machine learning to full stack development, the possibilities are endless. At ASML, our Java teams create and implement software designs that run in the most modern semiconductor fabs in the world, helping our customers like Samsung, Intel and TSMC make computer chips faster, smaller, and more efficient. Here, we’re always pushing the boundaries of what’s possible.


We are always looking for talented Java developers who know how to apply the latest Java SE or Java EE technologies, to join the teams responsible for creating software for high volume manufacturing automation in semiconductor fabs. Could this be your next job? Apply now!



Job Mission



As a Java developer you will join one of our multinational Scrum teams to create state-of-the-art software solutions. Teams are composed of five to ten developers, a Scrum Master and a Product Owner. We are committed to following a (scaled) Agile way of working, with sprints and demos every two weeks, aiming for frequent releases of working software. In all teams we cooperate with internal and external experts from different knowledge domains to discover and build the best solutions possible. We use tools like Continuous Integration with Git, Jira and Bamboo. We move fast to help our customers reach their goals, and we strive to create reliable and well-tested software, because failures in our software stack can severely impact customers' operations.



Job Description



All these dedicated Java teams work in unison on different products and platforms across ASML. Here’s a brief description of what the different Java teams do: 



  • Create software infrastructure using Java EE, which provides access to SQL and NoSQL storage, reliably manages job queues with switch-over and fail-over features, periodically collects information from networked systems in the fab, and offers big-data-like storage and computational capabilities;

  • Create on-site solutions that continuously monitor all scanners in a customer’s domain. The server can detect system failures before they happen and identify needed corrective actions.

  • Provide industrial automation tasks that take care of unattended complex adjustments to the manufacturing process, in order to enable the highest yields in high-volume manufacturing;

  • Implement and validate algorithms that give our customers the power to reach optimal results during manufacturing;

  • Create applications that help fine-tune the manufacturing process, helping process engineers to navigate the complexities of process set-up through excellent UX design.

  • Create visualization and analytics applications for visual performance monitoring, which also help to sift through huge amounts of data to pinpoint potential issues and solutions. For these applications, we use Spotfire and R as main technologies.

  • Select and manage IT infrastructure that helps us run the software on a multi-blade server with plenty of storage. In this area, we use virtualization technologies, Linux, Python and Splunk, in addition to Java;

  • Use emerging technologies to turn vision into reality, e.g. using big data and machine learning;
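The job-queue behavior mentioned in the first bullet (reliable queues with switch-over and fail-over) can be sketched in miniature. ASML's real implementation is Java EE; this Python toy with a bounded retry count is purely illustrative, and the job names are invented:

```python
import queue

# Toy work queue: jobs that fail transiently are re-queued up to MAX_ATTEMPTS,
# a crude stand-in for the fail-over behavior a Java EE job manager provides.
jobs = queue.Queue()
for name in ["collect-metrics", "flaky-job", "store-results"]:
    jobs.put({"name": name, "attempts": 0})

MAX_ATTEMPTS = 3
completed, failed = [], []

def run(job):
    # Simulate a job that fails on its first two attempts, then succeeds.
    if job["name"] == "flaky-job" and job["attempts"] < 2:
        raise RuntimeError("transient failure")

while not jobs.empty():
    job = jobs.get()
    job["attempts"] += 1
    try:
        run(job)
        completed.append(job["name"])
    except RuntimeError:
        if job["attempts"] < MAX_ATTEMPTS:
            jobs.put(job)      # re-queue for another attempt
        else:
            failed.append(job["name"])

print(completed, failed)
```

The bounded retry keeps a permanently broken job from blocking the queue forever; a production system would add persistence and hand-off to a standby node, which is the "switch-over" half of the bullet.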


Responsibilities:



  • Designing and implementing software, working on the product backlog defined by the Product Owner;

  • Ensuring the quality of personal deliverables by designing and implementing automated tests on unit and integration levels;

  • Cooperating with other teams to ensure consistent implementation of the architecture, and agreeing on interfaces and timing of cross-team deliveries;

  • Troubleshooting, analyzing, and solving integration issues both from internal alpha and beta tests as well as those reported by our customers;

  • Writing or updating product documentation in accordance with company processes;

  • Suggesting improvements to our technical solutions and way of working, and implementing them in alignment with your team and their stakeholders.


Main technologies: Java SE and EE ranging from 6 to the latest version. JUnit, Mockito, XML, SQL, Linux, Hibernate, Git, JIRA.



Education



A relevant BSc or MSc in the area of IT, electronics or computer engineering.



Experience



If you already have Java software development experience in the high-tech industry, you should have the following experience:



  • At least 4 years of Java SE or Java EE development experience;

  • Design and development of server-side software using object-oriented paradigm;

  • Creation of automated unit and integration tests using mocks and stubs;

  • Working with Continuous Integration;

  • Working as a part of a Scrum team;

  • Experience with the following is a plus:

  • Creation of web-based user interface;

  • Affinity with math, data science or machine learning.

  • Experience with Hadoop, Spark, and MongoDB is nice to have.




Personal skills




  • First of all, you’re passionate about technology and are excited by the idea that your work will impact millions of end-users worldwide;

  • You’re analytical, and product- and quality-oriented;

  • You like to investigate issues, and you’re a creative problem solver;

  • You’re open-minded, you like to discuss technical challenges, and you want to push the boundaries of technology;

  • You’re an innovator and you constantly seek to improve your knowledge and your work;

  • You take ownership and you support your team - you’re the backbone of your group;

  • You’re client and quality oriented – you don’t settle for second-best solutions, but strive to find the best ways to acquire top results.


Important information


So, you’ve read the vacancy and are about to click the “apply for this job” button. That’s great! To make everything go as smoothly as possible, we would really appreciate it if you could take a look at the
following tips from us.
Although you have the option to upload your LinkedIn profile during the application process, we would like to ask you to upload a detailed CV and a personalized cover letter. Adding your LinkedIn, GitHub, or any other relevant profile in your CV is always a good idea!

Before uploading your CV, can you please make sure that the following information is present:



  • Short information about your experience: company / sector / domain / product.

  • Context of the project (how big / complex were the projects / what were the goals).

  • What was the size of the team / who was involved (think of developers, testers, architect, product owners, scrum master) / what was your role in the team.

  • Which languages / tools / frameworks / versions (recent) were used during the project (full stack, server side, client side).

  • Which projects used an agile methodology.

  • What were the results of the projects (as a team and individually)  / your contribution and achievements.

Accenture
  • Minneapolis, MN
Position: Full time
Travel: 100% travel
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation. Where you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions using machine-learning, AI, big data and analytics, cloud, mobility, robotics and the industrial internet of things. Your work will redefine the way entire industries work in every corner of the globe.
You'll be part of a team with incredible end-to-end digital transformation capabilities that shares your passion for digital technology and takes pride in making a tangible difference. If you want to contribute on an incredible array of the biggest and most complex projects in the digital space, consider a career with Accenture Digital.
Basic Qualifications
    • Bachelor's degree in Computer Science, Engineering, Technical Science or 3 years of IT/Programming experience.
    • Minimum 2 years of experience designing and implementing large-scale data pipelines for data curation and analysis, operating in production environments, using Spark, PySpark and Spark SQL with Java, Scala or Python, on premise or in the cloud (AWS, Google or Azure)
    • Minimum 1 year designing and building performant data models at scale using Hadoop, NoSQL, Graph or cloud-native data stores and services.
    • Minimum 1 year designing and building secured Big Data ETL pipelines using Talend or Informatica Big Data Editions, for data curation and analysis in large-scale production-deployed solutions.
    • Minimum 6 months of experience designing and building data models to support large scale BI, Analytics and AI solutions for Big Data.
Preferred Skills
    • Minimum 6 months of expertise in implementation with Databricks.
    • Experience in machine learning using Python (sklearn), SparkML, H2O and/or SageMaker.
    • Knowledge of Deep Learning (CNN, RNN, ANN) using TensorFlow.
    • Knowledge of automated machine learning tools (H2O, DataRobot, Google AutoML).
    • Minimum 2 years designing and implementing large-scale data warehousing and analytics solutions working with RDBMSs (e.g. Oracle, Teradata, DB2, Netezza, SAS) and understanding the challenges and limitations of these traditional solutions.
    • Minimum 1 year of experience implementing SQL on Hadoop solutions using tools like Presto, AtScale, Jethro and others.
    • Minimum 1 year of experience building data management (metadata, lineage, tracking etc.) and governance solutions for big data platforms on premise or on AWS, Google and Azure cloud.
    • Minimum 1 year re-architecting and rationalizing traditional data warehouses with Hadoop, Spark or NoSQL technologies on premise, or transitioning them to AWS or Google clouds.
    • Experience implementing data preparation technologies such as Paxata, Trifacta, Tamr for enabling self-service solutions.
    • Minimum 1 year building Business Data Catalogs or Data Marketplaces on top of a hybrid data platform containing Big Data technologies (e.g. Alation, Informatica or custom portals).
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H1-B visa, F-1 visa (OPT), TN visa or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.
Accenture
  • Atlanta, GA
Position: Full time
Travel: 100% travel
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation, where you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions using machine learning, AI, big data and analytics, cloud, mobility, robotics, and the industrial internet of things. Your work will redefine the way entire industries work in every corner of the globe.
You'll be part of a team with incredible end-to-end digital transformation capabilities that shares your passion for digital technology and takes pride in making a tangible difference. If you want to contribute to an incredible array of the biggest and most complex projects in the digital space, consider a career with Accenture Digital.
Basic Qualifications
    • Bachelor's degree in Computer Science, Engineering, or a Technical Science, or 3 years of IT/programming experience.
    • Minimum 2 years of expertise designing and implementing large-scale data pipelines for data curation and analysis, operating in production environments, using Spark, PySpark, and Spark SQL with Java, Scala, or Python, on-premises or in the cloud (AWS, Google, or Azure).
    • Minimum 1 year of experience designing and building performant data models at scale using Hadoop, NoSQL, graph, or cloud-native data stores and services.
    • Minimum 1 year of experience designing and building secured big data ETL pipelines, using Talend or Informatica Big Data Editions, for data curation and analysis in large-scale production deployments.
    • Minimum 6 months of experience designing and building data models to support large-scale BI, analytics, and AI solutions for big data.
Preferred Skills
    • Minimum 6 months of implementation expertise with Databricks.
    • Experience in machine learning using Python (scikit-learn), SparkML, H2O, and/or SageMaker.
    • Knowledge of deep learning (CNNs, RNNs, ANNs) using TensorFlow.
    • Knowledge of automated machine learning tools (H2O, DataRobot, Google AutoML).
    • Minimum 2 years designing and implementing large-scale data warehousing and analytics solutions working with RDBMSs (e.g., Oracle, Teradata, DB2, Netezza, SAS), with an understanding of the challenges and limitations of these traditional solutions.
    • Minimum 1 year of experience implementing SQL-on-Hadoop solutions using tools such as Presto, AtScale, and Jethro.
    • Minimum 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for big data platforms on-premises or on the AWS, Google, or Azure clouds.
    • Minimum 1 year of experience re-architecting and rationalizing traditional data warehouses with Hadoop, Spark, or NoSQL technologies on-premises, or transitioning them to the AWS or Google clouds.
    • Experience implementing data preparation technologies such as Paxata, Trifacta, and Tamr to enable self-service solutions.
    • Minimum 1 year of experience building business data catalogs or data marketplaces on top of a hybrid data platform containing big data technologies (e.g., Alation, Informatica, or custom portals).
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H-1B visa, F-1 visa (OPT), TN visa, or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.
Accenture
  • Philadelphia, PA
Position: Full time
Travel: 100% travel
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation, where you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions using machine learning, AI, big data and analytics, cloud, mobility, robotics, and the industrial internet of things. Your work will redefine the way entire industries work in every corner of the globe.
You'll be part of a team with incredible end-to-end digital transformation capabilities that shares your passion for digital technology and takes pride in making a tangible difference. If you want to contribute to an incredible array of the biggest and most complex projects in the digital space, consider a career with Accenture Digital.
Basic Qualifications
    • Bachelor's degree in Computer Science, Engineering, or a Technical Science, or 3 years of IT/programming experience.
    • Minimum 2 years of expertise designing and implementing large-scale data pipelines for data curation and analysis, operating in production environments, using Spark, PySpark, and Spark SQL with Java, Scala, or Python, on-premises or in the cloud (AWS, Google, or Azure).
    • Minimum 1 year of experience designing and building performant data models at scale using Hadoop, NoSQL, graph, or cloud-native data stores and services.
    • Minimum 1 year of experience designing and building secured big data ETL pipelines, using Talend or Informatica Big Data Editions, for data curation and analysis in large-scale production deployments.
    • Minimum 6 months of experience designing and building data models to support large-scale BI, analytics, and AI solutions for big data.
Preferred Skills
    • Minimum 6 months of implementation expertise with Databricks.
    • Experience in machine learning using Python (scikit-learn), SparkML, H2O, and/or SageMaker.
    • Knowledge of deep learning (CNNs, RNNs, ANNs) using TensorFlow.
    • Knowledge of automated machine learning tools (H2O, DataRobot, Google AutoML).
    • Minimum 2 years designing and implementing large-scale data warehousing and analytics solutions working with RDBMSs (e.g., Oracle, Teradata, DB2, Netezza, SAS), with an understanding of the challenges and limitations of these traditional solutions.
    • Minimum 1 year of experience implementing SQL-on-Hadoop solutions using tools such as Presto, AtScale, and Jethro.
    • Minimum 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for big data platforms on-premises or on the AWS, Google, or Azure clouds.
    • Minimum 1 year of experience re-architecting and rationalizing traditional data warehouses with Hadoop, Spark, or NoSQL technologies on-premises, or transitioning them to the AWS or Google clouds.
    • Experience implementing data preparation technologies such as Paxata, Trifacta, and Tamr to enable self-service solutions.
    • Minimum 1 year of experience building business data catalogs or data marketplaces on top of a hybrid data platform containing big data technologies (e.g., Alation, Informatica, or custom portals).
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H-1B visa, F-1 visa (OPT), TN visa, or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.
Accenture
  • San Diego, CA
Position: Full time
Travel: 100% travel
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation, where you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions using machine learning, AI, big data and analytics, cloud, mobility, robotics, and the industrial internet of things. Your work will redefine the way entire industries work in every corner of the globe.
You'll be part of a team with incredible end-to-end digital transformation capabilities that shares your passion for digital technology and takes pride in making a tangible difference. If you want to contribute to an incredible array of the biggest and most complex projects in the digital space, consider a career with Accenture Digital.
Basic Qualifications
    • Bachelor's degree in Computer Science, Engineering, or a Technical Science, or 3 years of IT/programming experience.
    • Minimum 2 years of expertise designing and implementing large-scale data pipelines for data curation and analysis, operating in production environments, using Spark, PySpark, and Spark SQL with Java, Scala, or Python, on-premises or in the cloud (AWS, Google, or Azure).
    • Minimum 1 year of experience designing and building performant data models at scale using Hadoop, NoSQL, graph, or cloud-native data stores and services.
    • Minimum 1 year of experience designing and building secured big data ETL pipelines, using Talend or Informatica Big Data Editions, for data curation and analysis in large-scale production deployments.
    • Minimum 6 months of experience designing and building data models to support large-scale BI, analytics, and AI solutions for big data.
Preferred Skills
    • Minimum 6 months of implementation expertise with Databricks.
    • Experience in machine learning using Python (scikit-learn), SparkML, H2O, and/or SageMaker.
    • Knowledge of deep learning (CNNs, RNNs, ANNs) using TensorFlow.
    • Knowledge of automated machine learning tools (H2O, DataRobot, Google AutoML).
    • Minimum 2 years designing and implementing large-scale data warehousing and analytics solutions working with RDBMSs (e.g., Oracle, Teradata, DB2, Netezza, SAS), with an understanding of the challenges and limitations of these traditional solutions.
    • Minimum 1 year of experience implementing SQL-on-Hadoop solutions using tools such as Presto, AtScale, and Jethro.
    • Minimum 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for big data platforms on-premises or on the AWS, Google, or Azure clouds.
    • Minimum 1 year of experience re-architecting and rationalizing traditional data warehouses with Hadoop, Spark, or NoSQL technologies on-premises, or transitioning them to the AWS or Google clouds.
    • Experience implementing data preparation technologies such as Paxata, Trifacta, and Tamr to enable self-service solutions.
    • Minimum 1 year of experience building business data catalogs or data marketplaces on top of a hybrid data platform containing big data technologies (e.g., Alation, Informatica, or custom portals).
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H-1B visa, F-1 visa (OPT), TN visa, or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.