OnlyDataJobs.com

Accenture
  • Detroit, MI
Position: Full time
Travel: 100% travel
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation, where you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions using machine learning, AI, big data and analytics, cloud, mobility, robotics and the industrial Internet of Things. Your work will redefine the way entire industries work in every corner of the globe.
You'll be part of a team with incredible end-to-end digital transformation capabilities that shares your passion for digital technology and takes pride in making a tangible difference. If you want to contribute to an incredible array of the biggest and most complex projects in the digital space, consider a career with Accenture Digital.
Basic Qualifications
    • Bachelor's degree in Computer Science, Engineering, Technical Science, or 3 years of IT/programming experience.
    • Minimum 2 years of experience designing and implementing large-scale data pipelines for data curation and analysis, operating in production environments, using Spark, PySpark, and Spark SQL with Java, Scala or Python, on premises or in the cloud (AWS, Google or Azure); a minimal sketch follows this list.
    • Minimum 1 year of designing and building performant data models at scale using Hadoop, NoSQL, graph, or cloud-native data stores and services.
    • Minimum 1 year of designing and building secured Big Data ETL pipelines using Talend or Informatica Big Data Editions for data curation and analysis in large-scale, production-deployed solutions.
    • Minimum 6 months of experience designing and building data models to support large-scale BI, analytics and AI solutions for Big Data.
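For illustration only, a minimal PySpark sketch of the kind of curation pipeline described above; the paths, column names and app name are hypothetical assumptions, not part of the role.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("curation-sketch").getOrCreate()

    # Hypothetical raw source; any Spark-readable location works the same way.
    raw = spark.read.json("s3://example-bucket/raw/events/")

    curated = (
        raw.filter(F.col("event_type").isNotNull())        # drop malformed records
           .withColumn("event_date", F.to_date("event_ts"))
           .dropDuplicates(["event_id"])                    # keep re-runs idempotent
    )

    # Hypothetical curated sink, partitioned for downstream SQL/BI access.
    curated.write.mode("overwrite").partitionBy("event_date").parquet(
        "s3://example-bucket/curated/events/")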
Preferred Skills
    • Minimum 6 months of implementation experience with Databricks.
    • Experience in machine learning using Python (scikit-learn), Spark ML, H2O and/or SageMaker.
    • Knowledge of deep learning (CNNs, RNNs, ANNs) using TensorFlow.
    • Knowledge of automated machine learning (AutoML) tools (H2O, DataRobot, Google AutoML).
    • Minimum 2 years of designing and implementing large-scale data warehousing and analytics solutions with RDBMS platforms (e.g., Oracle, Teradata, DB2, Netezza, SAS), including an understanding of the challenges and limitations of these traditional solutions.
    • Minimum 1 year of experience implementing SQL-on-Hadoop solutions using tools such as Presto, AtScale and Jethro.
    • Minimum 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for big data platforms, on premises or on AWS, Google or Azure cloud.
    • Minimum 1 year of re-architecting and rationalizing traditional data warehouses with Hadoop, Spark or NoSQL technologies, on premises or in transition to AWS or Google cloud.
    • Experience implementing data preparation technologies such as Paxata, Trifacta or Tamr to enable self-service solutions.
    • Minimum 1 year of building Business Data Catalogs or Data Marketplaces on top of a hybrid data platform containing Big Data technologies (e.g., Alation, Informatica or custom portals).
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H-1B visa, F-1 visa (OPT), TN visa or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.
Accenture
  • Minneapolis, MN
Position: Full time
Travel: 100% travel
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation, where you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions using machine learning, AI, big data and analytics, cloud, mobility, robotics and the industrial Internet of Things. Your work will redefine the way entire industries work in every corner of the globe.
You'll be part of a team with incredible end-to-end digital transformation capabilities that shares your passion for digital technology and takes pride in making a tangible difference. If you want to contribute to an incredible array of the biggest and most complex projects in the digital space, consider a career with Accenture Digital.
Basic Qualifications
    • Bachelor's degree in Computer Science, Engineering, Technical Science, or 3 years of IT/programming experience.
    • Minimum 2 years of experience designing and implementing large-scale data pipelines for data curation and analysis, operating in production environments, using Spark, PySpark, and Spark SQL with Java, Scala or Python, on premises or in the cloud (AWS, Google or Azure).
    • Minimum 1 year of designing and building performant data models at scale using Hadoop, NoSQL, graph, or cloud-native data stores and services.
    • Minimum 1 year of designing and building secured Big Data ETL pipelines using Talend or Informatica Big Data Editions for data curation and analysis in large-scale, production-deployed solutions.
    • Minimum 6 months of experience designing and building data models to support large-scale BI, analytics and AI solutions for Big Data.
Preferred Skills
    • Minimum 6 months of implementation experience with Databricks.
    • Experience in machine learning using Python (scikit-learn), Spark ML, H2O and/or SageMaker.
    • Knowledge of deep learning (CNNs, RNNs, ANNs) using TensorFlow.
    • Knowledge of automated machine learning (AutoML) tools (H2O, DataRobot, Google AutoML).
    • Minimum 2 years of designing and implementing large-scale data warehousing and analytics solutions with RDBMS platforms (e.g., Oracle, Teradata, DB2, Netezza, SAS), including an understanding of the challenges and limitations of these traditional solutions.
    • Minimum 1 year of experience implementing SQL-on-Hadoop solutions using tools such as Presto, AtScale and Jethro.
    • Minimum 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for big data platforms, on premises or on AWS, Google or Azure cloud.
    • Minimum 1 year of re-architecting and rationalizing traditional data warehouses with Hadoop, Spark or NoSQL technologies, on premises or in transition to AWS or Google cloud.
    • Experience implementing data preparation technologies such as Paxata, Trifacta or Tamr to enable self-service solutions.
    • Minimum 1 year of building Business Data Catalogs or Data Marketplaces on top of a hybrid data platform containing Big Data technologies (e.g., Alation, Informatica or custom portals).
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H-1B visa, F-1 visa (OPT), TN visa or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.
Accenture
  • Atlanta, GA
Position: Full time
Travel: 100% travel
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation, where you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions using machine learning, AI, big data and analytics, cloud, mobility, robotics and the industrial Internet of Things. Your work will redefine the way entire industries work in every corner of the globe.
You'll be part of a team with incredible end-to-end digital transformation capabilities that shares your passion for digital technology and takes pride in making a tangible difference. If you want to contribute to an incredible array of the biggest and most complex projects in the digital space, consider a career with Accenture Digital.
Basic Qualifications
    • Bachelor's degree in Computer Science, Engineering, Technical Science, or 3 years of IT/programming experience.
    • Minimum 2 years of experience designing and implementing large-scale data pipelines for data curation and analysis, operating in production environments, using Spark, PySpark, and Spark SQL with Java, Scala or Python, on premises or in the cloud (AWS, Google or Azure).
    • Minimum 1 year of designing and building performant data models at scale using Hadoop, NoSQL, graph, or cloud-native data stores and services.
    • Minimum 1 year of designing and building secured Big Data ETL pipelines using Talend or Informatica Big Data Editions for data curation and analysis in large-scale, production-deployed solutions.
    • Minimum 6 months of experience designing and building data models to support large-scale BI, analytics and AI solutions for Big Data.
Preferred Skills
    • Minimum 6 months of implementation experience with Databricks.
    • Experience in machine learning using Python (scikit-learn), Spark ML, H2O and/or SageMaker.
    • Knowledge of deep learning (CNNs, RNNs, ANNs) using TensorFlow.
    • Knowledge of automated machine learning (AutoML) tools (H2O, DataRobot, Google AutoML).
    • Minimum 2 years of designing and implementing large-scale data warehousing and analytics solutions with RDBMS platforms (e.g., Oracle, Teradata, DB2, Netezza, SAS), including an understanding of the challenges and limitations of these traditional solutions.
    • Minimum 1 year of experience implementing SQL-on-Hadoop solutions using tools such as Presto, AtScale and Jethro.
    • Minimum 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for big data platforms, on premises or on AWS, Google or Azure cloud.
    • Minimum 1 year of re-architecting and rationalizing traditional data warehouses with Hadoop, Spark or NoSQL technologies, on premises or in transition to AWS or Google cloud.
    • Experience implementing data preparation technologies such as Paxata, Trifacta or Tamr to enable self-service solutions.
    • Minimum 1 year of building Business Data Catalogs or Data Marketplaces on top of a hybrid data platform containing Big Data technologies (e.g., Alation, Informatica or custom portals).
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H-1B visa, F-1 visa (OPT), TN visa or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.
Accenture
  • Philadelphia, PA
Position: Full time
Travel: 100% travel
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation, where you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions using machine learning, AI, big data and analytics, cloud, mobility, robotics and the industrial Internet of Things. Your work will redefine the way entire industries work in every corner of the globe.
You'll be part of a team with incredible end-to-end digital transformation capabilities that shares your passion for digital technology and takes pride in making a tangible difference. If you want to contribute to an incredible array of the biggest and most complex projects in the digital space, consider a career with Accenture Digital.
Basic Qualifications
    • Bachelor's degree in Computer Science, Engineering, Technical Science, or 3 years of IT/programming experience.
    • Minimum 2 years of experience designing and implementing large-scale data pipelines for data curation and analysis, operating in production environments, using Spark, PySpark, and Spark SQL with Java, Scala or Python, on premises or in the cloud (AWS, Google or Azure).
    • Minimum 1 year of designing and building performant data models at scale using Hadoop, NoSQL, graph, or cloud-native data stores and services.
    • Minimum 1 year of designing and building secured Big Data ETL pipelines using Talend or Informatica Big Data Editions for data curation and analysis in large-scale, production-deployed solutions.
    • Minimum 6 months of experience designing and building data models to support large-scale BI, analytics and AI solutions for Big Data.
Preferred Skills
    • Minimum 6 months of implementation experience with Databricks.
    • Experience in machine learning using Python (scikit-learn), Spark ML, H2O and/or SageMaker.
    • Knowledge of deep learning (CNNs, RNNs, ANNs) using TensorFlow.
    • Knowledge of automated machine learning (AutoML) tools (H2O, DataRobot, Google AutoML).
    • Minimum 2 years of designing and implementing large-scale data warehousing and analytics solutions with RDBMS platforms (e.g., Oracle, Teradata, DB2, Netezza, SAS), including an understanding of the challenges and limitations of these traditional solutions.
    • Minimum 1 year of experience implementing SQL-on-Hadoop solutions using tools such as Presto, AtScale and Jethro.
    • Minimum 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for big data platforms, on premises or on AWS, Google or Azure cloud.
    • Minimum 1 year of re-architecting and rationalizing traditional data warehouses with Hadoop, Spark or NoSQL technologies, on premises or in transition to AWS or Google cloud.
    • Experience implementing data preparation technologies such as Paxata, Trifacta or Tamr to enable self-service solutions.
    • Minimum 1 year of building Business Data Catalogs or Data Marketplaces on top of a hybrid data platform containing Big Data technologies (e.g., Alation, Informatica or custom portals).
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H-1B visa, F-1 visa (OPT), TN visa or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.
Accenture
  • San Diego, CA
Position: Full time
Travel: 100% travel
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation, where you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions using machine learning, AI, big data and analytics, cloud, mobility, robotics and the industrial Internet of Things. Your work will redefine the way entire industries work in every corner of the globe.
You'll be part of a team with incredible end-to-end digital transformation capabilities that shares your passion for digital technology and takes pride in making a tangible difference. If you want to contribute to an incredible array of the biggest and most complex projects in the digital space, consider a career with Accenture Digital.
Basic Qualifications
    • Bachelor's degree in Computer Science, Engineering, Technical Science, or 3 years of IT/programming experience.
    • Minimum 2 years of experience designing and implementing large-scale data pipelines for data curation and analysis, operating in production environments, using Spark, PySpark, and Spark SQL with Java, Scala or Python, on premises or in the cloud (AWS, Google or Azure).
    • Minimum 1 year of designing and building performant data models at scale using Hadoop, NoSQL, graph, or cloud-native data stores and services.
    • Minimum 1 year of designing and building secured Big Data ETL pipelines using Talend or Informatica Big Data Editions for data curation and analysis in large-scale, production-deployed solutions.
    • Minimum 6 months of experience designing and building data models to support large-scale BI, analytics and AI solutions for Big Data.
Preferred Skills
    • Minimum 6 months of implementation experience with Databricks.
    • Experience in machine learning using Python (scikit-learn), Spark ML, H2O and/or SageMaker.
    • Knowledge of deep learning (CNNs, RNNs, ANNs) using TensorFlow.
    • Knowledge of automated machine learning (AutoML) tools (H2O, DataRobot, Google AutoML).
    • Minimum 2 years of designing and implementing large-scale data warehousing and analytics solutions with RDBMS platforms (e.g., Oracle, Teradata, DB2, Netezza, SAS), including an understanding of the challenges and limitations of these traditional solutions.
    • Minimum 1 year of experience implementing SQL-on-Hadoop solutions using tools such as Presto, AtScale and Jethro.
    • Minimum 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for big data platforms, on premises or on AWS, Google or Azure cloud.
    • Minimum 1 year of re-architecting and rationalizing traditional data warehouses with Hadoop, Spark or NoSQL technologies, on premises or in transition to AWS or Google cloud.
    • Experience implementing data preparation technologies such as Paxata, Trifacta or Tamr to enable self-service solutions.
    • Minimum 1 year of building Business Data Catalogs or Data Marketplaces on top of a hybrid data platform containing Big Data technologies (e.g., Alation, Informatica or custom portals).
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H-1B visa, F-1 visa (OPT), TN visa or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.
DISYS
  • Minneapolis, MN
Client: Banking/Financial Services
Location: 100% Remote
Duration: 12 month contract-to-hire
Position Title: NLU/NLP Predictive Modeling Consultant


***Client requirements will not allow OPT/CPT candidates for this position, or any other visa type requiring sponsorship. 

This is a new team within the organization, set up specifically to perform analyses and gain insights into the "voice of the customer" through the following activities:
  • Review inbound customer emails, phone calls, survey results, etc.
  • Review unstructured "natural language" text and speech data
  • Maintain focus on customer complaint identification and routing
  • Build machine learning models to scan customer communication (emails, voice, etc.)
  • Distinguish complaints from non-complaints
  • Classify complaints into categories
  • Identify escalated/high-risk complaints, e.g. claims of bias, discrimination, bait-and-switch, lying, etc.
  • Ensure complaints are routed to the appropriate EO for special handling

Responsible for:
  • Focused on inbound retail (home mortgage/equity) emails
  • Email cleansing: removal of extraneous information (disclaimers, signatures, headers, PII)
  • Modeling: training models using state-of-the-art techniques (a minimal sketch follows this list)
  • Scoring: "productionalizing" models to be consumed by the business
  • Governance: model documentation and Q&A with the model risk group
  • Implementation of model monitoring processes
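For illustration only, a minimal Python sketch of the cleansing-plus-classification flow described above, assuming a scikit-learn stack; the regex patterns, toy emails and labels are hypothetical, not the client's actual pipeline.

    import re
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical cleansing rules: strip reply headers and trailing disclaimers.
    HEADER = re.compile(r"(?im)^(from|sent|to|subject):.*")
    DISCLAIMER = re.compile(r"(?is)confidentiality notice.*")

    def cleanse(body: str) -> str:
        return DISCLAIMER.sub("", HEADER.sub("", body)).strip()

    # Toy training data: 1 = complaint, 0 = non-complaint (illustrative only).
    emails = ["I was misled about my mortgage rate", "Please send the payoff statement"]
    labels = [1, 0]

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit([cleanse(e) for e in emails], labels)
    print(model.predict([cleanse("This bait-and-switch on fees is unacceptable")]))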

Desired Qualifications:
  • Real-world experience building/deploying predictive models, any industry (must)
  • SQL background (must)
  • Self-starter, able to excel in a fast-paced environment without much direction (must)
  • Good communication skills (must)
  • Experience in text/speech analytics (preferred)
  • Python, SAS background (preferred)
  • Linux (nice to have)
  • Spark (Scala or PySpark) (nice to have)

Perficient, Inc.
  • Dallas, TX

At Perficient, you'll deliver mission-critical technology and business solutions to Fortune 500 companies and some of the most recognized brands on the planet. And you'll do it with cutting-edge technologies, thanks to our close partnerships with the world's biggest vendors. Our network of offices across North America, as well as locations in India and China, will give you the opportunity to spread your wings, too.

We're proud to be publicly recognized as a Top Workplace year after year. This is due, in no small part, to our entrepreneurial attitude and collaborative spirit that sets us apart and keeps our colleagues impassioned, driven, and fulfilled.

About Our Data Governance Practice:


We provide exceptional data integration services in the ETL, Data Catalog, Data Quality, Data Warehouse, Master Data Management (MDM), Metadata Management & Governance space.

Perficient currently has a career opportunity for a Python Developer who resides in the vicinity of Jersey City, NJ or Dallas, TX.

Job Overview:

As a Python developer, you will participate in all aspects of the software development lifecycle, which includes estimating, technical design, implementation, documentation, testing, deployment, and support of applications developed for our clients. As a member working in a team environment, you will take direction from solution architects and leads on development activities.


Required skills:

  • 6+ years of experience in architecting, building and maintaining software platforms and large-scale data infrastructures in a commercial or open-source environment
  • Excellent knowledge of Python
  • Good knowledge of, and hands-on experience working with, quant/data Python libraries (pandas, NumPy, etc.)
  • Good knowledge of, and hands-on experience with, designing APIs in Python (using Django, Flask, etc.); a minimal sketch follows this list
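For illustration only, a minimal sketch of a pandas-backed Flask API of the kind this role involves; the endpoint, dataset and column names are hypothetical assumptions, not Perficient's actual stack.

    import pandas as pd
    from flask import Flask, jsonify

    app = Flask(__name__)
    # Hypothetical in-memory dataset standing in for a real data platform.
    trades = pd.DataFrame({"desk": ["rates", "fx", "rates"], "pnl": [1.2, -0.4, 3.1]})

    @app.route("/pnl/<desk>")
    def pnl_by_desk(desk: str):
        # Aggregate P&L for one desk; an unknown desk returns a zero total.
        subset = trades[trades["desk"] == desk]
        return jsonify({"desk": desk, "total_pnl": float(subset["pnl"].sum())})

    if __name__ == "__main__":
        app.run(port=5000)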

Nice to have skills (in the order of priority):

  • Comfort and hands-on experience with AWS cloud services (S3, EC2, EMR, Lambda, Athena, QuickSight, etc.) and EMR tools (Hive, Zeppelin, etc.)
  • Experience building and optimizing big data pipelines, architectures and data sets
  • Hands-on experience with Hadoop MapReduce or other big data technologies and pipelines (Hadoop, Spark/PySpark, MapReduce, etc.)
  • Bash Scripting
  • Understanding of Machine Learning and Data Science processes and techniques
  • Experience in Java / Scala


Perficient full-time employees receive complete and competitive benefits. We offer a collaborative work environment, competitive compensation, generous work/life opportunities, and an outstanding benefits package that includes paid time off plus holidays. In addition, all colleagues are eligible for a number of rewards and recognition programs including billable bonus opportunities. Encouraging a healthy work/life balance and providing our colleagues with great benefits are just part of what makes Perficient a great place to work.

Acxiom
  • Austin, TX
The Data Analytics Engineer leverages Acxiom and third-party software to create solutions to business problems defined by specific business requirements. As a Data Analytics Engineer, you will draw upon technical and data processing knowledge to solve moderately complex marketing and data warehousing problems on very large volumes of data.

This position can be home-based. Corporate office locations include: Downers Grove, IL; Conway, AR; Austin, TX; New York City, NY.

 

Responsibilities:


  • Understands requirements to build, enhance, or integrate data programs and processes for one or more Acxiom client solutions and/or applications. Able to read and interpret application design and functional specifications to write or enhance application code.
  • Interacts with clients/stakeholders to understand and resolve problems in a timely manner, prioritizing multiple issue responses based on the severity of the case.
  • Develops automation jobs for data orchestration, data analysis and data transfers.
  • Provides input on functional requirements and participates in code review sessions, presenting code for review. Helps accurately estimate requirements in order to deliver client solutions on time and to quality standards.
  • Utilizes standard/Acxiom methodologies to ensure overall solution and data integrity is maintained.
  • Understands Acxiom solution software. Defines solution standards, policies and procedures.
  • Identifies and diagnoses areas of maintenance and process improvement.
  • Responds to client/stakeholder problems in a timely manner, prioritizing multiple issue responses based on the severity of the case.
  • Using a relevant software language, develops and executes unit test cases and tests software applications that fulfill functional specifications. Documents and interprets test results and corrects application coding errors.
  • Draws on past technical experience to adapt to new programming languages and technologies.


What you will need:


  • 2-4 years of experience in data engineering/programming and data analytics at a large organization
  • Analytic problem-solving skills with the ability to think outside the box
  • Analytical thinker who excels at analyzing and understanding data to answer questions
  • Good communication skills: communicates ideas clearly and effectively to other members of the analytics team and to the client at multiple levels (both technical and business)
  • Excellent understanding of data concepts, data architecture, data manipulation/engineering and data engineering design
  • Able to leverage SQL or other data manipulation languages such as SAS, Python pandas, R, Spark, etc. to answer business questions effectively (see the sketch after this list)
  • Passion for considering how projects fit into the wider business picture
  • Expert programming experience in Python, shell scripting, or another object-oriented or structured programming language
  • Understanding of multiple types of programming languages in order to be adaptable (statically vs. dynamically typed, object-oriented vs. procedural)
  • Self-starter: able to work independently with little guidance
  • Multitasker: able to prioritize and deliver on multiple concurrent projects and tasks
  • Adaptable: able to adapt to diverse technical challenges and systems
  • Up to 25% travel
  • Up to 50% direct client interaction
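As a concrete illustration of the SQL-style data manipulation mentioned above, a minimal pandas sketch; the tables, columns and the business question are hypothetical.

    import pandas as pd

    # Hypothetical campaign send and response tables.
    sends = pd.DataFrame({"customer_id": [1, 2, 3], "campaign": ["A", "A", "B"]})
    responses = pd.DataFrame({"customer_id": [1, 3]})

    # "What is the response rate by campaign?" -- a join plus a grouped aggregate.
    joined = sends.merge(responses.assign(responded=1), on="customer_id", how="left")
    rate = joined.fillna({"responded": 0}).groupby("campaign")["responded"].mean()
    print(rate)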


What will set you apart:


  • Client facing consulting experience
  • Experience with 3rd party MarTech data (Email send/response, Direct Mail send/response, Prospect Lists) and/or AdTech data (Digital Ad Impressions/Activity, Social, Website activity)
  • Experience working in environments with strong data privacy and data governance
  • SAS & building macros in SAS
  • Hadoop architecture (Cloudera, Hortonworks, MapR)
  • Hive
  • Spark/PySpark
  • R
  • AWS experience
  • Building reports on Tableau or other BI Tools

 

Farfetch UK
  • London, UK

About the team:



We are a multidisciplinary team of Data Scientists and Software Engineers with a culture of empowerment, teamwork and fun. Our team is responsible for large-scale and complex machine learning projects, directly providing business-critical functionality to other teams and using the latest technologies in the field.



Working collaboratively as a team and with our business colleagues, both here in London and across our other locations, you’ll be shaping the technical direction of a critically important part of Farfetch. We are a team that surrounds ourselves with talented colleagues and we are looking for brilliant Software Engineers who are open to taking on plenty of new challenges.



What you’ll do:



Our team works with vast quantities of messy data, such as unstructured text and images collected from the internet, applying machine learning techniques, such as deep learning, natural language processing and computer vision, to transform it into a format that can be readily used within the business. As an Engineer within our team you will help to shape and deliver the engineering components of the services that our team provides to the business. This includes the following:




  • Work with the Project Lead to help design and implement new or existing parts of the system architecture.

  • Work on surfacing the team's output through the construction of ETLs, APIs and web interfaces (see the sketch after this list).

  • Work closely with the Data Scientists within the team to enable them to produce clean, production-quality code for their machine learning solutions.
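For illustration only, a minimal sketch of one way such output could be surfaced as an asynchronous service, using Celery (one of the processing frameworks listed in the requirements below); the broker URL and the toy classifier are hypothetical, not Farfetch's actual setup.

    from celery import Celery

    # Hypothetical broker; in practice this would point at managed infrastructure.
    app = Celery("enrichment", broker="redis://localhost:6379/0")

    @app.task
    def classify_product_text(description: str) -> str:
        # Stand-in for a real NLP model call; returns a coarse category.
        return "footwear" if "sneaker" in description.lower() else "other"

    # Callers enqueue work without blocking:
    #   classify_product_text.delay("Red leather sneaker")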



Who you are:



First and foremost, you’re passionate about solving complex, challenging and interesting business problems. You have solid professional experience with Python and its ecosystem, with a thorough approach to testing.



To be successful in this role you have strong experience with:



  • Python 3

  • Web frameworks, such as Flask or Django.

  • Celery, Airflow, PySpark or other processing frameworks.

  • Docker

  • ElasticSearch, Solr or a similar technology.



Bonus points if you have experience with:



  • Web scraping frameworks, such as Scrapy.

  • Terraform, Packer

  • Google Cloud Platform services, such as Google BigQuery or Google Cloud Storage.



About the department:



We are the beating heart of Farfetch, supporting the running of the business and exploring new and exciting technologies across web, mobile and in-store to help us transform the industry. Split across three main offices - London, Porto and Lisbon - we are one of the fastest-growing teams in the business. We're committed to turning the company into the leading multi-channel platform and are constantly looking for brilliant people who can help us shape tomorrow's customer experience.





We are committed to equality of opportunity for all employees. Applications from individuals are encouraged regardless of age, disability, sex, gender reassignment, sexual orientation, pregnancy and maternity, race, religion or belief and marriage and civil partnerships.

Pinnacle Group, Inc.
  • Dallas, TX

The Development and Support Engineer is responsible for the development, installation, configuration and ongoing support of internally developed software and third-party applications and their integrations. A zero-outage platform will be maintained.

The candidate will work within a global team, liaising between account teams and application owners to ensure that requirements are met and continuity of service is maintained during changes and incidents.

The candidate will be accountable to Operational and Engineering Delivery Management and will be responsible for assigned applications in terms of implementation, incident management, problem management and change management.

The candidate may be the subject matter expert for an application area, whilst maintaining knowledge of secondary application areas to maintain continuity of service.

Out of hours on-call support is an important element of this role.

Education and Experience:-

Bachelor's degree in Computer Science or a related field, or equivalent work experience. May have a Master's degree in a related field. Often holds entry-level certification(s) in the field; may hold intermediate-level certification(s). Typically 5+ years of relevant experience.

Technical Delivery:-

Capacity Management

Monitoring, planning and implementation of application capacity

Backup for other team members in case of non-availability

Cost Management

Ensure all services are billed properly at the end of month close

Incident Management

Ensure that incidents are resolved and logged within defined SLA parameters

Perform on-call duties within a defined rota

Problem Management

Overall technical ownership for designated applications

Security, Change and Problem Management responsibilities

Change Management

Coordination with application owners and change managers

Implementation of changes within defined parameters 

Escalation

Available as an escalation point to assist team members

Identification of technical resource to support escalations

Business acumen:-

Fluent English

Ability to work in a global team

Understanding of the business environment, processes and organization.

Strong interpersonal skills

Ability to work in a matrix environment

Ability to follow project plans and work with project managers and application owners

Essential Skills:-

Ability to adapt to changing requirements

Inquisitiveness and a thirst to learn

Ability to think through and resolve complex issues

Knowledge of Change and Incident Management processes

Knowledge of Test Driven Development

Knowledge of programming techniques

Knowledge of coding standards

Practical experience of some or all of the following:

o AngularJS, JavaScript, TypeScript, RESTful APIs, Bash, Less, D3, Highcharts, NodeJS

o C#, ASP.NET, Web API, LINQ, Java, JSON

o SQL Server, SSIS, TSQL, Oracle

o Hadoop, Sqoop, Kafka, Hive, Spark, Python, PySpark, Scala and R

Experience with DevOps

o IIS, ASP.Net, Web API, Java

o Apache, AngularJS, JavaScript, TypeScript

o SQL Server, SSIS

o Hadoop, Hive, Spark, Kafka, Scala, R

Experience with GitHub and Redmine

Desirable Skills:-

ITIL Foundation

Knowledge of networking technologies

Experience of advanced programming languages such as Java

Experience with Integration Technologies, including Enterprise Service Busses, Microservices, and APIs

Accenture
  • Detroit, MI
Position: Full time
Travel: 100% travel
Recruiter: tim.bennett@accenture.com
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation, where you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions using machine learning, AI, big data and analytics, cloud, mobility, robotics and the industrial Internet of Things. Your work will redefine the way entire industries work in every corner of the globe.
You'll be part of a team with incredible end-to-end digital transformation capabilities that shares your passion for digital technology and takes pride in making a tangible difference. If you want to contribute to an incredible array of the biggest and most complex projects in the digital space, consider a career with Accenture Digital.
Basic Qualifications
    • Bachelor's degree in Computer Science, Engineering, Technical Science, or 3 years of IT/programming experience.
    • Minimum 2 years of experience designing and implementing large-scale data pipelines for data curation and analysis, operating in production environments, using Spark, PySpark, and Spark SQL with Java, Scala or Python, on premises or in the cloud (AWS, Google or Azure).
    • Minimum 1 year of designing and building performant data models at scale using Hadoop, NoSQL, graph, or cloud-native data stores and services.
    • Minimum 1 year of designing and building secured Big Data ETL pipelines using Talend or Informatica Big Data Editions for data curation and analysis in large-scale, production-deployed solutions.
    • Minimum 6 months of experience designing and building data models to support large-scale BI, analytics and AI solutions for Big Data.
Preferred Skills
    • Minimum 6 months of implementation experience with Databricks.
    • Experience in machine learning using Python (scikit-learn), Spark ML, H2O and/or SageMaker.
    • Knowledge of deep learning (CNNs, RNNs, ANNs) using TensorFlow.
    • Knowledge of automated machine learning (AutoML) tools (H2O, DataRobot, Google AutoML).
    • Minimum 2 years of designing and implementing large-scale data warehousing and analytics solutions with RDBMS platforms (e.g., Oracle, Teradata, DB2, Netezza, SAS), including an understanding of the challenges and limitations of these traditional solutions.
    • Minimum 1 year of experience implementing SQL-on-Hadoop solutions using tools such as Presto, AtScale and Jethro.
    • Minimum 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for big data platforms, on premises or on AWS, Google or Azure cloud.
    • Minimum 1 year of re-architecting and rationalizing traditional data warehouses with Hadoop, Spark or NoSQL technologies, on premises or in transition to AWS or Google cloud.
    • Experience implementing data preparation technologies such as Paxata, Trifacta or Tamr to enable self-service solutions.
    • Minimum 1 year of building Business Data Catalogs or Data Marketplaces on top of a hybrid data platform containing Big Data technologies (e.g., Alation, Informatica or custom portals).
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H-1B visa, F-1 visa (OPT), TN visa or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.
Accenture
  • Atlanta, GA
Position: Full time
Travel: 100% travel
Recruiter: tim.bennett@accenture.com
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation, where you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions using machine learning, AI, big data and analytics, cloud, mobility, robotics and the industrial Internet of Things. Your work will redefine the way entire industries work in every corner of the globe.
You'll be part of a team with incredible end-to-end digital transformation capabilities that shares your passion for digital technology and takes pride in making a tangible difference. If you want to contribute to an incredible array of the biggest and most complex projects in the digital space, consider a career with Accenture Digital.
Basic Qualifications
    • Bachelor's degree in Computer Science, Engineering, Technical Science, or 3 years of IT/programming experience.
    • Minimum 2 years of experience designing and implementing large-scale data pipelines for data curation and analysis, operating in production environments, using Spark, PySpark, and Spark SQL with Java, Scala or Python, on premises or in the cloud (AWS, Google or Azure).
    • Minimum 1 year of designing and building performant data models at scale using Hadoop, NoSQL, graph, or cloud-native data stores and services.
    • Minimum 1 year of designing and building secured Big Data ETL pipelines using Talend or Informatica Big Data Editions for data curation and analysis in large-scale, production-deployed solutions.
    • Minimum 6 months of experience designing and building data models to support large-scale BI, analytics and AI solutions for Big Data.
Preferred Skills
    • Minimum 6 months of implementation experience with Databricks.
    • Experience in machine learning using Python (scikit-learn), Spark ML, H2O and/or SageMaker.
    • Knowledge of deep learning (CNNs, RNNs, ANNs) using TensorFlow.
    • Knowledge of automated machine learning (AutoML) tools (H2O, DataRobot, Google AutoML).
    • Minimum 2 years of designing and implementing large-scale data warehousing and analytics solutions with RDBMS platforms (e.g., Oracle, Teradata, DB2, Netezza, SAS), including an understanding of the challenges and limitations of these traditional solutions.
    • Minimum 1 year of experience implementing SQL-on-Hadoop solutions using tools such as Presto, AtScale and Jethro.
    • Minimum 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for big data platforms, on premises or on AWS, Google or Azure cloud.
    • Minimum 1 year of re-architecting and rationalizing traditional data warehouses with Hadoop, Spark or NoSQL technologies, on premises or in transition to AWS or Google cloud.
    • Experience implementing data preparation technologies such as Paxata, Trifacta or Tamr to enable self-service solutions.
    • Minimum 1 year of building Business Data Catalogs or Data Marketplaces on top of a hybrid data platform containing Big Data technologies (e.g., Alation, Informatica or custom portals).
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H-1B visa, F-1 visa (OPT), TN visa or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.
Prosum
  • Phoenix, AZ

Introduction:

Decision Support Transformation is an integral part of the Finance BU CFO Group, whose objective is to make Finance more efficient and effective while continuing to provide outstanding service and value to our business partners, customers and shareholders. As part of this initiative, we are investing in an integrated technology platform as a key enabler of efficient Finance processes that drive value to the Blue Box.

The Technology Transformation team is focused on holistic technology solutions to enable Finance to drive increased value and continue our digital transformation. This team is responsible for assessing current capabilities and constraints, as well as defining and deploying the future state system/data architecture and governance.

Responsibilities:

This role is responsible for enabling and accelerating Finance's adoption of advanced analytics solutions (including Python, PySpark, Spark SQL, Hive, and other tools), as well as machine learning techniques (such as regression, classification, clustering, anomaly detection, market basket analysis, time series analysis, neural networks, and others).
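For illustration only, a minimal Python sketch of two of the techniques named above (clustering and anomaly detection) using scikit-learn; the toy spend data is a hypothetical stand-in, not Finance data.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.ensemble import IsolationForest

    # Hypothetical two-feature spend data for 200 accounts.
    rng = np.random.default_rng(0)
    spend = rng.normal(loc=[100.0, 500.0], scale=10.0, size=(200, 2))

    clusters = KMeans(n_clusters=2, n_init=10).fit_predict(spend)   # segment accounts
    outliers = IsolationForest(random_state=0).fit_predict(spend)   # -1 flags anomalies
    print(clusters[:5], outliers[:5])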

The responsibilities of this role include:

    • Understand Finance's advanced analytics and forecasting requirements
      • Work with the data scientists across the Finance BU CFO, CP&FA, DSCOE, and other teams to understand their data needs, processes, and advanced analytics/modeling requirements
      • Work with these teams to prioritize features for ongoing sprints and manage a prioritized list of data requirements / user stories
    • Solution design, deployment and user education
      • Educate the Finance data scientists within the BU CFO, CP&A, and other teams about the advanced analytics and machine learning capabilities available
      • Where required, assist with model development leveraging current tools and techniques
      • Where required, develop forecasting and other predictive models utilizing current tools and techniques
      • Lead a Community of Practice for advanced analytics across Finance to facilitate sharing knowledge and best practices, training and education, and methodology standardization
      • Collaborating with our Technologies partners, design a Finance advanced analytics solution platform, leveraging existing (and possibly new) investments
      • Act as the overall Finance Advanced Analytics Subject Matter Expert as part of the Solution Architecture team responsible for delivery of those capabilities. Ensure that the Solution Architecture teams are focused on the correct Finance priorities
      • Identify opportunities to adopt innovative technologies
      • Collaborating with our Technologies partners, Finance data scientists, and other stakeholders, deploy advanced analytics and machine learning solutions across the appropriate Finance user base
      • Ensure methodology and technique standardization across the Finance data scientist groups:
        • Leveraging correct data and model inputs
        • Leveraging common tools and standard technology platforms and methodologies
        • Leveraging work being done by other Finance teams

This position will report to the Director, Finance Advanced Analytics.    

Qualifications:

    • Previous experience working with big data, advanced analytics tools (including Python, PySpark, Spark SQL, Hive, and other tools), and machine learning techniques (such as regression, classification, clustering, anomaly detection, market basket analysis, time series analysis and neural networks).
    • Previous experience working closely with Technologies to deploy systems to re-engineer processes and solve complex business problems.
    • Transformational approach to Process efficiency and effectiveness.
    • Results driven approach with an ability to convert strategic vision into clear project outcomes.
    • Strong relationship management and conflict resolution skills with a proven track record of positively collaborating, influencing and extensively coordinating with internal and external partners.
    • Excellent communication, both written and verbal, facilitation and technical writing skills. Must be able to communicate results to stakeholders including non-finance people.
    • Must be a self-starter with an ability to drive large scale change.
Talent
  • Houston, TX

We need strong technical expertise in Data Engineering, but beyond that this is an opportunity to help us set up a best-practice data science process, to help us determine the direction of future tooling, and to be a central part of a team that will spearhead how the company engages in Data Science.


What you'll be doing:

You will work with other Data Scientists, Data Engineers, Service Designers and Project Managers on interdisciplinary projects, using Mathematics, Statistics and Machine Learning to derive structure and knowledge from raw data.

You are a highly collaborative individual who will challenge others in your team thoughtfully while prioritising impact. You believe in iterative change, experimenting with new approaches, and learning from and teaching others.


What we are looking for:

  • Strong experience with Python and relevant libraries (PySpark, pandas, etc.).
  • The ability to work across structured, semi-structured, and unstructured data, extracting information and identifying irregularities and linkages across disparate data sets.
  • Meaningful experience in distributed processing (Spark, Hadoop, EMR, etc.).
  • Deep understanding of information security principles to ensure compliant handling and management of client data.
  • Experience working collaboratively in a close-knit team and clearly communicating complex solutions.
  • Experience with traditional data warehousing / ETL tools (SAP HANA, Informatica, Talend, Pentaho, DataStage, etc.).
  • Experience and interest in cloud infrastructure (Azure, AWS, Google Cloud Platform, Databricks, etc.) and containerisation (Kubernetes, Docker, etc.).


What will make you stand out:

  • Experience programming with Julia.
  • Experience or interest in building robust and practical data pipelines on top of cloud infrastructure (Pachyderm, Kubeflow, etc).

Bonuses to include as part of your application:

  • Links to online profiles you use, such as GitHub, Twitter, etc.
  • A description of your work history (whether as a resume or a LinkedIn profile).

Talent
  • Houston, TX

We need strong technical expertise in Data Science, but beyond that this is an opportunity to help us set up a best-practice data science process, to help us determine the direction of future tooling, and to be a central part of a team that will spearhead how the company engages in Data Science.


What you'll be doing:

You will work with other Data Scientists, Data Engineers, Service Designers and Project Managers on interdisciplinary projects, using Mathematics, Statistics and Machine Learning to derive structure and knowledge from raw data.

You are a highly collaborative individual who will challenge others in your team thoughtfully while prioritising impact. You believe in iterative change, experimenting with new approaches, and learning from and teaching others.


What we are looking for: 

  • 5+ years of experience working with and analysing large data sets.
  • Expert knowledge of statistics.
  • Real-world experience in working with product and business teams to identify important questions and data needs, and to apply statistical methods to data to find answers.
  • Strong knowledge of Python and relevant libraries (PySpark, pandas, etc.).
  • The ability to communicate results clearly and a focus on driving impact.
  • An inquisitive nature in diving into data inconsistencies to pinpoint issues.
  • Proficiency at driving the collection of new data and refining existing data sources.
  • Excellent presentation and communication skills, with the ability to explain complex analytical concepts to people from other fields.


What will make you stand out:

  • A PhD or MS in a quantitative field (e.g., Economics, Statistics, Computer Science, Sciences, Engineering, Mathematics).
  • Prior experience with writing and debugging data pipelines using a distributed data framework (Spark, etc).
  • Best practices in software development and in productionising data science.


Bonuses to include as part of your application:

  • Links to online profiles you use, such as GitHub, Twitter, etc.
  • A description of the most interesting data analysis you've done, its key findings and its impact.
  • A link to or attachment of code you've written related to data analysis.
  • A description of your work history (whether as a resume or a LinkedIn profile).

Prosum
  • Phoenix, AZ

Introduction:

Decision Support Transformation is an integral part of the Finance BU CFO Group whose objective is to make Finance more efficient and effective while continuing to provide outstanding service and value to our business partners, customers and shareholders. As part of this initiative, we are investing in an integrated technology platform as a key enabler of efficient Finance processes that drive value to the Blue Box.

The Technology Transformation team is focused on holistic technology solutions to enable Finance to drive increased value and continue our digital transformation. This team is responsible for assessing current capabilities and constraints, as well as defining and deploying the future state system/data architecture and governance.

Responsibilities:

This role is responsible for enabling and accelerating Finances adoption of advanced analytics solutions (including Python, PySpark, Spark SQP, Hive, and other tools), as well as machine learning techniques (such as regression, classification, clustering, anomaly detection, market basket analysis, time series data, neural networks, and others).

The responsibilities of this role include:

    • Understand the Finances advanced analytics and forecasting requirements
      • Work with the data scientists across the Finance BU CFO, CP&FA, DSCOE, and other teams to understand their data needs, processes, and advanced analytics/modeling requirements
      • Work with these teams to prioritize features for ongoing sprints and managing a prioritized list of data requirements / user stories
    • Solution design, deployment and user education
      • Educate the Finance data scientists within the BU CFO, CP&A, and other teams about the advanced analytics and machine learning capabilities available
      • Where required, assist with model development leveraging current tools and techniques
      • Where required, develop forecasting and other predictive models utilizing current tools and techniques
      • Lead a Community of Practice for advanced analytics across Finance to facilitate sharing knowledge and best practices, training and education, and methodology standardization
      • Collaborating with our Technologies partners, design a Finance advanced analytics solution platform, leveraging existing (and possibly new) investments
      • Act as the overall Finance Advanced Analytics Subject Matter Experts as part of the Solution Architecture team responsible for delivery of those capabilities. Ensure that the Solution Architecture teams are focused on the correct Finance priorities
      • Identify opportunities to adopt innovative technologies
      • Collaborating with our Technologies partners, Finance data scientists, and other stakeholders, deploy advanced analytics and machine learning solutions across the appropriate Finance user base
      • Ensure methodology and technique standardization across the Finance data scientist groups:
        • Leveraging correct data and model inputs
        • Leveraging common tools and standard technology platforms and methodologies
        • Leveraging work being done by other Finance teams

This position will report to the Director, Finance Advanced Analytics.    

Qualifications:

    • Previous experience working with big data, advanced analytics tools (including Python, PySpark, Spark SQL, Hive, and other tools), and machine learning techniques (such as regression, classification, clustering, anomaly detection, market basket analysis, time series analysis, and neural networks).
    • Previous experience working closely with Technologies to deploy systems that re-engineer processes and solve complex business problems.
    • Transformational approach to process efficiency and effectiveness.
    • Results-driven approach with an ability to convert strategic vision into clear project outcomes.
    • Strong relationship management and conflict resolution skills, with a proven track record of positively collaborating with, influencing, and extensively coordinating with internal and external partners.
    • Excellent communication (both written and verbal), facilitation, and technical writing skills. Must be able to communicate results to stakeholders, including non-finance audiences.
    • Must be a self-starter with an ability to drive large-scale change.
Oliver Wyman Labs
  • Boston, MA

Team description


A little bit about us


Oliver Wyman’s Data Science and Engineering team works to solve our clients’ toughest analytical problems, continually pushing forward the state of the art in quantitative problem solving and raising the capabilities of the firm globally. Our team works hand-in-hand with strategy consulting teams, adding expertise where a good solution requires wrangling with a wide variety of data sources including high volume, high velocity and unstructured data; applying specialized data science and machine learning techniques; and developing reusable codebases to accelerate delivery.


Our work is fast paced and expansive. We build models, coalesce data sources, interpret results, and build services and occasionally products that enhance our clients’ ability to derive value from data and upgrade their decision-making capabilities. Our solutions feature the latest in data science tools, machine learning algorithms, AI approaches, software engineering disciplines, and analytical techniques to make an extraordinary impact on clients and societies. We operate at the intersection of exciting, progressive tech and real-world problems faced by some of the world's leading companies. We hire smart, driven people and equip them with the tools and support that they need to get their jobs done.


Our Values and Our Proposition


We believe that our culture is a key pillar of our success and our identity. We take our work seriously, but not ourselves.  We believe happiness, health, and a life outside of work are more important than work itself and are essential ingredients in professional success – no matter what the profession. Ours is a team whose members teach and take care of each other. We want not simply to continue learning and growing but to fundamentally redefine what it means to do consulting and to stretch the boundaries of what we, as a firm, are capable of doing.


Our proposition is simple:



  • You will work with people as passionate and awesome as yourself.

  • You will encounter a variety of technology, industries, projects, and clients.

  • You will deliver work that has real impact in how our clients do business.

  • We will invest in you.

  • We will help you grow your career while remaining hands-on and technical.

  • You will work in smaller, more agile, flatter teams than is the norm elsewhere.

  • You will be empowered and have more autonomy and responsibilities than almost anywhere else.

  • You will help recruit your future colleagues.

  • We offer competitive compensation and benefits.

  • You will work with peers who can learn from you and from whom you can learn.

  • You will work with people who leave egos at the door and encourage an environment of collaboration, fun, and bringing new ideas to the group.


Data Engineer


The Data Engineer is the universal translator between IT, business, software engineers, and data scientists, working directly with clients and project teams. You will work to understand the business problem being solved and provide the data required to do so, delivering at the pace of the consulting teams and iterating on the data to ensure quality as understanding crystallizes.


Our historical focus has been on high-performance SQL data marts for batch analytics, but we are now driving toward new data stores and cluster-based architectures to enable streaming analytics and scaling beyond our current terabyte-level capabilities. Your ability to tune high-performance data pipelines will help us to rapidly deploy some of the latest machine learning algorithms/frameworks and other advanced analytical techniques at scale.
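

As a rough, illustrative sketch of that batch-to-streaming shift (not a description of Oliver Wyman's actual stack), here is a minimal PySpark Structured Streaming job; the broker address and topic name are hypothetical, and Spark's Kafka connector package must be on the classpath.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("streaming-sketch").getOrCreate()

    # Read a hypothetical Kafka topic as an unbounded stream
    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
        .option("subscribe", "events")                     # hypothetical topic
        .load()
    )

    # Count events per 5-minute window, tolerating 10 minutes of late data
    counts = (
        events.withWatermark("timestamp", "10 minutes")
        .groupBy(F.window("timestamp", "5 minutes"))
        .count()
    )

    # Stream the running counts to the console
    query = counts.writeStream.outputMode("update").format("console").start()
    query.awaitTermination()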


You will serve as a keystone on our larger projects, enabling us to deliver solutions hand-in-hand with consultants, data scientists, and software engineers.


A good candidate will have:



  • Excellent communication skills (verbal and written)

  • Empathy for their colleagues and their clients

  • Signs of initiative and ability to drive things forward

  • Understanding of the overall problem being solved and what flows into it

  • Ability to create and implement data engineering solutions using modern software engineering practices

  • Ability to scale up from “laptop-scale” to “cluster scale” problems, in terms of both infrastructure and problem structure and technique

  • Ability to deliver tangible value very rapidly, working with diverse teams of varying backgrounds

  • Ability to codify best practices for future reuse in the form of accessible, reusable patterns, templates, and code bases

  • A pragmatic approach to software and technology decisions as well as prioritization and delivery

  • Ability to handle multiple workstreams and prioritize accordingly

  • Commitment to delivering value and helping clients succeed

  • Comfort working with both collocated and distributed team members across time zones

  • Comfort working with and developing coding standards

  • Willingness to travel as required for cases (0 to 40%)


Some things that make our Data Engineers effective:



  • A technical background in computer science, data science, machine learning, artificial intelligence, statistics or other quantitative and computational science

  • A compelling track record of designing and deploying large scale technical solutions, which deliver tangible, ongoing value

    • Direct experience having built and deployed complex production systems that implement modern data science methods at scale and do so robustly

    • Comfort in environments where large projects are time-boxed and therefore consequential design decisions may need to be made and acted upon rapidly

    • Fluency with cluster computing environments and their associated technologies, and a deep understanding of how to balance computational considerations with theoretical properties of potential solutions

    • Ability to context-switch, to provide support to dispersed teams which may need an “expert hacker” to unblock an especially challenging technical obstacle

    • Demonstrated ability to deliver technical projects with a team, often working under tight time constraints to deliver value

    • An ‘engineering’ mindset, willing to make rapid, pragmatic decisions to improve performance, accelerate progress, or magnify impact; recognizing that the ‘perfect’ should not be the enemy of the ‘good’

    • Comfort with working with distributed teams on code-based deliverables, using version control systems and code reviews



  • Demonstrated expertise working with and maintaining open source data analysis platforms, including but not limited to the tools below (a small illustrative sketch follows this section):

    • Pandas, Scikit-Learn, Matplotlib, TensorFlow, Jupyter and other Python data tools

    • Spark (Scala and PySpark), HDFS, Hive, Kafka and other high-volume data tools

    • Relational databases such as SQL Server, Oracle, Postgres

    • NoSQL storage tools, such as MongoDB, Cassandra, Elasticsearch, and Neo4j



  • Demonstrated fluency in modern programming languages for data science, covering a wide gamut from data storage and engineering frameworks through to machine learning libraries
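

Purely by way of illustration, a tiny end-to-end pass using two of the Python tools named above (pandas and scikit-learn); the input file and column names are hypothetical.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Hypothetical input file and columns
    df = pd.read_csv("transactions.csv")
    X, y = df[["amount", "tenure_days"]], df["churned"]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)

    # Scale features, then fit a logistic regression classifier
    clf = make_pipeline(StandardScaler(), LogisticRegression())
    clf.fit(X_train, y_train)
    print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))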

Motion Recruitment Partners
  • Houston, TX

Our client, one of the largest software companies in the world, is actively looking for a Data Engineer to join their team in Houston, TX!


The Data Engineer develops software within the client's platform to clean and transform data in support of enterprise use cases.


Data Engineer/ Developer
  •  Must be experienced with Apache Spark, in particular PySpark and Spark SQL (a brief illustrative sketch follows)
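
A minimal, illustrative sketch of that kind of PySpark / Spark SQL clean-and-transform work; the input path, table, and column names are hypothetical.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("clean-transform-sketch").getOrCreate()

    # Hypothetical raw source
    raw = spark.read.json("s3://bucket/raw/orders/")

    # Clean: de-duplicate, drop null amounts, derive a date column
    clean = (
        raw.dropDuplicates(["order_id"])
        .filter(F.col("amount").isNotNull())
        .withColumn("order_date", F.to_date("order_ts"))
    )

    # Transform with Spark SQL: daily revenue rollup
    clean.createOrReplaceTempView("orders")
    daily = spark.sql("""
        SELECT order_date, SUM(amount) AS revenue
        FROM orders
        GROUP BY order_date
    """)

    # Persist the curated output
    daily.write.mode("overwrite").parquet("s3://bucket/curated/daily_revenue/")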


Benefits & Perks

A competitive benefits package is offered, complete with health coverage, transportation benefits, accrued sick time, and a 401(k) option.

Accenture
  • Philadelphia, PA
Position: Full time
Travel: 100% travel
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation. Here you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions using machine learning, AI, big data and analytics, cloud, mobility, robotics, and the industrial internet of things. Your work will redefine the way entire industries work in every corner of the globe.
You'll be part of a team with incredible end-to-end digital transformation capabilities that shares your passion for digital technology and takes pride in making a tangible difference. If you want to contribute on an incredible array of the biggest and most complex projects in the digital space, consider a career with Accenture Digital.
Basic Qualifications
    • Bachelor's degree in Computer Science, Engineering, Technical Science or 3 years of IT/Programming experience.
    • Minimum 2 years of experience designing and implementing large-scale data pipelines for data curation and analysis, operating in production environments, using Spark, PySpark, and Spark SQL with Java, Scala, or Python, on premise or in the cloud (AWS, Google, or Azure).
    • Minimum 1 year of experience designing and building performant data models at scale using Hadoop, NoSQL, graph, or cloud-native data stores and services.
    • Minimum 1 year of experience designing and building secured Big Data ETL pipelines, using Talend or Informatica Big Data Editions, for data curation and analysis in large-scale production deployments.
    • Minimum 6 months of experience designing and building data models to support large-scale BI, analytics, and AI solutions for Big Data.
Preferred Skills
    • Minimum 6 months of implementation experience with Databricks.
    • Experience in machine learning using Python (scikit-learn), Spark ML, H2O, and/or SageMaker.
    • Knowledge of deep learning (CNN, RNN, ANN) using TensorFlow (a brief illustrative sketch follows this list).
    • Knowledge of automated machine learning tools (H2O, DataRobot, Google AutoML).
    • Minimum 2 years of experience designing and implementing large-scale data warehousing and analytics solutions with RDBMS platforms (e.g., Oracle, Teradata, DB2, Netezza, SAS), with an understanding of the challenges and limitations of these traditional solutions.
    • Minimum 1 year of experience implementing SQL-on-Hadoop solutions using tools such as Presto, AtScale, and Jethro.
    • Minimum 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for big data platforms on premise or on AWS, Google, or Azure clouds.
    • Minimum 1 year of experience re-architecting and rationalizing traditional data warehouses with Hadoop, Spark, or NoSQL technologies, on premise or transitioning to AWS or Google clouds.
    • Experience implementing data preparation technologies such as Paxata, Trifacta, or Tamr to enable self-service solutions.
    • Minimum 1 year of experience building Business Data Catalogs or Data Marketplaces on top of a hybrid data platform containing Big Data technologies (e.g., Alation, Informatica, or custom portals).
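A brief, purely illustrative sketch of the TensorFlow knowledge noted above: a small dense (ANN) binary classifier trained on synthetic data; the same Keras workflow extends to CNNs and RNNs.

    import numpy as np
    import tensorflow as tf

    # Synthetic features and labels, for illustration only
    X = np.random.rand(256, 20).astype("float32")
    y = (X.sum(axis=1) > 10).astype("float32")

    # A small feed-forward network with a sigmoid output
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(20,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    model.fit(X, y, epochs=3, batch_size=32, verbose=0)
    print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]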
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H-1B visa, F-1 visa (OPT), TN visa, or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.