OnlyDataJobs.com

Citizens Advice
  • London, UK
  • Salary: £40k - 45k

As a Database Engineer in the DevOps team here at Citizens Advice, you will help us develop and implement our data strategy. You will have the opportunity to work with both core database technologies and big data solutions.


Past


Starting from scratch, we have built a deep tech stack with AWS services at its core. We created a new CRM system, migrated a huge amount of data to AWS Aurora PostgreSQL, and used AWS RDS to run some of our business-critical databases.


You will have gained a solid background and in-depth knowledge of AWS RDS, including SQL and administration against DBMSs such as PostgreSQL, MySQL, SQL Server, DynamoDB, and Aurora. You will have dealt with data warehousing, ETL, DB mirroring/replication, and DB security mechanisms and techniques.


Present


We use AWS RDS including Aurora as the standard DB implementation for our applications. We parse data in S3 using Spark jobs and we are planning to implement a data lake solution in AWS.


Our tools and technologies include:



  • Postgres on AWS RDS

  • SQL Server for our Data Warehouse

  • Liquibase for managing the DW schema

  • Jenkins 2 for task automation

  • Spark / Parquet / AWS Glue for parsing raw data

  • Docker / docker-compose for local testing
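As a rough illustration of the raw-data parsing mentioned above (Spark / Parquet on S3), a job might read JSON event lines, normalize them, and write Parquet. This is a sketch only: the bucket paths and event schema below are hypothetical, and the normalization logic is kept in plain Python so the same function could be mapped under Spark or AWS Glue.

```python
import json


def normalize_event(raw_line):
    """Parse one raw JSON event line into a flat record (hypothetical schema)."""
    event = json.loads(raw_line)
    return {
        "user_id": str(event.get("user", {}).get("id", "")),
        "event_type": event.get("type", "unknown"),
        "timestamp": event.get("ts"),
    }


def run_job(spark):
    """Sketch of the Spark job: read raw lines from S3, write Parquet.
    The s3:// paths are placeholders, not real bucket names."""
    records = (
        spark.sparkContext.textFile("s3://raw-events/2023/*/*.json")
        .map(normalize_event)
    )
    spark.createDataFrame(records).write.mode("overwrite").parquet(
        "s3://parsed-events/parquet/"
    )
```

The pure `normalize_event` function can be unit-tested locally with docker-compose, which is how the tooling list above suggests local testing is done.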


You will be developing, supporting and maintaining automation tools to drive database, reporting and maintenance tasks.


As part of our internal engineering platform offering, R&D time will give you the opportunity to develop POC solutions to integrate with the rest of the business.


Future


You will seek continuous improvement and implement solutions to help Citizens Advice deliver digital products better and quicker.


You will be helping us implement a data lake solution to improve operations and to offer innovative services.


You will have dedicated investment time at Citizens Advice to learn new skills, technologies, research topics or work on tools that make this possible.

FCA Fiat Chrysler Automobiles
  • Detroit, MI

Fiat Chrysler Automobiles is looking to fill the full-time position of a Data Scientist. This position is responsible for delivering insights to the commercial functions in which FCA operates.


The Data Scientist is a role in the Business Analytics & Data Services (BA) department and reports through the CIO. They will play a pivotal role in the planning, execution and delivery of data science and machine learning-based projects. The bulk of the work will be in the areas of data exploration and preparation, data collection and integration, machine learning (ML) and statistical modelling, and data pipelining and deployment.

The newly hired data scientist will be a key interface between the ICT Sales & Marketing team, the Business and the BA team. Candidates need to be very much self-driven, curious and creative.

Primary Responsibilities:

    • Problem Analysis and Project Management:
      • Guide and inspire the organization about the business potential and strategy of artificial intelligence (AI)/data science
      • Identify data-driven/ML business opportunities
      • Collaborate across the business to understand IT and business constraints
      • Prioritize, scope and manage data science projects and the corresponding key performance indicators (KPIs) for success
    • Data Exploration and Preparation:
      • Apply statistical analysis and visualization techniques to various data, such as hierarchical clustering, T-distributed Stochastic Neighbor Embedding (t-SNE), principal components analysis (PCA)
      • Generate and test hypotheses about the underlying mechanics of the business process.
      • Network with domain experts to better understand the business mechanics that generated the data.
    • Data Collection and Integration:
      • Understand new data sources and process pipelines. Catalog and document their use in solving business problems.
      • Create data pipelines and assets that enable greater efficiency and repeatability of data science activities.
    • Machine Learning and Statistical Modelling:
      • Apply various ML and advanced analytics techniques to perform classification or prediction tasks
      • Integrate domain knowledge into the ML solution; for example, from an understanding of financial risk, customer journey, quality prediction, sales, marketing
      • Testing of ML models, such as cross-validation, A/B testing, bias and fairness
    • Operationalization:
      • Collaborate with ML operations (MLOps), data engineers, and IT to evaluate and implement ML deployment options
      • (Help to) integrate model performance management tools into the current business infrastructure
      • (Help to) implement champion/challenger test (A/B tests) on production systems
      • Continuously monitor execution and health of production ML models
      • Establish best practices around ML production infrastructure
    • Other Responsibilities:
      • Train other business and IT staff on basic data science principles and techniques
      • Train peers on specialist data science topics
      • Promote collaboration with the data science COE within the organization.
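The model-testing bullet above mentions cross-validation; as a generic illustration (not FCA's actual tooling), a k-fold split can be written in a few lines of plain Python:

```python
def k_fold_indices(n_samples, k=5):
    """Yield (train, validation) index lists for k-fold cross-validation."""
    # Distribute samples as evenly as possible across the k folds.
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n_samples))
        yield train, val
        start += size


# Example: 10 samples, 5 folds -> each validation fold holds 2 samples.
for train, val in k_fold_indices(10, k=5):
    assert len(val) == 2 and len(train) == 8
```

In practice a library routine (e.g. scikit-learn's `KFold`) would be used; the point is only that each sample lands in exactly one validation fold.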

Basic Qualifications:

    • A bachelor's degree in computer science, data science, operations research, statistics, applied mathematics, or a related quantitative field is required. Alternate experience and education in equivalent areas such as economics, engineering, or physics is acceptable. Experience in more than one area is strongly preferred.
    • Candidates should have three to six years of relevant project experience in successfully planning, launching, and executing data science projects, preferably in the domains of automotive or customer behavior prediction.
    • Coding knowledge and experience in several languages: for example, R, Python, SQL, Java, C++, etc.
    • Experience working across multiple deployment environments, including cloud, on-premises and hybrid, and multiple operating systems, and with containerization techniques such as Docker, Kubernetes, AWS Elastic Container Service, and others.
    • Experience with distributed data/computing and database tools: MapReduce, Hadoop, Hive, Kafka, MySQL, Postgres, DB2 or Greenplum, etc.
    • All candidates must be self-driven, curious and creative.
    • They must demonstrate the ability to work in diverse, cross-functional teams.
    • Should be confident, energetic self-starters, with strong moderation and communication skills.

Preferred Qualifications:

    • A master's degree or PhD in statistics, ML, computer science or the natural sciences, especially physics or any engineering disciplines or equivalent.
    • Experience in one or more of the following commercial/open-source data discovery/analysis platforms: RStudio, Spark, KNIME, RapidMiner, Alteryx, Dataiku, H2O, SAS Enterprise Miner (SAS EM) and/or SAS Visual Data Mining and Machine Learning, Microsoft AzureML, IBM Watson Studio or SPSS Modeler, Amazon SageMaker, Google Cloud ML, SAP Predictive Analytics.
    • Knowledge and experience in statistical and data mining techniques: generalized linear model (GLM)/regression, random forest, boosting, trees, text mining, hierarchical clustering, deep learning, convolutional neural network (CNN), recurrent neural network (RNN), T-distributed Stochastic Neighbor Embedding (t-SNE), graph analysis, etc.
    • A specialization in text analytics, image recognition, graph analysis or other specialized ML techniques such as deep learning, etc., is preferred.
    • Ideally, the candidates are adept in agile methodologies and well-versed in applying DevOps/MLOps methods to the construction of ML and data science pipelines.
    • Knowledge of industry standard BA tools, including Cognos, QlikView, Business Objects, and other tools that could be used for enterprise solutions
    • Should exhibit superior presentation skills, including storytelling and other techniques to guide and inspire and explain analytics capabilities and techniques to the organization.
FlixBus
  • Berlin, Germany

Your Tasks – Paint the world green



  • Holistic cloud-based infrastructure automation

  • Distributed data processing clusters as well as data streaming platforms based on Kafka, Flink and Spark

  • Microservice platforms based on Docker

  • Development infrastructure and QA automation

  • Continuous Integration/Delivery/Deployment


Your Profile – Ready to hop on board



  • Experience in building and operating complex infrastructure

  • Expert-level: Linux, System Administration

  • Experience with Cloud Services, Expert-Level with either AWS or GCP  

  • Experience with server and operating-system-level virtualization is a strong plus, in particular practical experience with Docker and cluster technologies like Kubernetes, AWS ECS, OpenShift

  • Mindset: "Automate Everything", "Infrastructure as Code", "Pipelines as Code", "Everything as Code"

  • Hands-on experience with "Infrastructure as Code" tools: Terraform, CloudFormation, Packer

  • Experience with provisioning / configuration management tools (Ansible, Chef, Puppet, Salt)

  • Experience designing, building and integrating systems for instrumentation, metrics/log collection, and monitoring: CloudWatch, Prometheus, Grafana, DataDog, ELK

  • At least basic knowledge in designing and implementing Service Level Agreements

  • Solid knowledge of Network and general Security Engineering

  • At least basic experience with systems and approaches for Test, Build and Deployment automation (CI/CD): Jenkins, TravisCI, Bamboo

  • At least basic hands-on DBA experience, experience with data backup and recovery

  • Experience with JVM-based build automation is a plus: Maven, Gradle, Nexus, JFrog Artifactory

American Express
  • Phoenix, AZ

Our Software Engineers not only understand how technology works, but how that technology intersects with the people who count on it every single day. Today, creative ideas, insight and new points of view are at the core of how we craft a more powerful, personal and fulfilling experience for all our customers. So if you're passionate about a career building breakthrough software and making an impact on an audience of millions, look no further.

There are hundreds of chances for you to make your mark on Technology and life at American Express. Here's just some of what you'll be doing:

    • Take your place as a core member of an Agile team driving the latest application development practices.
    • Find your opportunity to execute new technologies, write code and perform unit tests, as well as working with data science, algorithms and automation processing
    • Engage your collaborative spirit by working with fellow engineers to craft and deliver recommendations to Finance, Business, and Technical users on Finance Data Management.


Qualifications:

  

Are you up for the challenge?


    • 4+ years of Software Development experience.
    • BS or MS Degree in Computer Science, Computer Engineering, or other Technical discipline including practical experience effectively interpreting Technical and Business objectives and challenges and designing solutions.
    • Ability to effectively collaborate with Finance SMEs and partners of all levels to understand their business processes and take overall ownership of Analysis, Design, Estimation and Delivery of technical solutions for Finance business requirements and roadmaps, including a deep understanding of Finance and other LOB products and processes. Experience with regulatory reporting frameworks is preferred.
    • Hands-on expertise with application design and software development across multiple platforms, languages, and tools: Java, Hadoop, Python, Streaming, Flink, Spark, HIVE, MapReduce, Unix, NoSQL and SQL Databases is preferred.
    • Working SQL knowledge and experience working with relational databases, query authoring (SQL), including working familiarity with a variety of databases(DB2, Oracle, SQL Server, Teradata, MySQL, HBASE, Couchbase, MemSQL).
    • Experience in architecting, designing, and building customer dashboards with data visualization tools such as Tableau using accelerator database Jethro.
    • Extensive experience in application, integration, system and regression testing, including demonstration of automation and other CI/CD efforts.
    • Experience with version control software such as Git and SVN, and CI/CD testing/automation experience.
    • Proficient with Scaled Agile application development methods.
    • Deals well with ambiguous/under-defined problems; Ability to think abstractly.
    • Willingness to learn new technologies and exploit them to their optimal potential, including substantiated ability to innovate and take pride in quickly deploying working software.
    • Ability to enable business capabilities through innovation is a plus.
    • Ability to get results with an emphasis on reducing time to insights and increased efficiency in delivering new Finance product capabilities into the hands of Finance constituents.
    • Focuses on the Customer and Client with effective consultative skills across a multi-functional environment.
    • Ability to communicate effectively verbally and in writing, including effective presentation skills. Strong analytical skills, problem identification and resolution.
    • Delivering business value using creative and effective approaches
    • Possesses strong business knowledge about the Finance organization, including industry standard methodologies.
    • Demonstrates a strategic/enterprise viewpoint and business insights with the ability to identify and resolve key business impediments.


Employment eligibility to work with American Express in the U.S. is required as the company will not pursue visa sponsorship for these positions.

The HT Group
  • Austin, TX

Full Stack Engineer, Java/Scala Direct Hire Austin

Do you have a track record of building both internal- and external-facing software services in a dynamic environment? Are you passionate about introducing disruptive and innovative software solutions for the shipping and logistics industry? Are you ready to deliver immediate impact with the software you create?

We are looking for Full Stack Engineers to craft, implement and deploy new features, services, platforms, and products. If you are curious, driven, and naturally explore how to build elegant and creative solutions to complex technical challenges, this may be the right fit for you. If you value a sense of community and shared commitment, you'll collaborate closely with others in a full-stack role to ship software that delivers immediate and continuous business value. Are you up for the challenge?

Tech Tools:

  • Application stack runs entirely on Docker, frontend and backend
  • Infrastructure is 100% Amazon Web Services and we use AWS services whenever possible. Current examples: EC2, Elastic Container Service (Docker), Kinesis, SQS, Lambda and Redshift
  • Java and Scala are the languages of choice for long-lived backend services
  • Python for tooling and data science
  • Postgres is the SQL database of choice
  • Actively migrating to a modern JavaScript-centric frontend built on Node, React/Relay, and GraphQL as some of our core UI technologies

Responsibilities:

  • Build both internal and external REST/JSON services running on our 100% Docker-based application stack or within AWS Lambda
  • Build data pipelines around event-based and streaming-based AWS services and application features
  • Write deployment, monitoring, and internal tooling to operate our software with as much efficiency as we build it
  • Share ownership of all facets of software delivery, including development, operations, and test
  • Mentor junior members of the team and coach them to be even better at what they do
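For the AWS Lambda option in the first responsibility above, a minimal REST/JSON handler behind an API Gateway proxy integration might look like the following sketch. The route and payload are made up for illustration; Python is used here since the posting lists it for tooling, though backend services at this shop are Java/Scala.

```python
import json


def handler(event, context):
    """Minimal API Gateway proxy handler returning a JSON body (illustrative)."""
    # queryStringParameters is None when the request has no query string.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    body = {"message": f"hello, {name}"}
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(body),
    }
```

A proxy-integration handler must return `statusCode`, `headers`, and a string `body`; API Gateway forwards that shape back to the caller unchanged.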

Requirements:

  • Embrace the AWS + DevOps philosophy and believe this is an innovative approach to creating and deploying products and technical solutions that require software engineers to be truly full-stack
  • Have high-quality standards, pay attention to details, and love writing beautiful, well-designed and tested code that can stand the test of time
  • Have built high-quality software, solved technical problems at scale and believe in shipping software iteratively and often
  • Proficient in and have delivered software in Java, Scala, and possibly other JVM languages
  • Developed a strong command of Computer Science fundamentals
SafetyCulture
  • Surry Hills, Australia
  • Salary: A$120k - 140k

The Role



  • Be an integral member of the team responsible for designing, implementing and maintaining a distributed, big-data-capable system with high-quality components (Kafka, EMR + Spark, Akka, etc).

  • Embrace the challenge of dealing with big data on a daily basis (Kafka, RDS, Redshift, S3, Athena, Hadoop/HBase), perform data ETL, and build tools for proper data ingestion from multiple data sources.

  • Collaborate closely with data infrastructure engineers and data analysts across different teams to find bottlenecks and solve problems

  • Design, implement and maintain the heterogeneous data processing platform to automate the execution and management of data-related jobs and pipelines

  • Implement automated data workflows in collaboration with data analysts, and continue to maintain and improve the system in line with growth

  • Collaborate with Software Engineers on application events, ensuring the right data can be extracted

  • Contribute to resource management for computation and capacity planning

  • Diving deep into code and constantly innovating


Requirements



  • Experience with AWS data technologies (EC2, EMR, S3, Redshift, ECS, Data Pipeline, etc) and infrastructure.

  • Working knowledge in big data frameworks such as Apache Spark, Kafka, Zookeeper, Hadoop, Flink, Storm, etc

  • Rich experience with Linux and database systems

  • Experience with relational and NoSQL databases, query optimization, and data modelling

  • Familiar with one or more of the following: Scala/Java, SQL, Python, Shell, Golang, R, etc

  • Experience with container technologies (Docker, k8s), Agile development, DevOps and CI tools.

  • Excellent problem-solving skills

  • Excellent verbal and written communication skills 

Riccione Resources
  • Dallas, TX

Sr. Data Engineer Hadoop, Spark, Data Pipelines, Growing Company

One of our clients is looking for a Sr. Data Engineer in the Fort Worth, TX area! Build your data expertise with projects centering on large Data Warehouses and new data models! Think outside the box to solve challenging problems! Thrive in the variety of technologies you will use in this role!

Why should I apply here?

    • Culture built on creativity and respect for engineering expertise
    • Nominated as one of the Best Places to Work in DFW
    • Entrepreneurial environment, growing portfolio and revenue stream
    • One of the fastest growing mid-size tech companies in DFW
    • Executive management with past successes in building firms
    • Leader of its technology niche, setting the standards
    • A robust, fast-paced work environment
    • Great technical challenges for top-notch engineers
    • Potential for career growth, emphasis on work/life balance
    • A remodeled office with a bistro, lounge, and foosball

What will I be doing?

    • Building data expertise and owning data quality for the transfer pipelines that you create to transform and move data to the company's large Data Warehouse
    • Architecting, constructing, and launching new data models that provide intuitive analytics to customers
    • Designing and developing new systems and tools to enable clients to optimize and track advertising campaigns
    • Using your expert skills across a number of platforms and tools such as Ruby, SQL, Linux shell scripting, Git, and Chef
    • Working across multiple teams in high visibility roles and owning the solution end-to-end
    • Providing support for existing production systems
    • Broadly influencing the company's clients and internal analysts

What skills/experiences do I need?

    • B.S. or M.S. degree in Computer Science or a related technical field
    • 5+ years of experience working with Hadoop and Spark
    • 5+ years of experience with Python or Ruby development
    • 5+ years of experience with efficient SQL (Postgres, Vertica, Oracle, etc.)
    • 5+ years of experience building and supporting applications on Linux-based systems
    • Background in engineering Spark data pipelines
    • Understanding of distributed systems

What will make my résumé stand out?

    • Ability to customize an ETL or ELT
    • Experience building an actual data warehouse schema

Location: Fort Worth, TX

Citizenship: U.S. citizens and those authorized to work in the U.S. are encouraged to apply. This company is currently unable to provide sponsorship (e.g., H1B).

Salary: $115k - 130k + 401k match



~SW1317~

Gravity IT Resources
  • Miami, FL

Overview of Position:

We are undertaking an ambitious digital transformation across Sales, Service, Marketing, and eCommerce. We are looking for a web data analytics wizard with prior experience in digital data preparation, discovery, and predictive analytics.

The data scientist/web analyst will work with external partners, digital business partners, enterprise analytics, and the technology team to strategically plan and develop datasets, measure web analytics, and execute on predictive and prescriptive use cases. The role demands the ability to (1) learn quickly, (2) work in a fast-paced, team-driven environment, (3) manage multiple efforts simultaneously, (4) use large datasets and models to test the effectiveness of different courses of action, (5) promote data-driven decision making throughout the organization, and (6) define and measure the success of the capabilities we provide the organization.


Primary Duties and Responsibilities

    • Analyze data captured through Google Analytics and develop meaningful, actionable insights on digital behavior.
    • Put together a customer 360 data frame by connecting CRM Sales, Service, and Marketing cloud data with Commerce web behavior data, and wrangle the data into a usable form.
    • Use predictive modelling to increase and optimize customer experiences across online & offline channels.
    • Evaluate customer experience and conversions to provide insights & tactical recommendations for web optimization
    • Execute on digital predictive use cases and collaborate with enterprise analytics team to ensure use of best tools and methodologies.
    • Lead support for enterprise voice of customer feedback analytics.
    • Enhance and maintain digital data library and definitions.

Minimum Qualifications

  • Bachelor's degree in Statistics, Computer Science, Marketing, Engineering or equivalent
  • 3 years or more of working experience in building predictive models.
  • Experience in Google Analytics or similar web behavior tracking tools is required.
  • Experience in R is a must, with working knowledge of connecting to multiple data sources such as Amazon Redshift, Salesforce, Google Analytics, etc.
  • Working knowledge in machine learning algorithms such as Random Forest, K-means, Apriori, Support Vector machine, etc.
  • Experience in A/B testing or multivariate testing.
  • Experience in media tracking tags and pixels, UTM, and custom tracking methods.
  • Microsoft Office Excel & PPT (advanced).
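As a small illustration of the UTM tracking item above, campaign parameters can be pulled out of a landing-page URL with the Python standard library (the URL below is made up; the posting itself asks for R, so treat this as a language-agnostic sketch):

```python
from urllib.parse import urlparse, parse_qs


def extract_utm(url):
    """Return the utm_* query parameters of a URL as a plain dict."""
    params = parse_qs(urlparse(url).query)
    # Keep only UTM tags; parse_qs returns lists, so take the first value.
    return {k: v[0] for k, v in params.items() if k.startswith("utm_")}
```

The same extraction is what web analytics tools do when attributing a session to a campaign, source, and medium.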

Preferred Qualifications

  • Master's degree in statistics or equivalent.
  • Google Analytics 360 experience/certification.
  • SQL workbench, Postgres.
  • Alteryx experience is a plus.
  • Tableau experience is a plus.
  • Experience in HTML, JavaScript.
  • Experience in SAP analytics cloud or SAP desktop predictive tool is a plus
Signify Health
  • Dallas, TX

Position Overview:

Signify Health is looking for a savvy Data Engineer to join our growing team of deep learning specialists. This position would be responsible for evolving and optimizing data and data pipeline architectures, as well as optimizing data flow and collection for cross-functional teams. The Data Engineer will support software developers, database architects, data analysts, and data scientists. The ideal candidate would be self-directed, passionate about optimizing data, and comfortable supporting the Data Wrangling needs of multiple teams, systems and products.

If you enjoy providing expert level IT technical services, including the direction, evaluation, selection, configuration, implementation, and integration of new and existing technologies and tools while working closely with IT team members, data scientists, and data engineers to build our next generation of AI-driven solutions, we will give you the opportunity to grow personally and professionally in a dynamic environment. Our projects are built on cooperation and teamwork, and you will find yourself working together with other talented, passionate and dedicated team members, all working towards a shared goal.

Essential Job Responsibilities:

  • Assemble large, complex data sets that meet functional / non-functional business requirements
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing data models for greater scalability, etc.
  • Leverage Azure for extraction, transformation, and loading of data from a wide variety of data sources in support of AI/ML Initiatives
  • Design and implement high performance data pipelines for distributed systems and data analytics for deep learning teams
  • Create tool-chains for analytics and data scientist team members that assist them in building and optimizing AI workflows
  • Work with data and machine learning experts to strive for greater functionality in our data and model life cycle management capabilities
  • Communicate results and ideas to key decision makers in a concise manner
  • Comply with applicable legal requirements, standards, policies and procedures including, but not limited to, compliance requirements and HIPAA


Qualifications:Education/Licensing Requirements:
  • High school diploma or equivalent.
  • Bachelor's degree in Computer Science, Electrical Engineering, Statistics, Informatics, Information Systems, or another quantitative field, or equivalent work experience.


Experience Requirements:
  • 5+ years of experience in a Data Engineer role.
  • Experience using the following software/tools preferred:
    • Experience with big data tools: Hadoop, Spark, Kafka, etc.
    • Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
    • Experience with AWS or Azure cloud services.
    • Experience with stream-processing systems: Storm, Spark-Streaming, etc.
    • Experience with object-oriented/object function scripting languages: Python, Java, C#, etc.
  • Strong work ethic, able to work both collaboratively, and independently without a lot of direct supervision, and solid problem-solving skills
  • Must have strong communication skills (written and verbal), and possess good one-on-one interpersonal skills.
  • Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases
  • A successful history of manipulating, processing and extracting value from large disconnected datasets.
  • Working knowledge of message queuing, stream processing, and highly scalable big data stores.
  • 2 years of experience in data modeling, ETL development, and Data warehousing
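The stream-processing experience listed above (Storm, Spark-Streaming, message queuing) boils down to operations like windowed aggregation over an event stream. A toy tumbling-window counter in plain Python, with hypothetical (timestamp, payload) events, looks like this; a production system would run the same logic inside one of the frameworks named above:

```python
from collections import defaultdict


def tumbling_window_counts(events, window_seconds=60):
    """Count events per fixed-size (tumbling) time window.
    `events` is an iterable of (unix_timestamp, payload) pairs."""
    counts = defaultdict(int)
    for ts, _payload in events:
        # Align each event to the start of its window.
        window_start = (int(ts) // window_seconds) * window_seconds
        counts[window_start] += 1
    return dict(counts)
```

Real engines add the hard parts this sketch omits: out-of-order events, watermarks, and fault-tolerant state.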
 

Essential Skills:

  • Fluently speak, read, and write English
  • Fantastic motivator and leader of teams with a demonstrated track record of mentoring and developing staff members
  • Strong point of view on who to hire and why
  • Passion for solving complex system and data challenges and desire to thrive in a constantly innovating and changing environment
  • Excellent interpersonal skills, including teamwork and negotiation
  • Excellent leadership skills
  • Superior analytical abilities, problem solving skills, technical judgment, risk assessment abilities and negotiation skills
  • Proven ability to prioritize and multi-task
  • Advanced skills in MS Office

Essential Values:

  • In Leadership: Do what's right, even if it's tough
  • In Collaboration: Leverage our collective genius, be a team
  • In Transparency: Be real
  • In Accountability: Recognize that if it is to be, it's up to me
  • In Passion: Show commitment in heart and mind
  • In Advocacy: Earn trust and business
  • In Quality: Ensure what we do, we do well
Working Conditions:
  • Fast-paced environment
  • Requires working at a desk and use of a telephone and computer
  • Normal sight and hearing ability
  • Use office equipment and machinery effectively
  • Ability to ambulate to various parts of the building
  • Ability to bend, stoop
  • Work effectively with frequent interruptions
  • May require occasional overtime to meet project deadlines
  • Lifting requirements of
Mix.com
  • Phoenix, AZ

Are you interested in scalability & distributed systems? Do you want to help shape a discovery engine powered by cutting-edge technologies and machine learning at scale? If you answered yes to the above questions, Mix's Research and Development is the team for you!


In this role, you'll be part of a small and innovative team of engineers and data scientists working together to understand content by leveraging machine learning and NLP technologies. You will have the opportunity to work on core problems like detection of low-quality content or spam, text semantic analysis, video and image processing, and content quality assessment and monitoring. Our code operates at massive scale, ingesting, processing and indexing millions of URLs.



Responsibilities

  • Write code to build infrastructure capable of scaling with load
  • Collaborate with researchers and data scientists to integrate innovative Machine Learning and NLP techniques with our serving, cloud and data infrastructure
  • Automate build and deployment process, and setup monitoring and alerting systems
  • Participate in the engineering life-cycle, including writing documentation and conducting code reviews


Required Qualifications

  • Strong knowledge of algorithms, data structures, object oriented programming and distributed systems
  • Fluency in an OO programming language such as Scala (preferred), Java, C, or C++
  • 3+ years demonstrated expertise in stream processing platforms like Apache Flink, Apache Storm and Apache Kafka
  • 2+ years experience with a cloud platform like Amazon Web Services (AWS) or Microsoft Azure
  • 2+ years experience with monitoring frameworks and analyzing production platforms, UNIX servers, and mission-critical systems with alerting and self-healing systems
  • Creative thinker and self-starter
  • Strong communication skills


Desired Qualifications

  • Experience with Hadoop, Hive, Spark or other MapReduce solutions
  • Knowledge of statistics or machine learning
Ripple
  • San Francisco, CA
  • Salary: $135k - 185k

Ripple is the world’s only enterprise blockchain solution for global payments. Today the world sends more than $155 trillion* across borders. Yet, the underlying infrastructure is dated and flawed. Ripple connects banks, payment providers, corporates and digital asset exchanges via RippleNet to provide one frictionless experience to send money globally.


Ripple is growing rapidly and we are looking for a results-oriented and passionate Senior Software Engineer, Data to help build and maintain infrastructure and empower the data-driven culture of the company. Ripple’s distributed financial technology outperforms today’s banking infrastructure by driving down costs, increasing processing speeds and delivering end-to-end visibility into payment fees, timing, and delivery.


WHAT YOU’LL DO:



  • Support our externally-facing data APIs and applications built on top of them

  • Build systems and services that abstract the underlying engines and allow users to focus on business and application logic via higher-level programming models

  • Build data pipelines and tools to keep pace with the growth of our data and its consumers

  • Identify and analyze requirements and use cases from multiple internal teams (including finance, compliance, analytics, data science, and engineering); work with other technical leads to design solutions for the requirements


WHAT WE’RE LOOKING FOR:



  • Deep experience with distributed systems, distributed data stores, data pipelines, and other tools in cloud services environments (e.g., AWS, GCP)

  • Experience with distributed processing compute engines like Hadoop, Spark, and/or GCP data ecosystems (BigTable, BigQuery, Pub/Sub)

  • Experience with stream processing frameworks such as Kafka, Beam, Storm, Flink, Spark streaming

  • Experience building scalable backend services and data pipelines

  • Proficient in Python, Java, or Go

  • Able to support Node.js in production

  • Familiarity with Unix-like operating systems

  • Experience with database internals, database design, SQL and database programming

  • Familiarity with distributed ledger technology concepts and financial transaction/trading data

  • You have a passion for working with great peers and motivating teams to reach their potential

HelloFresh US
  • New York, NY

HelloFresh is hiring a Data Scientist to join our Supply Chain Analytics Team! In this exciting role, you will develop cutting edge insights using a wealth of data about our suppliers, ingredients, operations, and customers to improve the customer experience, drive operational efficiencies and build new supply chain capabilities. To succeed in this role, you’ll need to have a genuine interest in using data and analytic techniques to solve real business challenges, and a keen interest to make a big impact on a fast-growing organization.


You will...



  • Own the development and deployment of quantitative models to make routine and strategic operational decisions to plan the fulfillment of orders and identify the supply chain capabilities we need to build to continue succeeding in the business

  • Solve complex optimization problems with linear programming techniques

  • Collaborate across operational functions (e.g. supply chain planning, logistics, procurement, production, etc) to identify and prioritize projects

  • Communicate results and recommendations to stakeholders in a business oriented manner with clear guidelines which can be implemented across functions in the supply chain

  • Work with complex datasets across various platforms to perform descriptive, prescriptive, predictive, and exploratory analyses
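The linear programming work mentioned above can be sketched with a toy example (all numbers and names here are invented for illustration, not HelloFresh's actual model): choosing how many boxes two fulfillment centers should ship to meet demand at minimum cost. For a two-variable LP the optimum lies at a vertex of the feasible region, so the toy problem can be solved by enumerating vertices directly; a production model would hand the same formulation to a solver such as CPLEX.

```python
# Hypothetical toy LP: minimize 2a + 3b (shipping cost per box from
# centers A and B) subject to a + b >= 100 (demand), 0 <= a <= 60
# (capacity at A), and b >= 0. Cost only grows along the unbounded
# directions, so the optimum sits on the demand constraint a + b = 100.
def cost(a, b):
    return 2.0 * a + 3.0 * b

# Vertices where the demand line a + b = 100 meets the bounds on a:
vertices = [(60, 100 - 60), (0, 100 - 0)]
best = min(vertices, key=lambda v: cost(*v))
print(best, cost(*best))  # ship 60 from the cheaper center A, 40 from B
```

The same structure (cost vector, inequality constraints, variable bounds) carries over unchanged to solver APIs once the problem has more than a handful of variables.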


At a minimum, you have...



  • Advanced degree in Statistics, Economics, Applied Mathematics, Computer Science, Data Science, Engineering or a related field

  • 2 - 5 years’ experience delivering analytical solutions to complex business problems

  • Knowledge of linear programming optimization techniques (familiarity with software like CPLEX, AMPL, etc is a plus)

  • Fluency in managing and analyzing large data sets with advanced tools such as R, Python, etc.

  • Experience extracting and transforming data from structured databases such as: MySQL, PostgreSQL, etc.


You are...



  • Results-oriented - You love transforming data into meaningful outcomes

  • Gritty - When you encounter obstacles you find solutions, not excuses

  • Intellectually curious – You love to understand why things are the way they are, how things work, and challenge the status quo

  • A team player – You favor team victories over individual success

  • A structured problem solver – You possess strong organizational skills and consistently demonstrate a methodical approach to all your work

  • Agile – You thrive in fast-paced and dynamic environments and are comfortable working autonomously

  • A critical thinker – You use logic to identify opportunities, evaluate alternatives, and synthesize and present critical information to solve complex problems



Our team is diverse, high-performing and international, helping us to create a truly inspiring work environment in which you will thrive!


It is the policy of HelloFresh not to discriminate against any employee or applicant for employment because of race, color, religion, sex, sexual orientation, gender identity, national origin, age, marital status, genetic information, disability or because he or she is a protected veteran.

Computer Staff
  • Fort Worth, TX

We have been retained by our client, located in Fort Worth, Texas (south Fort Worth area), to deliver a Risk Modeler on a regular full-time basis.   We prefer SAS experience but will interview candidates with R, SPSS, WPS, MATLAB, or similar statistical package experience if the candidate comes from the financial loan credit risk analysis industry. Enjoy all the resources of a big company with none of the problems that small companies have; this company has doubled in size in three years. We have a keen interest in finding a business-minded statistical modeling candidate with some credit risk experience to build statistical models within the marketing and direct mail areas of financial services and lending. We are seeking a candidate with statistical modeling and data analysis skills who is interested in creating better ways to solve problems in order to increase loan originations, decrease loan defaults, and more. Our client is in the business of finding prospective borrowers and originating, providing, servicing, and processing loans and collecting loan payments. The team works with third-party data vendors, credit reporting agencies, and data service providers on data augmentation, address standardization, fraud detection, decision sciences, and analytics, and this position includes the creation of statistical models. They support one of the largest, if not the largest, decision management profiles in the US.


We require experience with statistical analysis tools such as SAS, MATLAB, R, WPS, SPSS, or Python used for statistical analysis. This is a statistical modeling, risk modeling, model building, decision science, data analysis, and statistical analysis role requiring SQL and/or SQL Server experience and critical thinking skills to solve problems.   We prefer candidates with experience in data analysis, SQL queries, and joins (left, inner, outer, right), and in reporting from data warehouses with tools such as Tableau, Cognos, Looker, or Business Objects. We prefer candidates with financial and loan experience, especially knowledge of loan originations, borrower profiles or demographics, modeling loan defaults, and statistical analysis measures such as Gini coefficients and the K-S (Kolmogorov-Smirnov) test for credit scoring and default prediction and modeling.
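As an illustration of the two measures named above (a minimal sketch with invented scores, not the client's methodology): the K-S statistic is the largest gap between the score distributions of good and bad loans, and the Gini coefficient can be derived from the AUC as 2*AUC - 1.

```python
# Hypothetical toy data: credit scores of borrowers who repaid vs defaulted.
goods = [620, 680, 700, 720, 750, 780]   # repaid
bads  = [540, 580, 600, 640, 660]        # defaulted

def ecdf(sample, x):
    """Empirical CDF: fraction of the sample at or below x."""
    return sum(v <= x for v in sample) / len(sample)

# K-S statistic: maximum gap between the two empirical CDFs
ks = max(abs(ecdf(goods, x) - ecdf(bads, x)) for x in goods + bads)

# AUC via pairwise comparison (ties count half); Gini = 2*AUC - 1
pairs = [(g, b) for g in goods for b in bads]
auc = sum((g > b) + 0.5 * (g == b) for g, b in pairs) / len(pairs)
gini = 2 * auc - 1
print(round(ks, 3), round(gini, 3))
```

On real portfolios these would be computed on model scores rather than raw bureau scores, but the definitions are the same.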


However, primarily critical thinking skills, and statistical modeling and math / statistics skills are needed to fulfill the tasks of this very interesting and important role, including playing an important role growing your skills within this small risk/modeling team. Take on challenges in the creation and use of statistical models. There is no use for Hadoop, or any NoSQL databases in this position this is not a big data type of position. no "big data" type things needed. There is no Machine Learning or Artificial Intelligence needed in this role. Your role is to create and use those statistical models. Create statistical models for direct mail in financial lending space to reach the right customers with the right profiles / demographics / credit ratings, etc. Take credit risk, credit analysis, loan data and build a new model, or validate the existing model, or recalibrate it or rebuild it completely.   The models are focused on delivering answers to questions or solutions to problems within these areas financial loan lending: Risk Analysis, Credit Analysis, Direct Marketing, Direct Mail, and Defaults. Logistical regression in SAS or Knowledge Studio, and some light use of Looker as the B.I. tool on top of SQL Server data.   Deliver solutions or ways for this business to make improvements in these areas and help the business be more profitable. Seek answers to questions. Seek solutions to problems. Create models. Dig into the data. Explore and find opportunities to improve the business. Expected to fit within the boundaries of defaults or loan values and help drive the business with ideas to get a better models in place, or explore data sources to get better models in place. Use critical thinking to solve problems.


Answer questions or solve problems such as:

What are the statistical models needed to produce the answers to solve risk analysis and credit analysis problems?

Which customer profiles have the best demographics or credit risk for loans, to target with direct mail marketing pieces?

Why are loan defaults increasing or decreasing? What is impacting the increase or decrease of loan defaults?  



Required Skills

Bachelor's degree in Statistics, Finance, Economics, Management Information Systems, Math, Quantitative Business Analysis, Analytics, or another related math, science, or finance field. Some loan/lending business domain work experience.

Master's degree preferred, but not required.

Critical thinking skills.

Must have SQL skills (any database: SQL Server, MS Access, Oracle, PostgreSQL) and the ability to write queries and joins (inner, left, right, outer). SQL Server is highly preferred.

Experience with statistical analysis systems/packages (SAS, MATLAB, R, WPS, SPSS, or Python used for statistical analysis), including statistical modeling experience and excellent math skills. Must have significant statistical modeling skills and experience.
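The join types required above can be sketched with Python's built-in sqlite3 module (the table and column names here are invented for illustration):

```python
import sqlite3

# In-memory toy schema: borrowers and their loans (invented data).
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE borrowers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE loans (borrower_id INTEGER, amount REAL);
    INSERT INTO borrowers VALUES (1, 'Ann'), (2, 'Bob'), (3, 'Cyd');
    INSERT INTO loans VALUES (1, 5000.0), (1, 2500.0), (2, 9000.0);
""")

# INNER JOIN: only borrowers that have at least one loan
inner = con.execute("""
    SELECT b.name, l.amount FROM borrowers b
    JOIN loans l ON l.borrower_id = b.id
""").fetchall()

# LEFT JOIN: every borrower, with NULL amount where no loan exists
left = con.execute("""
    SELECT b.name, l.amount FROM borrowers b
    LEFT JOIN loans l ON l.borrower_id = b.id
""").fetchall()

print(len(inner), len(left))  # Cyd appears only in the left join
```

The same SELECT syntax carries over to SQL Server, with RIGHT and FULL OUTER joins available there as well.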



Preferred Skills:
Loan credit analysis highly preferred.   SAS highly preferred.
Experience with Tableau, Cognos, Business Objects, Looker, or similar data warehouse reporting tools; creating reports from data warehouse data. SQL Server SSAS, but only to pull reports. Direct marketing, direct mail marketing, and loan/lending to somewhat higher-risk borrowers.



Employment Type:   Regular Full-Time

Salary Range: $85,000 - $130,000 / year

Benefits:  health, medical, dental, and vision coverage costs the employee only about $100 per month.
401k 4% matching after 1 year, Bonus structure, paid vacation, paid holidays, paid sick days.

Relocation assistance can be provided for a very well-qualified candidate. Local candidates are preferred.

Location: Fort Worth, Texas
(area south of downtown Fort Worth, Texas)

Immigration: US citizens and those authorized to work in the US are encouraged to apply. We are unable to sponsor H1b candidates at this time.

Please apply with your resume (MS Word format preferred) or with your LinkedIn profile via the buttons at the bottom of this job posting page:

http://www.computerstaff.com/?jobIdDescription=314  


Please call 817-424-1411 or send a text to 817-601-7238 to inquire or to follow up on your application; we recommend calling to leave a message, or sending a text with at least your name. Thank you for your attention and efforts.

Apporchid Inc
  • Philadelphia, PA

Java - Technical Lead

Job description:

Experienced Java/J2EE technical lead with proven expertise in implementing and managing enterprise-scale Hadoop architectures and environments. You will set up a highly available App Orchid Java product platform in AWS with industry-standard security frameworks, and collaborate with application developers to support, manage, and enhance tactical roadmaps for large and highly visible product environment deployments.

Roles and Responsibilities:

  • Work with Solution Architects and Business leaders to understand the architectural roadmaps that support and fulfill business strategy.
  • Lead and design custom solutions on our App Orchid Product Platform
  • Act as a Tech Lead and Engineer mentoring colleagues with less experience
  • Collaborate with a high-performing, forward-focused team, Product Owner(s), and business stakeholders
  • Enable and influence the timely and successful delivery of business data capabilities and/or technology objectives
  • Opportunity to expand your communication, analytical, interpersonal, and organization capabilities
  • Experience working in a fast paced environment driving business outcomes leveraging Agile to its fullest
  • Enhance your entrepreneurial mindset, expand your network, and influence outcomes
  • A supportive environment that fosters a can-do attitude and opportunity for growth and advancement based on consistently demonstrated performance.
  • Expertise in system administration and programming, including storage, performance tuning, and capacity management of big data.
  • Good understanding of the Hadoop ecosystem, including HDFS, YARN, MapReduce, HBase, Spark, and Hive.
  • Experience in setup of SSL and integration with Active Directory.
  • Good exposure to CI/CD
  • Oversee technical deliverables for investment and maintenance projects through the software development life cycle, including validating the completeness of estimates and the quality and accuracy of technical designs, builds, and implementations.
  • Proactively address technical issues and risks that could impact project schedule and/or budget
  • Work closely with stakeholders to design and document automation solutions that align with the business needs and also consistent with the architectural vision.
  • Facilitate continuity between Sourcing Partners, other IT Groups and Enterprise Architecture.
  • Work closely with the architecture team to ensure that the technical solution designs and implementation are consistent with the architectural vision, as well as to drive the business through technical innovation through the use of newly identified and leading technologies.
  • Own and drive adoption of DevOps tools and best practices (including conducting (automated) code reviews, reducing/eliminating technical debt, and delivering vulnerability free code) across the application portfolio.

Qualifications

  • Bachelor's degree or equivalent work experience
  • Eight to ten years (or more) of experience as a Java/J2EE technical lead or senior developer in a large production environment.
  • A deep understanding of Big Data,  Java, Elastic Search, Kibana, Postgresql, TestNG, Gradle
  • Good verbal and written communication skills
  • Demonstrated experience in working on large projects or small teams
  • Working knowledge of Red Hat Linux and Windows operating systems
  • Expert knowledge in Java programming language, SQL and microservices  
  • Good understanding of Cloud technologies, especially AWS stack
  • At least 8 years experience with developing and implementing applications

Desired Skills and Experience

  • Proficient with Java development
  • Ability to quickly learn new technologies and enable/train other analysts
  • Ability to work independently as well as in a team environment on moderate to highly complex issues
  • High technical aptitude and demonstrated progression of technical skills - continuous improvement
  • Ability to automate software/application installations and configurations hosted on Linux servers.
mbr targeting GmbH
  • Berlin, Germany
  • Salary: €60k - 75k

Join our team as Senior Data Engineer!


Why?



  • We have lots of this data you love

  • We have a big and shiny cluster

  • We're a small team: great power and great responsibility for everyone!

  • Bullshit free: no QA, no in-house sales, no scrum, no finger pointing, not that many managers

  • We're smart and wanna become smarter


YOU will …



  • do some of that typical Data Engineering ETL stuff

  • manage our warehouse infrastructure (Kafka, Hadoop, HBase)

  • work with a lot of different languages and technologies (Scala, Python, Java, Spark, Flink, Hive)

  • help our Data Scientists with their fancy Machine Learning Magic

  • improve our legacy code and make it 10x faster

  • come up with great ideas and introduce new frameworks and languages to our stack

  • learn a lot from your colleagues

inovex GmbH
  • München, Germany

As a Linux Systems Engineer with a focus on Hadoop and search, you will be responsible for the design, installation, and configuration of Linux-based big data clusters at our customers. Your tasks also include evaluating existing big data systems and extending existing environments in a future-proof way.

You look after these systems holistically, supporting them from the Linux operating system up to the big data stack. To automate the often complex big data clusters, you preferably use configuration management tools.

In our interdisciplinary project teams you play a formative role, and often have the freedom to decide when it comes to the choice of tools.


To fill this position, we are looking for experts who bring the following skills and qualities:



  • A successfully completed degree with a computer science focus, or a comparable qualification such as training as an IT specialist (Fachinformatiker), plus relevant professional experience

  • Passion and enthusiasm for new technologies and topics around Linux and big data

  • Practical experience with Hadoop and common Hadoop ecosystem tools, as well as initial experience with Hadoop security

  • Ideally, you have already gained practical experience with one or more of the following technologies or products:

    • Flume, Kafka

    • Flink, Hive, Spark

    • Cassandra, Elasticsearch, HBase, MongoDB, CouchDB

    • Amazon EMR, Cloudera, Hortonworks, MapR

    • Java



  • Good knowledge of networking and storage

  • Knowledge of a configuration management tool (e.g. Puppet, Chef, or Salt) is an advantage

  • Good communication skills and very good written and spoken German and English

  • High motivation to achieve excellent project results together with other "inovexperts"

  • Mobility and flexibility for project work on-site at our customers

GE Capital
  • Ann Arbor, MI
  • ***Please Note: This Role is in Van Buren, MI (30 minutes drive from Ann Arbor)


Role Summary

Serve as analytics & visualization developer to build innovative solutions to support a broad range of analysis and outcomes. Partner with teams to create wing-to-wing transactional views, trends and anomalies leveraging GE's data lake. Look for new ways to harness the data we have for insights and actionable outcomes. 

 
In This Role, You Will

Essential Responsibilities: 


  • Develop Spotfire reports utilizing advanced data visualization techniques and related SQL.
  • Leverage Treasury Data Lake and data virtualization technologies (Denodo) to deliver new capabilities on tablet and mobile platforms.
  • Work on an agile team using Rally to quickly prototype and iterate on ideas
  • Lead the research and evaluation of emerging technology, industry and market trends to assist in project development and/or operational support activities
  • Technical analysis working with PostgreSQL & AWS native services
  • Partner with business teams to define requirements & user stories
  • Building and implementing analytical models with R and Python


Qualifications/Requirements
  • Bachelor's degree from an accredited university or college in Computer Science or Information Systems
  • One or more years of experience designing and developing data-centric applications leveraging data from enterprise data warehouses.


Eligibility Requirements:

  • Legal authorization to work in the U.S. is required. We will not sponsor individuals for employment visas, now or in the future, for this job


Technical Expertise

Desired Characteristics:
  • 1+ years of experience with BI visualization and/or reporting tools (expert-level knowledge of modern BI platforms like Spotfire, Qlik, Tableau, etc.); a data and reporting guru.
  • Experience with web technologies such as ASP, HTML, and CSS; integrating these with data visualization tools (e.g. extensions) is a plus
  • Experience with scripting languages like JavaScript, Python, etc.
  • Exposure to advanced analytic & data science applications
  • Excellent BI application development skills, as demonstrated by having led, designed and implemented successful web and mobile projects
  • Ability to clearly articulate creative ideas to senior leaders
  • Ability to guide and direct technical team members through the SDLC
  • Ability to hit tight deadlines and work under pressure
  • SAP and/or Oracle ERP systems exposure a plus
  • Passion for learning new technologies and eagerness to collaborate with other creative minds
  • Strong desire for exploring, evaluating and understanding new technologies
Accenture
  • Atlanta, GA
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation. Where you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions. Using advanced analytic concepts like machine-learning, AI, big data, deep analytics, cloud, mobility, robotics and IOT your contribution will redefine the way entire industries work in every corner of the globe.
Accenture Digital's Applied Intelligence delivers insight-driven outcomes at scale to help organizations improve performance. You will be a part of Accenture's pivot into the New as a Data Scientist with the Applied Intelligence Human Resources Centre of Excellence. You will work with a team that identifies and develops advanced analytics, statistical models, machine learning methods, and solutions for Accenture Human Resources to improve various business outcome indicators. You will also support project-based analytics planning and implementation in areas such as Predictive Analytics, Program Evaluation, Digital Analytics, Scheduling, Demand Forecasting and Fulfilment, HR Transformation, Talent Acquisition, and Talent Supply Chain.
Role Description
A professional in this position, at this level within Accenture has the following responsibilities:
    • Is well versed and experienced in Advanced Analytics and Program Delivery
    • Identifies, assesses and solves complex business problems for area of responsibility, where analysis of situations or data requires an in-depth evaluation of variable factors
    • Closely follows the strategic direction set by executive management when establishing near term goals
    • Interacts with senior and executive management at a client and/or within Accenture on matters where they may need to gain acceptance on an alternate approach
    • Acts independently to determine methods and procedures on new assignments.
    • Responsible for decisions related to the day to day impact on area of responsibility
    • Manages large - medium sized teams and/or work efforts at a client or within Accenture
    • Understands and helps influence the client strategic direction and helps architect complex solutions
    • Leads a center of excellence in applied intelligence focused on Human Resources within Accenture.
    • Successfully develop, conceptualize, test and scale various statistical and machine learning models
    • Follow multiple approaches for project execution from adapting existing assets to analytics use cases, exploring third-party and open source solutions for speed to execution and for specific use cases, and engaging in fundamental research to develop novel solutions
    • Leverage the vast global network of Accenture to collaborate with Accenture Tech Labs, Accenture Open Innovation and Accenture Operations for creating solutions
    • Collaborate with other data scientists, subject matter experts, sales, and delivery teams from Accenture locations around the globe to deliver strategic advanced analytics projects from design to execution
Basic Qualifications
These are the minimum requirements for a candidate to be considered for this position:
    • Bachelor's degree in data science, mathematics, economics, statistics, engineering and information management or related field of study
    • Minimum 7 years of experience in data science and use of statistical methodologies
    • Minimum 5 years of experience developing machine learning methods, including familiarity with techniques in clustering, regression, optimization, recommendation, neural networks, and others
    • 7 years of experience in at least one of the following: Supervised and Unsupervised Learning, Classification Models, Cluster Analysis, Neural Networks, Non-parametric Methods, Multivariate Statistics, Reliability Models, Markov Models, Stochastic Models, Bayesian Models, Deep Learning, Genetic Algorithms, Fuzzy Logic, Inference Systems
    • 7 years of working and conceptual knowledge of, and experience with, data science tools including Python, R, Scala, Julia, or SAS
    • 7 years building and maintaining a large-scale analytics infrastructure used across the business, including conducting research, design, implementation, and validation of cutting-edge algorithms to analyze diverse data sources
    • 7 years of technical project management of data science driven projects and of data science professionals developing and delivering machine learning models that work in a production setting
Preferred Qualifications
    • Preferred Masters or Ph.D. (Computer Science, Statistics, Engineering, Physics, Mathematics, Economics, Industrial/Organizational Psychology or Social Science)
    • Working and conceptual knowledge in relevant domains (Human Resources, Talent Acquisition, Talent Development, Talent Supply Chain) including hands on experience handling data driven decisions
    • HR Certifications
    • Familiarity with relational databases and intermediate level knowledge of SQL
    • Knowledge of UNIX or Linux environments
    • Experience working with large data sets and tools like MapReduce, Hadoop, Hive, etc.
    • Experience working with large data streaming technologies like Spark, Flink, etc.
    • Proficient verbal, written and presentation skills
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture.
Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration.
Accenture is a federal contractor and an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law.
Job candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.
Accenture
  • Houston, TX
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation. Where you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions. Using advanced analytic concepts like machine-learning, AI, big data, deep analytics, cloud, mobility, robotics and IOT your contribution will redefine the way entire industries work in every corner of the globe.
Accenture Digital's Applied Intelligence delivers insight-driven outcomes at scale to help organizations improve performance. You will be a part of Accenture's pivot into the New as a Data Scientist with the Applied Intelligence Human Resources Centre of Excellence. You will work with a team that identifies and develops advanced analytics, statistical models, machine learning methods, and solutions for Accenture Human Resources to improve various business outcome indicators. You will also support project-based analytics planning and implementation in areas such as Predictive Analytics, Program Evaluation, Digital Analytics, Scheduling, Demand Forecasting and Fulfilment, HR Transformation, Talent Acquisition, and Talent Supply Chain.
Role Description
A professional in this position, at this level within Accenture has the following responsibilities:
    • Is well versed and experienced in Advanced Analytics and Program Delivery
    • Identifies, assesses and solves complex business problems for area of responsibility, where analysis of situations or data requires an in-depth evaluation of variable factors
    • Closely follows the strategic direction set by executive management when establishing near term goals
    • Interacts with senior and executive management at a client and/or within Accenture on matters where they may need to gain acceptance on an alternate approach
    • Acts independently to determine methods and procedures on new assignments.
    • Responsible for decisions related to the day to day impact on area of responsibility
    • Manages large - medium sized teams and/or work efforts at a client or within Accenture
    • Understands and helps influence the client strategic direction and helps architect complex solutions
    • Leads a center of excellence in applied intelligence focused on Human Resources within Accenture.
    • Successfully develop, conceptualize, test and scale various statistical and machine learning models
    • Follow multiple approaches for project execution from adapting existing assets to analytics use cases, exploring third-party and open source solutions for speed to execution and for specific use cases, and engaging in fundamental research to develop novel solutions
    • Leverage the vast global network of Accenture to collaborate with Accenture Tech Labs, Accenture Open Innovation and Accenture Operations for creating solutions
    • Collaborate with other data scientists, subject matter experts, sales, and delivery teams from Accenture locations around the globe to deliver strategic advanced analytics projects from design to execution
Basic Qualifications
These are the minimum requirements for a candidate to be considered for this position:
    • Bachelor's degree in data science, mathematics, economics, statistics, engineering and information management or related field of study
    • Minimum 7 years of experience in data science and use of statistical methodologies
    • Minimum 5 years of experience developing machine learning methods, including familiarity with techniques in clustering, regression, optimization, recommendation, neural networks, and others
    • 7 years of experience in at least one of the following: Supervised and Unsupervised Learning, Classification Models, Cluster Analysis, Neural Networks, Non-parametric Methods, Multivariate Statistics, Reliability Models, Markov Models, Stochastic Models, Bayesian Models, Deep Learning, Genetic Algorithms, Fuzzy Logic, Inference Systems
    • 7 years of working and conceptual knowledge of, and experience with, data science tools including Python, R, Scala, Julia, or SAS
    • 7 years building and maintaining a large-scale analytics infrastructure used across the business, including conducting research, design, implementation, and validation of cutting-edge algorithms to analyze diverse data sources
    • 7 years of technical project management of data science driven projects and of data science professionals developing and delivering machine learning models that work in a production setting
Preferred Qualifications
    • Master's degree or Ph.D. preferred (Computer Science, Statistics, Engineering, Physics, Mathematics, Economics, Industrial/Organizational Psychology, or Social Science)
    • Working and conceptual knowledge of relevant domains (Human Resources, Talent Acquisition, Talent Development, Talent Supply Chain), including hands-on experience with data-driven decisions
    • HR Certifications
    • Familiarity with relational databases and intermediate level knowledge of SQL
    • Knowledge of UNIX or Linux environments
    • Experience working with large data sets and tools like MapReduce, Hadoop, Hive, etc.
    • Experience working with large data streaming technologies like Spark, Flink, etc.
    • Strong verbal, written, and presentation skills
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture.
Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration.
Accenture is a federal contractor and an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law.
Job candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.