OnlyDataJobs.com

Accenture
  • San Diego, CA
Organization: Accenture Applied Intelligence
Position: Artificial Intelligence Engineer - Consultant
The digital revolution is changing everything. It's everywhere, transforming how we work and play. Accenture Digital's 36,000 professionals are driving these exciting changes and bringing them to life across 40 industries in more than 120 countries. At the forefront of digital, you'll create it, own it and make it a reality for clients looking to better serve their connected customers and operate always-on enterprises. Join us and become an integral part of our experienced digital team with the credibility, expertise and insight clients depend on.
Accenture Applied Intelligence, part of Accenture Digital, helps clients use analytics and artificial intelligence to drive actionable insights at scale. We apply sophisticated algorithms, data engineering and visualization to extract business insights and help clients turn those insights into actions that drive tangible outcomes to improve their performance and disrupt their markets. Accenture Applied Intelligence is a leader in big data analytics, with deep industry and technical experience. We provide services and solutions that include Analytics Advisory, Data Science, Data Engineering and Analytics-as-a-Service.
Role Description
As an AI engineer, you will facilitate the transfer of advanced AI technologies from research labs to domain testbeds and thus to the real world. You will participate in the full research-to-deployment pipeline. You will help conceptualize and develop research experiments, and then implement the systems to execute those experiments. You will lead or work within a team and interact closely with experienced machine learning engineers and researchers as well as with industry partners. You will attend reading groups and seminars, master research techniques and engineering practices, and design research tools and experimental testbeds. You will apply state-of-the-art AI algorithms, explore new solutions, and build working prototypes. You will also learn to deploy the systems and solutions at scale.
Responsibilities
    • Use Deep Learning and Machine Learning to create scalable solutions for business problems.
    • Deliver Deep Learning/Machine Learning projects from beginning to end, including business understanding, data aggregation, data exploration, model building, validation and deployment.
    • Define Architecture Reference Assets - Apply Accenture methodology, Accenture reusable assets, and previous work experience to deliver consistently high-quality work. Deliver written or oral status reports regularly. Stay educated on new and emerging market offerings that may be of interest to our clients. Adapt existing methods and procedures to create possible alternative solutions to moderately complex problems.
    • Work hands on to demonstrate and prototype integrations in customer environments. Primary upward interaction is with direct supervisor. May interact with peers and/or management levels at a client and/or within Accenture.
    • Solution and Proposal Alignment - Through a formal sales process, work with the Sales team to identify and qualify opportunities. Conduct full technical discovery, identifying pain points, business and technical requirements, and as-is and to-be scenarios.
    • Understand the strategic direction set by senior management as it relates to team goals. Use considerable judgment to define solutions and seek guidance on complex problems.
Qualifications
    • Bachelor's degree in AI, Computer Science, Engineering, Statistics, or Physics
    • Minimum of 1 year of experience in production-deployed solutions using artificial intelligence or machine learning techniques
    • Minimum of 1 year of previous consulting or client service delivery experience
    • Minimum of 2 years of experience with system integration architectures, private and public cloud architectures, pros/cons, transformation experience
    • Minimum of 1 year of full lifecycle deployment experience
Preferred Skills
    • Master's or PhD in Analytics, Statistics or other quantitative disciplines
    • Deep learning architectures: convolutional, recurrent, autoencoders, GANs, ResNets
    • Experience in Cognitive tools like Microsoft Bot Framework & Cognitive Services, IBM Watson, Amazon AI services
    • Deep understanding of Data structures and Algorithms
    • Deep experience in Python, C# (.NET), Scala
    • Deep knowledge of MXNet, CNTK, R, H2O, TensorFlow, PyTorch
    • Highly desirable to have experience in: cuDNN, NumPy, SciPy
Professional Skill Requirements
    • Recent success in contributing to a team-oriented environment
    • Proven ability to work creatively and analytically in a problem-solving environment
    • Excellent communication (written and oral) and interpersonal skills
    • Demonstrated leadership in professional setting; either military or civilian
    • Demonstrated teamwork and collaboration in a professional setting; either military or civilian
    • Ability to travel extensively
OUR COMMITMENT TO YOU
    • Your entrepreneurial spirit and vision will be rewarded, and your success will fuel opportunities for career advancement.
    • You will make a difference for some pretty impressive clients. Accenture serves 94 of the Fortune Global 100 and more than 80 percent of the Fortune Global 500.
    • You will be an integral part of a market-leading analytics organization, including the largest and most diversified group of digital, technology, business process and outsourcing professionals in the world. You can leverage our global team to support analytics innovation workshops, rapid capability development, enablement and managed services.
    • You will have access to Accenture's deep industry and functional expertise. We operate across more than 40 industries and have hundreds of offerings addressing key business and technology issues. Through our global network, we bring unparalleled experience and comprehensive capabilities across industries and business functions, and extensive research on the world's most successful companies. You will also be able to tap into the continuous innovation of our Accenture Technology Labs and Innovation Centers, as well as top universities such as MIT through our academic alliance program.
    • You will have access to distinctive analytics assets that we use to accelerate delivering value to our clients including more than 550 analytics assets underpinned by a strong information management and BI technology foundation. Accenture has earned more than 475 patents and patents pending globally for software assets, data- and analytic-related methodologies and content.
    • As the world's largest independent technology services provider, we are agnostic about technology but have very clear viewpoints about what is most appropriate for a client's particular challenge. You will have access to our alliances with market-leading technology providers and collaborative relationships with emerging players in the analytics and big data space, the widest ecosystem in the industry. These alliances bring together Accenture's extensive analytics capabilities and alliance providers' technology, experience and innovation to power analytics-based solutions.
    • You will have access to the best talent. Accenture has a team of more than 36,000 digital professionals including technical architects, big data engineers, data scientists and business analysts, as well as digital strategists and user experience designers.
    • Along with a competitive salary, Accenture offers a comprehensive package that includes generous paid time off, 401K match and an employee healthcare plan. Learn more about our extensive rewards and benefits here: Benefits.
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H1-B visa, F-1 visa (OPT), TN visa or any other non-immigrant status).
Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration.
Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law.
Job candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.
Burnett Specialists / Choice Specialists
  • Houston, TX

Medical Economics & Informatics Analyst

SUMMARY

This position is responsible for supporting business analysis, utilization analysis, performance results monitoring, reports development, and analytical support within the Medical Economics and Informatics Department.

MAJOR RESPONSIBILITIES

Extract, manage, and analyze operational, claims and performance data to identify trends, patterns, insights, and outliers within data.

Translates transactional data into client-ready deliverables using available visualization tools, such as Tableau.

Drives a repeatable analytic process and consistently delivers best-in-class reporting to multiple business stakeholders.

Studies client specific data at multiple geographical and clinical levels to identify utilization and cost trends and provide recommendations and insights.

Contributes in analyzing and developing reports on client specific utilization trends, program savings and performance to be shared with internal Client Services team.

Provides analytical and technical support for development and QA of Client Level Dashboards and other recurrent reporting.

Collaborates on design/development/automation of the Standard Client Reporting Package, including dashboards.

Contributes to data deliverables & takeaways for Quarterly Business Review meetings.

Responsible for ad-hoc Client and Markets requests for mature programs.

Collaborates on processes to integrate new client data (claims/membership) and perform quality control tests.

QUALIFICATIONS & REQUIREMENTS

• Bachelor's degree or higher in Healthcare Informatics, Health Care Statistics, Public Health Economics, Epidemiology, Mathematics, Computer Science/IT, a related field, or equivalent experience.

• 3+ years of experience in the Healthcare Industry and/or Managed Care Organizations

• Experience in analytics/informatics and report development is required

• Experience using Medical Claims data (medical cost, utilization, cost-benefit analysis, etc.) is required

• Experience with Pharmacy claims data and healthcare records is preferred

• 1+ years of experience with analytics in a data warehouse environment

• Experience using SQL for report writing and data management

• Direct experience in business intelligence applications, advanced data visualization tools and/or statistical analysis software (such as SQL/MySQL/R, SAS, Tableau, Minitab, Matlab, Crystal Reports, Business Objects Desktop/Web Intelligence, etc.)

• Intermediate to advanced skills with Microsoft Office tools (MS Word, Excel, PowerPoint, Visio, Project) necessary to document, track and present information related to company programs/products/clients

• Knowledge of the healthcare financial business cycle, healthcare quality reporting and analysis, and benchmarking is required

• Knowledge of health system functions, terminology, and standard ICD-10 and CPT coding systems is highly desirable

• Excellent critical and analytical thinking skills are highly desirable

• Ability to compile information and prepare reports that are easily translatable for client delivery

Trinity Industries, Inc.
  • Dallas, TX
Description

TrinityRail is searching for a Senior Data Analyst. The successful candidate will turn data into information, information into insight and insight into business decisions. The analyst should have experience in data mining and transformation, creating visualizations, and statistical analysis. Presentation skills are required to review variance explanations, explain models, and lead conversations with the leadership team.

What You'll Be Doing
  • Work with stakeholders and subject matter experts to derive and understand business outcomes
  • Assist in generating hypotheses that drive strategic questions about the Rail business
  • Works with data engineers to facilitate technical design of complex data sourcing, transformation and aggregation logic, ensuring business analytics requirements are met
  • Possess strong competencies with SQL and data transformation tools to understand and prepare data
  • Brings deep expertise in data visualization tools such as Qlik Sense and techniques for translating business analytics needs into data visualization and semantic data access requirements
  • Statistical knowledge for evaluating the usefulness and relevance of model outputs
  • Build and refine statistical models to gain better insight (primarily RStudio)
  • Machine learning experience a plus

What You'll Need

Qualifications
  • Bachelor's degree in Mathematics, Economics, Computer Science, Information Management, Statistics, Finance or Accounting; Master's preferred
  • 5+ years of relevant experience
  • Advanced skills in visualization, SQL, R, Microsoft Excel, and PowerPoint, and at a minimum intermediate experience in all other Microsoft Office products
  • Technical expertise regarding data models, database design/development, data mining and other segmentation techniques
  • Interpret data, analyze results using statistical techniques and provide ongoing reports
  • Must possess effective communication skills, both verbal and written
  • Strong organizational, time management and multi-tasking skills
  • Experience with data conversion, interface and report development
  • Adept at queries, report writing and presenting findings
  • Process improvement and automation a plus
  • Work with management to prioritize business and information needs
MINDBODY Inc.
  • Irvine, CA
  • Salary: $96k - 135k

The Senior Data Engineer focuses on designing, implementing and supporting new and existing data solutions, data processing, and data sets to support various advanced analytical needs. You will be designing, building and supporting data pipelines that consume data from multiple different source systems and transform it into valuable and insightful information. You will have the opportunity to contribute to end-to-end platform design for our cloud architecture and work multi-functionally with operations, data science and the business segments to build batch and real-time data solutions. The role will be part of a team supporting our Corporate, Sales, Marketing, and Consumer business lines.


 
MINIMUM QUALIFICATIONS AND REQUIREMENTS:



  • 7+ years of relevant experience in one of the following areas: Data engineering, business intelligence or business analytics

  • 5-7 years of experience supporting a large data platform and data pipelines

  • 5+ years of experience in scripting languages such as Python

  • 5+ years of experience with AWS services including S3, Redshift, EMR, and RDS

  • 5+ years of experience with Big Data Technologies (Hadoop, Hive, HBase, Pig, Spark, etc.)

  • Expertise in database design and architectural principles and methodologies

  • Experienced in Physical data modeling

  • Experienced in Logical data modeling

  • Technical expertise should include data models, database design and data mining



PRINCIPAL DUTIES AND RESPONSIBILITIES:



  • Design, implement, and support a platform providing access to large datasets

  • Create unified enterprise data models for analytics and reporting

  • Design and build robust and scalable data integration (ETL) pipelines using SQL, Python, and Spark (a brief sketch follows this list).

  • As part of an Agile development team contribute to architecture, tools and development process improvements

  • Work in close collaboration with product management, peer system and software engineering teams to clarify requirements and translate them into robust, scalable, operable solutions that work well within the overall data architecture

  • Coordinate data models, data dictionaries, and other database documentation across multiple applications

  • Leads design reviews of data deliverables such as models, data flows, and data quality assessments

  • Promotes data modeling standardization, defines and drives adoption of the standards

  • Work with Data Management to establish governance processes around metadata to ensure an integrated definition of data for enterprise information, and to ensure the accuracy, validity, and reusability of metadata
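
To make the ETL responsibility above concrete, here is a minimal PySpark sketch of a batch pipeline step of the kind the role describes. It is not MINDBODY's actual pipeline: the bucket paths, event schema, and aggregation are hypothetical placeholders, chosen only to illustrate the extract-transform-load pattern on the AWS stack named in the qualifications (S3, Spark).

```python
# Hypothetical batch ETL step: raw JSON events in S3 -> cleansed daily
# aggregates written back to S3 as partitioned Parquet. All names are
# illustrative, not MINDBODY's real buckets or schema.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily_bookings_etl").getOrCreate()

# Extract: raw events landed in S3 by an assumed upstream producer.
raw = spark.read.json("s3://example-raw-bucket/bookings/2019-02-13/")

# Transform: basic cleansing plus a daily aggregate per studio.
daily = (
    raw.filter(F.col("status") == "confirmed")
       .withColumn("booking_date", F.to_date("created_at"))
       .groupBy("studio_id", "booking_date")
       .agg(F.count("*").alias("bookings"),
            F.sum("amount").alias("revenue"))
)

# Load: partitioned Parquet for downstream analytics (e.g. Redshift Spectrum or EMR).
(daily.write
      .mode("overwrite")
      .partitionBy("booking_date")
      .parquet("s3://example-curated-bucket/daily_bookings/"))
```

In practice a job like this would be scheduled and parameterized by date rather than hard-coding a path, but the read-transform-write structure stays the same.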

Avaloq Evolution AG
  • Zürich, Switzerland

The position


Are you passionate about data? Are you interested in shaping the next generation of data science driven products for the financial industry? Do you enjoy working in an agile environment involving multiple stakeholders?

This is a challenging role as a Senior Data Scientist in a demanding, dynamic and international software company using the latest innovations in predictive analytics and visualization techniques. You will be driving the creation of statistical and machine learning models from prototyping through final deployment.

We want you to help us strengthen and further develop the transformation of Avaloq into a data-driven product company: make analytics scalable and accelerate the process of data science innovation.





Your profile


  • PhD or Master's degree in Computer Science, Math, Physics, Engineering, Statistics or another technical field

  • 5+ years of experience in Statistical Modelling, Anomaly Detection, and Machine Learning algorithms, both Supervised and Unsupervised

  • Proven experience in applying data science methods to business problems

  • Ability to explain complex analytical concepts to people from other fields

  • Proficiency in at least one of the following: Python, R, Java/Scala, SQL and/or SAS

  • Knowledgeable with BigData technologies and architectures (e.g. Hadoop, Spark, stream processing)

  • Expertise in text mining and natural language processing is a strong plus

  • Familiarity with network analysis and/or graph databases is a plus

  • High integrity, responsibility and confidentiality a requirement for dealing with sensitive data

  • Strong presentation and communication skills

  • Experience in leading teams and mentoring others

  • Good planning and organisational skills

  • Collaborative mindset to sharing ideas and finding solutions

  • Experience in the financial industry is a strong plus

  • Fluent in English; German, Italian and French a plus



Professional requirements




  • Use machine learning tools and statistical techniques to produce solutions for customer demands and complex problems

  • Participate in pre-sales and pre-project analysis to develop prototypes and proof-of-concepts

  • Analyse customer behaviour and needs enabling customer-centric product development

  • Liaise and coordinate with internal infrastructure and architecture team regarding setting up and running a BigData & Analytics platform

  • Strengthen data science within Avaloq and establish a data science centre of expertise

  • Look for opportunities to use insights/datasets/code/models across other functions in Avaloq



Main place of work
Zurich

Contact
Avaloq Evolution AG
Alina Tauscher, Talent Acquisition Professional
Allmendstrasse 140 - 8027 Zürich - Switzerland

careers@avaloq.com
www.avaloq.com/en/open-positions

Please only apply online.

Note to Agencies: All unsolicited résumés will be considered direct applicants and no referral fee will be acknowledged.
Avaloq Evolution AG
  • Zürich, Switzerland

The position


Are you passionate about data architecture? Are you interested in shaping the next generation of data science driven products for the financial industry? Do you enjoy working in an agile environment involving multiple stakeholders?

You will be responsible for selecting appropriate technologies from open-source, commercial on-premises and cloud-based offerings, and for integrating a new generation of tools within the existing environment to ensure access to accurate and current data. You will consider not only the functional requirements, but also the non-functional attributes of platform quality such as security, usability, and stability.

We want you to help us strengthen and further develop the transformation of Avaloq into a data-driven product company: make analytics scalable and accelerate the process of data science innovation.


Your profile


  • PhD, Master's or Bachelor's degree in Computer Science, Math, Physics, Engineering, Statistics or another technical field

  • Knowledgeable with BigData technologies and architectures (e.g. Hadoop, Spark, data lakes, stream processing)

  • Practical experience with container platforms (OpenShift) and/or containerization software (Kubernetes, Docker)

  • Hands-on experience developing data extraction and transformation pipelines (ETL process)

  • Expert knowledge in RDBMS, NoSQL and Data Warehousing

  • Familiar with information retrieval software such as Elasticsearch/Lucene/Solr

  • Firm understanding of major programming/scripting languages and environments such as Java/Scala, Linux shell, PHP, Python and/or R

  • High integrity, responsibility and confidentiality a requirement for dealing with sensitive data

  • Strong presentation and communication skills

  • Good planning and organisational skills

  • Collaborative mindset to sharing ideas and finding solutions

  • Fluent in English; German, Italian and French a plus





 Professional requirements


  • Be a thought leader on best practices for developing and deploying data science products & services

  • Provide an infrastructure to make data driven insights scalable and agile

  • Liaise and coordinate with stakeholders regarding setting up and running a BigData and analytics platform

  • Lead the evaluation of business and technical requirements

  • Support data-driven activities and a data-driven mindset where needed



Main place of work
Zurich

Contact
Avaloq Evolution AG
Anna Drozdowska, Talent Acquisition Professional
Allmendstrasse 140 - 8027 Zürich - Switzerland

www.avaloq.com/en/open-positions

Please only apply online.

Note to Agencies: All unsolicited résumés will be considered direct applicants and no referral fee will be acknowledged.
ITCO Solutions, Inc.
  • Austin, TX

The Sr. Engineer will be building pipelines using Spark and Scala (a brief illustrative sketch follows the list below). Must haves:
  • Expertise in big data processing and ETL pipelines
  • Experience designing large-scale ETL pipelines, both batch and real-time
  • Expertise in Spark Scala coding and the DataFrame API (rather than the SQL-based APIs)
  • Expertise in core DataFrame APIs
  • Expertise in unit testing Spark DataFrame-based code
  • Strong scripting knowledge in Python and shell scripting
  • Experience and expertise in performance tuning of large-scale data pipelines
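
Since the posting emphasizes the DataFrame API over SQL strings and unit testing of DataFrame code, here is a minimal illustration of that pattern. The role itself is Scala-focused; to keep this document's examples in a single language, the sketch below uses PySpark, where the idea is the same: factor transformations into pure functions over DataFrames so they can be tested in isolation. The schema and values are hypothetical.

```python
# Illustrative only: a DataFrame-API transformation (no SQL strings) factored
# into a pure function, plus a small pytest-style unit test for it.
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F


def add_total_amount(orders: DataFrame) -> DataFrame:
    # Pure DataFrame transformation: derive total_amount = quantity * unit_price.
    return orders.withColumn("total_amount", F.col("quantity") * F.col("unit_price"))


def test_add_total_amount():
    spark = SparkSession.builder.master("local[1]").appName("unit-test").getOrCreate()
    orders = spark.createDataFrame(
        [("o1", 2, 10.0), ("o2", 3, 5.0)],
        ["order_id", "quantity", "unit_price"],
    )
    result = {r["order_id"]: r["total_amount"] for r in add_total_amount(orders).collect()}
    assert result == {"o1": 20.0, "o2": 15.0}
```

The equivalent Scala version would use a function of type DataFrame => DataFrame and a local SparkSession in the test, with the same structure.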

GeoPhy
  • New York, NY

We're already working with some of the largest real estate lenders and investors across the globe, and we believe that our AVM will truly disrupt the commercial real estate industry.  Using your machine learning and analytical skills, you will contribute to the development of GeoPhy's core information products. This includes working on the development of our flagship product, the Automated Valuation Model (AVM) that we've developed for the commercial real estate market.



What you'll be responsible for



  • Developing and maintaining predictive valuation algorithms for the commercial real estate market, based on stochastic modeling

  • Identifying and analyzing new data sources to improve model accuracy, closely working with our data sourcing teams

  • Conducting statistical analysis to identify patterns and insights, and processing and feature-engineering data as needed to support model development and business products

  • Bringing models to production, in collaboration with the development and data engineering teams 

  • Supporting data sourcing strategy and the validation of related infrastructure and technology

  • Contributing to the development of methods in data science, including statistical analysis and model development related to real estate, economics, the built environment, or financial markets



What we're looking for



  • Creative and intellectually curious with hands-on experience as a data scientist

  • Flexible, resourceful, and a reliable team player

  • Rigorous analyst, critical thinker, and problem solver with experience in hypothesis testing and experimental design

  • Excellent at communicating, including technical documentation and presenting work across a variety of audiences

  • Experienced working with disparate data sources and the engineering and statistical challenges that this presents, particularly with time series, socio-economic-demographic (SED) data, and/or geo-spatial data

  • Strong at data exploration and visualization

  • Experienced implementing predictive models across a full suite of statistical learning algorithms (regression/classification, unsupervised/semi-supervised/supervised)

  • Proficient in Python or R as well as critical scientific and numeric programming packages and tools

  • Intermediate knowledge of SQL

  • Full working proficiency in English

  • An MSc/PhD degree in Computer Science, Mathematics, Statistics or a related subject, or commensurate technical experience



Bonus points for



  • International mind set

  • Experience in an Agile organization

  • Knowledge or experience with global real estate or financial markets

  • Experience with complex data and computing architectures, including cloud services and distributed computing

  • Direct experience implementing models in production or delivering a data product to market



What’s in it for you?



  • You will have the opportunity to accelerate our rapidly growing organisation.

  • We're a lean team, so your impact will be felt immediately.

  • Personal learning budget.

  • Agile working environment with flexible working hours and location.

  • No annual leave allowance; take time off whenever you need.

  • We embrace diversity and foster inclusion. This means we have a zero-tolerance policy towards discrimination.

  • GeoPhy is a family and pet friendly company.

  • Get involved in board games, books, and lego.

SafetyCulture
  • Surry Hills, Australia
  • Salary: A$120k - 140k

The Role



  • Be an integral member of the team responsible for designing, implementing and maintaining a distributed, big-data-capable system with high-quality components (Kafka, EMR + Spark, Akka, etc.).

  • Embrace the challenge of dealing with big data on a daily basis (Kafka, RDS, Redshift, S3, Athena, Hadoop/HBase), perform data ETL, and build tools for proper data ingestion from multiple data sources.

  • Collaborate closely with data infrastructure engineers and data analysts across different teams, find bottlenecks and solve the problem

  • Design, implement and maintain the heterogeneous data processing platform to automate the execution and management of data-related jobs and pipelines

  • Implement automated data workflows in collaboration with data analysts, and continue to maintain and improve the system in line with growth

  • Collaborate with Software Engineers on application events, ensuring the right data can be extracted

  • Contribute to resources management for computation and capacity planning

  • Diving deep into code and constantly innovating


Requirements



  • Experience with AWS data technologies (EC2, EMR, S3, Redshift, ECS, Data Pipeline, etc) and infrastructure.

  • Working knowledge in big data frameworks such as Apache Spark, Kafka, Zookeeper, Hadoop, Flink, Storm, etc

  • Rich experience with Linux and database systems

  • Experience with relational and NoSQL databases, query optimization, and data modelling

  • Familiar with one or more of the following: Scala/Java, SQL, Python, Shell, Golang, R, etc

  • Experience with container technologies (Docker, k8s), Agile development, DevOps and CI tools.

  • Excellent problem-solving skills

  • Excellent verbal and written communication skills 

Coolblue
  • Rotterdam, Netherlands
As an Advanced Data Analyst / Data Scientist you use the data of millions of visitors to help Coolblue act smarter.

Pros and cons

  • You're going to be working as a true Data Scientist: one who understands why you get the results that you do and applies this information to other experiments.
  • You're able to use the right tools for every job.
  • Your job starts with a problem and ends with you monitoring your own solution.
  • You have to crawl underneath the foosball table when you lose a game.

Description Data Scientist

Your challenge in this sprint is improving the weekly sales forecasting models for the Christmas period. Your cross-validation strategy is ready, but before you can begin, you have to query the data from our systems and process them in a way that allows you to view the situation with clarity.

First, you have a meeting with Matthias, who's worked on this problem before. During your meeting, you conclude that Christmas has a non-linear effect on sales. That's why you decide to experiment with a multiplicative XGBoost in addition to your Regularised-Regression model. You make a grid with various features and parameters for both models and analyze the effects of both approaches. You notice your Regression is overfitting, which means XGBoost isn't performing and the forecast isn't high enough, so you increase the regularization and assign the Christmas features to XGBoost alone.

Nice! You improved the precision of the Christmas forecast by an average of 2%. This will only yield results once the algorithm has been implemented, so you start thinking about how you want to implement it.
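
A workflow like the one above can be sketched in a few lines. The following is a minimal illustration, not Coolblue's actual code: it assumes a hypothetical weekly_sales.csv extract with made-up column names (units_sold, promo_count, avg_price) and simply compares a regularised regression against XGBoost under a time-series cross-validation split, the two ingredients the scenario mentions.

```python
# Minimal sketch: regularised regression vs. XGBoost for weekly sales
# forecasting with a Christmas feature, validated with a time-aware split.
# File name, columns, and hyperparameters are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import Ridge
from sklearn.model_selection import TimeSeriesSplit, cross_val_score
from xgboost import XGBRegressor

df = pd.read_csv("weekly_sales.csv", parse_dates=["week"])  # hypothetical extract
df["week_of_year"] = df["week"].dt.isocalendar().week.astype(int)
df["is_christmas_period"] = df["week_of_year"].isin([51, 52]).astype(int)

X = df[["week_of_year", "is_christmas_period", "promo_count", "avg_price"]]
y = df["units_sold"]

cv = TimeSeriesSplit(n_splits=5)  # respect temporal order during validation

models = {
    "ridge": Ridge(alpha=10.0),  # stronger regularisation to curb overfitting
    "xgboost": XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv,
                             scoring="neg_mean_absolute_percentage_error")
    print(f"{name}: mean MAPE = {-scores.mean():.3f}")
```

In the scenario described, a grid over features and parameters would replace the two fixed model configurations, and the winning configuration would then be productionised and monitored.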

Your specifications

  • You have at least 4 years of experience in a similar function.
  • You have a university degree, MSc, or PhD in Mathematics, Computer Science, or Statistics.
  • You have experience with Machine Learning techniques, such as Gradient Boosting, Random Forest, and Neural Networks, and you have proven experience with successfully applying these (or similar) techniques in a business environment.
  • You have some experience with Data mining, SQL, BigQuery, NoSQL, R, and monitoring.
  • You're highly knowledgeable about Python.
  • You have experience with Big Data technologies, such as Spark and Hadoop.

Included by default.

  • Money.
  • Travel allowance and a retirement plan.
  • 25 leave days. As long as you promise to come back.
  • A discount on all our products.
  • A picture-perfect office at a great location. You could crawl to work from Rotterdam Central Station. Though we recommend just walking for 2 minutes.
  • A horizontal organisation in the broadest sense. You could just go and have a beer with the boss.

Review



'I believe I'm working in a great team of enthusiastic and smart people, with a good mix of juniors and seniors. The projects that we work on are very interesting and diverse, think of marketing, pricing and recommender systems. For each project we try to use the latest research and machine learning techniques in order to create the best solutions. I like that we are involved in the projects start to end, from researching the problem to experimenting, to putting it in production, and to creating the monitoring dashboards and delivering the outputs on a daily basis to our stakeholders. The work environment is open, relaxed and especially fun'
- Cheryl Zandvliet, Data Scientist
Accenture
  • San Diego, CA
The digital revolution is changing everything. It's everywhere, transforming how we work and play. Accenture Digital's 36,000 professionals are driving these exciting changes and bringing them to life across 40 industries in more than 120 countries. At the forefront of digital, you'll create it, own it and make it a reality for clients looking to better serve their connected customers and operate always-on enterprises. Join us and become an integral part of our experienced digital team with the credibility, expertise and insight clients depend on.
Accenture Analytics, part of Accenture Digital, helps clients use analytics and artificial intelligence to drive actionable insights at scale. We apply sophisticated algorithms, data engineering and visualization to extract business insights and help clients turn those insights into actions that drive tangible outcomes to improve their performance and disrupt their markets. Accenture Analytics is a leader in Analytics, with deep industry and technical experience. We provide services and solutions that include Analytics Advisory, Data Science, Data Engineering and Analytics-as-a-Service.
ROLE DESCRIPTION: Artificial Intelligence Solution Architect Senior Manager
Accenture Analytics partners with our largest clients to define and enable insights-led transformation. As an Artificial Intelligence Senior Manager (Solution Architect), you will be responsible for providing solution blueprints to develop intelligent applications that meet today's growing demand for AI. Relying on commercial and public technologies, our consulting professionals implement scalable, high-performance intelligent solutions that meet the needs of today's corporate and digital applications.
Responsibilities
    • Solution and Proposal Alignment - Through a formal sales process, work with the Sales team to identify and qualify opportunities. Conduct full technical discovery, identifying pain points, business and technical requirements, and as-is and to-be scenarios.
    • Understand the strategic direction set by senior management as it relates to team goals. Use considerable judgment to define solutions and seek guidance on complex problems.
    • Define Architecture Reference Assets - Apply Accenture methodology, Accenture reusable assets, and previous work experience to deliver consistently high-quality work. Deliver written or oral status reports regularly. Stay educated on new and emerging market offerings that may be of interest to our clients. Adapt existing methods and procedures to create possible alternative solutions to moderately complex problems.
    • Targeted Solution Blueprint - Architect client solutions to meet gathered requirements. Assess the full technology stack of services required, including app development, network, compute, storage, management and automation. Identify cloud ecosystem components across a variety of vendors that align with business objectives and meet technical design requirements. Compare and contrast alternatives across both technical and business parameters that support the defined cost and service requirements.
    • Work hands on to demonstrate and prototype integrations in customer environments. Primary upward interaction is with direct supervisor. May interact with peers and/or management levels at a client and/or within Accenture.
    • Center of Excellence - Prepare and deliver product messaging in an effort to highlight value proposition and unique differentiators. Establish methods and procedures on new assignments with guidance. Manage small teams and/or work efforts (if in an independent contributor role) at a client or within Accenture.
Basic Qualifications
      • Bachelor's degree in Computer Science, Engineering, Statistics, Technical Science
      • Minimum of 7 years of experience preparing and delivering product messaging in an effort to highlight value proposition and unique differentiators
      • Minimum of 4 years of previous consulting or client service delivery experience
      • Minimum of 5 years of experience with system integration architectures, private and public cloud architectures, pros/cons, transformation experience
      • Minimum of 5 years of experience in deep learning, machine learning or artificial intelligence applications like virtual agent, RPA, video or image analytics, text analytics
      • Minimum of 3 years of client assessment and roadmap experience
      • Minimum of 2 years of sales pursuits and oral presentations
      • Minimum of 3 years of full lifecycle deployment experience
Preferred Skills
      • Prior experience with data engineering technologies like Spark, NoSQL DBs, and lambda architectures
      • Bachelor's degree in a quantitative discipline (Engineering, Economics, Statistics, Operations Research, Computer Science)
      • Master's or PhD in Analytics, Statistics or other quantitative disciplines
      • Certification on Cognitive tools like Microsoft Bot Framework & Cognitive Services, IBM Watson, Amazon AI services
      • Experience with deep learning approaches
      • Prior delivery experience with Robotic Process Automation
      • Basic understanding of Machine Learning techniques & industry applications
      • Experience in and understanding of data and information management, especially in Big Data trends.
      • Exposure to cloud technologies like Azure, Bluemix etc.
    CALL TO ACTION
    Now is the time to become a digital disrupter. The opportunity is here; what's stopping you from pursuing it?
    Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H1-B visa, F-1 visa (OPT), TN visa or any other non-immigrant status).
    Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration.
    Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
    Equal Employment Opportunity
    All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
    Accenture is committed to providing veteran employment opportunities to our service men and women.
    ConocoPhillips
    • Houston, TX
    Our Company
    ConocoPhillips is the world's largest independent E&P company based on production and proved reserves. Headquartered in Houston, Texas, ConocoPhillips had operations and activities in 17 countries, $71 billion of total assets, and approximately 11,100 employees as of Sept. 30, 2018. Production excluding Libya averaged 1,221 MBOED for the nine months ended Sept. 30, 2018, and proved reserves were 5.0 billion BOE as of Dec. 31, 2017.
    Employees across the globe focus on fulfilling our core SPIRIT Values of safety, people, integrity, responsibility, innovation and teamwork. And we apply the characteristics that define leadership excellence in how we engage each other, collaborate with our teams, and drive the business.
    Description
    The purpose of this role is to enable and support Citizen Data Scientists (CDS) to develop analytical workflows and to manage the adoption and implementation of the latest innovations within the ConocoPhillips preferred analytics tools for Citizen Data Science.
    This position will enable analytics tools and solutions for customers, including: the facilitation of the solution roadmap, the adoption of new analytics functionality, the integration between applications based on value-driven workflows, and the support and training of users on new capabilities.
    Responsibilities May Include
    • Work with customers to enable the latest data analytics capabilities
    • Understand and help implement the latest innovations available within ConocoPhillips preferred analytics platform including Spotfire, Statistica, ArcGIS Big Data (Spatial Analytics), Teradata and Python
    • Help users with the implementation of analytics workflows through integration of the analytics applications
    • Manage analytics solutions roadmap and implementation timeline enabling geoscience customers to take advantage of the latest features or new functionality
    • Communicate with vendors and COP community on analytics technology functionality upgrades, prioritized enhancements and adoption
    • Test and verify that existing analytics workflows are supported within the latest version of the technology
    • Guide users on how to enhance their current workflows with the latest analytics technology
    • Facilitate problem solving with analytics solutions
    • Work with other AICOE teams to validate and implement new technology or version upgrades into production
    Specific Responsibilities May Include
    • Provide architectural guidance for building integrated analytical solutions
    • Understands analytics product roadmaps, product development and the implementation of new features
    • Promotes new analytics product features within customer base and demonstrates how it enables analytics workflows
    • Manage COP analytics product adoption roadmap
    • Capture product enhancement list and coordinate prioritization with the vendor
    • Test new capabilities and map them to COP business workflows
    • Coordinate with the AICOE team the timely upgrades of the new features
    • Provides support to CDS for:
      • analytics modelling best practices
      • know how implementation of analytics workflows based on new technology
    • Liaise with the AICOE Infrastructure team for timely technology upgrades
    • Work on day to day end user support activities for Citizen Data Science tools; Advanced Spotfire, Statistica, GIS Big Data
    • Provides technical consulting and guidance to Citizen Data Scientist for the design and development of complex analytics workflows
    • Communicates analytics technology roadmap to end users
    • Communicates and demonstrates the value of new features to COP business
    • Train and mentor Citizen Data Scientists on analytics solutions
    Basic/Required
    • Legally authorized to work in the United States
    • Bachelor's degree in Information Technology, Computer Sciences, Geoscience, Engineering, Statistics or related field
    • 5+ years of experience in oil & gas and geoscience data and workflows
    • 3+ years of experience with Tibco Spotfire
    • 3+ years of experience with Teradata or using SQL databases
    • 1+ years of experience with ArcGIS spatial analytics tools
    • Advanced knowledge and experience of integration platforms
    Preferred
    • Master's degree in Analytics or related field
    • 1+ years of experience with Tibco Statistica or equivalent statistics-based analytics package
    • Prior experience in implementing and supporting visual, prescriptive and predictive analytics
    • In-depth understanding of the analytics applications and integration points
    • Experience implementing data science workflows in Oil & Gas
    • Takes ownership of actions and follows through on commitments by courageously dealing with important problems, holding others accountable, and standing up for what is right
    • Delivers results through realistic planning to accomplish goals
    • Generates effective solutions based on available information and makes timely decisions that are safe and ethical
    To be considered for this position you must complete the entire application process, which includes answering all prescreening questions and providing your eSignature on or before the requisition closing date of February 27, 2019.
    Candidates for this U.S. position must be a U.S. citizen or national, or an alien admitted as permanent resident, refugee, asylee or temporary resident under 8 U.S.C. 1160(a) or 1255(a) (1). Individuals with temporary visas such as A, B, C, D, E, F, G, H, I, J, L, M, NATO, O, P, Q, R or TN or who need sponsorship for work authorization in the United States now or in the future, are not eligible for hire.
    ConocoPhillips is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, national origin, age, disability, veteran status, gender identity or expression, genetic information or any other legally protected status.
    Job Function
    Information Management-Information Technology
    Job Level
    Individual Contributor/Staff Level
    Primary Location
    NORTH AMERICA-USA-TEXAS-HOUSTON
    Organization
    ANALYTICS INNOVATION
    Line of Business
    Corporate Staffs
    Job Posting
    Feb 13, 2019, 4:51:37 PM
    ConocoPhillips
    • Houston, TX
    Our Company
    ConocoPhillips is the world's largest independent E&P company based on production and proved reserves. Headquartered in Houston, Texas, ConocoPhillips had operations and activities in 17 countries, $71 billion of total assets, and approximately 11,100 employees as of Sept. 30, 2018. Production excluding Libya averaged 1,221 MBOED for the nine months ended Sept. 30, 2018, and proved reserves were 5.0 billion BOE as of Dec. 31, 2017.
    Employees across the globe focus on fulfilling our core SPIRIT Values of safety, people, integrity, responsibility, innovation and teamwork. And we apply the characteristics that define leadership excellence in how we engage each other, collaborate with our teams, and drive the business.
    Description
    The Sr. Analytics Analyst will be part of the Production, Drilling, and Projects Analytics Services Team within the Analytics Innovation Center of Excellence that enables data analytics across the ConocoPhillips global enterprise. This role works with business units and global functions to help strategically design, implement, and support data analytics solutions. This is a full-time position that provides tremendous career growth potential within ConocoPhillips.
    Responsibilities May Include
    • Complete end to end delivery of data analytics solutions to the end user
    • Interacting closely with both business and developers while gathering requirements, designing, testing, implementing and supporting solutions
    • Gather business and technical specifications to support analytic, report and database development
    • Collect, analyze and translate user requirements into effective solutions
    • Build report and analytic prototypes based on initial business requirements
    • Provide status on the issues and progress of key business projects
    • Providing regular reporting on the performance of data analytics solutions
    • Delivering regular updates and maintenance on data analytics solutions
    • Championing the data analytics solutions and technologies at ConocoPhillips
    • Integrate data for data models used by the customers
    • Deliver Data Visualizations used for data driven decision making
    • Provide strategic technology direction while supporting the needs of the business
    Basic/Required
    • Legally authorized to work in the United States
    • 5+ years of related IT experience
    • 5+ years of Structured Query Language (SQL) experience (ANSI SQL, T-SQL, PL/SQL)
    • 3+ years of hands-on experience delivering solutions with analytics tools (e.g., Spotfire, SSRS, Power BI, Tableau, Business Objects)
    Preferred
    • Bachelor's Degree in Information Technology or Computer Science
    • 5+ years of Oil and Gas Industry experience
    • 5+ years hands-on experience delivering solutions with Informatica PowerCenter
    • 5+ years architecting data warehouses and/or data lakes
    • 5+ years with Extract Transform and Load (ETL) tools and best practices
    • 3+ years hands-on experience delivering solutions with Teradata
    • 1+ years developing analytics models with R or Python
    • 1+ years developing visualizations using R or Python
    • Experience with Oracle (11g, 12c) and SQL Server (2008 R2, 2010, 2016) and Teradata 15.x
    • Experience with Hadoop technologies (Hortonworks, Cloudera, SQOOP, Flume, etc.)
    • Experience with AWS technologies (S3, SageMaker, Athena, EMR, Redshift, Glue, etc.)
    • Thorough understanding of BI/DW concepts, proficient in SQL, and data modeling
    • Familiarity with ETL tools (Informatica, etc.) and ETL processes
    • Solutions oriented individual; learn quickly, understand complex problems, and apply useful solutions
    • Ability to work in a fast-paced environment independently with the customer
    • Ability to work as a team player
    • Ability to work with business and technology users to define and gather reporting and analytics requirements
    • Strong analytical, troubleshooting, and problem-solving skills; experience in analyzing and understanding business/technology system architectures, databases, and client applications to recognize, isolate, and resolve problems
    • Demonstrates the desire and ability to learn and utilize new technologies in data analytics solutions
    • Strong communication and presentation skills
    • Takes ownership of actions and follows through on commitments by courageously dealing with important problems, holding others accountable, and standing up for what is right
    • Delivers results through realistic planning to accomplish goals
    • Generates effective solutions based on available information and makes timely decisions that are safe and ethical
    To be considered for this position you must complete the entire application process, which includes answering all prescreening questions and providing your eSignature on or before the requisition closing date of February 20, 2019.
    Candidates for this U.S. position must be a U.S. citizen or national, or an alien admitted as permanent resident, refugee, asylee or temporary resident under 8 U.S.C. 1160(a) or 1255(a) (1). Individuals with temporary visas such as A, B, C, D, E, F, G, H, I, J, L, M, NATO, O, P, Q, R or TN or who need sponsorship for work authorization in the United States now or in the future, are not eligible for hire.
    ConocoPhillips is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, national origin, age, disability, veteran status, gender identity or expression, genetic information or any other legally protected status.
    Job Function
    Information Management-Information Technology
    Job Level
    Individual Contributor/Staff Level
    Primary Location
    NORTH AMERICA-USA-TEXAS-HOUSTON
    Organization
    ANALYTICS INNOVATION
    Line of Business
    Corporate Staffs
    Job Posting
    Feb 13, 2019, 4:56:49 PM
    State Farm
    • Atlanta, GA

    WHAT ARE THE DUTIES AND RESPONSIBILITIES OF THIS POSITION?

      • Performs improved visual representation of data to allow clearer communication, viewer engagement and faster/better decision-making
      • Investigates, recommends, and initiates acquisition of new data resources from internal and external sources
      • Works with IT teams to support data collection, integration, and retention requirements based on business need
      • Identifies critical and emerging technologies that will support and extend quantitative analytic capabilities
      • Manages work efforts which require the use of sophisticated project planning techniques
      • Applies a wide application of complex principles, theories and concepts in a specific field to provide solutions to a wide range of difficult problems
      • Develops and maintains an effective network of both scientific and business contacts/knowledge, obtaining relevant information and intelligence around the market and emergent opportunities
      • Contributes data to State Farm's internal and external publications, write articles for leading journals and participate in academic and industry conferences
      • Collaborates with business subject matter experts to select relevant sources of information
      • Develop breadth of knowledge in programming (R, Python), Descriptive, Inferential, and Experimental Design statistics, advanced mathematics, and database functionality (SQL, Hadoop)
      • Develop expertise with multiple machine learning algorithms and data science techniques, such as exploratory data analysis, generative and discriminative predictive modeling, graph theory, recommender systems, text analytics, computer vision, deep learning, optimization and validation
      • Develop expertise with State Farm datasets, data repositories, and data movement processes
      • Assists on projects/requests and may lead specific tasks within the project scope
      • Prepares and manipulates data for use in development of statistical models
      • Develops fundamental understanding of insurance and financial services operations and uses this knowledge in decision making


    Additional Details:

    For over 95 years, data has been key to State Farm.  As a member of our data science team with the Enterprise Data & Analytics department under our Chief Data & Analytics Officer, you will work across the organization to solve business problems and help achieve business strategies.  You will employ sophisticated statistical approaches and state-of-the-art technology.  You will build and refine our tools/techniques and engage w/internal stakeholders across the organization to improve our products & services.


    Implementing solutions is critical for success. You will do problem identification, solution proposal & presentation to a wide variety of management & technical audiences. This challenging career requires you to work on multiple concurrent projects in a community setting, developing yourself and others, and advancing data science both at State Farm and externally.


    Skills & Professional Experience

    • Develop hypotheses, design experiments, and test feasibility of proposed actions to determine probable outcomes using a variety of tools & technologies

    • Master's or other advanced degree, or five years of experience in an analytical field such as data science, quantitative marketing, statistics, operations research, management science, industrial engineering, economics, etc., or equivalent practical experience preferred

    • Experience with SQL, Python, R, Java, SAS, MapReduce, or Spark

    • Experience with unstructured data sets: text analytics, image recognition, etc.

    • Experience working w/numerous large data sets/data warehouses & ability to pull from such data sets using relevant programs & coding, including files, RDBMS & Hadoop-based storage systems

    • Knowledge of machine learning methods including at least one of the following: time series analysis, Hierarchical Bayes, or learning techniques such as Decision Trees, Boosting, Random Forests

    • Excellent communication skills and the ability to manage multiple diverse stakeholders across businesses & leadership levels

    • Exercise sound judgment to diagnose & resolve problems within area of expertise

    • Familiarity with CI/CD development methods, Git and Docker a plus


    Multiple location opportunity. Locations offered are: Atlanta, GA, Bloomington, IL, Dallas, TX and Phoenix, AZ


    Remote work option is not available.


    There is no sponsorship for an employment visa for the position at this time.


    Competencies desired:
    Critical Thinking
    Leadership
    Initiative
    Resourcefulness
    Relationship Building
    State Farm
    • Dallas, TX

    WHAT ARE THE DUTIES AND RESPONSIBILITIES OF THIS POSITION?

      • Performs improved visual representation of data to allow clearer communication, viewer engagement and faster/better decision-making
      • Investigates, recommends, and initiates acquisition of new data resources from internal and external sources
      • Works with IT teams to support data collection, integration, and retention requirements based on business need
      • Identifies critical and emerging technologies that will support and extend quantitative analytic capabilities
      • Manages work efforts which require the use of sophisticated project planning techniques
      • Applies a wide application of complex principles, theories and concepts in a specific field to provide solutions to a wide range of difficult problems
      • Develops and maintains an effective network of both scientific and business contacts/knowledge, obtaining relevant information and intelligence around the market and emergent opportunities
      • Contributes data to State Farm's internal and external publications, write articles for leading journals and participate in academic and industry conferences
      • Collaborates with business subject matter experts to select relevant sources of information
      • Develop breadth of knowledge in programming (R, Python), Descriptive, Inferential, and Experimental Design statistics, advanced mathematics, and database functionality (SQL, Hadoop)
      • Develop expertise with multiple machine learning algorithms and data science techniques, such as exploratory data analysis, generative and discriminative predictive modeling, graph theory, recommender systems, text analytics, computer vision, deep learning, optimization and validation
      • Develop expertise with State Farm datasets, data repositories, and data movement processes
      • Assists on projects/requests and may lead specific tasks within the project scope
      • Prepares and manipulates data for use in development of statistical models
      • Develops fundamental understanding of insurance and financial services operations and uses this knowledge in decision making


    Additional Details:

    For over 95 years, data has been key to State Farm.  As a member of our data science team with the Enterprise Data & Analytics department under our Chief Data & Analytics Officer, you will work across the organization to solve business problems and help achieve business strategies.  You will employ sophisticated statistical approaches and state-of-the-art technology.  You will build and refine our tools/techniques and engage with internal stakeholders across the organization to improve our products & services.


    Implementing solutions is critical for success. You will identify problems, propose solutions, and present them to a wide variety of management and technical audiences. This challenging career requires you to work on multiple concurrent projects in a community setting, developing yourself and others, and advancing data science both at State Farm and externally.


    Skills & Professional Experience

    ·        Develop hypotheses, design experiments, and test feasibility of proposed actions to determine probable outcomes using a variety of tools & technologies

    ·        Master's or other advanced degree, or five years' experience in an analytical field such as data science, quantitative marketing, statistics, operations research, management science, industrial engineering, or economics, or equivalent practical experience preferred.

    ·        Experience with SQL, Python, R, Java, SAS, MapReduce, or Spark

    ·        Experience with unstructured data sets: text analytics, image recognition, etc.

    ·        Experience working with numerous large data sets/data warehouses and the ability to pull from such data sets using relevant programs and coding, including files, RDBMS, and Hadoop-based storage systems

    ·        Knowledge of machine learning methods, including at least one of the following: time series analysis, hierarchical Bayes, or learning techniques such as decision trees, boosting, and random forests

    ·        Excellent communication skills and the ability to manage multiple diverse stakeholders across businesses & leadership levels.

    ·        Exercise sound judgment to diagnose & resolve problems within area of expertise

    ·        Familiarity with CI/CD development methods, Git and Docker a plus


    Multiple location opportunity. Locations offered are: Atlanta, GA, Bloomington, IL, Dallas, TX and Phoenix, AZ


    Remote work option is not available.


    There is no sponsorship for an employment visa for the position at this time.


    Competencies desired:
    Critical Thinking
    Leadership
    Initiative
    Resourcefulness
    Relationship Building
    Riccione Resources
    • Dallas, TX

    Sr. Data Engineer - Hadoop, Spark, Data Pipelines, Growing Company

    One of our clients is looking for a Sr. Data Engineer in the Fort Worth, TX area! Build your data expertise with projects centering on large Data Warehouses and new data models! Think outside the box to solve challenging problems! Thrive in the variety of technologies you will use in this role!

    Why should I apply here?

      • Culture built on creativity and respect for engineering expertise
      • Nominated as one of the Best Places to Work in DFW
      • Entrepreneurial environment, growing portfolio and revenue stream
      • One of the fastest growing mid-size tech companies in DFW
      • Executive management with past successes in building firms
      • Leader of its technology niche, setting the standards
      • A robust, fast-paced work environment
      • Great technical challenges for top-notch engineers
      • Potential for career growth, emphasis on work/life balance
      • A remodeled office with a bistro, lounge, and foosball

    What will I be doing?

      • Building data expertise and owning data quality for the transfer pipelines that you create to transform and move data to the company's large Data Warehouse
      • Architecting, constructing, and launching new data models that provide intuitive analytics to customers
      • Designing and developing new systems and tools to enable clients to optimize and track advertising campaigns
      • Using your expert skills across a number of platforms and tools such as Ruby, SQL, Linux shell scripting, Git, and Chef
      • Working across multiple teams in high visibility roles and owning the solution end-to-end
      • Providing support for existing production systems
      • Broadly influencing the company's clients and internal analysts

    What skills/experiences do I need?

      • B.S. or M.S. degree in Computer Science or a related technical field
      • 5+ years of experience working with Hadoop and Spark
      • 5+ years of experience with Python or Ruby development
      • 5+ years of experience with efficient SQL (Postgres, Vertica, Oracle, etc.)
      • 5+ years of experience building and supporting applications on Linux-based systems
      • Background in engineering Spark data pipelines
      • Understanding of distributed systems

    What will make my résumé stand out?

      • Ability to customize an ETL or ELT pipeline (a brief sketch follows this section)
      • Experience building an actual data warehouse schema
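
    As a rough illustration of the ETL/ELT customization mentioned above, here is a minimal Python sketch of an extract-transform-load step. It is a sketch under stated assumptions only: it uses the standard-library sqlite3 module as a stand-in for the real source and warehouse systems, and every table, column, and value is a made-up placeholder rather than the client's actual schema.

```python
# Minimal ETL sketch using sqlite3 as a stand-in for real source/warehouse DBs.
# Table and column names are placeholders, not the client's actual schema.
import sqlite3

src = sqlite3.connect(":memory:")   # pretend source system
dwh = sqlite3.connect(":memory:")   # pretend data warehouse

# --- Extract: set up and read some example source rows ---
src.execute("CREATE TABLE raw_events (user_id INTEGER, amount_cents INTEGER)")
src.executemany("INSERT INTO raw_events VALUES (?, ?)",
                [(1, 1250), (1, 300), (2, 999)])
rows = src.execute("SELECT user_id, amount_cents FROM raw_events").fetchall()

# --- Transform: convert cents to dollars and aggregate per user ---
totals = {}
for user_id, amount_cents in rows:
    totals[user_id] = totals.get(user_id, 0) + amount_cents / 100.0

# --- Load: write the aggregated facts into a warehouse-style table ---
dwh.execute("CREATE TABLE fact_user_spend (user_id INTEGER, total_dollars REAL)")
dwh.executemany("INSERT INTO fact_user_spend VALUES (?, ?)", totals.items())
dwh.commit()

print(dwh.execute("SELECT * FROM fact_user_spend ORDER BY user_id").fetchall())
```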

    Location: Fort Worth, TX

    Citizenship: U.S. citizens and those authorized to work in the U.S. are encouraged to apply. This company is currently unable to provide sponsorship (e.g., H1B).

    Salary: $115k-$130k + 401(k) Match

    ---------------------------------------------------


    ~SW1317~

    Gravity IT Resources
    • Miami, FL

    Overview of Position:

    We are undertaking an ambitious digital transformation across Sales, Service, Marketing, and eCommerce. We are looking for a web data analytics wizard with prior experience in digital data preparation, discovery, and predictive analytics.

    The data scientist/web analyst will work with external partners, digital business partners, enterprise analytics, and technology teams to strategically plan and develop datasets, measure web analytics, and execute on predictive and prescriptive use cases. The role demands the ability to (1) learn quickly; (2) work in a fast-paced, team-driven environment; (3) manage multiple efforts simultaneously; (4) use large datasets and models to test the effectiveness of different courses of action; (5) promote data-driven decision-making throughout the organization; and (6) define and measure the success of the capabilities we provide the organization.


    Primary Duties and Responsibilities

      • Analyze data captured through Google Analytics and develop meaningful, actionable insights on digital behavior.
      • Put together a customer 360 data frame by connecting CRM Sales, Service, and Marketing cloud data with Commerce web behavior data and wrangle the data into a usable form (a brief sketch follows this list).
      • Use predictive modelling to increase and optimize customer experiences across online & offline channels.
      • Evaluate customer experience and conversions to provide insights & tactical recommendations for web optimization.
      • Execute on digital predictive use cases and collaborate with enterprise analytics team to ensure use of best tools and methodologies.
      • Lead support for enterprise voice of customer feedback analytics.
      • Enhance and maintain digital data library and definitions.
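
    The customer 360 duty above is, at its core, a join-and-aggregate exercise. The sketch below shows that idea in pandas with toy data; the column names and values are placeholders standing in for CRM and web-analytics extracts, not actual fields.

```python
# Minimal customer-360 sketch: join CRM and web-behavior extracts on a shared key.
# Column names and data are illustrative placeholders.
import pandas as pd

crm = pd.DataFrame({
    "customer_id": [101, 102, 103],
    "segment": ["SMB", "Enterprise", "SMB"],
    "lifetime_value": [1200.0, 56000.0, 800.0],
})

web = pd.DataFrame({
    "customer_id": [101, 101, 103],
    "sessions": [3, 2, 7],
    "conversions": [1, 0, 2],
})

# Aggregate web behavior per customer, then left-join onto the CRM view.
web_agg = web.groupby("customer_id", as_index=False).sum()
customer_360 = crm.merge(web_agg, on="customer_id", how="left").fillna(0)
print(customer_360)
```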

    Minimum Qualifications

    • Bachelor's degree in Statistics, Computer Science, Marketing, Engineering or equivalent
    • 3 years or more of working experience in building predictive models.
    • Experience in Google Analytics or similar web behavior tracking tools is required.
    • Experience in R is a must, with working knowledge of connecting to multiple data sources such as Amazon Redshift, Salesforce, Google Analytics, etc.
    • Working knowledge of machine learning algorithms such as Random Forest, K-means, Apriori, Support Vector Machines, etc.
    • Experience in A/B testing or multivariate testing (a brief sketch follows this list).
    • Experience in media tracking tags and pixels, UTM, and custom tracking methods.
    • Microsoft Office Excel & PPT (advanced).
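
    For the A/B testing qualification above, a common way such a test is evaluated is sketched below using SciPy's chi-square test of independence. The conversion counts and the 5% significance threshold are illustrative assumptions only, not data from this role.

```python
# Minimal A/B test sketch: compare conversion rates of two variants with a
# chi-square test of independence. Counts below are made up for illustration.
from scipy.stats import chi2_contingency

# rows: variant A, variant B; columns: converted, did not convert
table = [[120, 1880],
         [150, 1850]]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p_value:.4f}")
if p_value < 0.05:
    print("Difference in conversion rate is statistically significant at the 5% level.")
else:
    print("No statistically significant difference detected.")
```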

    Preferred Qualifications

    • Master's degree in statistics or equivalent.
    • Google Analytics 360 experience/certification.
    • SQL workbench, Postgres.
    • Alteryx experience is a plus.
    • Tableau experience is a plus.
    • Experience in HTML, JavaScript.
    • Experience in SAP analytics cloud or SAP desktop predictive tool is a plus
    Signify Health
    • Dallas, TX

    Position Overview:

    Signify Health is looking for a savvy Data Engineer to join our growing team of deep learning specialists. This position would be responsible for evolving and optimizing data and data pipeline architectures, as well as optimizing data flow and collection for cross-functional teams. The Data Engineer will support software developers, database architects, data analysts, and data scientists. The ideal candidate would be self-directed, passionate about optimizing data, and comfortable supporting the data wrangling needs of multiple teams, systems and products.

    If you enjoy providing expert-level IT technical services, including the direction, evaluation, selection, configuration, implementation, and integration of new and existing technologies and tools while working closely with IT team members, data scientists, and data engineers to build our next generation of AI-driven solutions, we will give you the opportunity to grow personally and professionally in a dynamic environment. Our projects are built on cooperation and teamwork, and you will find yourself working together with other talented, passionate and dedicated team members, all working towards a shared goal.

    Essential Job Responsibilities:

    • Assemble large, complex data sets that meet functional / non-functional business requirements
    • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing data models for greater scalability, etc.
    • Leverage Azure for extraction, transformation, and loading of data from a wide variety of data sources in support of AI/ML Initiatives
    • Design and implement high performance data pipelines for distributed systems and data analytics for deep learning teams
    • Create tool-chains for analytics and data scientist team members that assist them in building and optimizing AI workflows
    • Work with data and machine learning experts to strive for greater functionality in our data and model life cycle management capabilities
    • Communicate results and ideas to key decision makers in a concise manner
    • Comply with applicable legal requirements, standards, policies and procedures including, but not limited to the Compliance requirements and HIPAA.


    Qualifications:
    Education/Licensing Requirements:
    • High school diploma or equivalent.
    • Bachelor's degree in Computer Science, Electrical Engineering, Statistics, Informatics, Information Systems, or another quantitative field, or equivalent work experience.


    Experience Requirements:
    • 5+ years of experience in a Data Engineer role.
    • Experience using the following software/tools preferred:
      • Experience with big data tools: Hadoop, Spark, Kafka, etc.
      • Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
      • Experience with AWS or Azure cloud services.
      • Experience with stream-processing systems: Storm, Spark Streaming, etc. (a minimal sketch follows this list)
      • Experience with object-oriented/object function scripting languages: Python, Java, C#, etc.
    • Strong work ethic, able to work both collaboratively, and independently without a lot of direct supervision, and solid problem-solving skills
    • Must have strong communication skills (written and verbal), and possess good one-on-one interpersonal skills.
    • Advanced working SQL knowledge and experience with relational databases, including query authoring (SQL) and working familiarity with a variety of databases
    • A successful history of manipulating, processing and extracting value from large disconnected datasets.
    • Working knowledge of message queuing, stream processing, and highly scalable big data data stores.
    • 2 years of experience in data modeling, ETL development, and Data warehousing
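
    As a generic illustration of the stream-processing experience listed above (not a description of Signify Health's actual stack), here is a minimal PySpark Structured Streaming sketch that reads a Kafka topic and prints rolling counts to the console. The broker address and topic name are placeholders, and it assumes the spark-sql-kafka connector package is available on the cluster.

```python
# Minimal PySpark Structured Streaming sketch: consume a Kafka topic and
# maintain rolling per-key message counts. Broker and topic are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # placeholder broker
    .option("subscribe", "events")                         # placeholder topic
    .load()
)

# Kafka keys/values arrive as bytes; cast to strings and count messages per key.
counts = (
    events.select(F.col("key").cast("string"), F.col("value").cast("string"))
    .groupBy("key")
    .count()
)

query = (
    counts.writeStream.outputMode("complete")
    .format("console")
    .start()
)
query.awaitTermination()
```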
     

    Essential Skills:

    • Fluently speak, read, and write English
    • Fantastic motivator and leader of teams with a demonstrated track record of mentoring and developing staff members
    • Strong point of view on who to hire and why
    • Passion for solving complex system and data challenges and desire to thrive in a constantly innovating and changing environment
    • Excellent interpersonal skills, including teamwork and negotiation
    • Excellent leadership skills
    • Superior analytical abilities, problem solving skills, technical judgment, risk assessment abilities and negotiation skills
    • Proven ability to prioritize and multi-task
    • Advanced skills in MS Office

    Essential Values:

    • In Leadership: Do what's right, even if it's tough
    • In Collaboration: Leverage our collective genius, be a team
    • In Transparency: Be real
    • In Accountability: Recognize that if it is to be, it's up to me
    • In Passion: Show commitment in heart and mind
    • In Advocacy: Earn trust and business
    • In Quality: Ensure what we do, we do well
    Working Conditions:
    • Fast-paced environment
    • Requires working at a desk and use of a telephone and computer
    • Normal sight and hearing ability
    • Use office equipment and machinery effectively
    • Ability to ambulate to various parts of the building
    • Ability to bend, stoop
    • Work effectively with frequent interruptions
    • May require occasional overtime to meet project deadlines
    • Lifting requirements of
    Mix.com
    • Phoenix, AZ

    Are you interested in scalability & distributed systems? Do you want to help shape a discovery engine powered by cutting-edge technologies and machine learning at scale? If you answered yes to the above questions, Mix's Research and Development is the team for you!


    In this role, you'll be part of a small and innovative team comprised of engineers and data scientists working together to understand content by leveraging machine learning and NLP technologies. You will have the opportunity to work on core problems like detection of low quality content or spam, text semantic analysis, video and image processing, content quality assessment and monitoring. Our code operates at massive scale, ingesting, processing and indexing millions of URLs.



    Responsibilities

    • Write code to build infrastructure that can scale with load
    • Collaborate with researchers and data scientists to integrate innovative Machine Learning and NLP techniques with our serving, cloud and data infrastructure
    • Automate the build and deployment process, and set up monitoring and alerting systems
    • Participate in the engineering life-cycle, including writing documentation and conducting code reviews


    Required Qualifications

    • Strong knowledge of algorithms, data structures, object oriented programming and distributed systems
    • Fluency in an OO programming language such as Scala (preferred), Java, C, or C++
    • 3+ years demonstrated expertise in stream processing platforms like Apache Flink, Apache Storm and Apache Kafka
    • 2+ years experience with a cloud platform like Amazon Web Services (AWS) or Microsoft Azure
    • 2+ years experience with monitoring frameworks, and analyzing production platforms, UNIX servers and mission critical systems with alerting and self-healing systems
    • Creative thinker and self-starter
    • Strong communication skills


    Desired Qualifications

    • Experience with Hadoop, Hive, Spark or other MapReduce solutions
    • Knowledge of statistics or machine learning
    Biswas Information Technology Solutions
    • Herndon, VA

    We are seeking a junior- to mid-level Data Science Engineer to analyze large amounts of raw data from different sources and extract valuable business insights to aid in better business decision-making. An analytical mind, problem-solving skills, and a passion for machine learning and research are critical for this role. You will be part of a highly passionate development team that is in the process of refining our Data Science toolkit, which includes a wide set of predictive, recommendation, and inference modeling for our AI product — ranging from time-series forecasting, sentiment analysis, custom object detection, to named-entity recognition, text summarization, and geometric deep learning.



    Responsibilities



    • Identify valuable data sources and automate collection processes

    • Preprocessing of structured and unstructured data

    • Discover trends and patterns in large amounts of data

    • Build predictive models and machine-learning algorithms

    • Present information using data visualization techniques

    • Propose solutions and strategies to business challenges

    • Collaborate with engineering and product development teams



    Requirements



    • Strong fundamentals in training, evaluating, and benchmarking machine learning models

    • Strong in Python: NumPy, pandas, Keras (TensorFlow or PyTorch is a plus); a brief sketch follows this requirements list

    • Familiar with feature selection and feature extraction (experience applying these to deep learning is a strong plus)

    • Familiarity with common hyperparameter-optimization techniques for different AI models

    • Experience handling large data sets

    • Familiarity with BI tools (e.g. Tableau) and data frameworks (e.g. Hadoop)

    • Strong math skills (e.g. statistics, algebra)

    • Problem-solving aptitude

    • Excellent communication and presentation skills

    • 3 to 5 years of experience in the above is preferred
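
    To illustrate the Python/Keras requirements above, here is a minimal sketch of training and benchmarking a small classifier. The synthetic data, architecture, and hyperparameters are placeholders for illustration and do not represent the team's actual models or toolkit.

```python
# Minimal Keras sketch: train and evaluate a small classifier on synthetic data.
# Architecture, data, and hyperparameters are placeholders for illustration only.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20)).astype("float32")
y = (X[:, 0] + X[:, 1] > 0).astype("float32")   # toy binary target

model = keras.Sequential([
    keras.layers.Input(shape=(20,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Hold out 20% of the data for validation to benchmark the model.
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=2)
```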