OnlyDataJobs.com

Freeport-McMoRan
  • Phoenix, AZ

Provides management and leadership to the Big Data project teams. Directs initiatives in the Freeport-McMoRan Big Data program. Provides analytical direction, expertise and support for the Big Data program, including project leadership for initiatives, coordination with business subject matter experts and travel to mine sites. This is a global role that coordinates with site and corporate stakeholders to ensure global alignment on service and project delivery. The role also works with business operations management to ensure the program focuses on the areas most beneficial to the company.


  • Work closely with business, engineering and technology teams to develop solutions to data-intensive business problems
  • Supervise internal and external science teams
  • Perform quality control of deliverables
  • Prepare reports and presentations, and communicate with Executives
  • Provide thought leadership in algorithmic and process innovations, and creativity in solving unconventional problems
  • Use statistical and programming tools such as R and Python to analyze data and develop machine-learning models
  • Perform other duties as required


Minimum Qualifications


  • Bachelor's degree in an analytical field (statistics, mathematics, etc.) and eight (8) years of relevant work experience, OR
  • Master's degree in an analytical field (statistics, mathematics, etc.) and six (6) years of relevant work experience
  • Proven track record of collaborating with business partners to translate business problems and needs into data-based analytical solutions
  • Proficient in predictive modeling:
    • Linear and logistic regression
    • Tree-based techniques (CART, Random Forest, Gradient Boosting)
    • Time-series analysis
    • Anomaly detection
    • Survival analysis
  • Strong experience with SQL/Hive environments
  • Skilled with R and/or Python analysis environments
  • Experience with Big Data tools for machine learning (R, Hive, Python)
  • Good communication skills
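The modeling techniques listed above can be illustrated with a minimal scikit-learn sketch. The data set here is synthetic and purely hypothetical, standing in for the kind of operational data the role would actually use:

```python
# A minimal sketch of two of the listed techniques: logistic regression
# and a tree-based ensemble (gradient boosting), on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real operational/sensor data.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

scores = {}
for model in (LogisticRegression(max_iter=1000),
              GradientBoostingClassifier(random_state=0)):
    model.fit(X_tr, y_tr)                                    # fit on the training split
    scores[type(model).__name__] = model.score(X_te, y_te)   # held-out accuracy

print(scores)
```

The same pattern extends to the other listed techniques (e.g. time-series or survival models) by swapping in the appropriate estimator.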


Preferred Qualifications


  • Doctorate degree in an analytical field
  • Willing and able to travel 20-30% or more


Criteria/Conditions


  • Ability to understand and apply verbal and written work and safety-related instructions and procedures given in English
  • Ability to communicate in English with respect to job assignments, job procedures, and applicable safety standards
  • Must be able to work in a potentially stressful environment
  • Position is in a busy, non-smoking office located in downtown Phoenix, AZ
  • Location requires mobility in an office environment; each floor is accessible by elevator
  • Occasionally work will be performed in a mine, outdoor or manufacturing plant setting
  • Must be able to frequently sit, stand and walk
  • Must be able to frequently lift and carry up to ten (10) pounds
  • Personal protective equipment is required when performing work in a mine, outdoor, manufacturing or plant environment, including hard hat, hearing protection, safety glasses, safety footwear, and as needed, respirator, rubber steel-toe boots, protective clothing, gloves and any other protective equipment as required
  • Freeport-McMoRan promotes a drug/alcohol-free work environment through the use of mandatory pre-employment drug testing and on-going random drug testing as allowed by applicable State laws


Freeport-McMoRan has reviewed the jobs at its various office and operating sites and determined that many of these jobs require employees to perform essential job functions that pose a direct threat to the safety or health of the employees performing these tasks or others. Accordingly, the Company has designated the following positions as safety-sensitive:


  • Site-based positions, or positions which require unescorted access to site-based operational areas, which are held by employees who are required to receive MSHA, OSHA, DOT, HAZWOPER and/or Hazard Recognition Training; or
  • Positions which are held by employees who operate equipment, machinery or motor vehicles in furtherance of performing the essential functions of their job duties, including operating motor vehicles while on Company business or travel (for this purpose motor vehicles includes Company owned or leased motor vehicles and personal motor vehicles used by employees in furtherance of Company business or while on Company travel); or
  • Positions which Freeport-McMoRan has designated as safety sensitive positions in the applicable job or position description and which upon further review continue to be designated as safety-sensitive based on an individualized assessment of the actual duties performed by a specifically identified employee.


Equal Opportunity Employer/Protected Veteran/Disability


Requisition ID
1900606 

Freeport-McMoRan
  • Phoenix, AZ

Supports the activities for all Freeport-McMoRan Big Data programs. Provides analytical support and expertise for the Big Data program; this includes coordination with business subject matter experts and travel to mine sites. The role will provide analyses and statistical models as part of Big Data projects, and may be the project lead on analytics initiatives. The role will also provide visualizations and descriptive results of the analysis. This will be a global role that will coordinate with site and corporate stakeholders to ensure alignment on project delivery.


    • Work closely with business, engineering and technology teams to analyze data-intensive business problems
    • Research and develop appropriate statistical methodology to translate these business problems into analytics solutions
    • Perform quality control of deliverables
    • Develop visualizations of results and prepare deliverable reports and presentations, and communicate with business partners
    • Provide thought leadership in algorithmic and process innovations, and creativity in solving unconventional problems
    • Develop, implement and maintain analytical solutions in the Big Data environment
    • Work with onshore and offshore resources to implement and maintain analytical solutions
    • Perform variable selection and other standard modeling tasks
    • Produce model performance metrics
    • Use statistical and programming tools such as R and Python to analyze data and develop machine-learning models
    • Perform other duties as requested
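Two of the tasks above, variable selection and producing model performance metrics, can be sketched as follows (synthetic data; all names hypothetical):

```python
# Sketch: univariate variable selection followed by a standard
# performance metric (AUC) on a held-out split.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=20,
                           n_informative=5, random_state=1)

# Keep the 5 features most associated with the target (simple F-test filter).
selector = SelectKBest(f_classif, k=5).fit(X, y)
X_sel = selector.transform(X)

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, random_state=1)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])  # performance metric
print(f"AUC: {auc:.3f}")
```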


Minimum Qualifications


  • Bachelor's degree in an analytical field (statistics, mathematics, etc.) and five (5) years of relevant work experience, OR
  • Master's degree in an analytical field (statistics, mathematics, etc.) and three (3) years of relevant work experience
  • Proven track record of collaborating with business partners to translate operational problems and needs into data-based analytical solutions
  • Proficient in predictive modeling:
    • Linear and logistic regression
    • Tree-based techniques (CART, Random Forest, Gradient Boosting)
    • Time-series analysis
    • Anomaly detection
    • Survival analysis
  • Strong experience with SQL/Hive environments
  • Skilled with R and/or Python analysis environments
  • Experience with Big Data tools for machine learning (R, Hive, Python)
  • Good communication skills


Preferred Qualifications


  • Master's degree in an analytical field
  • Willing and able to travel 20-30% or more


Criteria/Conditions


  • Ability to understand and apply verbal and written work and safety-related instructions and procedures given in English
  • Ability to communicate in English with respect to job assignments, job procedures, and applicable safety standards
  • Must be able to work in a potentially stressful environment
  • Position is in a busy, non-smoking office located in Phoenix, AZ
  • Location requires mobility in an office environment; each floor is accessible by elevator and internal staircase
  • Occasionally work may be performed in a mine, outdoor or manufacturing plant setting
  • Must be able to frequently sit, stand and walk
  • Must be able to frequently lift and carry up to ten (10) pounds
  • Personal protective equipment is required when performing work in a mine, outdoor, manufacturing or plant environment, including hard hat, hearing protection, safety glasses, safety footwear, and as needed, respirator, rubber steel-toe boots, protective clothing, gloves and any other protective equipment as required
  • Freeport-McMoRan promotes a drug/alcohol-free work environment through the use of mandatory pre-employment drug testing and on-going random drug testing as per applicable state laws


Freeport-McMoRan has reviewed the jobs at its various office and operating sites and determined that many of these jobs require employees to perform essential job functions that pose a direct threat to the safety or health of the employees performing these tasks or others. Accordingly, the Company has designated the following positions as safety-sensitive:


  • Site-based positions, or positions which require unescorted access to site-based operational areas, which are held by employees who are required to receive MSHA, OSHA, DOT, HAZWOPER and/or Hazard Recognition Training; or
  • Positions which are held by employees who operate equipment, machinery or motor vehicles in furtherance of performing the essential functions of their job duties, including operating motor vehicles while on Company business or travel (for this purpose motor vehicles includes Company owned or leased motor vehicles and personal motor vehicles used by employees in furtherance of Company business or while on Company travel); or
  • Positions which Freeport-McMoRan has designated as safety sensitive positions in the applicable job or position description and which upon further review continue to be designated as safety-sensitive based on an individualized assessment of the actual duties performed by a specifically identified employee.


Equal Opportunity Employer/Protected Veteran/Disability


Requisition ID
1900604 

Huntech USA LLC
  • San Diego, CA

Great opportunity to work with the leader in the semiconductor industry, which unveiled the world's first 7-nanometer PC platform, created from the ground up for the next generation of personal computing by bringing new features with thin and light designs, allowing for new form factors in the always-on, always-connected category. It features the new octa-core CPU, the fastest CPU it has ever designed and built, with a larger cache than previous compute platforms, faster multi-tasking and increased productivity for users, disrupting the performance expectations of current thin, light and fanless PC designs. This platform is currently sampling to customers and is expected to begin shipping in commercial devices in Q3 of 2019.


Staff Data Analyst

You will study the performance of the Global Engineering Grid/design workflows across the engineering grid and provide insights and effective analytics in support of the Grid 2.0 program. You will conduct research, design statistical studies and analyze data in support of the Grid 2.0 program. This job will challenge you to dive deep into the engineering grid/design flow world and understand the unique challenges in operating an engineering grid at a scale unrivaled in the industry. You should have experience working in an EDA or manufacturing environment and be comfortable working in an environment where problems are not always well-defined.


Responsibilities:

  • Identify and pursue opportunities to improve the efficiency of global engineering grid and design workflows.
  • Develop systems to ingest, analyze, and take automated action across real-time feeds of high-volume data.
  • Research and implement new analytics approaches for effective deployment of machine learning/data modeling to solve business problems; identify patterns and trends from large, high-dimensional data sets; manipulate data into digestible and actionable reports.
  • Make business recommendations (e.g. cost-benefit, experiment analysis) with effective presentations of findings at multiple levels of stakeholders through visual displays of quantitative information.
  • Plan effectively to set priorities and manage projects; identify roadblocks and come up with technical options.


Leverage your 8+ years of experience articulating business questions and using mathematical techniques to arrive at an answer using available data. 3-4 years of advanced Tableau experience is a must. Experience translating analysis results into business recommendations. Experience with statistical software (e.g., R, Python, MATLAB, pandas, Scala) and database languages like SQL. Experience with data warehousing concepts (Hadoop, MapR) and visualization tools (e.g. QlikView, Tableau, Angular, ThoughtSpot). Strong business acumen, critical thinking ability, and attention to detail.


Background in data science, applied mathematics, or computational science and a history of solving difficult problems using a scientific approach, with an MS or BS degree in a quantitative discipline (e.g., Statistics, Applied Mathematics, Operations Research, Computer Science, Electrical Engineering) and an understanding of how to design scientific studies. You should be familiar with the state of the art in machine learning/data modeling/forecasting and optimization techniques in a big data environment.



Data Analytics Software Test Engineer

As a member of the Corporate Engineering Services Group software test team, you will be responsible for testing various cutting edge data analytics products and solutions. You will be working with a dynamic engineering team to develop test plans, execute test plans, automate test cases, and troubleshoot and resolve issues.


Leverage your 1+ years of experience in the following:

  • Testing and systems validation for commercial software systems.
  • Testing of systems deployed in AWS Cloud.
  • Knowledge of SQL and databases.
  • Developing and implementing software and systems test plans.
  • Test automation development using Python or Java.
  • Strong problem solving and troubleshooting skills.
  • Experience in testing web-based and Android applications.
  • Familiar with Qualcomm QXDM and APEX tools.
  • Knowledge of software development in Python.
  • Strong written and oral communication skills
  • Working knowledge of JIRA and GitHub is preferred.


Education:

  • Required: Bachelor's degree in Computer Engineering, Computer Networks & Systems, Computer Science, or Electrical Engineering
  • Preferred: Master's degree in Computer Engineering, Computer Networks & Systems, Computer Science, or Electrical Engineering, or equivalent experience


Interested? Please send a resume to our Founder & CEO, Raj Dadlani at raj@huntech.com and he will respond to interested candidates within 24 hours of resume receipt. We are dealing with a highly motivated hiring manager and shortlisting viable candidates by February 22, 2019.

MRE Consulting, Ltd.
  • Houston, TX

Candidates for this U.S. position must be a U.S. citizen or national, or an alien admitted as permanent resident, refugee, asylee or temporary resident under 8 U.S.C. 1160(a) or 1255(a) (1). Individuals with temporary visas such as A, B, C, D, E, F, G, H, I, J, L, M, NATO, O, P, Q, R or TN or who need sponsorship for work authorization in the United States now or in the future, are not eligible for hire.


Our client is seeking to hire an Enterprise Data Architect. The position reports to the VP IT. The Data Architect is responsible for providing a standard common business vocabulary across all applications and data elements, expressing and defining strategic data requirements, outlining high level integrated designs to meet the various business unit requirements, and aligning with the overall enterprise strategy and related business architecture.


Essential Duties & Responsibilities:
  • Provide insight and strategies for changing database storage and utilization requirements for the company and provide direction on potential solutions
  • Assist in the definition and implementation of a federated data model consisting of a mixture of multi-cloud and on-premises environments to support operations and business strategies
  • Assist in managing vendor cloud environments and multi-cloud database connectivity
  • Analyze structural data requirements for new/existing applications and platforms
  • Submit reports to management that outline the changing data needs of the company and develop related solutions
  • Align database implementation methods to make sure they support company policies and any external regulations
  • Interpret data, analyze results and provide ongoing reporting and support
  • Implement data collection systems and other strategies that optimize efficiency and data quality
  • Acquire available data sources and maintain data systems
  • Identify, analyze, and interpret trends or patterns in data sets
  • Scrub data as needed; review reports, printouts, and performance indicators to identify inconsistencies
  • Develop database design and architecture documentation for the management and executive teams
  • Monitor various database systems to confirm optimal performance standards are met
  • Contribute to content updates within resource portals and other operational needs
  • Assist in presentations and interpretations of analytical findings and actively participate in discussions of results, internally and externally
  • Help maintain the integrity and security of the company database
  • Ensure transactional activities are processed in accordance with standard operating procedures

The employee will be on call 24 hours a day, 7 days per week.


Qualifications
  • Minimum of 10+ years of experience
  • Proven work experience as a Data Architect, Data Scientist, or similar role
  • In-depth understanding of database structure principles
  • Strong knowledge of data mining and segmentation techniques
  • Expertise in MS SQL and other database platforms
  • Familiarity with data visualization tools
  • Experience with formal Enterprise Architecture tools (like BiZZdesign)
  • Experience in managing cloud-based environments
  • Aptitude regarding data models, data mining, and cloud-based applications
  • Advanced analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy
  • Adept at report writing and presenting findings
  • Proficiency in systems support and monitoring
  • Experience with complex data structures in the Oil and Gas industry a plus


Education 
A bachelor's degree in Computer Science, Math, Statistics, or a related quantitative field is required.


Travel Requirements
The percentage of travel anticipated for this position is 10-20%, including overnight extended stays.


All qualified candidates should apply by providing a current Word resume and denoting skill set experience as it relates to this requirement.

Webtrekk GmbH
  • Berlin, Germany
Your responsibilities:

In this role, you will set up your full-fledged research and development team of developers and data science engineers. You will evaluate and choose appropriate technologies and develop products that are powered by Artificial Intelligence and Machine Learning



  • Fast-paced development of experimental prototypes, POCs and products for our >400 customers

  • Manage fast feedback cycles, incorporate learnings and feedback, and ultimately deliver AI-powered products

  • You will develop new and optimise existing components, always with an eye on scalability, performance and maintenance

  • Organize and lead team planning meetings and provide advice, clarification and guidance during the execution of sprints

  • Lead your team's technical vision and drive the design and development of new innovative products and services from the technical side

  • Lead discussions with the team and management to define best practices and approaches

  • Set goals, objectives and priorities. Mentor team members and provide guidance by regular performance reviews.




The assets you bring to the team:


  • Hands on experience in agile software development on all levels based on profound technical understanding

  • Relevant experience in managing a team of software developers in an agile environment

  • At least 3 years of hands-on experience with developing in Frontend Technologies like Angular or React

  • Knowledge of backend technologies such as Java, Python or Scala is a big plus

  • Experience with distributed systems based on RESTful services

  • DevOps mentality and practical experience with tools for build and deployment automation (like Maven, Jenkins, Ansible, Docker)

  • Team and project-oriented leader with excellent problem solving and interpersonal skills

  • Excellent communication, coaching and conflict management skills as well as a strong assertiveness

  • Strong analytical capability, discipline, commitment and enthusiasm

  • Fluent in English, German language skills are a big plus




What we offer:


  • Prospect: We are a continuously growing team with experts in the most future-oriented fields of customer intelligence. We are dealing with real big data scenarios and data from various business models and industries. Apart from interesting tasks we offer you considerable freedom for your ideas and perspectives for the development of your professional and management skills.

  • Team oriented atmosphere: Our culture embraces integrity, team work and innovation. Our employees value the friendly atmosphere that is the most powerful driver within our company.

  • Goodies: Individual trainings, company tickets, team events, table soccer, fresh fruits and a sunny roof terrace.

  • TechCulture: Work with experienced developers who share the ambition for well-written and clean code. Choose your hardware, OS and IDE. Bring in your own ideas, work with open source and have fun at product demos, hackathons and meetups.

AIRBUS
  • Blagnac, France

Description of the job



Vacancies for 3 Data Scientists (m/f) have arisen within Airbus Commercial Aircraft in Toulouse. You will join the PLM Systems & Integration Tests team within IM Develop department.  



The IM Develop organization is established to ensure Product Life Cycle Management (PLM) support and services as requested by Programmes, CoE and CoC. The department is the home within Airbus for leading the development, implementation, maintenance and support of PLM for all Airbus programs, in line with the corporate strategy.



Within the frame of its Digital Design, Manufacturing & Services (DDMS) project, Airbus is undergoing a significant digital transformation to benefit from the latest advances in new technologies and targets a major efficiency breakthrough across the program and product lifecycle. It will be enabled by a set of innovative concepts such as model based system engineering, modular product lines, digital continuity and concurrent co-design of the product, its industrial setup and operability features.



As a Data Scientist (m/f), you will be integrated into a team of the IM Develop department and appointed to dedicated missions. You will work in an international environment where you will be able to develop in-depth knowledge of local specificities: engineering, manufacturing, costing, etc.



Tasks & accountabilities



Your main tasks and responsibilities will be to:




  • Analyze large amounts of information to discover trends and patterns, build predictive models, implement cost models and machine-learning algorithms based on technical data and DMU models.

  • Combine models through ensemble modelling

  • Present information using data visualization techniques

  • Propose solutions and strategies to business challenges

  • Implement features extraction by analyzing CAD models and engineering Bill of Material

  • Collaborate with engineering and costing (FCC) to implement new costing models in Python

  • Design and propose new short/medium- and long-term forecasting methods

  • Consolidate, compare and enlarge the data required for the various types of modelling

  • Attend technical events/conferences and reinforce Data Science skills within Airbus
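The ensemble modelling task listed above can be sketched with a minimal scikit-learn example, combining two regressors on synthetic data (purely illustrative; the real inputs would be technical and DMU model data):

```python
# Sketch of ensemble modelling: averaging a linear model and a
# random forest via a voting ensemble, on synthetic data.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, VotingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=300, n_features=8, noise=10.0, random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=2)

# VotingRegressor averages the predictions of its base estimators.
ensemble = VotingRegressor([
    ("lin", LinearRegression()),
    ("rf", RandomForestRegressor(n_estimators=50, random_state=2)),
]).fit(X_tr, y_tr)

print(round(ensemble.score(X_te, y_te), 3))  # R^2 on held-out data
```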




Required skills



We are looking for candidates with the following skills and experience:




  • Strong knowledge of python development in the frame of industrial projects

  • Experience in data mining & machine-learning

  • Knowledge of Scala, Java or C++; familiarity with R and SQL is an asset

  • Experience using business intelligence tools

  • Analytical mindset

  • Strong math skills (e.g. statistics, algebra)

  • Problem-solving aptitude

  • Excellent communication and presentation skills

  • PLM knowledge and 3D CAD programming would be a plus

  • French & English: advanced level

Intercontinental Exchange
  • Atlanta, GA
Job Purpose
The Data Analytics team is seeking a dynamic, self-motivated Data Scientist who is able to work independently on data analysis, data mining, report development and customer requirement gathering.
Responsibilities
  • Applies data analysis and data modeling techniques, based upon a detailed understanding of the corporate information requirements, to establish, modify, or maintain data structures and their associated components
  • Participates in the development and maintenance of corporate data standards
  • Supports stakeholders and business users to define data and analytic requirements
  • Works with the business to identify additional internal and external data sources to bring into the data environment and mesh with existing data
  • Storyboard, create, and publish standard reports, data visualizations, analyses and presentations
  • Develop and implement workflows using Alteryx and/or R
  • Develop and implement various operational and sales Tableau dashboards
Knowledge And Experience
  • Bachelor's degree in statistics/engineering/math/quantitative analytics/economics/finance or a related quantitative discipline required
  • Master's in engineering/physics/statistics/economics/math/science preferred
  • 1+ years of experience with data science techniques and real-world application experience
  • 2+ years of experience supporting the development of analytics solutions leveraging tools like Tableau Desktop and Tableau Online
  • 1+ years of experience working with SQL, developing complex SQL queries, and leveraging SQL in Tableau
  • 1+ years of experience in Alteryx, and R coding
  • Deep understanding of Data Governance and Data Modeling
  • Ability to actualize requirements
  • Advanced written and oral communication skills with the ability to summarize findings and present in a clear, concise manner to peers, management, and others
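The SQL experience described above can be illustrated with a self-contained example using Python's built-in sqlite3 module; the table and data are hypothetical stand-ins for the kind of queries the role involves:

```python
# Illustrative only: an aggregate SQL query (GROUP BY + SUM/AVG)
# against an in-memory SQLite database with made-up trade data.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE trades (symbol TEXT, qty INTEGER, price REAL)")
con.executemany("INSERT INTO trades VALUES (?, ?, ?)",
                [("AAA", 100, 75.0), ("AAA", 50, 76.0), ("BBB", 10, 90.0)])

rows = con.execute("""
    SELECT symbol, SUM(qty) AS total_qty, AVG(price) AS avg_price
    FROM trades GROUP BY symbol ORDER BY symbol
""").fetchall()
print(rows)  # [('AAA', 150, 75.5), ('BBB', 10, 90.0)]
```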
Additional Information
    • Job Type: Standard
    • Schedule: Full-time
Man AHL
  • London, UK

The Role


As a Quant Platform Developer at AHL you will be building the tools, frameworks, libraries and applications which power our Quantitative Research and Systematic Trading. This includes responsibility for the continued success of “Raptor”, our in-house Quant Platform, next generation Data Engineering, and evolution of our production Trading System as we continually expand the markets and types of assets we trade, and the styles in which we trade them. Your challenges will be varied and might involve building new high performance data acquisition and processing pipelines, cluster-computing solutions, numerical algorithms, position management systems, visualisation and reporting tools, operational user interfaces, continuous build systems and other developer productivity tools.


The Team


Quant Platform Developers at AHL are all part of our broader technology team, members of a group of over sixty individuals representing eighteen nationalities. We have varied backgrounds including Computer Science, Mathematics, Physics, Engineering and even Classics, but what unifies us is a passion for technology and writing high-quality code.



Our developers are organised into small cross-functional teams, with our engineering roles broadly of two kinds: “Quant Platform Developers” otherwise known as our “Core Techs”, and “Quant Developers” which we often refer to as “Sector Techs”. We use the term “Sector Tech” because some of our teams are aligned with a particular asset class or market sector. People often rotate teams in order to learn more about our system, as well as find the position that best matches their interests.


Our Technology


Our systems are almost all running on Linux and most of our code is in Python, with the full scientific stack: numpy, scipy, pandas, scikit-learn to name a few of the libraries we use extensively. We implement the systems that require the highest data throughput in Java. For storage, we rely heavily on MongoDB and Oracle.



We use Airflow for workflow management, Kafka for data pipelines, Bitbucket for source control, Jenkins for continuous integration, Grafana + Prometheus for metrics collection, ELK for log shipping and monitoring, Docker for containerisation, OpenStack for our private cloud, Ansible for architecture automation, and HipChat for internal communication. But our technology list is never static: we constantly evaluate new tools and libraries.
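As a flavour of the scientific-Python stack described above, here is a minimal numpy/pandas sketch; the price series is synthetic and purely illustrative, not real market data:

```python
# Tiny numpy/pandas example: a synthetic daily price series,
# daily returns, and a rolling annualised volatility estimate.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
prices = pd.Series(100 + rng.normal(0, 1, 250).cumsum(),
                   index=pd.bdate_range("2019-01-01", periods=250))

returns = prices.pct_change().dropna()          # simple daily returns
vol = returns.rolling(20).std() * np.sqrt(252)  # 20-day annualised volatility

print(vol.dropna().tail())
```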


Working Here


AHL has a small company, no-attitude feel. It is flat structured, open, transparent and collaborative, and you will have plenty of opportunity to grow and have enormous impact on what we do.  We are actively engaged with the broader technology community.



  • We host and sponsor London’s PyData and Machine Learning Meetups

  • We open-source some of our technology. See https://github.com/manahl

  • We regularly talk at leading industry conferences, and tweet about relevant technology and how we’re using it. See @manahltech



We’re fortunate enough to have a fantastic open-plan office overlooking the River Thames, and continually strive to make our environment a great place in which to work.



  • We organise regular social events, everything from photography through climbing, karting, wine tasting and monthly team lunches

  • We have annual away days and off-sites for the whole team

  • We have a canteen with a daily allowance for breakfast and lunch, and an on-site bar for in the evening

  • As well as PCs and Macs, in our office you’ll also find numerous pieces of cool tech such as light cubes and 3D printers, guitars, ping-pong and table-football, and a piano.



We offer competitive compensation, a generous holiday allowance, various health and other flexible benefits. We are also committed to continuous learning and development via coaching, mentoring, regular conference attendance and sponsoring academic and professional qualifications.


Technology and Business Skills


At AHL we strive to hire only the brightest, best, most highly skilled and passionate technologists.



Essential



  • Exceptional technology skills; recognised by your peers as an expert in your domain

  • A proponent of strong collaborative software engineering techniques and methods: agile development, continuous integration, code review, unit testing, refactoring and related approaches

  • Expert knowledge in one or more programming languages, preferably Python, Java and/or C/C++

  • Proficient on Linux platforms with knowledge of various scripting languages

  • Strong knowledge of one or more relevant database technologies e.g. Oracle, MongoDB

  • Proficient with a range of open source frameworks and development tools e.g. NumPy/SciPy/Pandas, Pyramid, AngularJS, React

  • Familiarity with a variety of programming styles (e.g. OO, functional) and in-depth knowledge of design patterns.



Advantageous



  • An excellent understanding of financial markets and instruments

  • Experience of front office software and/or trading systems development e.g. in a hedge fund or investment bank

  • Expertise in building distributed systems with service-based or event-driven architectures, and concurrent processing

  • A knowledge of modern practices for data engineering and stream processing

  • An understanding of financial market data collection and processing

  • Experience of web based development and visualisation technology for portraying large and complex data sets and relationships

  • Relevant mathematical knowledge e.g. statistics, asset pricing theory, optimisation algorithms.


Personal Attributes



  • Strong academic record and a degree with high mathematical and computing content e.g. Computer Science, Mathematics, Engineering or Physics from a leading university

  • Craftsman-like approach to building software; takes pride in engineering excellence and instils these values in others

  • Demonstrable passion for technology e.g. personal projects, open-source involvement

  • Intellectually robust with a keenly analytic approach to problem solving

  • Self-organised with the ability to effectively manage time across multiple projects and with competing business demands and priorities

  • Focused on delivering value to the business with relentless efforts to improve process

  • Strong interpersonal skills; able to establish and maintain a close working relationship with quantitative researchers, traders and senior business people alike

  • Confident communicator; able to argue a point concisely and deal positively with conflicting views.

Comcast
  • Philadelphia, PA

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.

As a Data Science Engineer in Comcastdx, you will research, model, develop, and support data pipelines and deliver insights for key strategic initiatives. You will develop or utilize complex programmatic and quantitative methods to find patterns and relationships in data sets; lead statistical modeling or other data-driven problem-solving analyses to address novel or abstract business operation questions; and incorporate insights and findings into a range of products.

Assist in design and development of collection and enrichment components focused on quality, timeliness, scale and reliability. Work on real-time data stores and a massive historical data store using best-of-breed and industry leading technology.

Responsibilities:

-Develop and support data pipelines

-Analyze massive amounts of data, in both real-time and batch processing, utilizing Spark, Kafka, and AWS technologies such as Kinesis, S3, Elasticsearch, and Lambda

-Create detailed write-ups of the processes, logic, and methodologies used for creation, validation, analysis, and visualizations. Write-ups are due within a week of a process being created and must be updated when changes occur.

-Prototype ideas for new ML/AI tools, products and services

-Centralize data collection and synthesis, including survey data, enabling strategic and predictive analytics to guide business decisions

-Provide expert and professional data analysis to implement effective and innovative solutions meshing disparate data types to discover insights and trends.

-Employ rigorous continuous delivery practices managed under an agile software development approach

-Support DevOps practices to deploy and operate our systems

-Automate and streamline our operations and processes

-Troubleshoot and resolve issues in our development, test and production environments

Here are some of the specific technologies and concepts we use:

-Spark Core and Spark Streaming

-Machine learning techniques and algorithms

-Java, Scala, Python, R

-Artificial Intelligence

-AWS services including EMR, S3, Lambda, Elasticsearch

-Predictive Analytics

-Tableau, Kibana

-Git, Maven, Jenkins

-Linux

-Kafka

-Hadoop (HDFS, YARN)
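
The list above mixes batch and streaming tooling; the core idea behind a Spark Streaming-style job is folding successive micro-batches of events into a running aggregate. A minimal, framework-free sketch of that pattern (the event names are invented for illustration, not Comcast's actual pipeline):

```python
from collections import Counter

def micro_batch_counts(batches):
    """Fold each incoming micro-batch of events into a running state,
    mirroring the stateful micro-batch model used by Spark Streaming."""
    state = Counter()
    for batch in batches:          # each batch: a list of event strings
        state.update(batch)        # incremental state update per batch
        yield dict(state)          # emit the updated aggregate downstream

# Simulated stream: three micro-batches of clickstream event types
stream = [["play", "pause"], ["play"], ["stop", "play"]]
snapshots = list(micro_batch_counts(stream))
print(snapshots[-1])
```

In a real deployment the batches would arrive from Kafka or Kinesis and the state would live in a fault-tolerant store; the incremental-update shape stays the same.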

Skills & Requirements:

-5-8 years of Java experience; Scala and Python experience a plus

-3+ years of experience as an analyst, data scientist, or related quantitative role.

-3+ years of relevant quantitative and qualitative research and analytics experience. Solid knowledge of statistical techniques.

-Bachelor's in Statistics, Math, Engineering, Computer Science, or a related discipline. Master's degree preferred.

-Experience in software development of large-scale distributed systems including proven track record of delivering backend systems that participate in a complex ecosystem

-Experience with more advanced modeling techniques (e.g., ML).

-Distinctive problem-solving and analysis skills and impeccable business judgment.

-Experience working with imperfect data sets that, at times, will require improvements to process, definition and collection

-Experience with real-time data pipelines and components including Kafka, Spark Streaming

-Proficient in Unix/Linux environments

-Test-driven development/test automation, continuous integration, and deployment automation

-Excellent communicator, able to analyze complex issues and technologies and articulate them clearly and engagingly

-Team player is a must

-Great design and problem-solving skills

-Adaptable, proactive and willing to take ownership

-Attention to detail and high level of commitment

-Thrives in a fast-paced agile environment

About Comcastdx:

Comcastdx is a results-driven big data engineering team responsible for delivering the multi-tenant data infrastructure and platforms necessary to support our data-driven culture and organization. dx has an overarching objective to gather, organize, and make sense of Comcast data with the intention to reveal business and operational insight, discover actionable intelligence, enable experimentation, empower users, and delight our stakeholders. Members of the dx team define and leverage industry best practices, work on large-scale data problems, design and develop resilient and highly robust distributed data organizing and processing systems and pipelines, and research, engineer, and apply data science and machine intelligence disciplines.

Comcast is an EOE/Veterans/Disabled/LGBT employer

Comcast
  • Philadelphia, PA

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.

As a Data Science Engineer in Comcastdx, you will research, model, develop, and support data pipelines and deliver insights for key strategic initiatives. You will develop or utilize complex programmatic and quantitative methods to find patterns and relationships in data sets; lead statistical modeling or other data-driven problem-solving analyses to address novel or abstract business operation questions; and incorporate insights and findings into a range of products.

Assist in design and development of collection and enrichment components focused on quality, timeliness, scale and reliability. Work on real-time data stores and a massive historical data store using best-of-breed and industry leading technology.

Responsibilities:

-Develop and support data pipelines

-Analyze massive amounts of data, in both real-time and batch processing, utilizing Spark, Kafka, and AWS technologies such as Kinesis, S3, Elasticsearch, and Lambda

-Create detailed write-ups of the processes, logic, and methodologies used for creation, validation, analysis, and visualizations. Write-ups are due within a week of a process being created and must be updated when changes occur.

-Prototype ideas for new ML/AI tools, products and services

-Centralize data collection and synthesis, including survey data, enabling strategic and predictive analytics to guide business decisions

-Provide expert and professional data analysis to implement effective and innovative solutions meshing disparate data types to discover insights and trends.

-Employ rigorous continuous delivery practices managed under an agile software development approach

-Support DevOps practices to deploy and operate our systems

-Automate and streamline our operations and processes

-Troubleshoot and resolve issues in our development, test and production environments

Here are some of the specific technologies and concepts we use:

-Spark Core and Spark Streaming

-Machine learning techniques and algorithms

-Java, Scala, Python, R

-Artificial Intelligence

-AWS services including EMR, S3, Lambda, Elasticsearch

-Predictive Analytics

-Tableau, Kibana

-Git, Maven, Jenkins

-Linux

-Kafka

-Hadoop (HDFS, YARN)

Skills & Requirements:

-3-5 years of Java experience; Scala and Python experience a plus

-2+ years of experience as an analyst, data scientist, or related quantitative role.

-2+ years of relevant quantitative and qualitative research and analytics experience. Solid knowledge of statistical techniques.

-Bachelor's in Statistics, Math, Engineering, Computer Science, or a related discipline. Master's degree preferred.

-Experience in software development of large-scale distributed systems including proven track record of delivering backend systems that participate in a complex ecosystem

-Experience with more advanced modeling techniques (e.g., ML).

-Distinctive problem-solving and analysis skills and impeccable business judgment.

-Experience working with imperfect data sets that, at times, will require improvements to process, definition and collection

-Experience with real-time data pipelines and components including Kafka, Spark Streaming

-Proficient in Unix/Linux environments

-Test-driven development/test automation, continuous integration, and deployment automation

-Excellent communicator, able to analyze complex issues and technologies and articulate them clearly and engagingly

-Team player is a must

-Great design and problem-solving skills

-Adaptable, proactive and willing to take ownership

-Attention to detail and high level of commitment

-Thrives in a fast-paced agile environment

About Comcastdx:

Comcastdx is a results-driven big data engineering team responsible for delivering the multi-tenant data infrastructure and platforms necessary to support our data-driven culture and organization. dx has an overarching objective to gather, organize, and make sense of Comcast data with the intention to reveal business and operational insight, discover actionable intelligence, enable experimentation, empower users, and delight our stakeholders. Members of the dx team define and leverage industry best practices, work on large-scale data problems, design and develop resilient and highly robust distributed data organizing and processing systems and pipelines, and research, engineer, and apply data science and machine intelligence disciplines.

Comcast is an EOE/Veterans/Disabled/LGBT employer

Hydrogen Group
  • Austin, TX

Data Scientist
x2 Roles
Permanent
Austin, TX
Remote + Flex Hours


Our client is a very well-funded venture capital start-up run by an experienced team of technical entrepreneurs based in Austin, TX. Having successfully raised over $10 million in Series A funding, they are looking to expand their existing Data Science and Analytics team by adding 2 new members, with plans to grow total headcount to 10 by the summer.

The successful candidate will be focused on working with the client company's data across multiple sources and developing algorithms and models which will be used to improve performance.


Requirements:

  • The ability to solve problems that no one else has solved before in a start-up environment - in return you will be given flexible working hours and the ability to work remotely as you see fit.
  • Background in EITHER (or Both) Machine learning or statistics - PhD + advanced academic qualifications welcome but not essential.
  • Ability to work in R, Python, TensorFlow, and RStudio preferred but not essential


Interviews taking place from the 21st onwards, with offers made ASAP


Elev8 Hire Solutions
  • Atlanta, GA

Jr. Data Scientist


Our client in the Midtown area is looking for a Jr. Data Scientist with a passion for Machine Learning, who knows the hows and whys of algorithms, and who is excited about the fraud industry. You'll be a pivotal piece of the Atlanta/US team in the development and application of adaptive real-time analytical modeling algorithms. So if that gets you excited, apply!


Role Expectations:


  • End-to-end processing and modeling of large customer data sets.
  • Working with customers to understand the opportunities and constraints of their existing data in the context of machine learning and predictive modeling.
  • Develop statistical models and algorithms for integration with the company's product.
  • Apply analytical theory to real-world problems on large and dynamic datasets.
  • Produce materials to feed back analytic results to customers (reports, presentations, visualizations).
  • Providing input into future data science strategy and product development.
  • Working with development teams to support and enhance the analytical infrastructure.
  • Work with the QA team to advise on effective analytical testing.
  • Evaluate and improve the analytical results on live systems.
  • Develop an understanding of the industry data structures and processes.


Team working with:


  • Currently six other Data Scientists local to Atlanta; the rest of the team (10+) is in Cambridge
  • 130 people in the entire company


Top skills required:


  • Degree-level qualification with good mathematical background and knowledge of statistics.
  • Professional experience using Random Forests and other machine learning algorithms; development skills with C or Python
  • First-hand experience putting data storage into production
  • Experience in implementing statistical models and analytical algorithms in software.
  • Practical experience of the handling and mining of large, diverse, data sets.
  • Must have a U.S. work visa or passport.


Nice to have:


  • A Ph.D. or other postgraduate qualification would be a distinct advantage
  • An indication of how relevant technologies have been used (not just a list).
  • Attention to grammatical detail, layout and presentation.


Benefits:


  • Regular bonus scheme
  • 20 days annual leave
  • Healthcare package
  • Free Friday lunches
  • Regular social outings
  • Fridge and cupboards packed full of edible treats
  • Annual summer social and Christmas dinner
118118Money
  • Austin, TX

Seeking an individual with a keen eye for good design combined with the ability to communicate those designs through informative design artifacts. Candidates should be familiar with an Agile development process (and understand its limitations), able to mediate between product / business needs and developer architectural needs. They should be ready to get their hands dirty coding complex pieces of the overall architecture.

We are .NET Core on the backend, Angular 2 on a mobile web front-end, and native on Android and iOS. We host our code across AWS and on-premises VMs, and use various data backends (SQL Server, Oracle, Mongo).

Very important is interest in (and hopefully, experience with) modern big data pipelines and machine learning. Experience with streaming platforms feeding Apache Spark jobs that train machine learning models would be music to our ears. Financial platforms generate massive amounts of data, and re-architecting aspects of our microservices to support that will be a key responsibility.

118118 Money is a private financial services company with R&D headquartered in Austin along highway 360, in front of the Bull Creek Nature preserve. We have offices around the world, so the candidate should be open to occasional travel abroad. The atmosphere is casual, and has a startup feel. You will see your software creations deployed quickly.

Responsibilities

    • Help us to build a big data pipeline and add machine learning capability to more areas of our platform.
    • Manage code from development through deployment, including support and maintenance.
    • Perform code reviews, assist and coach more junior developers to adhere to proper design patterns.
    • Build fault-tolerant distributed systems.

Requirements

    • Expertise in .NET, C#, HTML5, CSS3, Javascript
    • Experience with some flavor of ASP.NET MVC
    • Experience with SQL Server
    • Expertise in the design of elegant and intuitive REST APIs.
    • Cloud development experience (Amazon, Azure, etc)
    • Keen understanding of security principles as they pertain to service design.
    • Expertise in object-oriented design principles.

Desired

    • Machine Learning experience
    • Mobile development experience
    • Kafka / message streaming experience
    • Apache Spark experience
    • Knowledge of the ins and outs of Docker containers
    • Experience with MongoDB
FCA Fiat Chrysler Automobiles
  • Detroit, MI

Fiat Chrysler Automobiles is looking to fill the full-time position of a Data Scientist. This position is responsible for delivering insights to the commercial functions in which FCA operates.


The Data Scientist is a role in the Business Analytics & Data Services (BA) department and reports through the CIO. They will play a pivotal role in the planning, execution and delivery of data science and machine learning-based projects. The bulk of the work will be in the areas of data exploration and preparation, data collection and integration, machine learning (ML) and statistical modelling, and data pipelining and deployment.

The newly hired data scientist will be a key interface between the ICT Sales & Marketing team, the Business and the BA team. Candidates need to be very much self-driven, curious and creative.

Primary Responsibilities:

    • Problem Analysis and Project Management:
      • Guide and inspire the organization about the business potential and strategy of artificial intelligence (AI)/data science
      • Identify data-driven/ML business opportunities
      • Collaborate across the business to understand IT and business constraints
      • Prioritize, scope and manage data science projects and the corresponding key performance indicators (KPIs) for success
    • Data Exploration and Preparation:
      • Apply statistical analysis and visualization techniques to various data, such as hierarchical clustering, T-distributed Stochastic Neighbor Embedding (t-SNE), principal components analysis (PCA)
      • Generate and test hypotheses about the underlying mechanics of the business process.
      • Network with domain experts to better understand the business mechanics that generated the data.
    • Data Collection and Integration:
      • Understand new data sources and process pipelines. Catalog and document their use in solving business problems.
      • Create data pipelines and assets that enable more efficiency and repeatability of data science activities.
    • Machine Learning and Statistical Modelling:
      • Apply various ML and advanced analytics techniques to perform classification or prediction tasks
      • Integrate domain knowledge into the ML solution; for example, from an understanding of financial risk, customer journey, quality prediction, sales, marketing
      • Testing of ML models, such as cross-validation, A/B testing, bias and fairness
    • Operationalization:
      • Collaborate with ML operations (MLOps), data engineers, and IT to evaluate and implement ML deployment options
      • (Help to) integrate model performance management tools into the current business infrastructure
      • (Help to) implement champion/challenger test (A/B tests) on production systems
      • Continuously monitor execution and health of production ML models
      • Establish best practices around ML production infrastructure
    • Other Responsibilities:
      • Train other business and IT staff on basic data science principles and techniques
      • Train peers on specialist data science topics
      • Promote collaboration with the data science COE within the organization.
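
The testing bullet above mentions cross-validation; the fold-splitting step it relies on can be sketched in a few lines (a toy illustration, not FCA's tooling):

```python
def kfold_indices(n, k):
    """Yield (train, validation) index lists for k-fold cross-validation.
    Each of the n sample indices lands in exactly one validation fold."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for i, val in enumerate(folds):
        # Train on every fold except the held-out one
        train = sorted(j for f in folds[:i] + folds[i + 1:] for j in f)
        yield train, val

# Toy run: 6 samples split into 3 folds
for train, val in kfold_indices(6, 3):
    print(train, val)
```

In practice a library routine (e.g. scikit-learn's `KFold`) would be used, typically with shuffling; the point is only that every sample is validated exactly once across the k model fits.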

Basic Qualifications:

    • A bachelor's degree in computer science, data science, operations research, statistics, applied mathematics, or a related quantitative field is required; alternate experience and education in equivalent areas such as economics, engineering or physics is acceptable. Experience in more than one area is strongly preferred.
    • Candidates should have three to six years of relevant experience successfully launching, planning and executing data science projects, preferably in the domains of automotive or customer behavior prediction.
    • Coding knowledge and experience in several languages: for example, R, Python, SQL, Java, C++, etc.
    • Experience of working across multiple deployment environments including cloud, on-premises and hybrid, multiple operating systems and through containerization techniques such as Docker, Kubernetes, AWS Elastic Container Service, and others.
    • Experience with distributed data/computing and database tools: MapReduce, Hadoop, Hive, Kafka, MySQL, Postgres, DB2 or Greenplum, etc.
    • All candidates must be self-driven, curious and creative.
    • They must demonstrate the ability to work in diverse, cross-functional teams.
    • Should be confident, energetic self-starters, with strong moderation and communication skills.

Preferred Qualifications:

    • A master's degree or PhD in statistics, ML, computer science or the natural sciences, especially physics or any engineering disciplines or equivalent.
    • Experience in one or more of the following commercial/open-source data discovery/analysis platforms: RStudio, Spark, KNIME, RapidMiner, Alteryx, Dataiku, H2O, SAS Enterprise Miner (SAS EM) and/or SAS Visual Data Mining and Machine Learning, Microsoft AzureML, IBM Watson Studio or SPSS Modeler, Amazon SageMaker, Google Cloud ML, SAP Predictive Analytics.
    • Knowledge and experience in statistical and data mining techniques: generalized linear model (GLM)/regression, random forest, boosting, trees, text mining, hierarchical clustering, deep learning, convolutional neural network (CNN), recurrent neural network (RNN), T-distributed Stochastic Neighbor Embedding (t-SNE), graph analysis, etc.
    • A specialization in text analytics, image recognition, graph analysis or other specialized ML techniques such as deep learning, etc., is preferred.
    • Ideally, the candidates are adept in agile methodologies and well-versed in applying DevOps/MLOps methods to the construction of ML and data science pipelines.
    • Knowledge of industry standard BA tools, including Cognos, QlikView, Business Objects, and other tools that could be used for enterprise solutions
    • Should exhibit superior presentation skills, including storytelling and other techniques to guide and inspire and explain analytics capabilities and techniques to the organization.
Pyramid Consulting, Inc
  • Atlanta, GA

Job Title: Tableau Engineer

Duration: 6-12 Months+ (potential to go perm)

Location: Atlanta, GA (30328) - Onsite

Notes from Manager:

We need a data analyst who knows Tableau, scripting (JSON, Python), the Alteryx API, AWS, and analytics.

Description

The Tableau Software Engineer will be a key resource working across our Software Engineering BI/Analytics stack to ensure stability, scalability, and the delivery of valuable BI & Analytics solutions for our leadership teams and business partners. Keys to this position are the ability to excel at identifying problems or analytic gaps and at mapping and implementing pragmatic solutions. An excellent blend of analytical, technical and communication skills in a team-based environment is essential for this role.

Tools we use: Tableau, Business Objects, AngularJS, OBIEE, Cognos, AWS, Opinion Lab, JavaScript, Python, Jaspersoft, Alteryx and R packages, Spark, Kafka, Scala, Oracle

Your Role:

·         Able to design, build, maintain & deploy complex reports in Tableau

·         Experience integrating Tableau into another application or native platforms is a plus

·         Expertise in Data Visualization including effective communication, appropriate chart types, and best practices.

·         Knowledge of best practices and experience optimizing Tableau for performance.

·         Experience reverse engineering and revising Tableau Workbooks created by other developers.

·         Understand basic statistical routines (mean, percentiles, significance, correlations) with ability to apply in data analysis

·         Able to turn ideas into creative & statistically sound decision support solutions

Education and Experience:

·         Bachelor's degree in Computer Science or equivalent work experience

·         3-5 years of hands-on experience in data warehousing & BI technologies (Tableau/OBIEE/Business Objects/Cognos)

·         Three or more years of experience in developing reports in Tableau

·         A good understanding of Tableau architecture, design, development and the end-user experience.

What We Look For:

·         Very proficient in working with large databases in Oracle; experience with Big Data technologies is a plus.

·         Deep understanding & working experience of data warehouse and data mart concepts.

·         Understanding of Alteryx and R packages is a plus

·         Experience designing and implementing high volume data processing pipelines, using tools such as Spark and Kafka.

·         Experience with Scala, Java or Python and a working knowledge of AWS technologies such as GLUE, EMR, Kinesis and Redshift preferred.

·         Excellent knowledge with Amazon AWS technologies, with a focus on highly scalable cloud-native architectural patterns, especially EMR, Kinesis, and Redshift

·         Experience with software development tools and build systems such as Jenkins

Avaloq Evolution AG
  • Zürich, Switzerland

The position


Are you passionate about data? Are you interested in shaping the next generation of data science driven products for the financial industry? Do you enjoy working in an agile environment involving multiple stakeholders?

A challenging role as Senior Data Scientist in a demanding, dynamic and international software company using the latest innovations in predictive analytics and visualization techniques. You will be driving the creation of statistical and machine learning models from prototyping until the final deployment.

We want you to help us to strengthen and further develop the transformation of Avaloq to a data driven product company. Make analytics scalable and accelerate the process of data science innovation.





Your profile


  • PhD or Master's degree in Computer Science, Math, Physics, Engineering, Statistics or another technical field

  • 5+ years of experience in Statistical Modelling, Anomaly Detection, and Machine Learning algorithms, both supervised and unsupervised

  • Proven experience in applying data science methods to business problems

  • Ability to explain complex analytical concepts to people from other fields

  • Proficiency in at least one of the following: Python, R, Java/Scala, SQL and/or SAS

  • Knowledge of Big Data technologies and architectures (e.g. Hadoop, Spark, stream processing)

  • Expertise in text mining and natural language processing is a strong plus

  • Familiarity with network analysis and/or graph databases is a plus

  • High integrity, responsibility and confidentiality - a requirement for dealing with sensitive data

  • Strong presentation and communication skills

  • Experience in leading teams and mentoring others

  • Good planning and organisational skills

  • Collaborative mindset to sharing ideas and finding solutions

  • Experience in the financial industry is a strong plus

  • Fluent in English; German, Italian and French a plus
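
Anomaly detection, one of the modelling areas listed above, can be illustrated with a minimal z-score detector (pure Python; the latency numbers and threshold are invented for the example):

```python
import statistics as st

def zscore_outliers(values, threshold=3.0):
    """Flag points whose distance from the mean exceeds
    `threshold` standard deviations -- a basic anomaly check."""
    mu = st.mean(values)
    sigma = st.stdev(values)
    return [x for x in values if abs(x - mu) > threshold * sigma]

# Hypothetical response-time samples with one obvious spike
latencies = [12, 11, 13, 12, 11, 12, 95, 13, 12, 11]
print(zscore_outliers(latencies, threshold=2.0))
```

Production anomaly detection would use more robust methods (median absolute deviation, isolation forests, streaming estimators), but the flag-what-deviates-from-a-baseline shape is the same.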



Professional requirements




  • Use machine learning tools and statistical techniques to produce solutions for customer demands and complex problems

  • Participate in pre-sales and pre-project analysis to develop prototypes and proof-of-concepts

  • Analyse customer behaviour and needs enabling customer-centric product development

  • Liaise and coordinate with internal infrastructure and architecture team regarding setting up and running a BigData & Analytics platform

  • Strengthen data science within Avaloq and establish a data science centre of expertise

  • Look for opportunities to use insights/datasets/code/models across other functions in Avaloq



Main place of work
Zurich

Contact
Avaloq Evolution AG
Alina Tauscher, Talent Acquisition Professional
Allmendstrasse 140 - 8027 Zürich - Switzerland

careers@avaloq.com
www.avaloq.com/en/open-positions

Please only apply online.

Note to Agencies: All unsolicited résumés will be considered direct applicants and no referral fee will be acknowledged.
Accenture
  • San Diego, CA
Join Accenture and help transform leading organizations and communities around the world. The sheer scale of our capabilities and client engagements and the way we collaborate, operate and deliver value provides an unparalleled opportunity to grow and advance. Choose Accenture, and make delivering innovative work part of your extraordinary career.
As part of our Data Business Group, you will lead technology innovation for our clients through robust delivery of world-class solutions. You will build better software better! There will never be a typical day and that's why people love it here. The opportunities to make a difference within exciting client initiatives are unlimited in the ever-changing technology landscape. You will be part of a highly collaborative and growing network of technology and data experts, who are taking on today's biggest, most complex business challenges using the latest data and analytics technologies. We will nurture your talent in an inclusive culture that values diversity. You will have an opportunity to work in roles such as Data Scientist, Data Engineer, or Chief Data Officer covering all aspects of Data including Data Management, Data Governance, Data Intelligence, Knowledge Graphs, and IoT. Come grow your career in Technology at Accenture!
People in our Client Delivery & Operations career track drive delivery and capability excellence through the design, development and/or delivery of a solution, service, capability or offering. They grow into delivery-focused roles, and can progress within their current role, laterally or upward.
Business & Technology Integration professionals advise upon, design, develop and/or deliver technology solutions that support best practice business changes
The Business & Industry Integration Associate Manager aligns technology with business strategy and goals, working directly with the client to gather requirements and to analyze, design and/or implement best-practice business changes. They are sought out as experts internally and externally for their deep functional or industry expertise, domain knowledge, or offering expertise. They enhance Accenture's marketplace reputation.
Job Description
Data and Analytics professionals define strategies, develop and deliver solutions that enable the collection, processing and management of information from one or more sources, and the subsequent delivery of information to audiences in support of key business processes.
Data Management professionals define strategies and develop/deliver solutions and processes for managing enterprise-wide data throughout the data lifecycle from capture to processing to usage across all layers of the application architecture.
A professional at this position level within Accenture has the following responsibilities:
Identifies, assesses and solves complex business problems for area of responsibility, where analysis of situations or data requires an in-depth evaluation of variable factors.
Closely follows the strategic direction set by senior management when establishing near term goals.
Interacts with senior management at a client and/or within Accenture on matters where they may need to gain acceptance on an alternate approach.
Has some latitude in decision-making. Acts independently to determine methods and procedures on new assignments.
Decisions have a major day to day impact on area of responsibility.
Manages medium to large sized teams and/or work efforts (if in an individual contributor role) at a client or within Accenture.
Basic Qualifications
    • Minimum of 3+ years of hands-on technical experience implementing Big Data solutions utilizing Hadoop or other Data Science and Analytics platforms.
    • Minimum of 3+ years of experience with full life cycle development, from functional design to deployment
    • Minimum of 2+ years of hands-on technical experience delivering Big Data solutions in the cloud with AWS or Azure
    • Minimum of 3+ years of hands-on technical experience developing solutions utilizing at least two of the following:
    • Kafka based streaming services
    • R Studio
    • Cassandra , MongoDB
    • MapReduce, Pig, Hive
    • Scala, Spark
    • Knowledge of Jenkins, Chef, Puppet
  • Bachelor's degree or equivalent years of work experience
  • Ability to travel 100% (Monday through Thursday)
Professional Skill Requirements
    • Proven ability to build, manage and foster a team-oriented environment
    • Proven ability to work creatively and analytically in a problem-solving environment
    • Desire to work in an information systems environment
    • Excellent communication (written and oral) and interpersonal skills
    • Excellent leadership and management skills
All of our consulting professionals receive comprehensive training covering business acumen, technical and professional skills development. You'll also have opportunities to hone your functional skills and expertise in an area of specialization. We offer a variety of formal and informal training programs at every level to help you acquire and build specialized skills faster. Learning takes place both on the job and through formal training conducted online, in the classroom, or in collaboration with teammates. The sheer variety of work we do, and the experience it offers, provide an unbeatable platform from which to build a career.
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture.
Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration.
Accenture is a federal contractor and an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law.
Job candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
GeoPhy
  • New York, NY

We're already working with some of the largest real estate lenders and investors across the globe, and we believe that our AVM will truly disrupt the commercial real estate industry.  Using your machine learning and analytical skills, you will contribute to the development of GeoPhy's core information products. This includes working on the development of our flagship product, the Automated Valuation Model (AVM) that we've developed for the commercial real estate market.



What you'll be responsible for



  • Developing and maintaining predictive valuation algorithms for the commercial real estate market, based on stochastic modeling

  • Identifying and analyzing new data sources to improve model accuracy, closely working with our data sourcing teams

  • Conducting statistical analysis to identify patterns and insights, and process and feature engineer data as needed to support model development and business products

  • Bringing models to production, in collaboration with the development and data engineering teams 

  • Supporting data sourcing strategy and the validation of related infrastructure and technology

  • Contributing to the development of methods in data science, including: statistical analysis and model development related to real estate, economics, the built environment, or financial markets



What we're looking for



  • Creative and intellectually curious with hands-on experience as a data scientist

  • Flexible, resourceful, and a reliable team player

  • Rigorous analyst, critical thinker, and problem solver with experience in hypothesis testing and experimental design

  • Excellent at communicating, including technical documentation and presenting work across a variety of audiences

  • Experienced working with disparate data sources and the engineering and statistical challenges that presents, particularly with time series, socio-economic-demographic (SED) data, and/or geo-spatial data

  • Strong at data exploration and visualization

  • Experienced implementing predictive models across a full suite of statistical learning algorithms (regression/classification, unsupervised/semi-supervised/supervised)

  • Proficient in Python or R as well as critical scientific and numeric programming packages and tools

  • Intermediate knowledge of SQL

  • Full working proficiency in English

  • An MSc/PhD degree in Computer Science, Mathematics, Statistics or a related subject, or commensurate technical experience



Bonus points for



  • International mindset

  • Experience in an Agile organization

  • Knowledge or experience with global real estate or financial markets

  • Experience with complex data and computing architectures, including cloud services and distributed computing

  • Direct experience implementing models in production or delivering a data product to market



What’s in it for you?



  • You will have the opportunity to accelerate our rapidly growing organisation.

  • We're a lean team, so your impact will be felt immediately.

  • Personal learning budget.

  • Agile working environment with flexible working hours and location.

  • No annual leave allowance; take time off whenever you need.

  • We embrace diversity and foster inclusion. This means we have a zero-tolerance policy towards discrimination.

  • GeoPhy is a family and pet friendly company.

  • Get involved in board games, books, and lego.

Coolblue
  • Rotterdam, Netherlands
As an Advanced Data Analyst / Data Scientist you use the data of millions of visitors to help Coolblue act smarter.

Pros and cons

  • You're going to be working as a true Data Scientist: one who understands why you get the results that you do and applies this information to other experiments.
  • You're able to use the right tools for every job.
  • Your job starts with a problem and ends with you monitoring your own solution.
  • You have to crawl underneath the foosball table when you lose a game.

Description Data Scientist

Your challenge in this sprint is improving the weekly sales forecasting models for the Christmas period. Your cross-validation strategy is ready, but before you can begin, you have to query the data from our systems and process them in a way that allows you to view the situation with clarity.

First, you have a meeting with Matthias, who's worked on this problem before. During your meeting, you conclude that Christmas has a non-linear effect on sales. That's why you decide to experiment with a multiplicative XGBoost in addition to your Regularised-Regression model. You make a grid with various features and parameters for both models and analyze the effects of both approaches. You notice your regression is overfitting, which means XGBoost isn't contributing and the forecast isn't high enough, so you increase the regularization and assign the Christmas features to XGBoost alone.

Nice! You improved the precision of the Christmas forecast by an average of 2%. This will only yield results once the algorithm has been implemented, so you start thinking about how you want to implement this.
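The sprint above combines a time-aware cross-validation strategy with a grid over regularization strengths. A minimal sketch of that idea, using synthetic weekly sales with a Christmas lift and a closed-form ridge regression standing in for the production models (all data, feature names and parameter values here are illustrative assumptions; XGBoost is omitted):

```python
import numpy as np

# Synthetic weekly sales: a linear trend with a multiplicative Christmas
# lift in the last two weeks of each year (illustrative numbers only).
rng = np.random.default_rng(0)
weeks = np.arange(156)                            # three years of weekly data
christmas = ((weeks % 52) >= 50).astype(float)    # Christmas-period flag
sales = (100 + 0.5 * weeks) * (1 + 1.2 * christmas) + rng.normal(0, 5, weeks.size)

# Design matrix: intercept, trend, Christmas indicator.
X = np.column_stack([np.ones(weeks.size), weeks.astype(float), christmas])

def ridge_fit(X, y, alpha):
    """Closed-form ridge regression: solve (X'X + alpha*I) beta = X'y."""
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)

def rolling_origin_cv(X, y, alphas, n_splits=3, horizon=13):
    """Score each regularization strength with expanding-window CV,
    so no future data leaks into the training window."""
    scores = {}
    for alpha in alphas:
        errs = []
        for i in range(n_splits):
            cut = y.size - (n_splits - i) * horizon
            beta = ridge_fit(X[:cut], y[:cut], alpha)
            pred = X[cut:cut + horizon] @ beta
            errs.append(np.mean(np.abs(pred - y[cut:cut + horizon])))
        scores[alpha] = float(np.mean(errs))
    return scores

scores = rolling_origin_cv(X, sales, alphas=[0.01, 1.0, 100.0])
best = min(scores, key=scores.get)
print(best, scores)
```

The same expanding-window loop would score an XGBoost configuration grid just as well; the key point is that forecast models are validated on held-out *future* weeks rather than a random shuffle.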

Your specifications

  • You have at least 4 years of experience in a similar function.
  • You have a university degree (MSc or PhD) in Mathematics, Computer Science, or Statistics.
  • You have experience with Machine Learning techniques, such as Gradient Boosting, Random Forest, and Neural Networks, and you have proven experience successfully applying these (or similar) techniques in a business environment.
  • You have some experience with Data mining, SQL, BigQuery, NoSQL, R, and monitoring.
  • You're highly knowledgeable about Python.
  • You have experience with Big Data technologies, such as Spark and Hadoop.

Included by default.

  • Money.
  • Travel allowance and a retirement plan.
  • 25 leave days. As long as you promise to come back.
  • A discount on all our products.
  • A picture-perfect office at a great location. You could crawl to work from Rotterdam Central Station. Though we recommend just walking for 2 minutes.
  • A horizontal organisation in the broadest sense. You could just go and have a beer with the boss.

Review



'I believe I'm working in a great team of enthusiastic and smart people, with a good mix of juniors and seniors. The projects that we work on are very interesting and diverse, think of marketing, pricing and recommender systems. For each project we try to use the latest research and machine learning techniques in order to create the best solutions. I like that we are involved in the projects start to end, from researching the problem to experimenting, to putting it in production, and to creating the monitoring dashboards and delivering the outputs on a daily basis to our stakeholders. The work environment is open, relaxed and especially fun'
- Cheryl Zandvliet, Data Scientist
ConocoPhillips
  • Houston, TX
Our Company
ConocoPhillips is the world's largest independent E&P company based on production and proved reserves. Headquartered in Houston, Texas, ConocoPhillips had operations and activities in 17 countries, $71 billion of total assets, and approximately 11,100 employees as of Sept. 30, 2018. Production excluding Libya averaged 1,221 MBOED for the nine months ended Sept. 30, 2018, and proved reserves were 5.0 billion BOE as of Dec. 31, 2017.
Employees across the globe focus on fulfilling our core SPIRIT Values of safety, people, integrity, responsibility, innovation and teamwork. And we apply the characteristics that define leadership excellence in how we engage each other, collaborate with our teams, and drive the business.
Description
The purpose of this role is to enable and support Citizen Data Scientists (CDS) to develop analytical workflows and to manage the adoption and implementation of the latest innovations within the ConocoPhillips preferred analytics tools for Citizen Data Science.
This position will enable analytics tools and solutions for customers, including: facilitation of the solution roadmap, adoption of new analytics functionality, integration between applications based on value-driven workflows, and support and training of users on new capabilities.
Responsibilities May Include
  • Work with customers to enable the latest data analytics capabilities
  • Understand and help implement the latest innovations available within ConocoPhillips preferred analytics platform including Spotfire, Statistica, ArcGIS Big Data (Spatial Analytics), Teradata and Python
  • Help users with the implementation of analytics workflows through integration of the analytics applications
  • Manage analytics solutions roadmap and implementation timeline enabling geoscience customers to take advantage of the latest features or new functionality
  • Communicate with vendors and COP community on analytics technology functionality upgrades, prioritized enhancements and adoption
  • Test and verify that existing analytics workflows are supported within the latest version of the technology
  • Guide users on how to enhance their current workflows with the latest analytics technology
  • Facilitate problem solving with analytics solutions
  • Work with other AICOE teams to validate and implement new technology or version upgrades into production
Specific Responsibilities May Include
    • Provide architectural guidance for building integrated analytical solutions
    • Understand analytics product roadmaps, product development and the implementation of new features
    • Promote new analytics product features within the customer base and demonstrate how they enable analytics workflows
    • Manage the COP analytics product adoption roadmap
    • Capture the product enhancement list and coordinate prioritization with the vendor
    • Test new capabilities and map them to COP business workflows
    • Coordinate with the AICOE team the timely upgrades of new features
    • Provide support to CDS for:
    • analytics modelling best practices
    • know-how and implementation of analytics workflows based on new technology
  • Liaise with the AICOE Infrastructure team for timely technology upgrades
  • Work on day-to-day end-user support activities for Citizen Data Science tools: Advanced Spotfire, Statistica, GIS Big Data
  • Provides technical consulting and guidance to Citizen Data Scientist for the design and development of complex analytics workflows
  • Communicates analytics technology roadmap to end users
  • Communicates and demonstrates the value of new features to COP business
  • Train and mentor Citizen Data Scientists on analytics solutions
Basic/Required
  • Legally authorized to work in the United States
  • Bachelor's degree in Information Technology, Computer Sciences, Geoscience, Engineering, Statistics or related field
  • 5+ years of experience in oil & gas and geoscience data and workflows
  • 3+ years of experience with Tibco Spotfire
  • 3+ years of experience with Teradata or other SQL databases
  • 1+ years of experience with ArcGIS spatial analytics tools
  • Advanced knowledge of and experience with integration platforms
Preferred
  • Master's degree in Analytics or related field
  • 1+ years of experience with Tibco Statistica or equivalent statistics-based analytics package
  • Prior experience in implementing and supporting visual, prescriptive and predictive analytics
  • In-depth understanding of the analytics applications and integration points
  • Experience implementing data science workflows in Oil & Gas
  • Takes ownership of actions and follows through on commitments by courageously dealing with important problems, holding others accountable, and standing up for what is right
  • Delivers results through realistic planning to accomplish goals
  • Generates effective solutions based on available information and makes timely decisions that are safe and ethical
To be considered for this position you must complete the entire application process, which includes answering all prescreening questions and providing your eSignature on or before the requisition closing date of February 27, 2019.
Candidates for this U.S. position must be a U.S. citizen or national, or an alien admitted as permanent resident, refugee, asylee or temporary resident under 8 U.S.C. 1160(a) or 1255(a) (1). Individuals with temporary visas such as A, B, C, D, E, F, G, H, I, J, L, M, NATO, O, P, Q, R or TN or who need sponsorship for work authorization in the United States now or in the future, are not eligible for hire.
ConocoPhillips is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, national origin, age, disability, veteran status, gender identity or expression, genetic information or any other legally protected status.
Job Function
Information Management-Information Technology
Job Level
Individual Contributor/Staff Level
Primary Location
NORTH AMERICA-USA-TEXAS-HOUSTON
Organization
ANALYTICS INNOVATION
Line of Business
Corporate Staffs
Job Posting
Feb 13, 2019, 4:51:37 PM