• Berlin, Germany

Your Tasks – Paint the world green

  • Holistic cloud-based infrastructure automation

  • Distributed data processing clusters as well as data streaming platforms based on Kafka, Flink and Spark

  • Microservice platforms based on Docker

  • Development infrastructure and QA automation

  • Continuous Integration/Delivery/Deployment

Your Profile – Ready to hop on board

  • Experience in building and operating complex infrastructure

  • Expert-level: Linux, System Administration

  • Experience with Cloud Services, Expert-Level with either AWS or GCP  

  • Experience with server and operating-system-level virtualization is a strong plus, in particular practical experience with Docker and cluster technologies like Kubernetes, AWS ECS, OpenShift

  • Mindset: "Automate Everything", "Infrastructure as Code", "Pipelines as Code", "Everything as Code"

  • Hands-on experience with "Infrastructure as Code" tools: Terraform, CloudFormation, Packer

  • Experience with provisioning/configuration management tools (Ansible, Chef, Puppet, Salt)

  • Experience designing, building and integrating systems for instrumentation, metrics/log collection, and monitoring: CloudWatch, Prometheus, Grafana, DataDog, ELK

  • At least basic knowledge in designing and implementing Service Level Agreements

  • Solid knowledge of Network and general Security Engineering

  • At least basic experience with systems and approaches for Test, Build and Deployment automation (CI/CD): Jenkins, TravisCI, Bamboo

  • At least basic hands-on DBA experience, experience with data backup and recovery

  • Experience with JVM-based build automation is a plus: Maven, Gradle, Nexus, JFrog Artifactory

AXA Schweiz
  • Winterthur, Switzerland

Do agility, product-driven IT, cloud computing, and machine learning appeal to you?
Are you performance-oriented, and do you have the courage to try new things?

We have anchored digital transformation in our DNA!

Your contribution:

  • The role primarily covers engineering (IBM MQ on Linux, z/OS) and the operation of middleware components (file transfer, web service infrastructure).

  • In detail, this means component ownership (including lifecycle management, provision of APIs and self-services, automation of workflows, and creation and maintenance of documentation), ensuring smooth operations (you autonomously take the necessary measures; willingness to work occasional weekend/on-call shifts), as well as maintaining and sharing knowledge.

  • In an agile environment, helping to migrate our components to the cloud.

Your skills and talents:

  • You have a completed degree in computer science or comparable experience.

  • Your know-how covers messaging middleware components, ideally IBM MQ on Linux enriched with z/OS expertise; knowledge of RabbitMQ and Kafka would be a bonus.

  • Other middleware components (file transfer and web services) are not entirely unknown to you, and you are familiar with transfer protocols and, in particular, the Linux world.

  • You bring solid experience in automation (Bash, Python), and REST, APIs, and Java(Script) are not foreign concepts to you. Initial programming experience in an object-oriented language, preferably Java, rounds out your profile.

  • You are integrative, look at challenges from different perspectives, and ask uncomfortable questions when it matters.

  • You are confident in both German and English.

  • San Diego, CA
Organization: Accenture Applied Intelligence
Position: Artificial Intelligence Engineer - Consultant
The digital revolution is changing everything. It's everywhere, transforming how we work and play. Accenture Digital's 36,000 professionals are driving these exciting changes and bringing them to life across 40 industries in more than 120 countries. At the forefront of digital, you'll create it, own it, and make it a reality for clients looking to better serve their connected customers and operate always-on enterprises. Join us and become an integral part of our experienced digital team with the credibility, expertise, and insight clients depend on.
Accenture Applied Intelligence, part of Accenture Digital, helps clients to use analytics and artificial intelligence to drive actionable insights, at scale. We apply sophisticated algorithms, data engineering and visualization to extract business insights and help clients turn those insights into actions that drive tangible outcomes to improve their performance and disrupt their markets. Accenture Applied Intelligence is a leader in big data analytics, with deep industry and technical experience. We provide services and solutions that include Analytics Advisory, Data Science, Data Engineering and Analytics-as-a-Service.
Role Description
As an AI engineer, you will facilitate the transfer of advanced AI technologies from the research labs to the domain testbeds and thus the real world. You will participate in the full research-to-deployment pipeline. You will help conceptualize and develop research experiments, and then implement the systems to execute these experiments. You will lead or work with a team and interact closely with deeply experienced machine learning engineers and researchers and with industry partners. You will attend reading groups and seminars, master research techniques and engineering practices, and design research tools and experimental testbeds. You will apply state-of-the-art AI algorithms, explore new solutions, and build working prototypes. You will also learn to deploy the systems and solutions at scale.
    • Use Deep Learning and Machine Learning to create scalable solutions for business problems.
    • Deliver Deep Learning/Machine Learning projects from beginning to end, including business understanding, data aggregation, data exploration, model building, validation and deployment.
    • Define Architecture Reference Assets - Apply Accenture methodology, Accenture reusable assets, and previous work experience to deliver consistently high-quality work. Deliver written or oral status reports regularly. Stay educated on new and emerging market offerings that may be of interest to our clients. Adapt existing methods and procedures to create possible alternative solutions to moderately complex problems.
    • Work hands on to demonstrate and prototype integrations in customer environments. Primary upward interaction is with direct supervisor. May interact with peers and/or management levels at a client and/or within Accenture.
    • Solution and Proposal Alignment - Through a formal sales process, work with the Sales team to identify and qualify opportunities. Conduct full technical discovery, identifying pain points, business and technical requirements, and as-is and to-be scenarios.
    • Understand the strategic direction set by senior management as it relates to team goals. Use considerable judgment to define solutions and seek guidance on complex problems.
    • Bachelor's degree in AI, Computer Science, Engineering, Statistics, or Physics.
    • Minimum of 1 year of experience in production deployed solutions using artificial intelligence or machine learning techniques.
    • Minimum of 1 year of previous consulting or client service delivery experience
    • Minimum of 2 years of experience with system integration architectures, private and public cloud architectures, pros/cons, transformation experience
    • Minimum of 1 year of full lifecycle deployment experience
Preferred Skills
    • Master's or PhD in Analytics, Statistics, or other quantitative disciplines
    • Deep learning architectures: convolutional, recurrent, autoencoders, GANs, ResNets
    • Experience in Cognitive tools like Microsoft Bot Framework & Cognitive Services, IBM Watson, Amazon AI services
    • Deep understanding of Data structures and Algorithms
    • Deep experience in Python, C# (.NET), Scala
    • Deep knowledge of MXNet, CNTK, R, H2O, TensorFlow, PyTorch
    • Highly desirable to have experience with cuDNN, NumPy, SciPy
Professional Skill Requirements
    • Recent success in contributing to a team-oriented environment
    • Proven ability to work creatively and analytically in a problem-solving environment
    • Excellent communication (written and oral) and interpersonal skills
    • Demonstrated leadership in professional setting; either military or civilian
    • Demonstrated teamwork and collaboration in a professional setting; either military or civilian
    • Ability to travel extensively
    • Your entrepreneurial spirit and vision will be rewarded, and your success will fuel opportunities for career advancement.
    • You will make a difference for some pretty impressive clients. Accenture serves 94 of the Fortune Global 100 and more than 80 percent of the Fortune Global 500.
    • You will be an integral part of a market-leading analytics organization, including the largest and most diversified group of digital, technology, business process and outsourcing professionals in the world. You can leverage our global team to support analytics innovation workshops, rapid capability development, enablement and managed services.
    • You will have access to Accenture's deep industry and functional expertise. We operate across more than 40 industries and have hundreds of offerings addressing key business and technology issues. Through our global network, we bring unparalleled experience and comprehensive capabilities across industries and business functions, and extensive research on the world's most successful companies. You will also be able to tap into the continuous innovation of our Accenture Technology Labs and Innovation Centers, as well as top universities such as MIT through our academic alliance program.
    • You will have access to distinctive analytics assets that we use to accelerate delivering value to our clients including more than 550 analytics assets underpinned by a strong information management and BI technology foundation. Accenture has earned more than 475 patents and patents pending globally for software assets, data- and analytic-related methodologies and content.
    • As the world's largest independent technology services provider, we are agnostic about technology but have very clear viewpoints about what is most appropriate for a client's particular challenge. You will have access to our alliances with market-leading technology providers and collaborative relationships with emerging players in the analytics and big data space, the widest ecosystem in the industry. These alliances bring together Accenture's extensive analytics capabilities and alliance providers' technology, experience and innovation to power analytics-based solutions.
    • You will have access to the best talent. Accenture has a team of more than 36,000 digital professionals including technical architects, big data engineers, data scientists and business analysts, as well as digital strategists and user experience designers.
    • Along with a competitive salary, Accenture offers a comprehensive package that includes generous paid time off, 401K match and an employee healthcare plan. Learn more about our extensive rewards and benefits here: Benefits .
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H1-B visa, F-1 visa (OPT), TN visa or any other non-immigrant status).
Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration.
Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law.
Job candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.
  • Atlanta, GA

Arcadis is seeking a Director of Advanced Analytics to join our North America Digital Team.

The Director of Advanced Analytics (North America) is responsible for the development and implementation of a strategy centered around elevating our current analytics capabilities, identifying use cases within our current business, and identifying new business opportunities anchored around analytics. Together with the core/extended Arcadis Digital team(s), he/she will drive digital change within the organization, which includes frequently interacting with Arcadians at all levels, from executive leadership to project managers and entry-level employees, as well as with clients, by providing technical expertise, guidance, and hands-on support on data processing, analytical models, and visualization techniques.

Key Responsibilities:

Develop capabilities in analytics phases

  • Evaluate our existing analytics capabilities and develop a strategy for Analytics in North America that will create value for our employees, clients, and partners.
  • Develop Arcadis's capabilities in descriptive, predictive, and prescriptive analytics
  • Lead the development of required analytics skills in North America through a combination of recruiting and training
  • Teach our colleagues valuable technical skills on the job, which can include data processing, coding models in R, Python, BayesiaLab, etc., creating powerful visualizations and interfaces (e.g. using R Shiny or Tableau), and developing the needed data infrastructure

Analytics as a value add to projects

  • Advise in the implementation of analytics and data life cycle on projects/programs
  • Capture the resulting insights from the data life cycle
  • Communicate the value of using analytics instead of the traditional way of working
  • Identify where analytics can be used as a differentiator
  • Scope out analytics services and develop monetization strategies
  • Develop powerful use cases, in close collaboration with market leaders and external partners

Advise clients on their analytics journey

  • Advise clients on data governance/management
  • Identify new business opportunities with existing and potential clients
  • Participate in design thinking/client visioning workshops

Evangelist/Leader for analytics

  • Empower our colleagues to implement analytics on their projects
  • Communicate complex ideas in a simple way using storytelling
  • Inspire our colleagues to innovate and adopt analytics

Develop strategy for analytics on overall business/bring in best practices in analytics

  • Integrate yourself within the existing business to evaluate where analytics can be strategically implemented.
  • Collaborate closely with / support the Chief Digital Officer, North America with taking the lead and providing technical expertise within Analytics
  • Collaborate closely with / support the Global CDO / Analytics team by supporting the development of a global Analytics roadmap and data strategy
  • Participating in regular exchanges with other digital / analytics leaders in other regions

Leadership Requirements

  • Proven leadership capabilities
  • Ability to work cross-functionally with all levels of management, both internally and externally.
  • Ability to inspire clients and potential partners with new ideas and discipline combined with pragmatism to implement them rapidly;
  • Ability to develop a plan and approach to strategic problems - communicating and mobilizing teams to deliver against the plan;
  • The ideal candidate: Talented, passionate and high energy leader with a blend of technology/digital savviness and built & natural asset industry knowledge.

Key Requirements

  • Deep knowledge and expertise in analytics, including expert knowledge of the common software applications, such as R, Python or database management tools, such as SQL or MongoDB.
  • Complete understanding of cloud technologies
  • Complete understanding of data lake implementation and other data governance practices
  • Knowledge of design thinking and client visioning workshops
  • Experience in agile project management and product development

Key Attributes

  • Knowledge of best practices in analytics across the industry
  • Broad and strategic perspective on the current and future global technology landscape
  • Credible counterpart for existing or potential partners with strong business acumen combined with solid technical skills;
  • Self-starter, high energy, can work independently and remotely and comfortable with ambiguity;
  • Communication and relationship skills, ability to engage with a broad range of international stakeholders;
  • Ability to work in a global matrix organization and to deal with different cultures worldwide;
  • Proactive, practical, entrepreneurial, open-minded, flexible and creative;
  • Excellent command of English, both spoken and written.
  • Given the international spread of the business a certain level of flexibility in working hours is important.
  • 7+ years of professional experience with an impressive track record in Analytics and bringing data-driven initiatives to success, ideally at the intersection of natural & built assets and analytics/digital

Required Qualifications

  • Master's degree preferred, ideally in Business Analytics, Data Science, or Computer Science.
  • 10+ years of professional experience with an impressive track record in Analytics

Preferred Qualifications

  • MA/MS/MBA degree from a top-tier university / research institution
Delivery Hero SE
  • Berlin, Germany

We are now looking for a tech geek who will grow with our renowned engineering department as a Senior Engineering Manager - Python/Scala (f/m/d). Join our inquisitive team in the center of Berlin and start to reinvent food delivery.

  • Lead and empower an experienced team of engineers, focused on building innovative customer-facing solutions such as customer reviews and ratings, surveys, insights intelligence for restaurants and delivery riders

  • Develop and continuously improve microservices and scalable systems in Python and Scala in our global cloud platform running in multiple regions

  • Work closely with business teams, communicate solutions with non-technical stakeholders and solve challenges

  • Ensure continued service reliability and provide 24/7 technical support for global services

  • Design and implement cutting-edge insights and customer-facing services

  • Practice modern software development methodologies such as continuous delivery, TDD, scrum and collaborate with product managers

  • Participate in code reviews and application debugging and diagnosis.

Your heroic skills:

  • 3 years of hands-on technical leadership and people management experience

  • Excellent knowledge and hands-on programming experience developing Python and/or Scala applications.

  • A completed technical degree in Computer Science or a related field.

  • Profound knowledge of and working experience with Unix and systems engineering

  • Several years of experience designing and implementing large-scale software systems

  • Experience working with relational databases and NoSQL technologies, and an interest in Elasticsearch, Google Cloud, and microservices architectures.

  • Development and co-ownership of applications used by over 100,000 daily users.

  • Curiosity, creative outside-the-box problem solving abilities and an eye for detail.

We offer you:

  • Develop your skills with your educational budget for conferences and external trainings.

  • Exchange ideas and meet fellow developers at regular meetups and in our active guilds.

  • Get to know your colleagues during company parties, hackathons, cultural and sports events.

  • English is our working language, and our colleagues at Delivery Hero come from every corner of the globe, working in diverse, cross-cultural teams.

  • Flexible working hours.

  • Save responsibly with our corporate pension scheme.

  • Enjoy fresh fruits, cereals, beverages, tea and coffee in our lounges. 

  • Take a break with Kicker or table tennis.

  • Take a timeout in our nap room.

  • Learn German with free classes, access our e-learning platform and participate in our inhouse trainings.

  • Enjoy massages or get your hair cut in the office.

Are you the missing ingredient? Send us your CV!

Read about the latest updates from our Tech & Product teams on our blog.

Find our stack here.

American Express
  • Phoenix, AZ

Our Software Engineers not only understand how technology works, but how that technology intersects with the people who count on it every single day. Today, creative ideas, insight and new points of view are at the core of how we craft a more powerful, personal and fulfilling experience for all our customers. So if you're passionate about a career building breakthrough software and making an impact on an audience of millions, look no further.

There are hundreds of chances for you to make your mark on Technology and life at American Express. Here's just some of what you'll be doing:

    • Take your place as a core member of an Agile team driving the latest application development practices.
    • Find your opportunity to put new technologies into practice, write code and perform unit tests, and work with data science, algorithms, and automation processing
    • Engage your collaborative spirit by collaborating with fellow engineers to craft and deliver recommendations to Finance, Business, and Technical users on Finance Data Management.



Are you up for the challenge?

    • 4+ years of Software Development experience.
    • BS or MS degree in Computer Science, Computer Engineering, or another technical discipline, including practical experience effectively interpreting technical and business objectives and challenges and designing solutions.
    • Ability to effectively collaborate with Finance SMEs and partners of all levels to understand their business processes and take overall ownership of analysis, design, estimation, and delivery of technical solutions for Finance business requirements and roadmaps, including a deep understanding of Finance and other LOB products and processes. Experience with regulatory reporting frameworks is preferred.
    • Hands-on expertise with application design and software development across multiple platforms, languages, and tools: Java, Hadoop, Python, Streaming, Flink, Spark, HIVE, MapReduce, Unix, NoSQL and SQL Databases is preferred.
    • Working SQL knowledge and experience working with relational databases and query authoring (SQL), including working familiarity with a variety of databases (DB2, Oracle, SQL Server, Teradata, MySQL, HBase, Couchbase, MemSQL).
    • Experience in architecting, designing, and building customer dashboards with data visualization tools such as Tableau using accelerator database Jethro.
    • Extensive experience in application, integration, system and regression testing, including demonstration of automation and other CI/CD efforts.
    • Experience with version control software such as Git and SVN, and CI/CD testing/automation experience.
    • Proficient with Scaled Agile application development methods.
    • Deals well with ambiguous/under-defined problems; Ability to think abstractly.
    • Willingness to learn new technologies and exploit them to their optimal potential, including substantiated ability to innovate and take pride in quickly deploying working software.
    • Ability to enable business capabilities through innovation is a plus.
    • Ability to get results with an emphasis on reducing time to insights and increased efficiency in delivering new Finance product capabilities into the hands of Finance constituents.
    • Focuses on the Customer and Client with effective consultative skills across a multi-functional environment.
    • Ability to communicate effectively verbally and in writing, including effective presentation skills. Strong analytical skills, problem identification and resolution.
    • Delivering business value using creative and effective approaches
    • Possesses strong business knowledge about the Finance organization, including industry standard methodologies.
    • Demonstrates a strategic/enterprise viewpoint and business insights with the ability to identify and resolve key business impediments.

Employment eligibility to work with American Express in the U.S. is required as the company will not pursue visa sponsorship for these positions.

Pyramid Consulting, Inc
  • Atlanta, GA

Job Title: Tableau Engineer

Duration: 6-12 Months+ (potential to go perm)

Location: Atlanta, GA (30328) - Onsite

Notes from Manager:

We need a data analyst who knows Tableau, scripting (JSON, Python), the Alteryx API, AWS, and analytics.


The Tableau Software Engineer will be a key resource working across our Software Engineering BI/Analytics stack to ensure stability, scalability, and the delivery of valuable BI & Analytics solutions for our leadership teams and business partners. Keys to this position are the ability to excel at identifying problems or analytic gaps and at mapping and implementing pragmatic solutions. An excellent blend of analytical, technical, and communication skills in a team-based environment is essential for this role.

Tools we use: Tableau, Business Objects, AngularJS, OBIEE, Cognos, AWS, Opinion Lab, JavaScript, Python, Jaspersoft, Alteryx and R packages, Spark, Kafka, Scala, Oracle

Your Role:

·         Able to design, build, maintain & deploy complex reports in Tableau

·         Experience integrating Tableau into another application or native platforms is a plus

·         Expertise in Data Visualization including effective communication, appropriate chart types, and best practices.

·         Knowledge of best practices and experience optimizing Tableau for performance.

·         Experience reverse engineering and revising Tableau Workbooks created by other developers.

·         Understand basic statistical routines (mean, percentiles, significance, correlations) with ability to apply in data analysis

·         Able to turn ideas into creative & statistically sound decision support solutions

Education and Experience:

·         Bachelor's degree in Computer Science or equivalent work experience

·         3-5 years of hands-on experience in data warehousing & BI technologies (Tableau/OBIEE/Business Objects/Cognos)

·         Three or more years of experience in developing reports in Tableau

·         Good understanding of Tableau architecture, design, development, and the end-user experience.

What We Look For:

·         Very proficient in working with large databases in Oracle; experience with Big Data technologies is a plus.

·         Deep understanding & working experience of data warehouse and data mart concepts.

·         Understanding of Alteryx and R packages is a plus

·         Experience designing and implementing high volume data processing pipelines, using tools such as Spark and Kafka.

·         Experience with Scala, Java or Python and a working knowledge of AWS technologies such as GLUE, EMR, Kinesis and Redshift preferred.

·         Excellent knowledge of Amazon AWS technologies, with a focus on highly scalable cloud-native architectural patterns, especially EMR, Kinesis, and Redshift

·         Experience with software development tools and build systems such as Jenkins

  • Salt Lake City, UT

At Recursion we combine experimental biology, automation, and artificial intelligence to quickly and efficiently identify treatments for human diseases. We’re transforming drug discovery into a data science problem and to do that we’re building a platform for rapid biological experimentation, data generation, automated analysis, model training, and prediction.


As a Machine Learning Engineer, you'll report to the VP of Engineering and will work with others on the data, machine learning, and engineering teams to build the infrastructure and systems to enable both ML prototyping and production grade deployment of ML solutions that lift our drug discovery platform to new levels of effectiveness and efficiency. We are looking for experienced Machine Learning Engineers who value experimentation and the rigorous use of the scientific method, high collaboration across multiple functions, and intense curiosity driving them to keep our systems cutting edge. In this role you will:

  • Build, scale, and operate compute clusters for deep learning. You'll be a part of a team responsible for the ML infrastructure, whether that be large-scale on-prem GPU clusters or cloud-based TPU pods.

  • Create a world-class ML research platform. You'll work with Data Scientists, ML Researchers, and Systems and Data Engineers to create an ML platform that allows them to efficiently prepare hundreds of terabytes of data for training and processing, train cutting-edge deep learning models, backtest them on thousands of past experiments, and deploy working solutions to production. Examples of ML platforms like this are Uber's Michelangelo and Facebook's FBLearner Flow.

  • Be a mentor to peers. You will share your technical knowledge and experiences, resulting in an increase in their productivity and effectiveness.


  • An ability to be resourceful and collaborative in order to complete large projects. You’ll be working cross-functionally to build these systems and must always have the end (internal) user in mind.

  • Experience implementing, training, and evaluating deep learning models using modern ML frameworks through collaboration with others, reading of ML papers, or primary research.

  • A demonstration of accelerating ML research efforts through improved systems, processes and frameworks.

  • A track record of learning new technologies as needed to get things done. Our current tech stack uses Python and the pydata libraries, TensorFlow, Keras, Kubernetes + Docker, Big Query, and other cloud services provided by Google Cloud Platform.

  • Biology background is not necessary, but intellectual curiosity is a must!


  • Coverage of health, vision, and dental insurance premiums (in most cases 100%)

  • 401(k) with generous matching (immediate vesting)

  • Stock option grants

  • Two one-week paid company closures (summer and winter) in addition to flexible, generous vacation/sick leave

  • Commuter benefit and vehicle parking to ease your commute

  • Complimentary chef-prepared lunches and well-stocked snack bars

  • Generous paid parental leave for birth, non-birth, and adoptive parents

  • Fully-paid gym membership to Metro Fitness, located just feet away from our new headquarters

  • Gleaming new 100,000 square foot headquarters complete with a 70-foot climbing wall, showers, lockers, and bike parking


We have raised over $80M to apply machine learning to one of the most unique datasets in existence - over a petabyte of imaging data spanning more than 10 billion cells treated with hundreds of thousands of different biological and chemical perturbations, generated in our own labs - in order to find treatments for hundreds of diseases. Our long term mission is to decode biology to radically improve lives and we want to understand biology so well that we can fix most things that go wrong in our bodies. Our data scientists, machine learning researchers and engineers work on some of the most challenging and interesting problems in computational drug discovery, and collaborate with some of the brightest minds in the deep learning community (Yoshua Bengio is one of our advisors), who help our machine learning team design novel ways of tackling these problems.

Recursion is an equal opportunity employer and complies with all applicable federal, state, and local fair employment practices laws. Recursion strictly prohibits and does not tolerate discrimination against applicants because of race, color, religion, creed, national origin or ancestry, ethnicity, sex, pregnancy, gender (including gender nonconformity and status as a transgender individual), age, physical or mental disability, citizenship, past, current, or prospective service in the uniformed services, or any other characteristic protected under applicable federal, state, or local law.

The HT Group
  • Austin, TX

Full Stack Engineer, Java/Scala (Direct Hire, Austin)

Do you have a track record of building both internal- and external-facing software services in a dynamic environment? Are you passionate about introducing disruptive and innovative software solutions for the shipping and logistics industry? Are you ready to deliver immediate impact with the software you create?

We are looking for Full Stack Engineers to craft, implement and deploy new features, services, platforms, and products. If you are curious, driven, and naturally explore how to build elegant and creative solutions to complex technical challenges, this may be the right fit for you. If you value a sense of community and shared commitment, you'll collaborate closely with others in a full-stack role to ship software that delivers immediate and continuous business value. Are you up for the challenge?

Tech Tools:

  • Application stack runs entirely on Docker, frontend and backend
  • Infrastructure is 100% Amazon Web Services and we use AWS services whenever possible. Current examples: EC2, Elastic Container Service (Docker), Kinesis, SQS, Lambda, and Redshift
  • Java and Scala are the languages of choice for long-lived backend services
  • Python for tooling and data science
  • Postgres is the SQL database of choice
  • Actively migrating to a modern JavaScript-centric frontend built on Node, React/Relay, and GraphQL as some of our core UI technologies


  • Build both internal and external REST/JSON services running on our 100% Docker-based application stack or within AWS Lambda
  • Build data pipelines around event-based and streaming-based AWS services and application features
  • Write deployment, monitoring, and internal tooling to operate our software with as much efficiency as we build it
  • Share ownership of all facets of software delivery, including development, operations, and test
  • Mentor junior members of the team and coach them to be even better at what they do


  • Embrace the AWS + DevOps philosophy and believe this is an innovative approach to creating and deploying products and technical solutions that require software engineers to be truly full-stack
  • Have high-quality standards, pay attention to details, and love writing beautiful, well-designed and tested code that can stand the test of time
  • Have built high-quality software, solved technical problems at scale and believe in shipping software iteratively and often
  • Proficient in and have delivered software in Java, Scala, and possibly other JVM languages
  • Developed a strong command of Computer Science fundamentals
ettain group
  • Raleigh, NC

Role: Network Engineer R/S

Location: RTP, primarily onsite but some flexibility for remote after initial ramp-up

Pay Rate: $35-60/hr depending on experience.

Interview Process:
Video WebEx (30-minute screen)
Panel interview with 3-4 CPOC engineers (in-depth technical screen)


·         Customer facing

·         Experience dealing with high pressure situations

·         Be able to handle technology at the level the customer will throw at them

·         Customers test the engineers to see if tech truly is working

·         Have to be able to figure out how to make it work

Must have Tech:

·         Core R/S

·         VMware

Who You'll Work With:

The POV Services Team (dCloud, CPOC, CXC, etc) provides services, tools and content for Cisco field sales and channel partners, enabling them to highlight Cisco solutions and technologies to customers.

What You'll Do

As a Senior Engineer, you are responsible for the development, delivery, and support of a wide range of Enterprise Networking content and services for Cisco Internal, Partner and Customer audiences.

Content Roadmap, Design and Project Management 25%

    • You will document and scope all projects prior to entering the project build phase.
    • You'll work alongside our platform/automation teams to review applicable content to be hosted on Cisco dCloud.
    • You specify and document virtual and hardware components, resources, etc. required for content delivery.
    • You can identify and prioritize all project-related tasks while working with the Project Manager to develop a timeline with high expectations to meet project deadlines.
    • You will successfully collaborate and work with a globally-dispersed team using collaboration tools, such as email, instant messaging (Cisco Jabber/Spark), and teleconferencing (WebEx and/or TelePresence).

Content Engineering and Documentation 30%

    • Document device connectivity requirements of all components (virtual and HW) and build as part of pre-work.
    • Work with the NetOps team on the racking, cabling, imaging, and access required for the content project.
    • As part of the development cycle, the developer will work collaboratively with the business unit technical marketing engineers (TME) and WW EN Sales engineers to configure solution components, including Cisco routers, switches, wireless LAN controllers (WLC), SD-Access, DNA Center, Meraki, SD-WAN (Viptela), etc.
    • Work with BU, WW EN Sales and marketing resources to draft, test and troubleshoot compelling demo/lab/story guides that contribute to the field sales teams and generate high interest and utilization.
    • Work with POV Services Technical Writer to format/edit/publish content and related documents per POV Services standards.
    • Work as the liaison to the operations and support teams to resolve issues identified during the development and testing process, providing technical support and making design recommendations for fixes.
    • Perform resource studies using VMware vCenter to ensure an optimal balance of content performance, efficiency and stability before promoting/publishing production content.

Content Delivery 25%

    • SD-Access POV, SD-WAN POV Presentations, Webex and Video recordings, TOI, SE Certification Proctor, etc.
    • Customer engagement at the customer location, at a Cisco office, or remote: delivering Proof of Value, Test Drive, and/or Technical Solutions Workshop content.
    • Deliver training, TOI, and presentations at events (Cisco Live, GSX, SEVT, Partner VT, etc).
    • Work with the POV Services owners, architects, and business development team to market, train, and increase global awareness of new/revised content releases.

Support and Other 20%

    • You provide transfer of information and technical support to Level 1 & 2 support engineers, program managers and others ensuring that content is understood and in working order.
    • You will test and replicate issues, isolate the root cause, and provide timely workarounds and/or short/long term fixes.
    • You will monitor support trends for assigned content, tracking and logging critical issues in Jira.
    • You provide Level 3 user support directly/indirectly to Cisco and Partner sales engineers while supporting and mentoring peer/junior engineers as required.

Who You Are

    • You are well versed in the use of standard design templates and tools (Microsoft Office including Visio, Word, Excel, PowerPoint, and Project).
    • You bring an uncanny ability to multitask between multiple projects, user support, training, events, etc. and shifting priorities.
    • Demonstrated, in-depth working knowledge/certification of routing, switching and WLAN design, configuration and deployment. Cisco Certifications including CCNA, CCNP and/or CCIE (CCIE preferred) in R&S.
    • You possess professional or expert knowledge/experience with Cisco Service Provider solutions.
    • You have associate- or professional-level knowledge of Cisco Security, including Cisco ISE, Stealthwatch, ASA, Firepower, AMP, etc.
    • You have the ability to travel to Cisco internal, partner and customer events, roadshows, etc. to train and raise awareness to drive POV Services adoption and sales. Up to 40% travel.
    • You bring VMware/ESXi experience building servers, installing VMware, deploying virtual appliances, etc.
    • You have Linux experience or certifications including CompTIA Linux+, Red Hat, etc.
    • You're experienced using Tool Command Language (Tcl), Perl, Python, etc., as well as Cisco and 3rd-party traffic, event and device generation applications/tools/hardware (IXIA, Sapro, Pagent, etc.).
    • You've used Cisco and 3rd-party management/monitoring/troubleshooting solutions. Cisco: DNA Center, Cisco Prime, Meraki, Viptela, CMX.
    • 3rd party solutions: Solarwinds, Zenoss, Splunk, LiveAction or other to monitor and/or manage an enterprise network.
    • Experience using Wireshark and PCAP files.

Why Cisco

At Cisco, each person brings their unique talents to work as a team and make a difference.

Yes, our technology changes the way the world works, lives, plays and learns, but our edge comes from our people.

    • We connect everything: people, process, data and things, and we use those connections to change our world for the better.
    • We innovate everywhere - From launching a new era of networking that adapts, learns and protects, to building Cisco Services that accelerate businesses and business results. Our technology powers entertainment, retail, healthcare, education and more, from Smart Cities to your everyday devices.
    • We benefit everyone - We do all of this while striving for a culture that empowers every person to be the difference, at work and in our communities.
  • Irvine, CA
  • Salary: $96k - 135k

The Senior Data Engineer focuses on designing, implementing and supporting new and existing data solutions: data processing, and data sets to support various advanced analytical needs. You will be designing, building and supporting data pipelines consuming data from multiple different source systems and transforming it into valuable and insightful information. You will have the opportunity to contribute to end-to-end platform design for our cloud architecture and work multi-functionally with operations, data science and the business segments to build batch and real-time data solutions. The role will be part of a team supporting our Corporate, Sales, Marketing, and Consumer business lines.


  • 7+ years of relevant experience in one of the following areas: Data engineering, business intelligence or business analytics

  • 5-7 years of experience supporting a large data platform and data pipelining

  • 5+ years of experience in scripting languages like Python etc.

  • 5+ years of experience with AWS services including S3, Redshift, EMR, and RDS

  • 5+ years of experience with Big Data Technologies (Hadoop, Hive, HBase, Pig, Spark, etc.)

  • Expertise in database design and architectural principles and methodologies

  • Experienced in Physical data modeling

  • Experienced in Logical data modeling

  • Technical expertise should include data models, database design and data mining


  • Design, implement, and support a platform providing access to large datasets

  • Create unified enterprise data models for analytics and reporting

  • Design and build robust and scalable data integration (ETL) pipelines using SQL, Python, and Spark.

  • As part of an Agile development team contribute to architecture, tools and development process improvements

  • Work in close collaboration with product management, peer system and software engineering teams to clarify requirements and translate them into robust, scalable, operable solutions that work well within the overall data architecture

  • Coordinate data models, data dictionaries, and other database documentation across multiple applications

  • Lead design reviews of data deliverables such as models, data flows, and data quality assessments

  • Promote data modeling standardization; define and drive adoption of the standards

  • Work with Data Management to establish governance processes around metadata to ensure an integrated definition of data for enterprise information, and to ensure the accuracy, validity, and reusability of metadata
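Responsibilities like "design and build data integration (ETL) pipelines" follow a common extract-transform-load shape. A minimal, stdlib-only Python sketch of that shape (illustrative only; the table and field names are made up, and a production pipeline at this scale would use Spark and a warehouse such as Redshift):

```python
import csv
import io
import sqlite3

# Extract: raw CSV, standing in for an export from a source system (hypothetical data)
RAW = """region,amount
west,100
east,250
west,50
"""

def extract(raw_csv):
    """Parse the raw CSV into a list of dict rows."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def transform(rows):
    """Cast types and aggregate amount per region."""
    totals = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0) + int(row["amount"])
    return sorted(totals.items())

def load(pairs, conn):
    """Write the aggregated rows into the target table."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales_by_region (region TEXT, total INTEGER)")
    conn.executemany("INSERT INTO sales_by_region VALUES (?, ?)", pairs)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
result = conn.execute("SELECT region, total FROM sales_by_region ORDER BY region").fetchall()
# result == [('east', 250), ('west', 150)]
```

Keeping extract, transform, and load as separate functions is what makes pipelines like this testable and composable, whatever the engine underneath.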

GrubHub Seamless
  • New York, NY

Got a taste for something new?

We’re Grubhub, the nation’s leading online and mobile food ordering company. Since 2004 we’ve been connecting hungry diners to the local restaurants they love. We’re moving eating forward with no signs of slowing down.

With more than 90,000 restaurants and over 15.6 million diners across 1,700 U.S. cities and London, we’re delivering like never before. Incredible tech is our bread and butter, but amazing people are our secret ingredient. Rigorously analytical and customer-obsessed, our employees develop the fresh ideas and brilliant programs that keep our brands going and growing.

Long story short, keeping our people happy, challenged and well-fed is priority one. Interested? Let’s talk. We’re eager to show you what we bring to the table.

About the Opportunity: 

Senior Site Reliability Engineers are embedded in Big Data-specific dev teams to focus on the operational aspects of our services, and our SREs run their respective products and services from conception to continuous operation.  We're looking for engineers who want to be a part of developing infrastructure software, maintaining and scaling it. If you enjoy focusing on reliability, performance, capacity planning, and the automation of everything, you'd probably like this position.

Some Challenges You’ll Tackle


  • Python – our primary infrastructure language

  • Cassandra

  • Docker (in production!)

  • Splunk, Spark, Hadoop, and PrestoDB

  • AWS

  • Python and Fabric for automation and our CD pipeline

  • Jenkins for builds and task execution

  • Linux (CentOS and Ubuntu)

  • DataDog for metrics and alerting

  • Puppet

You Should Have

  • Experience in AWS services like Kinesis, IAM, EMR, Redshift, and S3

  • Experience managing Linux systems

  • Experience with configuration management tools like Puppet, Chef, or Ansible

  • Continuous integration, testing, and deployment using Git, Jenkins, Jenkins DSL

  • Exceptional communication and troubleshooting skills.


  • Python or Java / Scala development experience

  • Bonus points for deploying/operating large-ish Hadoop clusters in AWS/GCP and use of EMR, DC/OS, Dataproc.

  • Experience with streaming data platforms (Spark Streaming, Kafka)

  • Experience developing solutions leveraging Docker

Avaloq Evolution AG
  • Zürich, Switzerland

The position

Are you passionate about data? Are you interested in shaping the next generation of data science driven products for the financial industry? Do you enjoy working in an agile environment involving multiple stakeholders?

A challenging role as Senior Data Scientist in a demanding, dynamic and international software company using the latest innovations in predictive analytics and visualization techniques. You will be driving the creation of statistical and machine learning models from prototyping until the final deployment.

We want you to help us to strengthen and further develop the transformation of Avaloq to a data driven product company. Make analytics scalable and accelerate the process of data science innovation.

Your profile

  • PhD or Master's degree in Computer Science, Math, Physics, Engineering, Statistics or another technical field

  • 5+ years of experience in Statistical Modelling, Anomaly Detection, Machine Learning algorithms both Supervised and Unsupervised

  • Proven experience in applying data science methods to business problems

  • Ability to explain complex analytical concepts to people from other fields

  • Proficiency in at least one of the following: Python, R, Java/Scala, SQL and/or SAS

  • Knowledgeable with BigData technologies and architectures (e.g. Hadoop, Spark, stream processing)

  • Expertise in text mining and natural language processing is a strong plus

  • Familiarity with network analysis and/or graph databases is a plus

  • High integrity, responsibility and confidentiality a requirement for dealing with sensitive data

  • Strong presentation and communication skills

  • Experience in leading teams and mentoring others

  • Good planning and organisational skills

  • Collaborative mindset to sharing ideas and finding solutions

  • Experience in the financial industry is a strong plus

  • Fluent in English; German, Italian and French a plus

Professional requirements

  • Use machine learning tools and statistical techniques to produce solutions for customer demands and complex problems

  • Participate in pre-sales and pre-project analysis to develop prototypes and proof-of-concepts

  • Analyse customer behaviour and needs enabling customer-centric product development

  • Liaise and coordinate with internal infrastructure and architecture team regarding setting up and running a BigData & Analytics platform

  • Strengthen data science within Avaloq and establish a data science centre of expertise

  • Look for opportunities to use insights/datasets/code/models across other functions in Avaloq

Main place of work

Avaloq Evolution AG
Alina Tauscher, Talent Acquisition Professional
Allmendstrasse 140 - 8027 Zürich - Switzerland

Please only apply online.

Note to Agencies: All unsolicited résumés will be considered direct applicants and no referral fee will be acknowledged.
Avaloq Evolution AG
  • Zürich, Switzerland

The position

Are you passionate about data architecture? Are you interested in shaping the next generation of data science driven products for the financial industry? Do you enjoy working in an agile environment involving multiple stakeholders?

Responsible for selecting appropriate technologies from open source, commercial on-premises and cloud-based offerings. Integrating a new generation of tools within the existing environment to ensure access to accurate and current data. Consider not only the functional requirements, but also the non-functional attributes of platform quality such as security, usability, and stability.

We want you to help us to strengthen and further develop the transformation of Avaloq to a data driven product company. Make analytics scalable and accelerate the process of data science innovation.

Your profile

  • PhD, Master's or Bachelor's degree in Computer Science, Math, Physics, Engineering, Statistics or another technical field

  • Knowledgeable with BigData technologies and architectures (e.g. Hadoop, Spark, data lakes, stream processing)

  • Practical experience with Container Platforms (OpenShift) and/or containerization software (Kubernetes, Docker)

  • Hands-on experience developing data extraction and transformation pipelines (ETL process)

  • Expert knowledge in RDBMS, NoSQL and Data Warehousing

  • Familiar with information retrieval software such as Elasticsearch/Lucene/Solr

  • Firm understanding of major programming/scripting languages like Java/Scala, PHP, Python and/or R, as well as Linux

  • High integrity, responsibility and confidentiality a requirement for dealing with sensitive data

  • Strong presentation and communication skills

  • Good planning and organisational skills

  • Collaborative mindset to sharing ideas and finding solutions

  • Fluent in English; German, Italian and French a plus

 Professional requirements

  • Be a thought leader for best practices on how to develop and deploy data science products & services

  • Provide an infrastructure to make data driven insights scalable and agile

  • Liaise and coordinate with stakeholders regarding setting up and running a BigData and analytics platform

  • Lead the evaluation of business and technical requirements

  • Support data-driven activities and a data-driven mindset where needed

Main place of work

Avaloq Evolution AG
Anna Drozdowska, Talent Acquisition Professional
Allmendstrasse 140 - 8027 Zürich - Switzerland

Please only apply online.

Note to Agencies: All unsolicited résumés will be considered direct applicants and no referral fee will be acknowledged.
ITCO Solutions, Inc.
  • Austin, TX

The Sr. Engineer will be building pipelines using Spark and Scala.

Must Haves:
  • Expertise in Big Data processing and ETL pipelines
  • Designing large-scale ETL pipelines, batch and real-time
  • Expertise in Spark Scala coding and the DataFrame API (rather than the SQL-based APIs)
  • Expertise in core DataFrame APIs
  • Expertise in unit testing Spark DataFrame API based code
  • Strong scripting knowledge in Python and shell scripting
  • Experience and expertise in performance tuning of large-scale data pipelines
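The unit-testing requirement above reflects a common pattern: keep each transformation a pure function over its input data so it can be exercised on tiny in-memory fixtures. The posting's actual stack is Spark/Scala; this plain-Python sketch with hypothetical names only illustrates the pattern, with dicts standing in for DataFrame rows:

```python
# Pure transformation: keep the latest record per key (a typical dedup step).
# In Spark this would be a DataFrame -> DataFrame function; plain dicts stand
# in here so the logic is testable without a cluster.
def latest_per_key(rows):
    """rows: iterable of dicts with 'key' and 'ts'; returns the latest row per key."""
    latest = {}
    for row in rows:
        cur = latest.get(row["key"])
        if cur is None or row["ts"] > cur["ts"]:
            latest[row["key"]] = row
    return sorted(latest.values(), key=lambda r: r["key"])

# Unit test on a tiny fixture, no external systems needed
rows = [
    {"key": "a", "ts": 1, "v": 10},
    {"key": "a", "ts": 3, "v": 30},
    {"key": "b", "ts": 2, "v": 20},
]
result = latest_per_key(rows)
assert result == [{"key": "a", "ts": 3, "v": 30}, {"key": "b", "ts": 2, "v": 20}]
```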

  • New York, NY

We're already working with some of the largest real estate lenders and investors across the globe, and we believe that our AVM will truly disrupt the commercial real estate industry.  Using your machine learning and analytical skills, you will contribute to the development of GeoPhy's core information products. This includes working on the development of our flagship product, the Automated Valuation Model (AVM) that we've developed for the commercial real estate market.

What you'll be responsible for

  • Developing and maintaining predictive valuation algorithms for the commercial real estate market, based on stochastic modeling

  • Identifying and analyzing new data sources to improve model accuracy, closely working with our data sourcing teams

  • Conducting statistical analysis to identify patterns and insights, and process and feature engineer data as needed to support model development and business products

  • Bringing models to production, in collaboration with the development and data engineering teams 

  • Supporting data sourcing strategy and the validation of related infrastructure and technology

  • Contributing to the development of methods in data science, including statistical analysis and model development related to real estate, economics, the built environment, or financial markets

What we're looking for

  • Creative and intellectually curious with hands-on experience as a data scientist

  • Flexible, resourceful, and a reliable team player

  • Rigorous analyst, critical thinker, and problem solver with experience in hypothesis testing and experimental design

  • Excellent at communicating, including technical documentation and presenting work across a variety of audiences

  • Experienced working with disparate data sources and the engineering and statistical challenges that presents, particularly with time series, socio-economic-demographic (SED) data, and/or geo-spatial data

  • Strong at data exploration and visualization

  • Experienced implementing predictive models across a full suite of statistical learning algorithms (regression/classification, unsupervised/semi-supervised/supervised)

  • Proficient in Python or R as well as critical scientific and numeric programming packages and tools

  • Intermediate knowledge of SQL

  • Full working proficiency in English

  • An MSc/PhD degree in Computer Science, Mathematics, Statistics or a related subject, or commensurate technical experience

Bonus points for

  • International mind set

  • Experience in an Agile organization

  • Knowledge or experience with global real estate or financial markets

  • Experience with complex data and computing architectures, including cloud services and distributed computing

  • Direct experience implementing models in production or delivering a data product to market

What’s in it for you?

  • You will have the opportunity to accelerate our rapidly growing organisation.

  • We're a lean team, so your impact will be felt immediately.

  • Personal learning budget.

  • Agile working environment with flexible working hours and location.

  • No annual leave allowance; take time off whenever you need.

  • We embrace diversity and foster inclusion. This means we have a zero-tolerance policy towards discrimination.

  • GeoPhy is a family and pet friendly company.

  • Get involved in board games, books, and lego.

  • Surry Hills, Australia
  • Salary: A$120k - 140k

The Role

  • Be an integral member of the team responsible for designing, implementing, and maintaining a distributed, big-data-capable system with high-quality components (Kafka, EMR + Spark, Akka, etc).

  • Embrace the challenge of dealing with big data on a daily basis (Kafka, RDS, Redshift, S3, Athena, Hadoop/HBase), perform data ETL, and build tools for proper data ingestion from multiple data sources.

  • Collaborate closely with data infrastructure engineers and data analysts across different teams to find bottlenecks and solve problems

  • Design, implement and maintain the heterogeneous data processing platform to automate the execution and management of data-related jobs and pipelines

  • Implement automated data workflows in collaboration with data analysts; continue to maintain and improve the system in line with growth

  • Collaborate with Software Engineers on application events, ensuring the right data can be extracted

  • Contribute to resources management for computation and capacity planning

  • Diving deep into code and constantly innovating


  • Experience with AWS data technologies (EC2, EMR, S3, Redshift, ECS, Data Pipeline, etc) and infrastructure.

  • Working knowledge in big data frameworks such as Apache Spark, Kafka, Zookeeper, Hadoop, Flink, Storm, etc

  • Rich experience with Linux and database systems

  • Experience with relational and NoSQL database, query optimization, and data modelling

  • Familiar with one or more of the following: Scala/Java, SQL, Python, Shell, Golang, R, etc

  • Experience with container technologies (Docker, k8s), Agile development, DevOps and CI tools.

  • Excellent problem-solving skills

  • Excellent verbal and written communication skills 

  • Rotterdam, Netherlands
As an Advanced Data Analyst / Data Scientist you use the data of millions of visitors to help Coolblue act smarter.

Pros and cons

  • You're going to be working as a true Data Scientist. One who understands why you get the results that you do and applies this information to other experiments.
  • You're able to use the right tools for every job.
  • Your job starts with a problem and ends with you monitoring your own solution.
  • You have to crawl underneath the foosball table when you lose a game.

Description Data Scientist

Your challenge in this sprint is improving the weekly sales forecasting models for the Christmas period. Your cross-validation strategy is ready, but before you can begin, you have to query the data from our systems and process them in a way that allows you to view the situation with clarity.

First, you have a meeting with Matthias, who's worked on this problem before. During your meeting, you conclude that Christmas has a non-linear effect on sales. That's why you decide to experiment with a multiplicative XGBoost in addition to your Regularised-Regression model. You make a grid with various features and parameters for both models and analyze the effects of both approaches. You notice your Regression is overfitting, which means XGBoost isn't performing and the forecast isn't high enough, so you increase the regularization and assign the Christmas features to XGBoost alone.

Nice! You improved the precision of the Christmas forecast by an average of 2%. This will only yield results once the algorithm has been implemented, so you start thinking about how you want to implement this.
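The workflow sketched in this sprint story, fitting a regularized model and validating it with a time-aware split so you never train on data after the test block, can be illustrated in a few lines. This is a minimal sketch with synthetic data; NumPy's closed-form ridge regression stands in for the production XGBoost/regression models, and all names are hypothetical:

```python
import numpy as np

def ridge_fit(X, y, alpha):
    """Closed-form ridge regression: w = (X'X + alpha*I)^-1 X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

def rolling_origin_cv(X, y, alphas, n_splits=4):
    """Pick the regularization strength that minimizes forecast error on
    successive 'future' blocks (a time-aware alternative to random K-fold)."""
    n = len(y)
    fold = n // (n_splits + 1)
    errors = {a: [] for a in alphas}
    for a in alphas:
        for k in range(1, n_splits + 1):
            train, test = slice(0, k * fold), slice(k * fold, (k + 1) * fold)
            w = ridge_fit(X[train], y[train], a)
            errors[a].append(np.mean((X[test] @ w - y[test]) ** 2))
    return min(alphas, key=lambda a: np.mean(errors[a]))

# Synthetic weekly sales with a seasonal bump (stand-in for the Christmas effect)
rng = np.random.default_rng(0)
t = np.arange(104, dtype=float)
X = np.column_stack([np.ones_like(t), t, np.sin(2 * np.pi * t / 52)])
y = X @ np.array([50.0, 0.3, 8.0]) + rng.normal(0, 1.0, size=t.shape)

best_alpha = rolling_origin_cv(X, y, alphas=[0.01, 0.1, 1.0, 10.0])
w = ridge_fit(X, y, best_alpha)
```

Increasing `alpha` here plays the same role as "increasing the regularization" in the story: it shrinks the coefficients to curb overfitting at the cost of some in-sample fit.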

Your specifications

  • You have at least 4 years of experience in a similar function.
  • You have a university degree, MSc, or PhD in Mathematics, Computer Science, or Statistics.
  • You have experience with Machine Learning techniques, such as Gradient Boosting, Random Forest, and Neural Networks, and you have proven experience with successfully applying these (or similar) techniques in a business environment.
  • You have some experience with Data mining, SQL, BigQuery, NoSQL, R, and monitoring.
  • You're highly knowledgeable about Python.
  • You have experience with Big Data technologies, such as Spark and Hadoop.

Included by default.

  • Money.
  • Travel allowance and a retirement plan.
  • 25 leave days. As long as you promise to come back.
  • A discount on all our products.
  • A picture-perfect office at a great location. You could crawl to work from Rotterdam Central Station. Though we recommend just walking for 2 minutes.
  • A horizontal organisation in the broadest sense. You could just go and have a beer with the boss.


'I believe I'm working in a great team of enthusiastic and smart people, with a good mix of juniors and seniors. The projects that we work on are very interesting and diverse, think of marketing, pricing and recommender systems. For each project we try to use the latest research and machine learning techniques in order to create the best solutions. I like that we are involved in the projects start to end, from researching the problem to experimenting, to putting it in production, and to creating the monitoring dashboards and delivering the outputs on a daily basis to our stakeholders. The work environment is open, relaxed and especially fun'
- Cheryl Zandvliet, Data Scientist
  • Houston, TX
Our Company
ConocoPhillips is the world's largest independent E&P company based on production and proved reserves. Headquartered in Houston, Texas, ConocoPhillips had operations and activities in 17 countries, $71 billion of total assets, and approximately 11,100 employees as of Sept. 30, 2018. Production excluding Libya averaged 1,221 MBOED for the nine months ended Sept. 30, 2018, and proved reserves were 5.0 billion BOE as of Dec. 31, 2017.
Employees across the globe focus on fulfilling our core SPIRIT Values of safety, people, integrity, responsibility, innovation and teamwork. And we apply the characteristics that define leadership excellence in how we engage each other, collaborate with our teams, and drive the business.
The purpose of this role is to enable and support Citizen Data Scientists (CDS) to develop analytical workflows and to manage the adoption and implementation of the latest innovations within the ConocoPhillips preferred analytics tools for Citizen Data Science.
This position will enable analytics tools and solutions for customers, including: the facilitation of the solution roadmap, the adoption of new analytics functionality, the integration between applications based on value-driven workflows, and the support and training of users on the new capabilities.
Responsibilities May Include
  • Work with customers to enable the latest data analytics capabilities
  • Understand and help implement the latest innovations available within ConocoPhillips preferred analytics platform including Spotfire, Statistica, ArcGIS Big Data (Spatial Analytics), Teradata and Python
  • Help users with the implementation of analytics workflows through integration of the analytics applications
  • Manage analytics solutions roadmap and implementation timeline enabling geoscience customers to take advantage of the latest features or new functionality
  • Communicate with vendors and COP community on analytics technology functionality upgrades, prioritized enhancements and adoption
  • Test and verify that existing analytics workflows are supported within the latest version of the technology
  • Guide users on how to enhance their current workflows with the latest analytics technology
  • Facilitate problem solving with analytics solutions
  • Work with other AICOE teams to validate and implement new technology or version upgrades into production
Specific Responsibilities May Include
    • Provide architectural guidance for building integrated analytical solutions
    • Understand analytics product roadmaps, product development and the implementation of new features
    • Promote new analytics product features within the customer base and demonstrate how they enable analytics workflows
    • Manage the COP analytics product adoption roadmap
    • Capture the product enhancement list and coordinate prioritization with the vendor
    • Test new capabilities and map them to COP business workflows
    • Coordinate with the AICOE team the timely upgrades of the new features
    • Provide support to CDS for:
    • analytics modelling best practices
    • know-how for implementation of analytics workflows based on new technology
  • Liaise with the AICOE Infrastructure team for timely technology upgrades
  • Work on day-to-day end user support activities for Citizen Data Science tools: Advanced Spotfire, Statistica, GIS Big Data
  • Provide technical consulting and guidance to Citizen Data Scientists for the design and development of complex analytics workflows
  • Communicate the analytics technology roadmap to end users
  • Communicate and demonstrate the value of new features to the COP business
  • Train and mentor Citizen Data Scientists on analytics solutions
  • Legally authorized to work in the United States
  • Bachelor's degree in Information Technology, Computer Sciences, Geoscience, Engineering, Statistics or related field
  • 5+ years of experience in oil & gas and geoscience data and workflows
  • 3+ years of experience with Tibco Spotfire
  • 3+ years of experience with Teradata or other SQL databases
  • 1+ years of experience with ArcGIS spatial analytics tools
  • Advanced knowledge and experience of integration platforms
  • Master's degree in Analytics or a related field
  • 1+ years of experience with Tibco Statistica or equivalent statistics-based analytics package
  • Prior experience in implementing and supporting visual, prescriptive and predictive analytics
  • In-depth understanding of the analytics applications and integration points
  • Experience implementing data science workflows in Oil & Gas
  • Takes ownership of actions and follows through on commitments by courageously dealing with important problems, holding others accountable, and standing up for what is right
  • Delivers results through realistic planning to accomplish goals
  • Generates effective solutions based on available information and makes timely decisions that are safe and ethical
To be considered for this position you must complete the entire application process, which includes answering all prescreening questions and providing your eSignature on or before the requisition closing date of February 27, 2019.
Candidates for this U.S. position must be a U.S. citizen or national, or an alien admitted as permanent resident, refugee, asylee or temporary resident under 8 U.S.C. 1160(a) or 1255(a) (1). Individuals with temporary visas such as A, B, C, D, E, F, G, H, I, J, L, M, NATO, O, P, Q, R or TN or who need sponsorship for work authorization in the United States now or in the future, are not eligible for hire.
ConocoPhillips is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, national origin, age, disability, veteran status, gender identity or expression, genetic information or any other legally protected status.
Job Function
Information Management-Information Technology
Job Level
Individual Contributor/Staff Level
Primary Location
Line of Business
Corporate Staffs
Job Posting
Feb 13, 2019, 4:51:37 PM
  • Houston, TX
The Sr. Analytics Analyst will be part of the Production, Drilling, and Projects Analytics Services Team within the Analytics Innovation Center of Excellence that enables data analytics across the ConocoPhillips global enterprise. This role works with business units and global functions to help strategically design, implement, and support data analytics solutions. This is a full-time position that provides tremendous career growth potential within ConocoPhillips.
Responsibilities May Include
  • Complete end-to-end delivery of data analytics solutions to the end user
  • Interact closely with both business and developers while gathering requirements, designing, testing, implementing and supporting solutions
  • Gather business and technical specifications to support analytic, report and database development
  • Collect, analyze and translate user requirements into effective solutions
  • Build report and analytic prototypes based on initial business requirements
  • Provide status on the issues and progress of key business projects
  • Provide regular reporting on the performance of data analytics solutions
  • Deliver regular updates and maintenance on data analytics solutions
  • Champion data analytics solutions and technologies at ConocoPhillips
  • Integrate data for data models used by the customers
  • Deliver data visualizations used for data-driven decision making
  • Provide strategic technology direction while supporting the needs of the business
  • Legally authorized to work in the United States
  • 5+ years of related IT experience
  • 5+ years of Structured Query Language (SQL) experience (ANSI SQL, T-SQL, PL/SQL)
  • 3+ years of hands-on experience delivering solutions with analytics tools (e.g., Spotfire, SSRS, Power BI, Tableau, Business Objects)
  • Bachelor's Degree in Information Technology or Computer Science
  • 5+ years of Oil and Gas Industry experience
  • 5+ years hands-on experience delivering solutions with Informatica PowerCenter
  • 5+ years architecting data warehouses and/or data lakes
  • 5+ years with Extract Transform and Load (ETL) tools and best practices
  • 3+ years hands-on experience delivering solutions with Teradata
  • 1+ years developing analytics models with R or Python
  • 1+ years developing visualizations using R or Python
  • Experience with Oracle (11g, 12c) and SQL Server (2008 R2, 2010, 2016) and Teradata 15.x
  • Experience with Hadoop technologies (Hortonworks, Cloudera, SQOOP, Flume, etc.)
  • Experience with AWS technologies (S3, SageMaker, Athena, EMR, Redshift, Glue, etc.)
  • Thorough understanding of BI/DW concepts, proficient in SQL, and data modeling
  • Familiarity with ETL tools (Informatica, etc.) and ETL processes
  • Solutions-oriented individual; learns quickly, understands complex problems, and applies useful solutions
  • Ability to work in a fast-paced environment independently with the customer
  • Ability to work as a team player
  • Ability to work with business and technology users to define and gather reporting and analytics requirements
  • Strong analytical, troubleshooting, and problem-solving skills; experience in analyzing and understanding business/technology system architectures, databases, and client applications to recognize, isolate, and resolve problems
  • Demonstrates the desire and ability to learn and utilize new technologies in data analytics solutions
  • Strong communication and presentation skills
  • Takes ownership of actions and follows through on commitments by courageously dealing with important problems, holding others accountable, and standing up for what is right
  • Delivers results through realistic planning to accomplish goals
  • Generates effective solutions based on available information and makes timely decisions that are safe and ethical
To be considered for this position you must complete the entire application process, which includes answering all prescreening questions and providing your eSignature on or before the requisition closing date of February 20, 2019.
Candidates for this U.S. position must be a U.S. citizen or national, or an alien admitted as permanent resident, refugee, asylee or temporary resident under 8 U.S.C. 1160(a) or 1255(a) (1). Individuals with temporary visas such as A, B, C, D, E, F, G, H, I, J, L, M, NATO, O, P, Q, R or TN or who need sponsorship for work authorization in the United States now or in the future, are not eligible for hire.
ConocoPhillips is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, national origin, age, disability, veteran status, gender identity or expression, genetic information or any other legally protected status.
Job Function
Information Management-Information Technology
Job Level
Individual Contributor/Staff Level
Primary Location
Line of Business
Corporate Staffs
Job Posting
Feb 13, 2019, 4:56:49 PM