OnlyDataJobs.com

Freeport-McMoRan
  • Phoenix, AZ

Provides management and leadership to the Big Data project teams. Directs initiatives in the Freeport-McMoRan Big Data program. Provides analytical direction, expertise and support for the Big Data program; this includes project leadership for initiatives, coordination with business subject matter experts and travel to mine sites. This will be a global role that will coordinate with site and corporate stakeholders to ensure global alignment on service and project delivery. The role will also work with business operations management to ensure the program is focusing on areas most beneficial to the company.


  • Work closely with business, engineering and technology teams to develop solutions to data-intensive business problems
  • Supervise internal and external science teams
  • Perform quality control of deliverables
  • Prepare reports and presentations, and communicate with Executives
  • Provide thought leadership in algorithmic and process innovations, and creativity in solving unconventional problems
  • Use statistical and programming tools such as R and Python to analyze data and develop machine-learning models
  • Perform other duties as required


Minimum Qualifications


  • Bachelor's degree in an analytical field (statistics, mathematics, etc.) and eight (8) years of relevant work experience, OR
  • Master's degree in an analytical field (statistics, mathematics, etc.) and six (6) years of relevant work experience
  • Proven track record of collaborating with business partners to translate business problems and needs into data-based analytical solutions
  • Proficient in predictive modeling:
    • Linear and logistic regression
    • Tree-based techniques (CART, Random Forest, Gradient Boosting)
    • Time-series analysis
    • Anomaly detection
    • Survival analysis
  • Strong experience with SQL/Hive environments
  • Skilled with R and/or Python analysis environments
  • Experience with Big Data tools for machine learning (R, Hive, Python)
  • Good communication skills
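The qualifications above list several predictive-modeling techniques. As a purely illustrative sketch (not part of the posting), here is a from-scratch logistic regression in Python, one of the techniques named; the toy data, function names and parameters are all hypothetical:

```python
# Minimal logistic-regression sketch in plain Python; illustrative only.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=5000):
    """Fit weights by batch gradient descent on the log-loss."""
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        grad_w = [0.0] * n_features
        grad_b = 0.0
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log-loss w.r.t. the linear score
            for j in range(n_features):
                grad_w[j] += err * xi[j]
            grad_b += err
        w = [wj - lr * gj / len(X) for wj, gj in zip(w, grad_w)]
        b -= lr * grad_b / len(X)
    return w, b

def predict(w, b, xi):
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5 else 0

# Toy, linearly separable data: label 1 when the feature sum is large.
X = [[0.1, 0.2], [0.3, 0.1], [0.2, 0.3], [0.9, 0.8], [0.8, 0.9], [1.0, 0.7]]
y = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(X, y)
preds = [predict(w, b, xi) for xi in X]
print(preds)  # expected to recover y on this separable toy set
```

In practice a role like this would reach for a library implementation (e.g. scikit-learn or R's glm) rather than hand-rolled gradient descent; the sketch only shows the underlying mechanics.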


Preferred Qualifications


  • Doctoral degree in an analytical field
  • Willing and able to travel 20-30% or more


Criteria/Conditions


  • Ability to understand and apply verbal and written work and safety-related instructions and procedures given in English
  • Ability to communicate in English with respect to job assignments, job procedures, and applicable safety standards
  • Must be able to work in a potentially stressful environment
  • Position is in a busy, non-smoking office located in downtown Phoenix, AZ
  • Location requires mobility in an office environment; each floor is accessible by elevator
  • Occasionally work will be performed in a mine, outdoor or manufacturing plant setting
  • Must be able to frequently sit, stand and walk
  • Must be able to frequently lift and carry up to ten (10) pounds
  • Personal protective equipment is required when performing work in a mine, outdoor, manufacturing or plant environment, including hard hat, hearing protection, safety glasses, safety footwear, and as needed, respirator, rubber steel-toe boots, protective clothing, gloves and any other protective equipment as required
  • Freeport-McMoRan promotes a drug/alcohol-free work environment through the use of mandatory pre-employment drug testing and on-going random drug testing as allowed by applicable State laws


Freeport-McMoRan has reviewed the jobs at its various office and operating sites and determined that many of these jobs require employees to perform essential job functions that pose a direct threat to the safety or health of the employees performing these tasks or others. Accordingly, the Company has designated the following positions as safety-sensitive:


  • Site-based positions, or positions which require unescorted access to site-based operational areas, which are held by employees who are required to receive MSHA, OSHA, DOT, HAZWOPER and/or Hazard Recognition Training; or
  • Positions which are held by employees who operate equipment, machinery or motor vehicles in furtherance of performing the essential functions of their job duties, including operating motor vehicles while on Company business or travel (for this purpose motor vehicles includes Company owned or leased motor vehicles and personal motor vehicles used by employees in furtherance of Company business or while on Company travel); or
  • Positions which Freeport-McMoRan has designated as safety sensitive positions in the applicable job or position description and which upon further review continue to be designated as safety-sensitive based on an individualized assessment of the actual duties performed by a specifically identified employee.


Equal Opportunity Employer/Protected Veteran/Disability


Requisition ID
1900606 

Freeport-McMoRan
  • Phoenix, AZ

Supports the activities for all Freeport-McMoRan Big Data programs. Provides analytical support and expertise for the Big Data program; this includes coordination with business subject matter experts and travel to mine sites. The role will provide analyses and statistical models as part of Big Data projects, and may be the project lead on analytics initiatives. The role will also provide visualizations and descriptive results of the analysis. This will be a global role that will coordinate with site and corporate stakeholders to ensure alignment on project delivery.


    • Work closely with business, engineering and technology teams to analyze data-intensive business problems
    • Research and develop appropriate statistical methodology to translate these business problems into analytics solutions
    • Perform quality control of deliverables
    • Develop visualizations of results and prepare deliverable reports and presentations, and communicate with business partners
    • Provide thought leadership in algorithmic and process innovations, and creativity in solving unconventional problems
    • Develop, implement and maintain analytical solutions in the Big Data environment
    • Work with onshore and offshore resources to implement and maintain analytical solutions
    • Perform variable selection and other standard modeling tasks
    • Produce model performance metrics
    • Use statistical and programming tools such as R and Python to analyze data and develop machine-learning models
    • Perform other duties as requested
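The duties above include producing model performance metrics. A minimal, hypothetical illustration in Python of what that can look like for a binary classifier (the data and function name are invented for this sketch):

```python
# Compute a confusion matrix and the usual derived scores in plain Python.
def classification_metrics(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = (tp + tn) / len(y_true)
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Toy labels and predictions, purely for illustration.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
m = classification_metrics(y_true, y_pred)
print(m)  # accuracy, precision, recall and F1 are all 0.75 here
```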


Minimum Qualifications


  • Bachelor's degree in an analytical field (statistics, mathematics, etc.) and five (5) years of relevant work experience, OR 
  • Master's degree in an analytical field (statistics, mathematics, etc.) and three (3) years of relevant work experience

  • Proven track record of collaborating with business partners to translate operational problems and needs into data-based analytical solutions

  • Proficient in predictive modeling:
    • Linear and logistic regression
    • Tree-based techniques (CART, Random Forest, Gradient Boosting)
    • Time-series analysis
    • Anomaly detection
    • Survival analysis

  • Strong experience with SQL/Hive environments

  • Skilled with R and/or Python analysis environments

  • Experience with Big Data tools for machine learning (R, Hive, Python)

  • Good communication skills


Preferred Qualifications


  • Master's degree in an analytical field
  • Willing and able to travel 20-30% or more


Criteria/Conditions


  • Ability to understand and apply verbal and written work and safety-related instructions and procedures given in English
  • Ability to communicate in English with respect to job assignments, job procedures, and applicable safety standards

  • Must be able to work in a potentially stressful environment

  • Position is in a busy, non-smoking office located in Phoenix, AZ

  • Location requires mobility in an office environment; each floor is accessible by elevator and internal staircase

  • Occasionally work may be performed in a mine, outdoor or manufacturing plant setting

  • Must be able to frequently sit, stand and walk

  • Must be able to frequently lift and carry up to ten (10) pounds

  • Personal protective equipment is required when performing work in a mine, outdoor, manufacturing or plant environment, including hard hat, hearing protection, safety glasses, safety footwear, and as needed, respirator, rubber steel-toe boots, protective clothing, gloves and any other protective equipment as required

  • Freeport-McMoRan promotes a drug/alcohol-free work environment through the use of mandatory pre-employment drug testing and on-going random drug testing as allowed by applicable state laws


Freeport-McMoRan has reviewed the jobs at its various office and operating sites and determined that many of these jobs require employees to perform essential job functions that pose a direct threat to the safety or health of the employees performing these tasks or others. Accordingly, the Company has designated the following positions as safety-sensitive:


  • Site-based positions, or positions which require unescorted access to site-based operational areas, which are held by employees who are required to receive MSHA, OSHA, DOT, HAZWOPER and/or Hazard Recognition Training; or
  • Positions which are held by employees who operate equipment, machinery or motor vehicles in furtherance of performing the essential functions of their job duties, including operating motor vehicles while on Company business or travel (for this purpose motor vehicles includes Company owned or leased motor vehicles and personal motor vehicles used by employees in furtherance of Company business or while on Company travel); or
  • Positions which Freeport-McMoRan has designated as safety sensitive positions in the applicable job or position description and which upon further review continue to be designated as safety-sensitive based on an individualized assessment of the actual duties performed by a specifically identified employee.


Equal Opportunity Employer/Protected Veteran/Disability


Requisition ID
1900604 

MRE Consulting, Ltd.
  • Houston, TX

Candidates for this U.S. position must be a U.S. citizen or national, or an alien admitted as permanent resident, refugee, asylee or temporary resident under 8 U.S.C. 1160(a) or 1255(a) (1). Individuals with temporary visas such as A, B, C, D, E, F, G, H, I, J, L, M, NATO, O, P, Q, R or TN or who need sponsorship for work authorization in the United States now or in the future, are not eligible for hire.


Our client is seeking to hire an Enterprise Data Architect. The position reports to the VP IT. The Data Architect is responsible for providing a standard common business vocabulary across all applications and data elements, expressing and defining strategic data requirements, outlining high level integrated designs to meet the various business unit requirements, and aligning with the overall enterprise strategy and related business architecture.


Essential Duties & Responsibilities:
Provide insight and strategies for changing database storage and utilization requirements for the company and provide direction on potential solutions
Assist in the definition and implementation of a federated data model consisting of a mixture of multi-cloud and on-premises environments to support operations and business strategies
Assist in managing vendor cloud environments and multi-cloud database connectivity.
Analyze structural data requirements for new/existing applications and platforms
Submit reports to management that outline the changing data needs of the company and develop related solutions
Align database implementation methods to make sure they support company policies and any external regulations
Interpret data, analyze results and provide ongoing reporting and support
Implement data collection systems and other strategies that optimize efficiency and data quality
Acquire available data sources and maintain data systems
Identify, analyze, and interpret trends or patterns in data sets
Scrub data as needed, review reports, printouts, and performance indicators to identify inconsistencies
Develop database design and architecture documentation for the management and executive teams
Monitor various database systems to confirm optimal performance standards are met
Contribute to content updates within resource portals and other operational needs
Assist in presentations and interpretations of analytical findings and actively participate in discussions of results, internally and externally
Help maintain the integrity and security of the company database
Ensure transactional activities are processed in accordance with standard operating procedures. The employee will be on call 24 hours a day, 7 days per week.


Qualifications
Minimum of 10+ years of experience.
Proven work experience as a Data Architect, Data Scientist, or similar role
In-depth understanding of database structure principles
Strong knowledge of data mining and segmentation techniques
Expertise in MS SQL and other database platforms
Familiarity with data visualization tools
Experience with formal Enterprise Architecture tools (e.g. BiZZdesign)
Experience in managing cloud-based environments
Aptitude with data models, data mining, and cloud-based applications
Advanced analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy
Adept at report writing and presenting findings
Proficiency in systems support and monitoring
Experience with complex data structures in the Oil and Gas Industry a plus
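The qualifications mention data mining and segmentation techniques. As a hedged, self-contained sketch (not from the posting; all data and names are illustrative), one common segmentation method, k-means clustering, can be written in plain Python as:

```python
# Basic k-means clustering: alternate between assigning points to their
# nearest centroid and recomputing centroids as cluster means.
def kmeans(points, centroids, iters=20):
    clusters = [[] for _ in centroids]
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid's cluster.
        clusters = [[] for _ in centroids]
        for p in points:
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [
            tuple(sum(coords) / len(cl) for coords in zip(*cl)) if cl else c
            for cl, c in zip(clusters, centroids)
        ]
    return centroids, clusters

# Two well-separated toy segments; starting centroids chosen by hand.
points = [(1.0, 1.0), (1.2, 0.8), (0.9, 1.1), (8.0, 8.0), (8.2, 7.9), (7.8, 8.1)]
centroids, clusters = kmeans(points, centroids=[(0.0, 0.0), (10.0, 10.0)])
print(centroids)  # centroids settle near (1.03, 0.97) and (8.0, 8.0)
```

A production segmentation workflow would normally use a library (e.g. scikit-learn's KMeans) with proper initialization and scaling; the sketch only shows the mechanics.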


Education 
A bachelor's degree in Computer Science, Math, Statistics, or a related quantitative field is required.


Travel Requirements
The percentage of travel anticipated for this position is 10-20%, including overnight extended stays.


All qualified candidates should apply by providing a current Word resume and denoting skill set experience as it relates to this requirement.

Expedia, Inc.
  • Bellevue, WA

We are seeking a deeply experienced technical leader to lead the next generation of engineering investments, and culture for the GCO Customer Care Platform (CCP). The technical leader in this role will help design, engineer and drive implementation of critical pieces of the EG-wide architecture (platform and applications) for customer care - these areas include, but are not limited to, unified voice support, partner onboarding with configurable rules, a Virtual Agent programming model for all partners, and intelligent fulfillment. In addition, a key focus of this leader's role will be to grow and mentor junior software engineers in GCO with a focus on building out a '2020 world-class engineering excellence' vision / culture.


What you’ll do:



  • Deep Technology Leadership (design, implementation, and execution for the following):

  • Ship next-gen EG-wide architecture (platform and applications) that enable 90% of automated self-service journeys with voice as a first-class channel from day zero

  • Design and ship a VA (Virtual Agent) Programming Model that enables partners to stand up intelligent virtual agents on CCP declaratively in minutes

  • Enable brand partners to onboard their own identity providers onto CCP

  • Enable partners to configure their workflows and business rules for their Virtual Agents

  • Programming Model for Intelligent actions in the Fulfillment layer

  • Integration of Context and Query as first-class entities into the Virtual Agent

  • Cross-Group Collaboration and Influence

  • Work with company-wide initiatives across AI Labs and BeX to build out a best-of-breed Conversational Platform for EG-wide apps

  • Engage with and translate internal and external partner requirements into platform investments for effective onboarding of customers

  • Represent GCO's Technical Architecture at senior leadership meetings (eCP and EG) to influence and bring back enhancements to improve CCP



  • Help land GCO 2020 Engineering and Operational Excellence Goals

  • Mentor junior developers on platform engineering excellence dimensions (re-usable patterns, extensibility, configurability, scalability, performance, and design / implementation of core platform pieces)

  • Help develop a level of engineering muscle across GCO that becomes an asset for EG (as a provider of platform services as well as of talent)

Who you are:



  • BS or MS in Computer Science

  • 20 years of experience designing and developing complex, mission-critical, distributed software systems on a variety of platforms in high-tech industries

  • Hands on experience in designing, developing, and delivering (shipping) V1 (version one) MVP enterprise software products and solutions in a technical (engineering and architecture) capacity

  • Experience in building strong relationships with technology partners, customers, and getting closure on issues including delivering on time and to specification

  • Skills: Linux/ Windows/VMS, Scala, Java, Python, C#, C++, Object Oriented Design (OOD), Spark, Kafka, REST/Web Services, Distributed Systems, Reliable and scalable transaction processing systems (HBase, Microsoft SQL, Oracle, Rdb)

  • Nice to have: Experience in building highly scalable real-time processing platforms that host machine learning algorithms for Guided / Prescriptive Learning

  • Identifies and solves problems at the company level while influencing product lines

  • Provides technical leadership in difficult times or serious crises

  • Key strategic player to long-term business strategy and vision

  • Recognized as an industry expert, mentor and leader at the company. Provides strategic influence across groups, projects and products

  • Provides long term product strategy and vision through group level efforts

  • Drive for results: Is sought out to lead company-wide initiatives that deliver cross-cutting lift to the organization and provides leadership in a crisis and is a key player in long-term business strategy and vision

  • Technical/Functional skills: Proves credentials as industry experts by inventing and delivering transformational technology/direction and helps drive change beyond the company and across the industry

  • Has the vision to impact long-term product/technology horizon to transform the entire industry



Why join us:

Expedia Group recognizes our success is dependent on the success of our people.  We are the world's travel platform, made up of the most knowledgeable, passionate, and creative people in our business.  Our brands recognize the power of travel to break down barriers and make people's lives better – that responsibility inspires us to be the place where exceptional people want to do their best work, and to provide them the tools to do so. 


Whether you're applying to work in engineering or customer support, marketing or lodging supply, at Expedia Group we act as one team, working towards a common goal; to bring the world within reach.  We relentlessly strive for better, but not at the cost of the customer.  We act with humility and optimism, respecting ideas big and small.  We value diversity and voices of all volumes. We are a global organization but keep our feet on the ground, so we can act fast and stay simple.  Our teams also have the chance to give back on a local level and make a difference through our corporate social responsibility program, Expedia Cares.


If you have a hunger to make a difference with one of the most loved consumer brands in the world and to work in the dynamic travel industry, this is the job for you.


Our family of travel brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Egencia®, trivago®, HomeAway®, Orbitz®, Travelocity®, Wotif®, lastminute.com.au®, ebookers®, CheapTickets®, Hotwire®, Classic Vacations®, Expedia® Media Solutions, CarRentals.com™, Expedia Local Expert®, Expedia® CruiseShipCenters®, SilverRail Technologies, Inc., ALICE and Traveldoo®.



Expedia is committed to creating an inclusive work environment with a diverse workforce.   All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.  This employer participates in E-Verify. The employer will provide the Social Security Administration (SSA) and, if necessary, the Department of Homeland Security (DHS) with information from each new employee's I-9 to confirm work authorization.

Ultra Tendency
  • Berlin, Deutschland

Big Data Software Engineer


Lead your own development team and our customers to success! Ultra Tendency is looking for someone who convinces not just by writing excellent code, but also through strong presence and leadership. 


At Ultra Tendency you would:



  • Work in our office in Berlin/Magdeburg and on-site at our customer's offices

  • Make Big Data useful (build program code, test and deploy to various environments, design and optimize data processing algorithms for our customers)

  • Develop outstanding Big Data applications following the latest trends and methodologies

  • Be a role model and strong leader for your team and oversee the big picture

  • Prioritize tasks efficiently, evaluating and balancing the needs of all stakeholders


Ideally you have:



  • Strong experience in developing software using Python, Scala or a comparable language

  • Proven experience with data ingestion, analysis, integration, and design of Big Data applications using Apache open-source technologies

  • Profound knowledge of data engineering technologies, e.g. Kafka, Spark, HBase, Kubernetes

  • Strong background in developing on Linux

  • Solid computer science fundamentals (algorithms, data structures and programming skills in distributed systems)

  • Languages: fluent English required; German is a plus


We offer:



  • Fascinating tasks and unique Big Data challenges of major players from various industries (automotive, insurance, telecommunication, etc.)

  • Fair pay and bonuses

  • Work with our open-source Big Data gurus, such as our Apache HBase committer and Release Manager

  • International diverse team

  • Possibility to work with the open-source community and become a contributor

  • Work with cutting edge equipment and tools


Confidentiality guaranteed

eBay
  • Kleinmachnow, Germany

About the team:



Core Product Technology (CPT) is a global team responsible for the end-to-end eBay product experience and technology platform. In addition, we are working on the strategy and execution of our payments initiative, transforming payments management on our Marketplace platform which will significantly improve the overall customer experience.


The opportunity

At eBay, we have started a new chapter in our iconic internet history of being the largest online marketplace in the world. With more than 1 billion listings (more than 80% of them selling new items) in over 400 markets, eBay is providing a robust platform where merchants of all sizes can compete and win. Every single day millions of users come to eBay to search for items in our diverse inventory of over a billion items.



eBay is starting a new Applied Research team in Germany and we are looking for a senior technologist to join the team. We’re searching for a hands-on person who has an applied research background with strong knowledge in machine learning and natural language processing (NLP). The German team’s mission is to improve the German and other European language search experience as well as to enhance our global search platform and machine-learned ranking systems in partnership with our existing teams in San Jose, California and Shanghai, China.



This team will help customers find what they’re shopping for by developing full-stack solutions from indexing, to query serving and applied research to solve core ranking, query understanding and recall problems in our highly dynamic marketplace. The global search team works closely with the product management and quality engineering teams along with the Search Web and Native Front End and Search services, and Search Mobile. We build our systems using C++, Scala, Java, Hadoop/Spark/HBase, TensorFlow/Caffe, Kafka and other standard technologies. The team believes in agile development with autonomous and empowered teams.
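As a toy illustration of the indexing and query-serving work described above (not eBay code; the listings, names and behaviour are hypothetical), a minimal inverted index with conjunctive query lookup might look like:

```python
# Build an inverted index (token -> set of document ids) and serve
# queries by intersecting posting lists, so every token must match.
from collections import defaultdict

def build_index(listings):
    index = defaultdict(set)
    for doc_id, title in listings.items():
        for token in title.lower().split():
            index[token].add(doc_id)
    return index

def search(index, query):
    tokens = query.lower().split()
    if not tokens:
        return set()
    result = index.get(tokens[0], set()).copy()
    for t in tokens[1:]:
        result &= index.get(t, set())
    return result

# Toy marketplace listings.
listings = {
    1: "vintage camera lens",
    2: "camera tripod",
    3: "vintage watch",
}
index = build_index(listings)
print(sorted(search(index, "vintage camera")))  # [1]
```

A real search stack adds tokenization, ranking models and distributed serving on top of this core idea; the sketch only shows the recall step.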






Diversity and inclusion at eBay goes well beyond a moral necessity – it’s the foundation of our business model and absolutely critical to our ability to thrive in an increasingly competitive global landscape. To learn about eBay’s Diversity & Inclusion click here: https://www.ebayinc.com/our-company/diversity-inclusion/.
AIRBUS
  • Blagnac, France

Description of the job



Vacancies for 3 Data Scientists (m/f) have arisen within Airbus Commercial Aircraft in Toulouse. You will join the PLM Systems & Integration Tests team within IM Develop department.  



The IM Develop organization is established to ensure Product Life Cycle Management (PLM) support and services as requested by Programmes, CoE and CoC. The department is the home within Airbus for leading the development, implementation, maintenance and support of PLM for all Airbus programs in line with the corporate strategy.



Within the frame of its Digital Design, Manufacturing & Services (DDMS) project, Airbus is undergoing a significant digital transformation to benefit from the latest advances in new technologies and targets a major efficiency breakthrough across the program and product lifecycle. It will be enabled by a set of innovative concepts such as model based system engineering, modular product lines, digital continuity and concurrent co-design of the product, its industrial setup and operability features.



As a Data Scientist (m/f), you will be integrated in a team of the IM Develop department and appointed to dedicated missions. You will work in an international environment where you will be able to develop in-depth knowledge of local specificities: engineering, manufacturing, costing, etc.



Tasks & accountabilities



Your main tasks and responsibilities will be to:




  • Analyze large amounts of information to discover trends and patterns, build predictive models, implement cost models and machine-learning algorithms based on technical data and DMU models.

  • Combine models through ensemble modelling

  • Present information using data visualization techniques

  • Propose solutions and strategies to business challenges

  • Implement features extraction by analyzing CAD models and engineering Bill of Material

  • Collaborate with engineering and costing (FCC) to implement new costing models in Python

  • Design and propose new short/medium- and long-term forecasting methods

  • Consolidate, compare and enlarge the data required for the various types of modelling

  • Attend technical events/conferences and reinforce Data Science skills within Airbus
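One of the tasks above is combining models through ensemble modelling. A minimal, hypothetical Python sketch (the models and data are invented for illustration) of two standard combination schemes, averaging for regression and majority voting for classification:

```python
# Combine per-model outputs sample by sample.
def ensemble_average(predictions):
    """Average regression predictions from several models, per sample."""
    return [sum(p) / len(p) for p in zip(*predictions)]

def majority_vote(predictions):
    """Majority class vote across classifiers, per sample."""
    return [max(set(votes), key=votes.count) for votes in zip(*predictions)]

# Three toy regression models predicting the same three samples.
model_a = [10.0, 12.0, 11.0]
model_b = [12.0, 10.0, 13.0]
model_c = [11.0, 11.0, 12.0]
print(ensemble_average([model_a, model_b, model_c]))  # [11.0, 11.0, 12.0]

# Three toy classifiers voting on the same three samples.
votes = [[1, 0, 1], [1, 1, 0], [0, 1, 1]]
print(majority_vote(votes))  # [1, 1, 1]
```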




Required skills



We are looking for candidates with the following skills and experience:




  • Strong knowledge of Python development in the context of industrial projects

  • Experience in data mining & machine-learning

  • Knowledge of Scala, Java or C++; familiarity with R and SQL is an asset

  • Experience using business intelligence tools

  • Analytical mindset

  • Strong math skills (e.g. statistics, algebra)

  • Problem-solving aptitude

  • Excellent communication and presentation skills

  • PLM knowledge and 3D CAD programming would be a plus

  • French & English: advanced level

Intercontinental Exchange
  • Atlanta, GA
Job Purpose
The Data Analytics team is seeking a dynamic, self-motivated Data Scientist who is able to work independently on data analysis, data mining, report development and customer requirement gathering.
Responsibilities
  • Applies data analysis and data modeling techniques, based upon a detailed understanding of the corporate information requirements, to establish, modify, or maintain data structures and their associated components
  • Participates in the development and maintenance of corporate data standards
  • Supports stakeholders and business users to define data and analytic requirements
  • Works with the business to identify additional internal and external data sources to bring into the data environment and mesh with existing data
  • Storyboard, create, and publish standard reports, data visualizations, analyses and presentations
  • Develop and implement workflows using Alteryx and/or R
  • Develop and implement various operational and sales Tableau dashboards
Knowledge And Experience
  • Bachelor's degree in statistics/engineering/math/quantitative analytics/economics/finance or a related quantitative discipline required
  • Master's degree in engineering/physics/statistics/economics/math/science preferred
  • 1+ years of experience with data science techniques and real-world application experience
  • 2+ years of experience supporting the development of analytics solutions leveraging tools like Tableau Desktop and Tableau Online
  • 1+ years of experience working with SQL, developing complex SQL queries, and leveraging SQL in Tableau
  • 1+ years of experience in Alteryx, and R coding
  • Deep understanding of Data Governance and Data Modeling
  • Ability to actualize requirements
  • Advanced written and oral communication skills with the ability to summarize findings and present in a clear, concise manner to peers, management, and others
Additional Information
    • Job Type Standard
    • Schedule Full-time
Man AHL
  • London, UK

The Role


As a Quant Platform Developer at AHL you will be building the tools, frameworks, libraries and applications which power our Quantitative Research and Systematic Trading. This includes responsibility for the continued success of “Raptor”, our in-house Quant Platform, next generation Data Engineering, and evolution of our production Trading System as we continually expand the markets and types of assets we trade, and the styles in which we trade them. Your challenges will be varied and might involve building new high performance data acquisition and processing pipelines, cluster-computing solutions, numerical algorithms, position management systems, visualisation and reporting tools, operational user interfaces, continuous build systems and other developer productivity tools.


The Team


Quant Platform Developers at AHL are all part of our broader technology team, members of a group of over sixty individuals representing eighteen nationalities. We have varied backgrounds including Computer Science, Mathematics, Physics, Engineering – even Classics - but what unifies us is a passion for technology and writing high-quality code.



Our developers are organised into small cross-functional teams, with our engineering roles broadly of two kinds: “Quant Platform Developers” otherwise known as our “Core Techs”, and “Quant Developers” which we often refer to as “Sector Techs”. We use the term “Sector Tech” because some of our teams are aligned with a particular asset class or market sector. People often rotate teams in order to learn more about our system, as well as find the position that best matches their interests.


Our Technology


Our systems are almost all running on Linux and most of our code is in Python, with the full scientific stack: numpy, scipy, pandas, scikit-learn to name a few of the libraries we use extensively. We implement the systems that require the highest data throughput in Java. For storage, we rely heavily on MongoDB and Oracle.
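To give a flavour of the kind of logic such a platform supports (a hypothetical sketch, not AHL code; the signal, windows and prices are invented), here is a simple moving-average crossover signal in plain Python:

```python
# Simple moving averages and a fast/slow crossover trading signal.
def moving_average(series, window):
    return [
        sum(series[i - window + 1 : i + 1]) / window
        for i in range(window - 1, len(series))
    ]

def crossover_signal(prices, fast=3, slow=5):
    """+1 when the fast MA is above the slow MA, else -1 (aligned to the slow MA)."""
    fast_ma = moving_average(prices, fast)[slow - fast:]
    slow_ma = moving_average(prices, slow)
    return [1 if f > s else -1 for f, s in zip(fast_ma, slow_ma)]

# A toy price series that rises then falls.
prices = [100, 101, 102, 103, 104, 103, 102, 101, 100, 99]
print(crossover_signal(prices))  # [1, 1, 1, -1, -1, -1]
```

In a production quant stack this kind of computation would live in vectorised pandas/numpy pipelines with proper data handling; the sketch only illustrates the signal logic.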



We use Airflow for workflow management, Kafka for data pipelines, Bitbucket for source control, Jenkins for continuous integration, Grafana + Prometheus for metrics collection, ELK for log shipping and monitoring, Docker for containerisation, OpenStack for our private cloud, Ansible for architecture automation, and HipChat for internal communication. But our technology list is never static: we constantly evaluate new tools and libraries.


Working Here


AHL has a small-company, no-attitude feel. It is flat-structured, open, transparent and collaborative, and you will have plenty of opportunity to grow and have an enormous impact on what we do. We are actively engaged with the broader technology community.



  • We host and sponsor London’s PyData and Machine Learning Meetups

  • We open-source some of our technology. See https://github.com/manahl

  • We regularly talk at leading industry conferences, and tweet about relevant technology and how we’re using it. See @manahltech



We’re fortunate enough to have a fantastic open-plan office overlooking the River Thames, and continually strive to make our environment a great place in which to work.



  • We organise regular social events, everything from photography and climbing to karting, wine tasting and monthly team lunches

  • We have annual away days and off-sites for the whole team

  • We have a canteen with a daily allowance for breakfast and lunch, and an on-site bar for the evening

  • As well as PCs and Macs, in our office you’ll also find numerous pieces of cool tech such as light cubes and 3D printers, guitars, ping-pong and table-football, and a piano.



We offer competitive compensation, a generous holiday allowance, various health and other flexible benefits. We are also committed to continuous learning and development via coaching, mentoring, regular conference attendance and sponsoring academic and professional qualifications.


Technology and Business Skills


At AHL we strive to hire the brightest, most highly skilled and most passionate technologists.



Essential



  • Exceptional technology skills; recognised by your peers as an expert in your domain

  • A proponent of strong collaborative software engineering techniques and methods: agile development, continuous integration, code review, unit testing, refactoring and related approaches

  • Expert knowledge in one or more programming languages, preferably Python, Java and/or C/C++

  • Proficient on Linux platforms with knowledge of various scripting languages

  • Strong knowledge of one or more relevant database technologies e.g. Oracle, MongoDB

  • Proficient with a range of open source frameworks and development tools e.g. NumPy/SciPy/Pandas, Pyramid, AngularJS, React

  • Familiarity with a variety of programming styles (e.g. OO, functional) and in-depth knowledge of design patterns.



Advantageous



  • An excellent understanding of financial markets and instruments

  • Experience of front office software and/or trading systems development e.g. in a hedge fund or investment bank

  • Expertise in building distributed systems with service-based or event-driven architectures, and concurrent processing

  • A knowledge of modern practices for data engineering and stream processing

  • An understanding of financial market data collection and processing

  • Experience of web based development and visualisation technology for portraying large and complex data sets and relationships

  • Relevant mathematical knowledge e.g. statistics, asset pricing theory, optimisation algorithms.


Personal Attributes



  • Strong academic record and a degree with high mathematical and computing content e.g. Computer Science, Mathematics, Engineering or Physics from a leading university

  • Craftsman-like approach to building software; takes pride in engineering excellence and instils these values in others

  • Demonstrable passion for technology e.g. personal projects, open-source involvement

  • Intellectually robust with a keenly analytic approach to problem solving

  • Self-organised with the ability to effectively manage time across multiple projects and with competing business demands and priorities

  • Focused on delivering value to the business with relentless efforts to improve process

  • Strong interpersonal skills; able to establish and maintain a close working relationship with quantitative researchers, traders and senior business people alike

  • Confident communicator; able to argue a point concisely and deal positively with conflicting views.

Comcast
  • Philadelphia, PA

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.

As a Data Science Engineer in Comcastdx, you will research, model, develop, support data pipelines and deliver insights for key strategic initiatives. You will develop or utilize complex programmatic and quantitative methods to find patterns and relationships in data sets; lead statistical modeling, or other data-driven problem-solving analysis to address novel or abstract business operation questions; and incorporate insights and findings into a range of products.

Assist in design and development of collection and enrichment components focused on quality, timeliness, scale and reliability. Work on real-time data stores and a massive historical data store using best-of-breed and industry leading technology.

Responsibilities:

-Develop and support data pipelines

-Analyze massive amounts of data, both in real time and in batch, utilizing Spark, Kafka, and AWS technologies such as Kinesis, S3, Elasticsearch, and Lambda

-Create detailed write-ups of the processes, logic, and methodologies used for creation, validation, analysis, and visualizations. Write-ups are due within a week of a process being created and must be updated when changes occur.

-Prototype ideas for new ML/AI tools, products and services

-Centralize data collection and synthesis, including survey data, enabling strategic and predictive analytics to guide business decisions

-Provide expert and professional data analysis to implement effective and innovative solutions meshing disparate data types to discover insights and trends.

-Employ rigorous continuous delivery practices managed under an agile software development approach

-Support DevOps practices to deploy and operate our systems

-Automate and streamline our operations and processes

-Troubleshoot and resolve issues in our development, test and production environments
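The real-time half of the first responsibility above would run on Spark Streaming over Kafka; a toy stand-in for the core idea, a sliding-window aggregation over an event stream, can be sketched in plain Python (event names and window size are invented for illustration):

```python
from collections import deque

def windowed_counts(events, window_size):
    """Toy sliding-window aggregation: count events per key over the last
    `window_size` events, emitting a snapshot after each arrival."""
    window = deque(maxlen=window_size)
    snapshots = []
    for key in events:
        window.append(key)
        counts = {}
        for k in window:
            counts[k] = counts.get(k, 0) + 1
        snapshots.append(counts)
    return snapshots

stream = ["play", "pause", "play", "stop", "play"]
print(windowed_counts(stream, 3)[-1])  # counts over the last 3 events
```

A streaming engine distributes exactly this kind of keyed, windowed state across a cluster and checkpoints it for fault tolerance.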

Here are some of the specific technologies and concepts we use:

-Spark Core and Spark Streaming

-Machine learning techniques and algorithms

-Java, Scala, Python, R

-Artificial Intelligence

-AWS services including EMR, S3, Lambda, ElasticSearch

-Predictive Analytics

-Tableau, Kibana

-Git, Maven, Jenkins

-Linux

-Kafka

-Hadoop (HDFS, YARN)

Skills & Requirements:

-5-8 years of Java experience, Scala and Python experience a plus

-3+ years of experience as an analyst, data scientist, or related quantitative role.

-3+ years of relevant quantitative and qualitative research and analytics experience. Solid knowledge of statistical techniques.

-Bachelor's in Statistics, Math, Engineering, Computer Science or a related discipline. Master's degree preferred.

-Experience in software development of large-scale distributed systems including proven track record of delivering backend systems that participate in a complex ecosystem

-Experience with more advanced modeling techniques (e.g., machine learning)

-Distinctive problem solving and analysis skills and impeccable business judgement.

-Experience working with imperfect data sets that, at times, will require improvements to process, definition and collection

-Experience with real-time data pipelines and components including Kafka, Spark Streaming

-Proficient in Unix/Linux environments

-Test-driven development/test automation, continuous integration, and deployment automation

-Excellent communicator, able to analyze and clearly articulate complex issues and technologies understandably and engagingly

-Team player is a must

-Great design and problem-solving skills

-Adaptable, proactive and willing to take ownership

-Attention to detail and high level of commitment

-Thrives in a fast-paced agile environment

About Comcastdx:

Comcast dx is a results-driven big data engineering team responsible for delivery of the multi-tenant data infrastructure and platforms necessary to support our data-driven culture and organization. dx has an overarching objective to gather, organize, and make sense of Comcast data with the intention to reveal business and operational insight, discover actionable intelligence, enable experimentation, empower users, and delight our stakeholders. Members of the dx team define and leverage industry best practices, work on large-scale data problems, design and develop resilient and highly robust distributed data organizing and processing systems and pipelines, as well as research, engineer, and apply data science and machine intelligence disciplines.

Comcast is an EOE/Veterans/Disabled/LGBT employer

Comcast
  • Philadelphia, PA

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.

As a Data Science Engineer in Comcastdx, you will research, model, develop, support data pipelines and deliver insights for key strategic initiatives. You will develop or utilize complex programmatic and quantitative methods to find patterns and relationships in data sets; lead statistical modeling, or other data-driven problem-solving analysis to address novel or abstract business operation questions; and incorporate insights and findings into a range of products.

Assist in design and development of collection and enrichment components focused on quality, timeliness, scale and reliability. Work on real-time data stores and a massive historical data store using best-of-breed and industry leading technology.

Responsibilities:

-Develop and support data pipelines

-Analyze massive amounts of data, both in real time and in batch, utilizing Spark, Kafka, and AWS technologies such as Kinesis, S3, Elasticsearch, and Lambda

-Create detailed write-ups of the processes, logic, and methodologies used for creation, validation, analysis, and visualizations. Write-ups are due within a week of a process being created and must be updated when changes occur.

-Prototype ideas for new ML/AI tools, products and services

-Centralize data collection and synthesis, including survey data, enabling strategic and predictive analytics to guide business decisions

-Provide expert and professional data analysis to implement effective and innovative solutions meshing disparate data types to discover insights and trends.

-Employ rigorous continuous delivery practices managed under an agile software development approach

-Support DevOps practices to deploy and operate our systems

-Automate and streamline our operations and processes

-Troubleshoot and resolve issues in our development, test and production environments

Here are some of the specific technologies and concepts we use:

-Spark Core and Spark Streaming

-Machine learning techniques and algorithms

-Java, Scala, Python, R

-Artificial Intelligence

-AWS services including EMR, S3, Lambda, ElasticSearch

-Predictive Analytics

-Tableau, Kibana

-Git, Maven, Jenkins

-Linux

-Kafka

-Hadoop (HDFS, YARN)

Skills & Requirements:

-3-5 years of Java experience, Scala and Python experience a plus

-2+ years of experience as an analyst, data scientist, or related quantitative role.

-2+ years of relevant quantitative and qualitative research and analytics experience. Solid knowledge of statistical techniques.

-Bachelor's in Statistics, Math, Engineering, Computer Science or a related discipline. Master's degree preferred.

-Experience in software development of large-scale distributed systems including proven track record of delivering backend systems that participate in a complex ecosystem

-Experience with more advanced modeling techniques (e.g., machine learning)

-Distinctive problem solving and analysis skills and impeccable business judgement.

-Experience working with imperfect data sets that, at times, will require improvements to process, definition and collection

-Experience with real-time data pipelines and components including Kafka, Spark Streaming

-Proficient in Unix/Linux environments

-Test-driven development/test automation, continuous integration, and deployment automation

-Excellent communicator, able to analyze and clearly articulate complex issues and technologies understandably and engagingly

-Team player is a must

-Great design and problem-solving skills

-Adaptable, proactive and willing to take ownership

-Attention to detail and high level of commitment

-Thrives in a fast-paced agile environment

About Comcastdx:

Comcast dx is a results-driven big data engineering team responsible for delivery of the multi-tenant data infrastructure and platforms necessary to support our data-driven culture and organization. dx has an overarching objective to gather, organize, and make sense of Comcast data with the intention to reveal business and operational insight, discover actionable intelligence, enable experimentation, empower users, and delight our stakeholders. Members of the dx team define and leverage industry best practices, work on large-scale data problems, design and develop resilient and highly robust distributed data organizing and processing systems and pipelines, as well as research, engineer, and apply data science and machine intelligence disciplines.

Comcast is an EOE/Veterans/Disabled/LGBT employer

Infosys
  • Houston, TX
Responsibilities

-Hands-on experience with Big Data systems, building ETL pipelines, data processing, and analytics tools
-Understanding of data structures & common methods in data transformation.
-Familiar with the concepts of dimensional modeling.
-Sound knowledge of one programming language - Python or Java
-Programming experience using tools such as Hadoop and Spark.
-Strong proficiency in using query languages such as SQL, Hive and SparkSQL
-Experience in Kafka & Scala would be a plus
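The query-language proficiency asked for above is the same whether the engine is SQL, Hive, or SparkSQL; as a minimal illustration, here is a toy grouped aggregation with the standard library's sqlite3 standing in for the warehouse engine (table and column names are invented):

```python
import sqlite3

# In-memory database standing in for a Hive/SparkSQL-style warehouse table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user TEXT, amount REAL)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [("a", 10.0), ("b", 5.0), ("a", 7.5)])

# A typical per-key aggregation, expressible identically in Hive or SparkSQL.
rows = conn.execute(
    "SELECT user, SUM(amount) AS total FROM events "
    "GROUP BY user ORDER BY total DESC").fetchall()
print(rows)  # [('a', 17.5), ('b', 5.0)]
conn.close()
```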

Hydrogen Group
  • Austin, TX

Data Scientist
x2 Roles
Permanent
Austin, TX
Remote + Flex Hours


Our client is a very well-funded, venture-backed start-up run by an experienced team of technical entrepreneurs based in Austin, TX. Having successfully raised over $10 million in Series A funding, they are looking to expand their existing Data Science and Analytics team by adding two new members, with plans to grow total headcount to 10 by the summer.

The successful candidate will be focused on working with the client company's data across multiple sources and developing algorithms and models which will be used to improve performance.


Requirements:

  • The ability to solve problems that no one else has solved before in a start-up environment; in return you will be given flexible working hours and the ability to work remotely as you see fit.
  • Background in EITHER (or both) machine learning or statistics; PhD and advanced academic qualifications welcome but not essential.
  • Ability to work in R, Python, TensorFlow, RStudio preferred but not essential


Interviews taking place from the 21st onwards with offers made ASAP


Elev8 Hire Solutions
  • Atlanta, GA

Jr. Data Scientist


Our client in the Midtown area is looking for a Jr. Data Scientist who has a passion for Machine Learning, knows the hows and whys of algorithms, and is excited about the fraud industry. You'll be a pivotal piece of the Atlanta/US team in the development and application of adaptive real-time analytical modeling algorithms. So if that gets you excited, apply!


Role Expectations:


  • End-to-end processing and modeling of large customer data sets.
  • Working with customers to understand the opportunities and constraints of their existing data in the context of machine learning and predictive modeling.
  • Develop statistical models and algorithms for integration with the company's product.
  • Apply analytical theory to real-world problems on large and dynamic datasets.
  • Produce materials to feedback analytic results to customers (reports, presentations, visualizations).
  • Providing input into future data science strategy and product development.
  • Working with development teams to support and enhance the analytical infrastructure.
  • Work with the QA team to advise on effective analytical testing.
  • Evaluate and improve the analytical results on live systems.
  • Develop an understanding of the industry data structures and processes.


Team working with:


  • Currently 6 other Data Scientists local to Atlanta, with the rest of the team (10+) in Cambridge
  • 130 people in the entire company


Top skills required:


  • Degree-level qualification with good mathematical background and knowledge of statistics.
  • Professional experience using Random Forests, machine learning algorithms, development skills with C or Python
  • First-hand experience of putting Data Storage into production
  • Experience in implementing statistical models and analytical algorithms in software.
  • Practical experience of the handling and mining of large, diverse, data sets.
  • Must have a US work visa or passport.


Nice to have:


  • A Ph.D. or other postgraduate qualification would be a strong advantage
  • An indication of how relevant technologies have been used (not just a list).
  • Attention to grammatical detail, layout and presentation.


Benefits:


  • Regular bonus scheme
  • 20 days annual leave
  • Healthcare package
  • Free Friday lunches
  • Regular social outings
  • Fridge and cupboards packed full of edible treats
  • Annual summer social and Christmas dinner
118118Money
  • Austin, TX

Seeking an individual with a keen eye for good design combined with the ability to communicate those designs through informative design artifacts. Candidates should be familiar with an Agile development process (and understand its limitations), able to mediate between product / business needs and developer architectural needs. They should be ready to get their hands dirty coding complex pieces of the overall architecture.

We are .NET Core on the backend, Angular 2 on a mobile web front-end, and native on Android and iOS. We host our code across AWS and on-premises VMs, and use various data backends (SQL Server, Oracle, Mongo).

Very important is interest in (and hopefully, experience with) modern big data pipelines and machine learning. Experience with streaming platforms feeding Apache Spark jobs that train machine learning models would be music to our ears. Financial platforms generate massive amounts of data, and re-architecting aspects of our microservices to support that will be a key responsibility.
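The streaming-to-model-training loop described above can be sketched minimally in plain Python: a toy online logistic-regression learner that updates its weights one event at a time, the way a Spark streaming job consuming a Kafka topic would at scale (the synthetic event stream, feature layout and learning rate are all invented for illustration):

```python
import math
import random

def sgd_logistic(stream, lr=0.1):
    """Toy online learner: update logistic-regression weights one
    (features, label) event at a time."""
    w, b = [0.0, 0.0], 0.0
    for (x1, x2), y in stream:
        p = 1.0 / (1.0 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
        err = p - y                # gradient of log-loss w.r.t. the logit
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err
    return w, b

random.seed(0)
# Synthetic event stream: label is 1 when the feature sum is positive.
points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
stream = [(p, 1 if p[0] + p[1] > 0 else 0) for p in points]
w, b = sgd_logistic(stream * 5)    # repeated passes stand in for a long stream
print(w[0] > 0 and w[1] > 0)       # both weights learn the positive direction
```

The appeal of this shape for a financial platform is that the model improves continuously as events arrive, with no separate batch retraining step.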

118118 Money is a private financial services company with R&D headquartered in Austin along highway 360, in front of the Bull Creek Nature preserve. We have offices around the world, so the candidate should be open to occasional travel abroad. The atmosphere is casual, and has a startup feel. You will see your software creations deployed quickly.

Responsibilities

    • Help us to build a big data pipeline and add machine learning capability to more areas of our platform.
    • Manage code from development through deployment, including support and maintenance.
    • Perform code reviews, assist and coach more junior developers to adhere to proper design patterns.
    • Build fault-tolerant distributed systems.

Requirements

    • Expertise in .NET, C#, HTML5, CSS3, Javascript
    • Experience with some flavor of ASP.NET MVC
    • Experience with SQL Server
    • Expertise in the design of elegant and intuitive REST APIs.
    • Cloud development experience (Amazon, Azure, etc)
    • Keen understanding of security principles as they pertain to service design.
    • Expertise in object-oriented design principles.

Desired

    • Machine Learning experience
    • Mobile development experience
    • Kafka / message streaming experience
    • Apache Spark experience
    • Knowledge of the ins and outs of Docker containers
    • Experience with MongoDB
FCA Fiat Chrysler Automobiles
  • Detroit, MI

Fiat Chrysler Automobiles is looking to fill the full-time position of a Data Scientist. This position is responsible for delivering insights to the commercial functions in which FCA operates.


The Data Scientist is a role in the Business Analytics & Data Services (BA) department and reports through the CIO. They will play a pivotal role in the planning, execution and delivery of data science and machine learning-based projects. The bulk of the work will be in the areas of data exploration and preparation, data collection and integration, machine learning (ML) and statistical modelling, and data pipelining and deployment.

The newly hired data scientist will be a key interface between the ICT Sales & Marketing team, the Business and the BA team. Candidates need to be very much self-driven, curious and creative.

Primary Responsibilities:

    • Problem Analysis and Project Management:
      • Guide and inspire the organization about the business potential and strategy of artificial intelligence (AI)/data science
      • Identify data-driven/ML business opportunities
      • Collaborate across the business to understand IT and business constraints
      • Prioritize, scope and manage data science projects and the corresponding key performance indicators (KPIs) for success
    • Data Exploration and Preparation:
      • Apply statistical analysis and visualization techniques to various data, such as hierarchical clustering, T-distributed Stochastic Neighbor Embedding (t-SNE), principal components analysis (PCA)
      • Generate and test hypotheses about the underlying mechanics of the business process.
      • Network with domain experts to better understand the business mechanics that generated the data.
    • Data Collection and Integration:
      • Understand new data sources and process pipelines. Catalog and document their use in solving business problems.
      • Create data pipelines and assets that enable greater efficiency and repeatability of data science activities.
    • Machine Learning and Statistical Modelling:
      • Apply various ML and advanced analytics techniques to perform classification or prediction tasks
      • Integrate domain knowledge into the ML solution; for example, from an understanding of financial risk, customer journey, quality prediction, sales, marketing
      • Testing of ML models, such as cross-validation, A/B testing, bias and fairness
    • Operationalization:
      • Collaborate with ML operations (MLOps), data engineers, and IT to evaluate and implement ML deployment options
      • (Help to) integrate model performance management tools into the current business infrastructure
      • (Help to) implement champion/challenger test (A/B tests) on production systems
      • Continuously monitor execution and health of production ML models
      • Establish best practices around ML production infrastructure
    • Other Responsibilities:
      • Train other business and IT staff on basic data science principles and techniques
      • Train peers on specialist data science topics
      • Promote collaboration with the data science COE within the organization.
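Of the model-testing duties above, cross-validation is the most mechanical; its splitting step can be sketched in plain Python (the fold count and dataset size are illustrative; a real project would use a library implementation with shuffling and stratification):

```python
def k_fold_indices(n, k):
    """Partition indices 0..n-1 into k contiguous folds and yield
    (train, test) index lists -- the splitting step behind k-fold CV."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n) if i < start or i >= start + size]
        yield train, test
        start += size

for train, test in k_fold_indices(6, 3):
    print(test)  # [0, 1] then [2, 3] then [4, 5]
```

Each observation appears in exactly one test fold, so averaging the k held-out scores gives an estimate of generalization error without touching a final test set.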

Basic Qualifications:

    • A bachelor's degree in computer science, data science, operations research, statistics, applied mathematics, or a related quantitative field is required; alternate experience and education in equivalent areas such as economics, engineering or physics is acceptable. Experience in more than one area is strongly preferred.
    • Candidates should have three to six years of relevant project experience in successfully planning, launching and executing data science projects, preferably in the domains of automotive or customer-behavior prediction.
    • Coding knowledge and experience in several languages: for example, R, Python, SQL, Java, C++, etc.
    • Experience of working across multiple deployment environments including cloud, on-premises and hybrid, multiple operating systems and through containerization techniques such as Docker, Kubernetes, AWS Elastic Container Service, and others.
    • Experience with distributed data/computing and database tools: MapReduce, Hadoop, Hive, Kafka, MySQL, Postgres, DB2 or Greenplum, etc.
    • All candidates must be self-driven, curious and creative.
    • They must demonstrate the ability to work in diverse, cross-functional teams.
    • Should be confident, energetic self-starters, with strong moderation and communication skills.

Preferred Qualifications:

    • A master's degree or PhD in statistics, ML, computer science or the natural sciences, especially physics or any engineering disciplines or equivalent.
    • Experience in one or more of the following commercial/open-source data discovery/analysis platforms: RStudio, Spark, KNIME, RapidMiner, Alteryx, Dataiku, H2O, SAS Enterprise Miner (SAS EM) and/or SAS Visual Data Mining and Machine Learning, Microsoft AzureML, IBM Watson Studio or SPSS Modeler, Amazon SageMaker, Google Cloud ML, SAP Predictive Analytics.
    • Knowledge and experience in statistical and data mining techniques: generalized linear model (GLM)/regression, random forest, boosting, trees, text mining, hierarchical clustering, deep learning, convolutional neural network (CNN), recurrent neural network (RNN), T-distributed Stochastic Neighbor Embedding (t-SNE), graph analysis, etc.
    • A specialization in text analytics, image recognition, graph analysis or other specialized ML techniques such as deep learning, etc., is preferred.
    • Ideally, the candidates are adept in agile methodologies and well-versed in applying DevOps/MLOps methods to the construction of ML and data science pipelines.
    • Knowledge of industry standard BA tools, including Cognos, QlikView, Business Objects, and other tools that could be used for enterprise solutions
    • Should exhibit superior presentation skills, including storytelling and other techniques to guide and inspire and explain analytics capabilities and techniques to the organization.
FlixBus
  • Berlin, Germany

Your Tasks – Paint the world green



  • Holistic cloud-based infrastructure automation

  • Distributed data processing clusters as well as data streaming platforms based on Kafka, Flink and Spark

  • Microservice platforms based on Docker

  • Development infrastructure and QA automation

  • Continuous Integration/Delivery/Deployment


Your Profile – Ready to hop on board



  • Experience in building and operating complex infrastructure

  • Expert-level: Linux, System Administration

  • Experience with Cloud Services, Expert-Level with either AWS or GCP  

  • Experience with server and operating-system-level virtualization is a strong plus, in particular practical experience with Docker and cluster technologies like Kubernetes, AWS ECS, OpenShift

  • Mindset: "Automate Everything", "Infrastructure as Code", "Pipelines as Code", "Everything as Code"

  • Hands-on experience with "Infrastructure as Code" tools: Terraform, CloudFormation, Packer

  • Experience with provisioning/configuration management tools (Ansible, Chef, Puppet, Salt)

  • Experience designing, building and integrating systems for instrumentation, metrics/log collection, and monitoring: CloudWatch, Prometheus, Grafana, DataDog, ELK

  • At least basic knowledge in designing and implementing Service Level Agreements

  • Solid knowledge of Network and general Security Engineering

  • At least basic experience with systems and approaches for Test, Build and Deployment automation (CI/CD): Jenkins, TravisCI, Bamboo

  • At least basic hands-on DBA experience, experience with data backup and recovery

  • Experience with JVM-based build automation is a plus: Maven, Gradle, Nexus, JFrog Artifactory

AXA Schweiz
  • Winterthur, Switzerland

Do agility, product-driven IT, cloud computing and machine learning appeal to you?
Are you performance-driven and brave enough to try new things?

We have anchored digital transformation in our DNA!


Your contribution:



  • The role primarily covers engineering (IBM MQ on Linux, z/OS) and operation of middleware components (file transfer, web service infrastructure).

  • In detail, this means component ownership (including lifecycling, provision of APIs and self-services, automation of processes, creation and maintenance of documentation), ensuring operations (you autonomously take the necessary measures; willingness to do occasional weekend/on-call duty), as well as maintaining and sharing knowledge.

  • In an agile environment, helping with the migration of our components to the cloud.


Your skills and talents:



  • You have a completed degree in computer science or comparable experience.

  • Your know-how covers messaging middleware components, ideally IBM MQ on Linux enriched with z/OS know-how; knowledge of RabbitMQ and Kafka would be a bonus.

  • Other middleware components (file transfer and web services) are not entirely unknown to you, and you are familiar with transfer protocols as well as the Linux world in particular.

  • You bring solid automation experience to the table (Bash, Python), and REST, APIs and Java(Script) are not foreign words to you. Initial programming experience in an object-oriented language, preferably Java, rounds out your profile.

  • You are integrative, look at challenges from different perspectives, and ask uncomfortable questions when it matters.

  • You are fluent in German and English.

Pyramid Consulting, Inc
  • Atlanta, GA

Job Title: Tableau Engineer

Duration: 6-12 Months+ (potential to go perm)

Location: Atlanta, GA (30328) - Onsite

Notes from Manager:

We need a data analyst who knows Tableau, scripting (JSON, Python), the Alteryx API, AWS, and analytics.

Description

The Tableau Software Engineer will be a key resource working across our Software Engineering BI/Analytics stack to ensure stability, scalability, and the delivery of valuable BI & Analytics solutions for our leadership teams and business partners. Key to this position is the ability to excel at identifying problems or analytic gaps and at mapping and implementing pragmatic solutions. An excellent blend of analytical, technical and communication skills in a team-based environment is essential for this role.

Tools we use: Tableau, Business Objects, AngularJS, OBIEE, Cognos, AWS, Opinion Lab, JavaScript, Python, Jaspersoft, Alteryx and R packages, Spark, Kafka, Scala, Oracle

Your Role:

·         Able to design, build, maintain & deploy complex reports in Tableau

·         Experience integrating Tableau into another application or native platforms is a plus

·         Expertise in Data Visualization including effective communication, appropriate chart types, and best practices.

·         Knowledge of best practices and experience optimizing Tableau for performance.

·         Experience reverse engineering and revising Tableau Workbooks created by other developers.

·         Understand basic statistical routines (mean, percentiles, significance, correlations) and be able to apply them in data analysis

·         Able to turn ideas into creative & statistically sound decision support solutions

Education and Experience:

·         Bachelor's degree in Computer Science or equivalent work experience

·         3-5 years of hands-on experience in data warehousing & BI technologies (Tableau/OBIEE/Business Objects/Cognos)

·         Three or more years of experience in developing reports in Tableau

·         Good understanding of Tableau architecture, design, development, and the end-user experience.

What We Look For:

·         Very proficient in working with large databases in Oracle; experience with Big Data technologies is a plus.

·         Deep understanding of and working experience with data warehouse and data mart concepts.

·         Understanding of Alteryx and R packages is a plus

·         Experience designing and implementing high volume data processing pipelines, using tools such as Spark and Kafka.

·         Experience with Scala, Java, or Python and a working knowledge of AWS technologies such as Glue, EMR, Kinesis, and Redshift preferred.

·         Excellent knowledge of Amazon AWS technologies, with a focus on highly scalable cloud-native architectural patterns, especially EMR, Kinesis, and Redshift

·         Experience with software development tools and build systems such as Jenkins
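
The high-volume pipeline bullet above (Spark, Kafka) centers on one core operation: windowed aggregation over a stream of events. In production this would run on Spark Structured Streaming reading from Kafka topics; the framework-free Python sketch below (event data and function name are hypothetical) just shows the tumbling-window logic itself.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_ms):
    """Count (timestamp_ms, key) events per fixed-size tumbling window --
    the aggregation a Spark/Kafka pipeline would perform at scale."""
    counts = defaultdict(int)
    for ts, key in events:
        window_start = (ts // window_ms) * window_ms  # floor to window boundary
        counts[(window_start, key)] += 1
    return dict(counts)

# Hypothetical click events: (epoch millis, page) -- illustrative data only.
events = [
    (1_000, "home"), (1_500, "home"), (2_200, "search"),
    (5_100, "home"), (5_900, "search"), (6_300, "search"),
]
print(tumbling_window_counts(events, window_ms=5_000))
# two 5-second windows: one starting at 0 ms, one at 5000 ms
```

A real streaming job adds what this sketch omits: watermarking for late events, checkpointed state, and partition-parallel execution across the Kafka topic.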