OnlyDataJobs.com

trivago N.V.
  • Amsterdam, Netherlands


trivago has gathered a lot of data about hotels from sources such as Online Travel Agencies, hoteliers, and the public domain. How can we learn from these pieces of (sometimes contradictory) information in a way that helps our users to find their ideal hotel? This is the fundamental challenge that we are tackling in the Hotel Profiling team at trivago.



As a Data Scientist in the Hotel Profiling team, you will support us with this mission and contribute to our continuous cycle of improvement with your discoveries and ideas. Your work has the potential to have a direct impact on how millions of users search for their ideal hotel. Ready for the challenge?



What you'll do:



  • Apply your analytics and modelling skills to gain a better understanding of our large hotel inventory and create user-centric hotel profiles that let travelers answer these types of questions:


    • "I'm interested in hiking in the French Alps, which accommodation is ideal for me?"


    • "I'm travelling for two months on the East Coast with my Labrador, which accommodations are ideal for me?"


    • "I like two comparable hotels in Vienna. What distinguishes them and which one is ideal for me?"


    • "I want a value-for-money hotel or hostel in Sydney for my sightseeing trip, which one is ideal for me?"


  • Translate the sea of data that trivago has into actionable insights and/or innovative prototypes.


  • Design and execute data-driven approaches to validate your ideas as well as to measure the business impact.


  • Collaborate closely with team members to develop ideas end-to-end, from data collection to shipping of a feature to the measuring of success.




What you'll definitely need:



  • 3-5 years of professional hands-on experience in data science or a related field, with a track record of delivering products/data that make a business impact.


  • Expertise in analytics, predictive modelling and/or data mining, using a scripting language like Python or R.


  • Solid knowledge of SQL.


  • A pragmatic approach to problem solving, applying the Pareto principle and end-to-end thinking.


  • Autonomous and proactive way of working, flexibility and adaptiveness to changing environments.


  • Openness in your way of thinking, adopting the sharing and learning culture of trivago.


  • Great written and verbal communication skills.


  • Fluent English (our company language).
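The list above asks for solid SQL. As a minimal, self-contained sketch of the kind of query a hotel-profiling question boils down to, here is a toy example using Python's built-in sqlite3; the schema, table, and hotel names are invented for illustration and are not trivago's actual data model:

```python
import sqlite3

# In-memory toy database; table and column names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE hotels (name TEXT, city TEXT, pet_friendly INTEGER, rating REAL)"
)
conn.executemany(
    "INSERT INTO hotels VALUES (?, ?, ?, ?)",
    [
        ("Alpenhof", "Chamonix", 1, 8.7),
        ("Harbour Inn", "Sydney", 0, 7.9),
        ("Stadtpark Hotel", "Vienna", 1, 8.2),
    ],
)

# "Which pet-friendly accommodations are ideal for me?" as a ranked query.
rows = conn.execute(
    "SELECT name, rating FROM hotels WHERE pet_friendly = 1 ORDER BY rating DESC"
).fetchall()
print(rows)  # [('Alpenhof', 8.7), ('Stadtpark Hotel', 8.2)]
```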


What we'd love you to have:



  • An advanced degree in Computer Science, Math, Statistics or similar field.


  • Experience in working in a cross-functional team with data scientists and software engineers.


  • Experience in working with big data technologies and pipelines (AWS, Hive).




What you can expect from life at trivago:

  • Growth: We help you grow as trivago grows through support for personal and professional development, constant new challenges, regular peer-feedback, mentorship and world-class training.
  • Autonomy: Every talent has the ability to make an impact independently by driving topics thanks to our strong entrepreneurial mindset, our horizontal workflow and self-determined working hours.
  • International environment: Our agile, international culture and environment with talents from 50+ nations encourages mutual trust and creates a safe space to discuss openly and act freely.
  • Collaborative spaces: Our state-of-the-art campus in Düsseldorf offers interactive spaces where we can easily collaborate, exchange ideas, take a break and workout together.
  • Relocation: We offer our international talents support with relocation costs, work permit and visa questions, free language classes, flat search and insurance.




Additional information:
  • trivago N.V. is proud to foster a workplace free from discrimination. We strongly believe that diversity of experience, perspectives, and background will lead to a better environment for our employees and a better product for our users.
  • To find out more about life at trivago follow us on social media @lifeattrivago.
  • To learn more about tech at trivago, check out our blog: https://tech.trivago.com/
Antuit
  • Dallas, TX

Location: Dallas, TX or Chicago, IL. Open to talk to candidates from across locations


Antuit seeks a Data Scientist/Senior Data Scientist to develop machine learning algorithms in the Supply Chain and Forecasting domain with data science toolkits that include Python, R or SAS. This role also participates in the design process and is responsible for implementation. The ideal candidate will view the role as an excellent opportunity to master and support solving world-class data science problems.

 Data Scientist responsibilities and duties:

    • Develop machine learning algorithms in the Supply Chain and Forecasting domain with data science toolkits that include Python, R or SAS
    • Further design processes and implement them
    • Research and develop efficient and robust machine learning algorithms
    • Collaborate and work closely with cross-functional Antuit teams and domain experts to identify gaps and structure problems
    • Create meaningful presentations and analyses that tell a story, focused on insights, to communicate results and ideas to key decision makers at Antuit and client companies

Data Scientist qualifications and skills:

  • Experience / Education. Master's or PhD in Computer Science, Computer Engineering, Electrical Engineering, Statistics, Applied Math or another related field. 4-10 years of work experience involving quantitative data analyses for problem solving (work experience negotiable for recent PhDs with relevant research experience). Experience working with a cloud Big Data stack to orchestrate data gathering, cleansing, preparation and modelling. Additional experience with forecasting and optimization problems, and with implementing data analytics solutions in Python, R or SAS
  • Knowledge. Exceptionally skilled in machine learning, data analytics, pattern recognition and predictive modelling
  • Strong communication and presentation skills. Effective communication and story-telling skills
  • Energy and enthusiasm. Passion for learning and contributing to development
  • A true team player. Collaborative mindset for effective communication across teams
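The forecasting work named above can be illustrated with a deliberately minimal sketch: simple exponential smoothing in plain Python. Real supply-chain models would of course be far richer; this only shows the shape of a one-step-ahead forecast.

```python
def ses_forecast(series, alpha=0.5):
    """Simple exponential smoothing: one-step-ahead demand forecast.

    A minimal illustration only; `alpha` is the smoothing factor,
    and the demand numbers below are invented toy data.
    """
    level = series[0]
    for y in series[1:]:
        # New level = weighted blend of the latest observation and the old level.
        level = alpha * y + (1 - alpha) * level
    return level

demand = [100, 110, 105, 115, 120]
print(ses_forecast(demand, alpha=0.5))  # 115.0
```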

EEOC

Antuit is an at-will, equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, marital status, age, national origin, ancestry, physical or mental disability, medical condition, pregnancy, genetic information, gender, sexual orientation, gender identity or expression, veteran status, or any other status protected under federal, state, or local law. 

Babich & Associates
  • Houston, TX
    • Our Data architect is responsible for the design, structure, and maintenance of data. The candidate ensures the accuracy and accessibility of data relevant to our organization or a project. The management and organization of data is highly technical and requires advanced skills with computers and proficiency with data-oriented computer languages such as SQL and XML.

      You are required to possess superior analytical skills and be detail-oriented as well as the ability to communicate effectively, as part of the larger team within the information technology department. Additionally, you will likely need to explain complex technical concepts to non-technical staff. Since development of data models and logical workflows is common, you must also exhibit advanced visualization skills, as well as creative problem-solving.

      Responsibilities:

      • Plans, architects, designs, analyses, develops, codes, tests, debugs and documents data & analytics platforms to satisfy business requirements for large, complex Data Reservoir/Data Warehouse, Reporting & Analytics development. Leads and performs database-level tuning and optimization in support of application development teams on an ad-hoc basis.
      • Analyses business and data requirements to support the implementation of an applications full functionality
      • Contributes to high level functional design used across all Reporting & Analytics applications based on system build and knowledge of business needs
      • Collaborates with fellow team members and keeps the team and other key stakeholders well informed of progress of application business features being developed
      • Create data architecture strategies for each subject area of the enterprise data model.
      • Communicate plans, status and issues to higher management levels.
      • Create and maintain a corporate repository of all data architecture artifacts.
      • Collaborate with the business and other IT organizations to plan a data strategy.
      • Produce all project data architecture deliverables.

    Qualifications

      • 6+ years of experience with demonstrated knowledge in the design & development of data warehouses and/or data reservoir/data lake/data mart platform.
      • 4+ years of expert level experience in data ingestion tools and techniques including ETL & ELT methodologies.
      • 3+ years of experience in one or more reporting/visualization tools (Cognos, Tableau or Power BI) is desirable
      • Working proficiency in a selection of software engineering disciplines and demonstrates understanding of overall software skills including business analysis, development, testing, deployment, maintenance and improvement of software.
      • Strong communication skills with demonstrated experience coordinating development cycles and project management.
      • Self-starter that can work alone and as part of a larger internal and external team
      • Works well with others and understands the importance of the team
      • Exceptional data analysis skills and problem solving ability
      • Knowledge of advanced analytics tools & methodologies (Python, R etc.)
      • Experience with statistical analysis and predictive modelling skills a plus
      • Bachelor's or master's degree in computer science or a similar field.
ING-DiBa AG
  • Frankfurt am Main, Deutschland

You never stop being curious and want to learn something new every day, ideally from the best and with the best prospects. You are looking for a challenge and finally want to put the theory from your studies into practice. Perfect conditions for really taking off as an intern with us. Jump on.



Your tasks:


So that we can make our services even more individual in the future, everything in the "Customer Interaction" tribe revolves around anticipating the wishes and expectations of our customers. You will support us by carrying out extensive data analysis, from simple counts to innovative analytical methods, machine-learning algorithms and predictive modelling. You will independently prepare your analysis results in a clear and engaging way, and you will also be actively involved in the processes of operational campaign management. In everything you do, you will be well coached, and thanks to our agile way of working you can take on responsibility quickly.



Your profile:


Degree in business, computer science, mathematics, statistics or a similar field


Initial experience in handling large and complex data sets, as well as knowledge of SAS and R


Ideally, practical experience in statistical modelling and the use of machine-learning algorithms


A quick learner


Very good English, both written and spoken


Available for 6 months (please state the exact period, including the start date, in your cover letter)



Apply now at Germany's third-largest retail bank: ing.de/karriere

YouGov
  • Warsaw, Poland
  • Salary: £45k - 55k

Are you our next Senior Data Scientist? 


We don’t just collect data, we connect data. YouGov is an international data and analytics group with the ambition to become a unique part of the global internet infrastructure - like Google for search, Facebook for social, Amazon for retail, we want it to be YouGov for opinion. Our value chain is a virtuous circle consisting of a highly engaged online panel, innovative data collection methods and powerful analytics technology. From the beginning we had one simple idea: the more people are able to participate in the decisions made by the institutions that serve them, the better those decisions will be. We are a global online community for millions of people, and thousands of organisations, to engage in a continuous conversation about their beliefs, behaviours and brands, and provide a more accurate portrait of what the world thinks. 


We are looking for a Senior Data Scientist to work in the Data Science and Architecture Team (DSAT). A core team within the global innovations department, DSAT cooperates closely with senior stakeholders to deliver new data product offerings and insights. The successful candidate will already have a minimum of 3 years of experience in data science, enjoy working in a team, and be ready to think creatively to find solutions and add value to our products. They will share our vision of making information more insightful and more widely available.


What will I be doing day to day? 



  • Use machine learning on a wide array of large, complex datasets

  • Communicate insights in a clear, straightforward manner

  • Deliver actionable, data science-informed contributions to internal and external projects

  • Work with senior stakeholders on global projects

  • Guide and develop junior members of the team


You are:



  • Passionate about creatively using data to solve problems

  • A self-starter, keen to learn new technologies at a fast pace

  • Undaunted by big challenges


What do I need to bring with me?



  • MSc or PhD in a quantitative discipline (e.g. Computer Science, Statistics, Maths or similar)

  • Experience with applying supervised and unsupervised learning algorithms to deliver actionable insights using Python (pandas, sklearn, matplotlib) and R (tidyverse, ggplot2)

  • Experience with wrangling large, unstructured datasets

  • Experience with efficiently conveying complex results to non-technical stakeholders

  • Experience with database languages (SQL)

  • Experience with creating production-ready machine learning pipelines a plus

  • Experience in version control, code reviews, and test-driven development a plus

  • Experience with cloud services (AWS) a plus

  • Experience with containerisation using Docker a plus

  • Project management experience (being able to translate business needs into analytical questions and deliver actionable results in a timely manner)

  • Line management experience a plus


Experience in one or more:



  • AB testing or multivariate testing

  • Predictive modelling for churn prevention

  • Predictive modelling for fraud detection

  • Market segmentation using clustering techniques

  • Time series analysis/forecasting

  • Anomaly detection

  • Attribution modelling or marketing mix modelling
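One of the areas listed above, market segmentation using clustering techniques, can be sketched with scikit-learn's KMeans. The "respondent segments" below are simulated toy data, invented purely to make the example self-contained:

```python
import numpy as np
from sklearn.cluster import KMeans

# Two well-separated simulated segments of 20 "respondents" each,
# described by two hypothetical survey features.
rng = np.random.default_rng(0)
segment_a = rng.normal(loc=[0.0, 0.0], scale=0.1, size=(20, 2))
segment_b = rng.normal(loc=[5.0, 5.0], scale=0.1, size=(20, 2))
X = np.vstack([segment_a, segment_b])

# Fit k-means with the known number of segments.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = km.labels_
# With this much separation, each simulated segment maps to one cluster label.
```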


Any additional info...


This is a full-time position, based out of our tech hub in Warsaw, Poland, with the option to work remotely. 

JobCloud
  • Zürich, Schweiz

Do you want to work in a team designed from the ground up to deliver applied innovation in record speeds? Do you want to develop deep learning technologies that support users in publishing better content, deliver more personalized user experiences and streamline matching and search systems? Are you excited about the opportunity to have a tangible impact on an entire working population, both job seekers and employers alike?

For the application of modern machine learning techniques to complex NLP problems, the JobCloud R&D team is looking for an exceptionally experienced and creative engineer/scientist. In a small interdisciplinary Scrum team, you will apply state-of-the-art deep learning methods to real-world problems, process large amounts of data and deploy production-quality models at scale.




What you'll be responsible for


  • Conception, design and implementation of mission-critical ML & AI solutions (analysis, architecture, design, implementation & deployment of end-to-end systems)
  • Design and implementation of data acquisition, ingestion, validation, transformation, augmentation and visualization pipelines
  • Investigating new approaches and evaluating new technologies and tools

What you bring


  • Master's / PhD in Computer Science, Data Science or equivalent work experience
  • Practical experience with machine learning technologies for NLP tasks such as Embeddings, Named Entity Recognition, Classification, Taxonomy Construction, Sentiment Analysis, Text Similarity and Predictive Modelling
  • Hands-on experience with ML toolkits such as Gorgonia, Tensorflow, Keras, PyTorch etc.
  • Experience in general-purpose programming languages like Golang, Java, C/C++ or Python
  • An exceptional ability to teach yourself new skills and to work heuristically
  • English: fluent in speech and writing; German or French: B1

What we offer you


  • Collaboration in a small interdisciplinary team of experts reporting directly to the CEO, with the mission to deliver 10X impact on our business through technology and product innovation
  • A friendly team of bikers, gamers and travelers who come to work every day because they love what they do
  • If you are the right teammate, we are happy to meet your needs ;)

GeoPhy
  • New York, NY

We seek a Machine Learning Engineer to support the development of a suite of valuation models and data products. This person would work alongside the software engineers building the technology to manage our data, and the data scientists conducting statistical analysis and developing models. As a Machine Learning Engineer, you would complement these efforts by demonstrating how ML can enhance approaches to data discovery, feature engineering, and predictive modelling.


What you'll be responsible for



  • Develop proof-of-concept ML models to solve problems for a variety of data and domains, which might include, but are not limited to:


    • Text extraction and classification from millions of pages of technical reports;

    • Estimating building density from satellite images; or,

    • Training ML models on semantically related public records.




  • Explain the effectiveness and assumptions of your approach and guide collaborative research and methodology development with appropriate technical rigor.

  • Build predictive models in production at scale.

  • Contribute to decisions about our technology stack, particularly as it relates to end-to-end ML model and data flow and automation.

  • Stay current with latest ML algorithms and methods and share knowledge with colleagues in data science and engineering and externally as appropriate.


What we're looking for



  • Critical professional skills include: a curiosity to discover new approaches to problem-solving, a drive to initiate ideas and collaborate with colleagues to see them through, and an ability to communicate technical material clearly.

  • Skills in Python or R (+Scala a bonus), including ML libraries (e.g. SKLearn, NumPy, SciPy), SQL, parallelization and tools for large scale computing (e.g. Spark, Hadoop), matrix algebra, and vectorization

  • Experience with at least one of the DL frameworks (e.g. PyTorch, Caffe, TensorFlow, Theano, Keras) and a perspective on what distinguishes them

  • Experience with supervised and unsupervised learning algorithms, including cluster analysis (e.g. k-means, density-based), regression and classification with shallow algorithms (e.g. decision trees, XGBoost, and various ensemble methods) and with DL algorithms (e.g. RNN w/ LSTM, CNN)

  • Experience with advanced methods of ML model hyper-parameter tuning (e.g. Bayesian optimization)

  • Deep understanding of statistics and Bayesian inference, linear algebra (e.g. decomposition, image registration), vector calculus (e.g. gradients), time series analysis (e.g. Fourier Transform, ARIMA, Dynamic Time-Warping)

  • Proven track record of building production models (batch processing at minimum, online ML a bonus)

  • Experience in at least one of the following data domains: highly disparate public records, satellite images, text in various states of structure

  • Experience with remote computing and data management (e.g. AWS, GCP suite of tools)
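Hyper-parameter tuning is listed above. As a self-contained sketch, plain grid search with scikit-learn is shown here on a synthetic dataset; the posting mentions Bayesian optimization (available via e.g. scikit-optimize), which follows the same fit-and-select pattern but explores the search space adaptively rather than exhaustively:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

# Synthetic classification data; purely illustrative.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Exhaustive search over a small, invented hyper-parameter grid,
# scored by 5-fold cross-validation.
search = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"max_depth": [2, 4, 8], "min_samples_leaf": [1, 5]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_)  # the grid point with the best CV accuracy
```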


Bonus points for



  • Being a technical lead on a successful ML-based product

  • Doing applied research (at graduate school level or equivalent) with: 1) Geospatial analysis, 2) Image processing (e.g. denoising, image registration), 3) NLP, or 4) ML with semantic databases

  • Experience with streaming models or online ML

  • Authored technical publications or presented work in the field

  • Domain expertise in real estate or the built environment


What we give you



  • You will have the opportunity to accelerate our rapidly growing organisation.

  • We're a lean team, so your impact will be felt immediately.

  • Agile working environment with flexible working hours and location, career advancement, and competitive compensation package

  • GeoPhy is a family friendly company. We arrange social activities to help our employees and families become familiar with each other and our culture

  • Diverse, unique colleagues from every corner of the world

OpsTalent
  • Kraków, Poland
  • Salary: zł144k - 288k

We are making banking safer for everyone.


Together with our partner - one of the biggest financial institutions in the world - we are working on identifying the biggest security threats for the banking industry and helping the business make smart, cost-effective decisions to mitigate possible attacks, using statistics models, probability theory, risk modelling and expert knowledge in cybersecurity.


Your job would include re-coding a model from the existing technology stack to a cloud-native solution, along with significant extensions to the methodology behind the model. Depending on your qualifications, the position can be focused more on the software development or on the methodology development aspects.


We are doing it in a global, Agile team, using top-notch technology.


Who said that working for a bank needs to be boring?


Requirements:



  • Strong Python programming skills

  • Previous exposure to machine learning and numerical libraries in Python like numpy, scipy, pandas, scikit-learn, h2o, keras, pymc3 or similar

  • Some experience with building front-ends using HTML, CSS, Javascript

  • Very good understanding of probability theory, random variables and their distributions, monte-carlo simulations, stochastic processes, uncertainty quantification

  • Familiarity with basic statistical inference and machine learning algorithms and general interest in the field  (any exposure to Bayesian inference would be a strong plus)

  • Experience with tools like Alteryx, RapidMiner, Orange, Knime, Azure ML Studio or similar (would be a plus)

  • Familiarity with Linux; experience in using the AWS or GCP cloud (would be a plus)

  • Familiarity with Git and GitHub

  • Motivation to develop in the field of risk modelling

  • Understanding of agile methodologies

  • Team-oriented mentality combined with ability to complete tasks independently to a high quality standard

  • English, both written and spoken

  • At least M.Sc. degree in a quantitative field (mathematics, physics, computer science or similar)
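The probability-theory skills listed above, Monte-Carlo simulation in particular, can be sketched in a few lines of NumPy: estimating a normal tail probability by simulation. This is purely illustrative and bears no relation to the actual cyber-risk model described in the posting:

```python
import numpy as np

# Estimate P(X > 2) for a standard normal variable by Monte-Carlo simulation.
rng = np.random.default_rng(42)
samples = rng.standard_normal(1_000_000)

# Fraction of samples in the tail approximates the true probability (~0.0228).
p_tail = (samples > 2.0).mean()
print(round(p_tail, 4))
```

The same pattern (draw many samples, take the mean of an indicator) underlies the loss-distribution and uncertainty-quantification work the posting alludes to.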


Responsibilities:



  • You will support our cybersecurity initiatives by providing modeled outputs to better enable management decisions on cyber risk

  • You will take control of the model from model developers and operate the model within our environment to support ongoing cyber portfolio planning

  • You will propose mathematically-sound alternatives to address methodology deficiencies

  • You will perform conceptual and quantitative work to extend a mathematical domain model of risk being developed within the unit for policy analysis and predictive modelling

  • You will provide conceptual and theoretical validation of technical work produced by other team members

Antuit
  • Dallas, TX

Antuit seeks a Data Scientist to develop machine learning algorithms in the Supply Chain and Forecasting domain with data science toolkits that include Python, R or SAS. This role also participates in the design process and is responsible for implementation. The ideal candidate will view the role as an excellent opportunity to master and support solving world-class data science problems.

 Data Scientist responsibilities and duties:

    • Develop machine learning algorithms in the Supply Chain and Forecasting domain with data science toolkits that include Python, R or SAS
    • Further design processes and implement them
    • Research and develop efficient and robust machine learning algorithms
    • Collaborate and work closely with cross-functional Antuit teams and domain experts to identify gaps and structure problems
    • Create meaningful presentations and analyses that tell a story, focused on insights, to communicate results and ideas to key decision makers at Antuit and client companies

Data Scientist qualifications and skills:

  • Experience / Education. Master's or PhD in Computer Science, Computer Engineering, Electrical Engineering, Statistics, Applied Math or another related field. 3+ years of work experience involving quantitative data analyses for problem solving (work experience negotiable for recent PhDs with relevant research experience). Experience working with a cloud Big Data stack to orchestrate data gathering, cleansing, preparation and modelling. Additional experience with forecasting and optimization problems, and with implementing data analytics solutions in Python, R or SAS
  • Knowledge. Exceptionally skilled in machine learning, data analytics, pattern recognition and predictive modelling
  • Strong communication and presentation skills. Effective communication and story-telling skills
  • Energy and enthusiasm. Passion for learning and contributing to development
  • A true team player. Collaborative mindset for effective communication across teams

EEOC

Antuit is an at-will, equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, marital status, age, national origin, ancestry, physical or mental disability, medical condition, pregnancy, genetic information, gender, sexual orientation, gender identity or expression, veteran status, or any other status protected under federal, state, or local law.


Infor
  • Dallas, TX
TITLE: Research and Development Scientist
LOCATION: Dallas, Texas
SUMMARY: Infor Talent Science is looking for a driven and brilliant scientist to join our R&D team. You will be taking a leading role in innovating our psychometrics-based profile creation tool, conducting cutting-edge data science research, and building client-facing predictive models. You will also be working directly with leadership to enhance other science-based applications and our collective predictive modelling expertise.
Responsibilities
  • Technically lead the development, testing, and communication of innovations to the profile creation tool, including enhancements to the machine learning model
  • Lead all phases of the predictive modeling process with all types of data
  • Develop new prototypes and enhance other science-based web applications
  • Develop requirements and technical specifications
  • Partner with scientists and developers to implement app performance and design standards
  • Demonstrate a big picture and team-oriented approach to application development
  • Proactively generate relevant ideas for new and improved use of technology
  • Adapt quickly to changing requirements, deadlines, priorities, and technologies
  • Communicate technical information to technical and non-technical audiences
  • Serve as a technical and scientific resource for the department
Qualifications
  • Ph.D. in Statistics, Computer Science, Mathematics, Computer Engineering, or related field with an emphasis on machine learning, data science, quantitative analysis, or psychometrics
  • 5 years of experience coding and testing. Mandatory languages include Python, R, JavaScript, CSS3, and HTML5. An ability to code in Java, D3 and/or Julia is a plus
  • 3 years of experience in SQL, T-SQL and/or working with relational databases that support SQL as a query language, (e.g., Postgres, Microsoft SQL Server)
  • 3 years of work experience, preferably within an R&D group, in software development and 3 years building/working with predictive models and deriving insightful analytics using skills from statistics, machine learning, data science, and computer science
  • Expert knowledge of machine learning, statistics, and data analytics using field data
  • Proficiency with Microsoft Excel and other analytical tools (e.g. SPSS, SAS, R)
The Company does not discriminate in employment opportunities or practices on the basis of race, color, creed, religion, sex, gender / transgender identity or expression, sexual orientation, sexual stereotype, national origin, genetics, disability, marital status, age, veteran status, protected veterans, military service obligation, citizenship status, individuals with disabilities, or any other characteristic protected by law applicable to the state or municipality in which you work.
If you have a disability under the Americans with Disabilities Act or similar law, and you wish to discuss potential accommodations related to applying for employment at our company, please contact Human Resources at 470-548-7173 and/or ADAAA@infor.com.
GeoPhy
  • Delft, Netherlands

We seek a Machine Learning Engineer to support development of a suite of valuation models and data products. This person would work alongside software engineers building the technology to manage our data, and the data scientists conducting statistical analysis and developing models. As a Machine Learning Engineer, you would complement these efforts by demonstrating how ML can enhance approaches to data discovery, feature engineering, and predictive modelling.



What you'll be responsible for



  • Develop proof-of-concept ML models to solve problems for a variety of data and domains, which might include, but are not limited to:


    • text extraction and classification from millions of pages of technical reports;

    • estimating building density from satellite images; or

    • training ML models on semantically related public records.




  • Explain the effectiveness and assumptions of your approach and guide collaborative research and methodology development with appropriate technical rigor.

  • Build predictive models in production at scale.

  • Contribute to decisions about our technology stack, particularly as it relates to end-to-end ML model and data flow and automation.

  • Stay current with the latest ML algorithms and methods, and share knowledge with colleagues in data science and engineering, and externally as appropriate.



What we're looking for



  • Critical professional skills include: a curiosity to discover new approaches to problem-solving, a drive to initiate ideas and collaborate with colleagues to see them through, and an ability to communicate technical material clearly.

  • Skills in Python or R (+Scala a bonus), including ML libraries (e.g. scikit-learn, NumPy, SciPy), SQL, parallelization and tools for large-scale computing (e.g. Spark, Hadoop), matrix algebra, and vectorization

  • Experience with at least one of the DL frameworks (e.g. PyTorch, Caffe, TensorFlow, Theano, Keras) and a perspective on what distinguishes them

  • Experience with supervised and unsupervised learning algorithms, including cluster analysis (e.g. k-means, density-based), regression and classification with shallow algorithms (e.g. decision trees, XGBoost, and various ensemble methods) and with DL algorithms (e.g. RNN w/ LSTM, CNN)

  • Experience with advanced methods of ML model hyper-parameter tuning (e.g. Bayesian optimization)

  • Deep understanding of statistics and Bayesian inference, linear algebra (e.g. decomposition, image registration), vector calculus (e.g. gradients), time series analysis (e.g. Fourier Transform, ARIMA, Dynamic Time-Warping)

  • Proven track record of building production models (batch processing at minimum, online ML a bonus)

  • Experience in at least one of the following data domains: highly disparate public records, satellite images, text in various states of structure

  • Experience with remote computing and data management (e.g. AWS, GCP suite of tools)
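By way of illustration, the shallow supervised and unsupervised techniques named above (k-means cluster analysis, boosted-tree classification) can be sketched in a few lines of scikit-learn. This is a minimal sketch on synthetic data only, not GeoPhy's actual models:

```python
# Minimal sketch (synthetic data only): k-means cluster analysis and a
# gradient-boosted classifier, two of the shallow techniques listed above.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs, make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Unsupervised: partition 2-D points drawn from three blobs with k-means.
X_blobs, _ = make_blobs(n_samples=150, centers=3, random_state=0)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_blobs)

# Supervised: fit a boosted ensemble of shallow trees to a synthetic dataset.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X, y)
print(len(set(labels)), round(clf.score(X, y), 2))
```

In practice the same estimators slot into scikit-learn pipelines with hyper-parameter search, which is where methods like the Bayesian optimization mentioned above come in.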



Bonus points for



  • Being a technical lead on a successful ML-based product

  • Doing applied research (at graduate school level or equivalent) with: 1) Geospatial analysis, 2) Image processing (e.g. denoising, image registration), 3) NLP, or 4) ML with semantic databases

  • Experience with streaming models or online ML

  • Authored technical publications or presented work in the field

  • Domain expertise in real estate or the built environment



What we give you



  • You will have the opportunity to help accelerate our rapidly growing organisation.

  • We're a lean team, so your impact will be felt immediately.

  • Agile working environment with flexible working hours and location, career advancement, and competitive compensation package

  • GeoPhy is a family friendly company. We arrange social activities to help our employees and families become familiar with each other and our culture

  • Diverse, unique colleagues from every corner of the world

dunnhumby Ltd.
  • Gurugram, India

 Job Description


The Pluggable Science team (Customer Engagement & Media) at dunnhumby works on the testing and deployment of various Product Science solutions created via R&D. The team works with specialist Product and Engineering teams to aid market-level product deployments, including any R&D needed to customize existing science for easier deployment.


The team’s work profile includes creating personalization algorithms and running A/B tests for some of the biggest retailers and CPG companies. These algorithms are used in real-time recommender systems embedded in client websites/apps through dunnhumby products.
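A/B tests of the kind described above come down to asking whether two conversion rates differ by more than chance allows. A minimal sketch, using entirely hypothetical click counts and SciPy's chi-squared test on a 2x2 contingency table:

```python
# Minimal sketch (hypothetical counts): evaluating an A/B test of a
# recommender variant with a chi-squared test on a 2x2 contingency table.
from scipy.stats import chi2_contingency

# Hypothetical results: clicks vs. non-clicks for control (A) and variant (B).
a_clicks, a_views = 120, 2400
b_clicks, b_views = 165, 2400
table = [
    [a_clicks, a_views - a_clicks],
    [b_clicks, b_views - b_clicks],
]
chi2, p_value, dof, expected = chi2_contingency(table)
print(f"p-value = {p_value:.4f}")  # a small p suggests the variant changed CTR
```

A small p-value indicates the variant's click-through rate differs from control; real deployments would additionally account for sample-size planning and multiple testing.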



Preferred Qualifications



  • Bachelor's degree or higher in a quantitative/technical field (e.g. Computer Science, Statistics, Engineering)

  • 3+ years of relevant experience in Machine Learning / Statistical Algorithms / Predictive Modelling – boosting techniques, Decision Trees, Random Forests, Logistic Regression, Neural Nets, SVM, clustering techniques (k-means, DBSCAN, Affinity Propagation, etc.), and Optimization Techniques – Non-Linear Programming, Genetic Algorithms, Gradient Boosting, etc.



  • Hands-on experience with scripting languages and frameworks like Python, Apache Spark, Scala, etc. Experience with data structures and big data handling is preferred

  • Sharp analytical abilities, proven design skills, excellent communication skills

  • Experience with software coding practices is a strong plus

  • Experience using Linux/UNIX to process large data sets

Traveloka
  • Palmerah, Indonesia

As a data scientist, you play a key role in solving complex problems and driving insights from our sea of data. Your role will strongly emphasize modeling, algorithm creation and optimization, and the making of data products, though you may also apply your skills to more advanced business analysis amenable to statistical and mathematical techniques. Primarily, you will focus on the algorithm and the analysis rather than the actual construction of the operational software.


We are looking for someone with:



  • A strong mix of mathematical/statistical and programming skills.

  • Passion in big data.

  • Solid analytical and problem-solving skills to create data-driven insights.

  • 3+ years of experience in Data Modelling/Predictive Modelling.

  • S2/S3 (Master's or Doctoral) degree from a top university in a quantitative field (Physics, Mathematics, Computer Science, Engineering, etc.)

  • Plus: knowledge of a programming language (e.g. R, Java, Haskell) and past participation in a science Olympiad or Kaggle competitions

Sodatone Music
  • Toronto, ON, Canada

Sodatone helps the music industry find the best undiscovered artists through data analytics and predictive intelligence. We’re a Toronto-based startup that was acquired by Warner Music Group in 2018, where we’re building a large-scale analytics engine to understand how the world interacts with music.


There are two core components to our technology.


The Sodatone Platform: The industry’s most comprehensive data analytics and visualization platform, used by the most influential people in music. They depend on it to find emerging talent that deserves their attention, and monitor critical insights and data on their own roster of artists.


Sodatone AI: Using deep learning, predictive modelling, and our ever-growing database of billions of data points, we’re developing algorithms that use social, streaming, and concert data to find the next Bruno Mars and Ed Sheeran.


Help us find and share the music that will be heard, loved, and remembered. At Sodatone, you’ll help shape culture globally.


The Role


At Sodatone, you’ll be a senior full stack engineer who builds backend, frontend, and data science systems for our global userbase. We're a small team, so that means wearing many hats, and learning a ton along the way.


Here’s a taste of what you’ll be doing:



  • Leading the design and implementation of new features for the Sodatone Platform, based on feedback you collect from users

  • Developing novel, large-scale data ingestion and processing pipelines to power both the Platform and AI

  • Building nimble and delightful UIs that make your advanced data processing algorithms seem effortless

  • Extracting insights from data through statistical analyses and predictive algorithms

  • Scaling Sodatone’s storage and compute infrastructure to support our ever-growing database

  • Being both a creative/product and technical contributor, taking on leadership roles as we grow


Tech Stack



  • Backend: Ruby on Rails

  • Frontend: React, Redux, HTML5, CSS3

  • Machine learning: Tensorflow, Keras, Python, GPUs

  • Multi-TB PostgreSQL database with data on millions of artists

  • Redis for caching and job queues

  • Cloud hosting on AWS; Ubuntu systems


Required Skills



  • 3+ years of practical experience and proficiency with a high-level programming language and SQL

  • Proven ability to prototype, ask for feedback, incorporate usage metrics, and iterate quickly

  • Experience interacting with 3rd party APIs and efficiently handling large datasets

  • Strong design, product and user experience intuition

  • A love of music!


Bonus Points For



  • Experience with AWS

  • Experience with an MVC application framework (we use Rails)

  • A body of work (not necessarily open source) that you’d be proud to show us during an interview, preferably one that’s reached real users

  • Ideally, you’ve built an exciting SaaS product and loved the satisfaction that comes with knowing that people around the world are using what you’ve created

Jefferson Frank
  • Headquarters: Bridgewater, NJ
URL: www.jeffersonfrank.com [http://www.jeffersonfrank.com]

AWS Big Data Scientist - New Jersey - $150K

One of my clients in the New Jersey area is looking to hire an experienced AWS Big Data Scientist. This candidate should have hands-on experience with coding, analytics, and predictive models, as well as with the Hadoop ecosystem. If you are interested, please review the requirements below and apply!

Requirements:
  • Experience in SQL and relational databases
  • Experience with the Python coding language
  • Hands-on experience with Big Data tools such as Hadoop, Hive, and Spark
  • Knowledge of AWS
  • Experience with predictive analytics and data science
  • Hands-on experience with predictive modeling and machine learning projects

Benefits:
  • Fully paid medical/dental/vision insurance
  • Remote work
  • 401K with company match
  • Competitive base salary with bonus

If you are interested in this position, please call me (Ericka Styles) at 212-731-8282. Act fast, as this opportunity will most likely be off the market in the near future. I understand the need for discretion and would welcome the opportunity to speak to any Big Data and cloud analytics candidates who are considering a new career or job, either now or in the future. Confidentiality is of the utmost importance. For more information on available BI jobs, as well as the Business Intelligence, Big Data, and cloud market, I can be contacted at 212-731-8282.

Please see www.jeffersonfrank.com [http://www.jeffersonfrank.com/] for more information.

Big Data / Data Science / Hadoop / Microsoft Business Intelligence / BI / SSRS / SSAS / SSIS / SQL / T-SQL / MDX / Azure / Cloud / AWS / Data Warehouse / ETL / Power BI / Architect / Scala / Python / Apache / Hive / Spark / Analytics / Predictive Modelling / New York / New Jersey

To apply: call Ericka Styles at 212-731-8282.
Esri
  • Headquarters: Redlands, CA
URL: https://www.esri.com [https://www.esri.com]

Do you want to bring geospatial data science and machine learning into the hands of data scientists worldwide? Are you passionate about building APIs? If yes, join us, as we are doing the same! We are looking for someone with hands-on experience in statistical analysis, machine learning, predictive analytics, and software engineering to apply a wide variety of analytical and predictive modelling techniques using popular machine learning libraries in combination with the ArcGIS API for Python.

The team is composed of driven and passionate data scientists/programmers integrating machine learning capabilities into the ArcGIS API for Python, which is quickly becoming the Python library of choice for spatial analysis, mapping, and Geo-AI! You will be responsible not only for designing and developing an API in Python, but also for developing on top of successful open source projects such as pandas and Jupyter Notebooks, using cutting-edge ML and DL libraries such as scikit-learn, TensorFlow, and PyTorch.

RESPONSIBILITIES
  • Participate in the design, development, and successful adoption of the ArcGIS API for Python among analysts and data scientists
  • Develop Jupyter Notebook-based samples, SDK guides, and demos for integrating ArcGIS with data science libraries and workflows
  • Integrate the ArcGIS API for Python with popular machine learning modules such as scikit-learn, TensorFlow, and PyTorch
  • Perform bug fixes, documentation, and maintenance tasks
  • Design, test, release, and support the ArcGIS API for Python to enhance overall product quality and applicability for data science workflows and needs
  • Evangelize the data science community to our software community through venues such as user documentation, educational materials, social media, and online content

REQUIREMENTS
  • 1-3 years of experience with high-level programming languages such as Python or Java
  • Experience using Python libraries such as pandas and NumPy, and machine learning libraries such as scikit-learn, TensorFlow, and PyTorch
  • Understanding of machine learning and deep learning techniques and algorithms such as k-NN, Naive Bayes, SVM, Decision Forests, CNNs, RNNs, LSTMs
  • Understanding of REST APIs and web programming
  • A strong drive and interest to learn new technologies quickly and work in a fast-paced software development environment
  • Bachelor's or master's degree in data science, information technology, computer science, GIS, or a related discipline, depending on position level

RECOMMENDED QUALIFICATIONS
  • Experience with MATLAB, R, and visualization libraries such as ggplot
  • Familiarity with the ArcGIS suite of products and concepts of GIS
  • Understanding of multivariable calculus and linear algebra

To apply: https://www.esri.com/en-us/about/careers/job-detail?req=7912&title=Python%20API%20Developer%20-%20Applied%20Data%20Science&mode=job&iis=Ad+/+Website&iisn=futurejobs.io