OnlyDataJobs.com

Glocomms
  • Dallas, TX

Data Scientist
Dallas, TX
Investment Bank
Compensation: $160,000-$190,000

(Unlimited PTO, Remote Work Options, Daily Catered Lunches)

Do you enjoy solving challenging puzzles? Protecting critical networks from cyber-attacks? Our Engineers are innovators and problem-solvers, building solutions in risk management, big data, mobile and more. We look for creative collaborators who evolve, adapt to change and thrive in a fast-paced global environment. We are leading threat, risk analysis and data science initiatives that help protect the firm and our clients from information and cyber security risks. Our team equips the firm with the knowledge and tools to measure risk, identify and mitigate threats, and protect against unauthorized disclosure of confidential information for our clients, internal business functions, and our extended supply chain.

You will be responsible for:
Designing and integrating state-of-the-art technical solutions as a Security Analyst in our Threat Management Center
Applying statistical methodology, machine learning, and Big Data analytics to network modelling, anomaly detection, forensics, and risk management (a minimal anomaly-detection sketch follows this list)
Creating innovative methodologies for extracting key parameters from big data coming from various sensors.
Utilizing machine learning, statistical data analytics, and predictive analytics to implement analytics tied to cyber security and threat-hunting methodologies and applications
Designing, developing, testing, and delivering complex analytics in a range of programming environments on large data sets
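
For illustration, a minimal sketch of the kind of anomaly-detection step referenced above, using scikit-learn's IsolationForest on synthetic network-flow features (the feature set, values, and contamination rate are assumptions for the example, not part of the role):

    # Flag anomalous network-flow records with an Isolation Forest.
    # Features and thresholds below are illustrative only.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(42)

    # Synthetic flow features: bytes transferred, duration (s), distinct ports touched
    normal = rng.normal(loc=[5_000, 2.0, 3], scale=[1_500, 0.5, 1], size=(1_000, 3))
    suspect = rng.normal(loc=[80_000, 30.0, 40], scale=[10_000, 5.0, 5], size=(10, 3))
    flows = np.vstack([normal, suspect])

    model = IsolationForest(contamination=0.01, random_state=0).fit(flows)
    labels = model.predict(flows)                 # -1 marks an anomalous flow
    flagged = np.flatnonzero(labels == -1)
    print(f"{flagged.size} of {len(flows)} flows flagged for analyst review")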

Minimum Qualifications:
6+ years industry experience focused on Data Science
Hands-on experience in Cyber Security Analytics
Proficient coding capability in at least one major programming language such as Python, R, or Java
Proficient in data analysis, model building and data mining
Solid foundation in natural language processing and machine learning
Significant experience with deep learning

Preferred Qualifications
Strong interpersonal and communication skills
Ability to communicate complex quantitative analysis in a clear, precise, and actionable manner
Experience with knowledge management best practices
A penchant for learning about new technologies and creativity in applying them to business problems

Novartis Institutes for BioMedical Research
  • Cambridge, MA

20 years of untapped data are waiting for a new Principal Scientific Computing Engineer/Scientific Programmer, Imaging and Machine Learning, to unlock the next breakthrough in innovative medicines for patients in need. You will be at the forefront of life sciences, tackling incredible challenges in curing diseases and improving patients’ lives.

Your responsibilities include, but are not limited to: 
Collaborating with scientists to create, optimize and accelerate workflows through the application of high-performance computing techniques. You will integrate algorithm and application development with state-of-the-art technologies to create scalable platforms that accelerate scientific research in a reproducible and standardized manner.

Key responsibilities:
• Collaborate with scientists and Research IT peers to provide consulting services around parallel algorithm development and workflow optimization for the High Performance Computing (HPC) platform.
• Teach and train the NIBR Scientific and Informatics community in areas of expertise.
• Research, develop and integrate new technologies and computational approaches to enhance and accelerate scientific research.
• Establish and maintain the technical partnership with one or more scientific functions in NIBR.




Minimum Requirements


What you will bring to this role:
• BSc in computer science or a related field, or equivalent experience
• 5+ years of relevant experience, including strong competencies in data structures, algorithms, and software design
• Experience with High Performance Computing and Cloud Computing
• Demonstrated proficiency in Python, C, C++, CUDA or OpenCL
• Demonstrated proficiency in Signal Processing, Advanced Imaging and Microscopy techniques.
• Solid project management skills and process analysis skills
• Demonstration of strong collaboration skills, effective communication skills, and creative problem-solving abilities

Preferred Qualifications
• MSc degree
• Demonstrated proficiency in 2 or more advanced machine learning frameworks and their application to natural language processing, action recognition, and micro- and macro-tracking.
• Interest in drug discovery and knowledge of the life sciences is a strong plus
• Knowledge of Deep visualization, Deep transfer learning and Generative Adversarial Networks is a plus.
• Demonstrated proficiency in MPI in a large-scale Linux HPC environment
• Experience with CellProfiler, Matlab, ImageJ and R is a plus.

Position will be filled commensurate with experience

JHU Applied Physics Laboratory
  • Laurel, MD
  • Salary: $100k - 140k

The Johns Hopkins Applied Physics Laboratory (APL) is a national leader in scientific research and development. APL is actively seeking a Senior Data Scientist for the Health Data Sciences & Analytics Group. The Senior Data Scientist will support the National Health Mission Area, whose aim is to revolutionize health through science and engineering. JHU/APL is located midway between Baltimore and Washington, DC.


The Health Data Science and Analytics Group provides cutting-edge analytics contributions to national and global public health and healthcare challenges, developing solutions in Health Informatics, Population Health, Precision Medicine, Digital Health, Analytics and Software Systems. Our multidisciplinary team of engineers and scientists develops statistical and machine learning algorithms and incorporates visual analytics into software systems that process large and complex data sets. We are looking for data scientists, computer scientists, applied mathematicians, statisticians and software developers who are creative problem solvers and team players dedicated to building world-class expertise to provide solutions for health and healthcare systems around the globe.


Job Summary:
Design and develop novel computational algorithms and statistical methods and design corresponding data architectures to analyze large and complex data for a variety of challenging health and healthcare problems.
Duties:
1. Develop advanced algorithms and create software applications to perform analytics on large-scale and complex data for real-world health and healthcare applications. Promote a climate conducive to intellectual curiosity, creativity, innovation, collaboration, growth, life-long learning, productivity, and respect for others.
2. Be a leader in data science and analytics efforts. Provide input to team leads and other analysts to help define the team’s vision, design and execute analytic projects from start to finish, inform technical direction, and support reporting of accomplishments. Ensure milestones are met on time and be responsive to sponsor needs. Build collaboration among health stakeholders, working across organizations to bring consensus and achieve objectives. Become a sought-after resource by consistently producing high-quality results.
3. Document and present papers to communicate impact of research and engage with sponsor and stakeholder community.
4. Communicate often and effectively with team, sponsors and JHU/APL leadership. Participate in the data science, analytics and APL community. Take advantage of collaboration and innovation opportunities to help ensure success of APL’s mission.


Note: This job summary and listing of duties is for the purpose of describing the position and its essential functions at time of hire and may change over time.


Qualifications - External
Required Qualifications:
• M.S. in Computer Science, Information Science, Mathematics, Statistics, Data Science, or related field.
• 5-10+ years of experience.
• Demonstrated ability in selecting, developing, and applying machine learning and data mining algorithms.
• Working knowledge of modern large-scale data systems and architectures; ability to manage and manipulate large disparate data sets.
• Experience with graph analytics.
• Experience with pattern recognition, statistical analysis and machine learning; fluency and hands-on experience with some of the following implementation languages: Python, R, Matlab, Java, or C/C++.
• Excellent interpersonal skills and outstanding written and oral communication skills; ability to articulate complex technical issues effectively and appropriately for a wide range of audiences.
• Strong problem-solving, analytical, and organizational skills; ability to work independently or within a group.
• Must be eligible for Secret clearance requiring background investigation.


Desired Qualifications:
• Ph.D. in the disciplines listed above.
• Demonstrated capability to carry out original machine learning research beyond incremental application of existing techniques, as evidenced by publications in premier conferences.
• Research records that illustrate in-depth understanding of underlying theory necessary to develop novel algorithms to address unique real-world challenges.
• Extensive experience in developing and applying machine learning algorithms in health and healthcare application settings.
• Research experience with advanced machine learning research topics.
• Experience with data–driven predictive model development, unstructured text mining, natural language processing, and anomaly and novelty detection.
• A strong technical writing background.
• Experience in medicine, emergency response, or public health applications and/or exposure to clinical information systems and medical data standards.


Special Working Conditions: Some travel to local sponsor sites and support for field testing may be required.



Security: Applicant selected will be subject to a government security clearance investigation and must meet the requirements for access to classified information. Eligibility requirements include U.S. citizenship.


Equal Employment Opportunity: Johns Hopkins University/Applied Physics Laboratory (APL) is an Equal Opportunity/Affirmative Action employer that complies with Title IX of the Education Amendments Act of 1972, as well as other applicable laws. All qualified applicants will receive consideration for employment without regard to race, color, religion, sexual orientation, gender identity, national origin, disability, or protected Veteran status.

LPL Financial
  • San Diego, CA

As a Development Manager for Digital Experience Technology, you will be responsible for the strategy, analysis, design and implementation of modern, scalable, cloud-native digital and artificial intelligence solutions. This role will primarily lead the Intelligent Content Delivery program, with a focus on improving the content search engine and implementing a personalization engine and AI-driven content ranking engine to deliver relevant, personalized, intelligent content across Web, Mobile and AI Chat channels. This role requires a successful and experienced digital, data and artificial intelligence practitioner, able to lead discussions with business and technology partners to identify business problems and deliver breakthrough solutions.

The successful candidate will have excellent verbal and written communication skills along with a demonstrated ability to mentor and manage a digital team. You must possess a unique blend of business and technical savvy, a big-picture vision, and the drive to make that vision a reality.


Key Responsibilities:
 

·       Define innovative Digital Content, AI and Automation offerings solving business problems with inputs from customers, business and product teams

·       Partner with business, product and technology teams to define problems, develop business case, build prototypes and create new offerings.

·       Evangelize and promote the adoption of Cloud-based micro-services architecture and Digital Technology capabilities across the organization

·       Lead and own technical design and development aspects of knowledge graphs, search engines, personalization engine and intelligent content engine platforms by applying digital technologies, machine learning and deep learning frameworks

·       Research, design, and prototype robust and scalable models based on machine learning, data mining, and statistical modeling to answer key business problems

·       Manage onsite and offshore development teams implementing products and platforms in Agile

·       Collaborate with business, product, enterprise architecture and cross-functional teams to ensure strategic and tactical goals of project efforts are met

·       Work collaboratively with QA and DevOps teams to adopt a CI/CD tool chain and develop automation

·       Work with technical teams to ensure overall support and stability of platforms and assist with troubleshooting when production incidents arise

·       Be a mentor and leader on the team and within the organization



Basic Qualifications:

·       Overall 10+ years of experience, with 6+ years of development experience in implementing Digital and Artificial Intelligence (NPU-NLP-ML) platforms.

·       Solid working experience of Python, R and knowledge graphs

·       Expertise in at least one AI-related framework (NLTK, spaCy, scikit-learn, TensorFlow)

·       Experience with cloud platforms and products including Amazon AWS, Lex bots, Lambda, Microsoft Azure or similar cloud technologies

·       Solid working experience implementing the Solr search engine, SQL, Elasticsearch, and Neo4j

·       Experience in data analysis, modelling, and reporting using Power BI or similar tools

·       Experience with Enterprise Content Management systems like Adobe AEM / Any Enterprise CMS

·       Experience in implementing knowledge graphs using Schema.org, Facebook Open Graph, and Google AMP pages is an added advantage

·       Excellent collaboration and negotiation skills

·       Results-driven with a positive, can-do attitude

·       Experience in implementing Intelligent Automation tools like WorkFusion, UiPath or Automation Anywhere is an added advantage

Qualifications:

·       MS or PhD degree in Computer Science, Statistics, Mathematics, Data Science, or any related field

·       Previous industry experience or research experience in solving business problems applying machine learning and deep learning algorithms

·       Must be hands on technologist with prior experience in similar role

·       Good experience in practicing and executing projects in Agile Scrum or SAFe iterative methodologies

Brighter Brain
  • Atlanta, GA

Brighter Brain is seeking a skilled professional to serve as an internal resource for our consulting firm in the field of Data Science Development. Brighter Brain provides Fortune 500 clients throughout the United States with IT consultants in a wide-ranging technical sphere.

In order to fully maintain our incoming nationwide and international hires, we will be hiring a Senior Data Science SME (ML) with practical experience to relocate to Atlanta and coach/mentor our incoming classes of consultants. If you have a strong passion for the Data Science platform and are looking to join a seasoned team of IT professionals, this could be an advantageous next step.

Brighter Brain is an IT Management & Consulting firm providing a unique take on IT Consulting. We currently offer expertise to US clients in the fields of Mobile Development (iOS and Android), Hadoop, Microsoft SharePoint, and Exchange/Office 365. We are currently seeking a highly skilled professional to serve as an internal resource for our company in the field of Data Science with expertise in Machine Learning (ML).

The ideal candidate will be responsible for establishing our Data Science practice. The responsibilities include creation of a comprehensive training program and training, mentoring, and supporting ideal candidates as they progress towards building their careers in Data Science Consulting. This position is based out of our head office in Atlanta, GA.

The Senior Data Science SME will take on the following responsibilities:

-       Design, develop and maintain Data Science training material focused on ML; knowledge of DL, NN & NLP is a plus.

-       Interview potential candidates to ensure that they will be successful in the Data Science domain and training.

-       Train, guide and mentor junior to mid-level Data Science developers.

-       Prepare mock interviews to enhance the learning process provided by the company.

-       Prepare and support consultants for interviews for specific assignments involving development and implementation of Data Science.

-       Act as a primary resource for individuals working on a variety of projects throughout the US.

-       Interact with our Executive and Sales team to ensure that projects and employees are appropriately matched.

The ideal candidate will not only possess solid knowledge of the realm, but must also have fluency in the following areas:

-       Hands-on expertise in using Data Science and building machine learning models and Deep learning models

-       Statistics and data modeling experience

-       Strong understanding of data sciences

-       Understanding of Big Data

-       Understanding of AWS and/or Azure

-       Understand the differences between TensorFlow, MXNet, etc.

Skills Include:

  • Master's Degree in Computer Science or a mathematics field
  • 10+ years of professional experience in the IT industry, in the AI realm
  • Strong understanding of MongoDB, Scala, Node.js, AWS, & Cognitive applications
  • Excellent knowledge of Python, Scala, JavaScript and its libraries, Node.js, R, MatLab, C/C++, Lua, or any proficient AI language of choice
  • NoSQL databases, bot frameworks, data streaming and integrating unstructured data; rules engines (e.g. Drools) and ESBs (e.g. MuleSoft)
  • Computer Vision, Recommendation Systems, Pattern Recognition, Large Scale Data Mining or Artificial Intelligence, Neural Networks
  • Deep Learning frameworks like TensorFlow, Torch, Caffe, Theano, CNTK, scikit-learn, numpy, scipy
  • Working knowledge of ML techniques such as: Naïve Bayes Classification, Ordinary Least Squares Regression, Logistic Regression, Support Vector Machines, Ensemble Methods, Clustering Algorithms, Principal Component Analysis, Singular Value Decomposition, and Independent Component Analysis
  • Natural Language Processing (NLP) concepts such as topic modeling, intents, entities, and NLP frameworks such as SpaCy, NLTK, MeTA, gensim or other toolkits for Natural Language Understanding (NLU) (a minimal topic-modeling sketch follows this list)
  • Experience with data profiling, data cleansing, data wrangling/munging, and ETL
  • Familiarity with Spark MLlib, Mahout, and the Google, Bing, and IBM Watson APIs
  • Hands-on experience as needed with training a variety of consultants
  • Analytical and problem-solving skills
  • Knowledge of the IoT space
  • Understanding of Academic Data Science vs. Corporate Data Science
  • Knowledge of the Consulting/Sales structure
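
As a rough illustration of the topic-modeling item above, a minimal gensim sketch (the toy documents and parameters are assumptions for the example):

    # Fit a tiny LDA topic model over toy documents with gensim.
    from gensim import corpora, models

    docs = [
        "machine learning models for fraud detection",
        "deep learning and neural networks for vision",
        "fraud detection with anomaly detection models",
        "computer vision with convolutional neural networks",
    ]
    texts = [d.lower().split() for d in docs]

    dictionary = corpora.Dictionary(texts)              # token -> id mapping
    corpus = [dictionary.doc2bow(t) for t in texts]     # bag-of-words vectors

    lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)
    for topic_id, words in lda.print_topics(num_words=4):
        print(topic_id, words)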

Additional details about the position:

-       Able to relocate to Atlanta, GA (relocation package available)

-       Work schedule of 9 AM to 6 PM EST

Questions: Send your resume to Ansel Butler at Brighter Brain; make sure that there is a valid phone number and Skype ID either on the resume, or in the body of the email.

Ansel Essic Butler

EMAIL: ANSEL.BUTLER@BRIGHTERBRAIN.COM

404 791 5128

SKYPE: ANSEL.BUTLER@OUTLOOK.COM

Senior Corporate Recruiter

Brighter Brain LLC.

1785 The Exchange, Suite 200

Atlanta, GA. 30339

eBay
  • Kleinmachnow, Germany

About the team:



Core Product Technology (CPT) is a global team responsible for the end-to-end eBay product experience and technology platform. In addition, we are working on the strategy and execution of our payments initiative, transforming payments management on our Marketplace platform which will significantly improve the overall customer experience.


The opportunity

At eBay, we have started a new chapter in our iconic internet history of being the largest online marketplace in the world. With more than 1 billion listings (more than 80% of them selling new items) in over 400 markets, eBay is providing a robust platform where merchants of all sizes can compete and win. Every single day millions of users come to eBay to search for items in our diverse inventory of over a billion items.



eBay is starting a new Applied Research team in Germany and we are looking for a senior technologist to join the team. We’re searching for a hands-on person who has an applied research background with strong knowledge in machine learning and natural language processing (NLP). The German team’s mission is to improve the German and other European language search experience as well as to enhance our global search platform and machine learned ranking systems in partnership with our existing teams in San Jose California and Shanghai China.



This team will help customers find what they’re shopping for by developing full-stack solutions from indexing, to query serving and applied research to solve core ranking, query understanding and recall problems in our highly dynamic marketplace. The global search team works closely with the product management and quality engineering teams along with the Search Web and Native Front End and Search services, and Search Mobile. We build our systems using C++, Scala, Java, Hadoop/Spark/HBase, TensorFlow/Caffe, Kafka and other standard technologies. The team believes in agile development with autonomous and empowered teams.






Diversity and inclusion at eBay goes well beyond a moral necessity – it’s the foundation of our business model and absolutely critical to our ability to thrive in an increasingly competitive global landscape. To learn about eBay’s Diversity & Inclusion click here: https://www.ebayinc.com/our-company/diversity-inclusion/.
Avaloq Evolution AG
  • Zürich, Switzerland

The position


Are you passionate about data? Are you interested in shaping the next generation of data science driven products for the financial industry? Do you enjoy working in an agile environment involving multiple stakeholders?

This is a challenging role as a Senior Data Scientist in a demanding, dynamic and international software company using the latest innovations in predictive analytics and visualization techniques. You will drive the creation of statistical and machine learning models from prototyping to final deployment.

We want you to help us to strengthen and further develop the transformation of Avaloq to a data driven product company. Make analytics scalable and accelerate the process of data science innovation.





Your profile


  • PhD or Master degree in Computer Science, Math, Physics, Engineering, Statistics or other technical field

  • 5+ years of experience in statistical modelling, anomaly detection, and machine learning algorithms, both supervised and unsupervised

  • Proven experience in applying data science methods to business problems

  • Ability to explain complex analytical concepts to people from other fields

  • Proficiency in at least one of the following: Python, R, Java/Scala, SQL and/or SAS

  • Knowledgeable with BigData technologies and architectures (e.g. Hadoop, Spark, stream processing)

  • Expertise in text mining and natural language processing is a strong plus

  • Familiarity with network analysis and/or graph databases is a plus

  • High integrity, responsibility and confidentiality a requirement for dealing with sensitive data

  • Strong presentation and communication skills

  • Experience in leading teams and mentoring others

  • Good planning and organisational skills

  • Collaborative mindset to sharing ideas and finding solutions

  • Experience in the financial industry is a strong plus

  • Fluent in English; German, Italian and French a plus



Professional requirements




  • Use machine learning tools and statistical techniques to produce solutions for customer demands and complex problems

  • Participate in pre-sales and pre-project analysis to develop prototypes and proof-of-concepts

  • Analyse customer behaviour and needs enabling customer-centric product development

  • Liaise and coordinate with internal infrastructure and architecture team regarding setting up and running a BigData & Analytics platform

  • Strengthen data science within Avaloq and establish a data science centre of expertise

  • Look for opportunities to use insights/datasets/code/models across other functions in Avaloq



Main place of work
Zurich

Contact
Avaloq Evolution AG
Alina Tauscher, Talent Acquisition Professional
Allmendstrasse 140 - 8027 Zürich - Switzerland

careers@avaloq.com
www.avaloq.com/en/open-positions

Please only apply online.

Note to Agencies: All unsolicited résumés will be considered direct applicants and no referral fee will be acknowledged.
Avaloq Evolution AG
  • Zürich, Switzerland

The position


Are you passionate about digital technology and how it can be used to create a seamless digital customer experience? Do you enjoy engaging with clients and stakeholders to shape the next generation of banking applications? Do you thrive when challenged to design solutions using cutting-edge artificial intelligence (AI)/machine learning (ML), data and digital technologies?

As Digital and AI Consultant, you will play a key role in driving Avaloq’s digital and AI agenda to write the future of banking by identifying, designing and advocating state-of-the-art financial solutions which include components across the AI/ML, data and digital technology spectrum. You will use your extensive expertise in AI/ML, data and digital technologies to contribute to a seamless digital customer experience across Avaloq’s products and services, closely collaborating with clients and stakeholders across Avaloq on senior management and operational level. A balanced mix of strategic and analytical capabilities as well as a hands-on attitude are key for the success of the role.


Your responsibilities


  • Proactively identify opportunities and use cases for the financial industry, liaising with relevant stakeholders, and develop solution concepts using elements across the Data, Artificial Intelligence/ML and digital technology spectrum

  • Identify AI capabilities that should be delivered through strategic partners/vendors and liaise with them to assess opportunities for collaboration

  • Provide leadership and best practices around AI/ML and digital capabilities and methods to support the realisation of business initiatives

  • Liaise with relevant stakeholders to define and drive Avaloq's digital strategy and roadmap

  • Support the development of prototypes and proof-of-concepts contributing to a seamless digital customer experience of Avaloq's products and services

  • Teach and mentor colleagues in the use of AI, machine learning, data and new technologies




Skills and qualifications


  • Master’s degree in Computer Science, Maths, Artificial Intelligence or a related discipline

  • 5+ years of relevant work experience, with a focus on AI/ML, data and digital technologies

  • Experience in collaborating with senior leadership, clients, business and IT

  • Strong client-centric mindset and a high level of business acumen

  • Proven experience in applying AI/ML, data and digital technology methods to business problems

  • Good working knowledge of AI/ML, data and digital technologies, e.g. predictive analytics, recommender systems, deep learning, NLP, image recognition and blockchain

  • Good understanding of technological enablers (e.g. API technology) for creating a seamless digital customer experience

  • Strong oral, written and presentation skills

  • Excellent organizational and problem-solving skills

  • Experience with AI/ML platforms and frameworks, e.g. AWS, Watson, Tensorflow, Keras, Pytorch, Caffe, etc. is a plus

  • Practical experience with digital and big data technologies (e.g. smart contracts, cryptocurrencies, IoT, RPA, Hadoop) is a plus

  • Work experience in the financial industry or in financial/technology consulting is a plus

  • Fluent in German and English





Main place of work
Zurich

Contact
Avaloq Evolution AG
Anna Drozdowska, Talent Acquisition Professional
Allmendstrasse 140 - 8027 Zürich - Switzerland

www.avaloq.com/en/open-positions

Please only apply online.

Note to Agencies: All unsolicited résumés will be considered direct applicants and no referral fee will be acknowledged.

DISYS
  • Minneapolis, MN
Client: Banking/Financial Services
Location: 100% Remote
Duration: 12 month contract-to-hire
Position Title: NLU/NLP Predictive Modeling Consultant


***Client requirements will not allow OPT/CPT candidates for this position, or any other visa type requiring sponsorship. 

This is a new team within the organization set up specifically to perform analyses and gain insights into the "voice of the customer" through the following activities:
Review inbound customer emails, phone calls, survey results, etc.
Review data that is unstructured "natural language" text and speech data
Maintain focus on customer complaint identification and routing
Build machine learning models to scan customer communication (emails, voice, etc)
Identify complaints from non-complaints.
Classify complaints into categories
Identify escalated/high-risk complaints, e.g. claims of bias, discrimination, bait/switch, lying, etc...
Ensure routed to appropriate EO for special

Responsible for:
Focused on inbound retail (home mortgage/equity) emails
Email cleansing: removal of extraneous information (disclaimers, signatures, headers, PII)
Modeling: training models using state-of-the-art techniques (a minimal cleanse-and-classify sketch follows this list)
Scoring: "productionalizing" models to be consumed by the business
Governance: model documentation and Q/A with model risk group.
Implementation of model monitoring processes
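
As an illustration of the cleanse-then-classify flow described above, a minimal sketch using a regex cleanup step and a scikit-learn TF-IDF + logistic-regression pipeline (the boilerplate patterns, sample emails, and labels are assumptions for the example, not the client's actual models):

    # Strip obvious boilerplate from emails, then fit a complaint classifier.
    import re
    from sklearn.pipeline import Pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    BOILERPLATE = re.compile(r"(?is)(--+\s*\n.*$|this e-?mail.*confidential.*$)")

    def cleanse(email: str) -> str:
        """Drop signature blocks and confidentiality disclaimers."""
        return BOILERPLATE.sub("", email).strip()

    emails = [
        "My escrow was miscalculated and nobody returns my calls.\n--\nJ. Smith",
        "Please send the payoff statement for my home equity line.",
        "You charged me a fee you promised to waive. This is unacceptable.",
        "Thanks for the quick turnaround on my refinance documents.",
    ]
    is_complaint = [1, 0, 1, 0]

    clf = Pipeline([
        ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),
        ("model", LogisticRegression(max_iter=1000)),
    ]).fit([cleanse(e) for e in emails], is_complaint)

    print(clf.predict([cleanse("I was lied to about my mortgage rate.")]))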

Desired Qualifications:
Real-world experience building/deploying predictive models, any industry (must)
SQL background (must)
Self-starter, able to excel in fast-paced environment w/o a ton of direction (must)
Good communication skills (must)
Experience in text/speech analytics (preferred)
Python, SAS background (preferred)
Linux (nice to have)
Spark (Scala or PySpark) (nice to have)

Mix.com
  • Phoenix, AZ

Are you interested in scalability & distributed systems? Do you want to help shape a discovery engine powered by cutting-edge technologies and machine learning at scale? If you answered yes to the above questions, Mix's Research and Development is the team for you!


In this role, you'll be part of a small and innovative team comprised of engineers and data scientists working together to understand content by leveraging machine learning and NLP technologies. You will have the opportunity to work on core problems like detection of low quality content or spam, text semantic analysis, video and image processing, content quality assessment and monitoring. Our code operates at massive scale, ingesting, processing and indexing millions of URLs.



Responsibilities

  • Write code to build infrastructure capable of scaling based on load
  • Collaborate with researchers and data scientists to integrate innovative Machine Learning and NLP techniques with our serving, cloud and data infrastructure
  • Automate build and deployment process, and setup monitoring and alerting systems
  • Participate in the engineering life-cycle, including writing documentation and conducting code reviews


Required Qualifications

  • Strong knowledge of algorithms, data structures, object oriented programming and distributed systems
  • Fluency in an OO programming language, such as Scala (preferred), Java, C, or C++
  • 3+ years of demonstrated expertise in stream processing platforms like Apache Flink, Apache Storm and Apache Kafka (a minimal consumer sketch follows this list)
  • 2+ years experience with a cloud platform like Amazon Web Services (AWS) or Microsoft Azure
  • 2+ years of experience with monitoring frameworks and with analyzing production platforms, UNIX servers and mission-critical systems using alerting and self-healing systems
  • Creative thinker and self-starter
  • Strong communication skills
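
As a rough illustration of the stream-processing item above, a minimal Python sketch using kafka-python as a stand-in for the Scala/Flink/Storm stack (topic names, the broker address, and the quality heuristic are assumptions for the example):

    # Consume raw URL events, apply a toy quality check, republish survivors.
    import json
    from kafka import KafkaConsumer, KafkaProducer

    consumer = KafkaConsumer(
        "raw-urls",
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
        auto_offset_reset="earliest",
    )
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    def looks_low_quality(doc: dict) -> bool:
        """Toy spam/low-quality heuristic on the extracted page text."""
        text = doc.get("text", "")
        return len(text) < 200 or text.lower().count("buy now") > 3

    for message in consumer:              # blocks, handling events as they arrive
        doc = message.value
        if not looks_low_quality(doc):
            producer.send("indexable-urls", doc)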


Desired Qualifications

  • Experience with Hadoop, Hive, Spark or other MapReduce solutions
  • Knowledge of statistics or machine learning
Tokio Marine HCC
  • Houston, TX

Tokio Marine HCC is a leading specialty insurance group with offices in the United States, the United Kingdom and Continental Europe, transacting business in approximately 180 countries and underwriting more than 100 classes of specialty insurance. Tokio Marine HCC products and capabilities set the standard for the industry, and many of the Company's almost 3,000 employees are industry-leading experts.

Are you currently seeking a challenging Data Scientist opportunity to help develop text analytics capabilities? At TMHCC, we have an exciting opportunity for a Data Scientist in our corporate office in Houston, TX. In this role, you will be a key member of the actuarial team to provide analytical support for the various underwriting units including, but not limited to, pricing and claims support, budget support, and providing key statistics on results to underwriting units. The work will be technical and analytical. The individual will be doing predictive and data analytics.

Performance Objectives:

    • Apply your expertise to prepare internal and external, structured and unstructured data
    • Collaborate with the actuarial team to develop text analytics capabilities
    • Design, build and validate models using simple and advanced modeling techniques to help business leaders quantify risks and make better decisions
    • Support cross-functional teams to implement models
    • Effectively collaborate with business stakeholders across the organization to understand business processes and problems to develop effective analytical solutions
    • Develop monitoring solutions for business stakeholders after model implementations to monitor accuracy of predictions, proper usage, and business impact

Expectations:

    • Within the first 30 days, become familiar with TMHCC's policies and procedures
    • Within the first 60 days, take ownership to develop text analytics capabilities

Qualifications:

    • A successful candidate ideally will have a minimum of two years of relevant and progressive professional experience in data science
    • Experience with Natural Language Processing (Text Analytics)
    • Experience with Python or R or similar data analysis programming language
    • Ability to handle sensitive and/or confidential material strictly in accordance with company policy and legal requirements
    • Flexibility to work outside of normal business hours and a willingness to learn
    • Sound analytical skills as well as problem-solving aptitude
    • Must be an exceptional communicator and able to make connections across the organization
    • Educational requirements: the ideal candidate will have a Bachelor's degree in Data Science, Computer Science, Actuarial Science, Mathematics, Statistics or a related field, or the equivalent education and/or experience

The Tokio Marine HCC Group of Companies offers a competitive salary and employee benefit package. We are a successful, dynamic organization experiencing rapid growth and are seeking energetic and confident individuals to join our team of professionals. The Tokio Marine HCC Group of Companies is an equal-opportunity employer. Please visit www.tokiomarinehcc.com for more information about our companies.

UST Global
  • Dallas, TX

Data Scientist Overview: 


Have an interest in and hands-on experience with the field of Data Science, including batch and streaming analytics, machine learning models, natural language processing and natural language generation, as well as other emerging technologies in the field of Advanced Analytics.


·         5+ years of experience in the Hadoop ecosystem, Hive, Spark, Kafka, and Java, R or Python, with exposure to machine learning techniques.


·         Knowledge of SAS, Teradata, Sqoop, NiFi is a plus


·         Experience in data wrangling, advanced analytic modeling, and AI/ML capabilities is preferred


·         BA/BS required; preferably in Computer Science, Data Analytics, Data Science or Operations Research


·         An analytical thinker, motivated and excited about the field of Data Science, who can be a decisive thought leader with solid critical thinking, able to quickly connect technical and business dots


·         Has strong communication and organizational skills and has the ability to deal with ambiguity while juggling multiple priorities and projects at the same time


·         Able to understand statistical solutions and execute similar activities


This is a proactive posting as we are set to hire a number of Data Scientists in the months ahead. 

UST Global
  • Atlanta, GA

Data Scientist Overview: 


Have an interest in and hands-on experience with the field of Data Science, including batch and streaming analytics, machine learning models, natural language processing and natural language generation, as well as other emerging technologies in the field of Advanced Analytics.


·         5+ years of experience in the Hadoop ecosystem, Hive, Spark, Kafka, and Java, R or Python, with exposure to machine learning techniques.


·         Knowledge of SAS, Teradata, Sqoop, NiFi is a plus


·         Experience in data wrangling, advanced analytic modeling, and AI/ML capabilities is preferred


·         BA/BS required; preferably in Computer Science, Data Analytics, Data Science or Operations Research


·         An analytical thinker, motivated and excited about the field of Data Science, who can be a decisive thought leader with solid critical thinking, able to quickly connect technical and business dots


·         Has strong communication and organizational skills and has the ability to deal with ambiguity while juggling multiple priorities and projects at the same time


·         Able to understand statistical solutions and execute similar activities


This is a proactive posting as we are set to hire a number of Data Scientists in the months ahead. 

SparkCognition
  • Austin, TX

Overview

SparkCognition is an AI leader that offers business-critical solutions for customers in energy, oil and gas, manufacturing, finance, aerospace, defense, and security. A highly awarded company recognized for cutting-edge technology, SparkCognition develops AI-powered, cyber-physical software for the safety, security, reliability, and optimization of IT, OT, and the Industrial IoT.


SparkCognition is looking for innovative Data Scientists to join our team to help create the next generation of analytics and artificial intelligence solutions. At SparkCognition, you will immerse yourself in cutting-edge research and work with the latest technologies to deliver value in the Industrial IoT and Defense spaces.


Responsibilities

  • Building models to solve specific problems
  • Processing, cleansing, and verifying the integrity of data used for analysis
  • Feature engineering using various techniques for the enhancement of data
  • Performing feature selection on original and generated data
  • Using machine learning tools to develop and train models
  • Performing efficacy testing of the models
  • Building automated tools that enable data scientists to more effectively perform tasks such as data cleaning, feature generation, feature selection, or model building
  • Performing ad-hoc analysis and presenting results in a clear manner
  • Working with a team to help solve new, never-before-solved challenges across multiple industries


Qualifications

  • Must be a US Citizen
  • Understanding and experience using machine learning techniques and algorithms, including but not limited to: Linear Models, Neural Networks, Decision Trees, Bayesian Techniques, Clustering, Anomaly Detection, and more
  • Experience with data science languages, such as Python, R, MatLab, etc.
  • Experience with machine learning frameworks, such as PyTorch, TensorFlow, Theano, Keras, etc
  • Great communication skills
  • Good applied statistics skills, such as distributions, statistical testing, etc.
  • Good scripting and programming skills, especially Python
  • Experience managing large volumes of data (terabytes or more)
  • Graduate degree (or equivalent industry experience), in Computer Science, Statistics, Physics, Mathematics, Neuroscience, Linguistics, Electrical Engineering, Economics, or a related scientific discipline
  • Experience with distributed computing, such as Hadoop, Spark, or an MPP environment a plus
  • Experience developing applications on a full stack of HTTP, JSON, REST, React, Java/C#, SQL and NoSQL databases a plus
  • Experience with NLP, Big Data Analytics, and Graphing techniques a plus
Cortical.io
  • Vienna, Austria
  • Salary: €55k - 65k

Cortical.io is a young entrepreneurial company focusing on natural language understanding (NLU). We use our patented technology in combination with sophisticated AI approaches to address problems others have failed to solve. Over the last couple of years, we have built an impressive client portfolio of global Fortune 100 companies. At this point, we are looking to expand our team at our headquarters in Vienna.


The ideal candidates for this position have some experience in Java product development, machine learning, and/or natural language processing (NLP). If you are keen on learning new technologies and want to contribute to software solutions that solve challenging NLP problems, then you should send us your application!


Basic requirements



  • Good working proficiency in written and spoken English

  • European Union citizenship or authorization to work in Austria


What you’ll be working on



  • Applying state-of-the-art NLP and machine-learning techniques to develop semantic solutions for large international businesses

  • Developing our core NLU technology, e.g. our semantic search engine or classification module


What you must have



  • A bachelor’s degree in computer science, AI, machine learning, information retrieval, or NLP

  • Two or more years’ experience as a Java software engineer

  • Basic knowledge of NLP

  • Professional experience in Java product development

  • Practical experience with


    • Integration tools (Maven, Jenkins, Git)

    • Software testing and code reviews

    • Common technologies such as Spring, REST, JSON, NoSQL, Docker, IntelliJ

    • UNIX-style environment


  • Good communication skills to interact with technical and business people at all levels of customers’ organizations


It would be great if you also had



  • A master’s degree in computer science, AI, machine learning, information retrieval, or NLP

  • Professional experience in NLP, machine learning, and/or information retrieval

  • Experience with other programming languages, e.g. Scala, Python, Shell scripting

  • Practical experience with NLP frameworks


What you’ll benefit from



  • A competitive salary

  • 25 vacation days a year, 13 public holidays, and the Austrian national health insurance and pension plan

  • A relaxed, diverse, and friendly work environment in a pleasant office with flexible working arrangements

  • The option of telecommuting occasionally

  • The satisfaction of engineering successful machine-learning solutions where competing technologies have come up short

  • Joining an expanding company that is already working with many big-name clients

BTI360
  • Ashburn, VA

Our customers are inundated with information from news articles, video feeds, social media and more. We’re looking to help them parse through it faster and focus on the information that’s most important, so they can make better decisions. We're in the process of building a next-generation analysis and exploitation platform for video, audio, documents, and social media data. This platform will help users identify, discover and triage information via a UI that leverages best-in-class speech-to-text, machine translation, image recognition, OCR, and entity extraction services.


We're looking for data engineers to develop the infrastructure and systems behind our platform. The ideal contributor should have experience building and maintaining data and ETL pipelines. They will be expected to work in a collaborative environment and to communicate well with their teammates and customers. This is a great opportunity to work with a high-performing team in a fun environment.


At BTI360, we’re passionate about building great software and developing our people. Software doesn't build itself; teams of people do. That's why our primary focus is on developing better engineers, better teammates, and better leaders. By putting people first, we give our teammates more opportunities to grow and raise the bar of the software we develop.


Interested in learning more? Apply today!


Required Skills/Experience:



  • U.S. Citizenship - Must be able to obtain a security clearance

  • Bachelor's Degree in Computer Science, Computer Engineering, Electrical Engineering or a related field

  • Experience with Java, Kotlin, or Scala

  • Experience with scripting languages (Python, Bash, etc.)

  • Experience with object-oriented software development

  • Experience working within a UNIX/Linux environment

  • Experience working with a message-driven architecture (JMS, Kafka, Kinesis, SNS/SQS, etc.)

  • Ability to determine the right tool or technology for the task at hand

  • Works well in a team environment

  • Strong communication skills


Desired Skills:



  • Experience with massively parallel processing systems like Spark or Hadoop

  • Familiarity with data pipeline orchestration tools (Apache Airflow, Apache NiFi, Apache Oozie, etc.)

  • Familiarity in the AWS ecosystem of services (EMR, EKS, RDS, Kinesis, EC2, Lambda, CloudWatch)

  • Experience working with recommendation engines (Spark MLlib, Apache Mahout, etc.)

  • Experience building custom machine learning models with TensorFlow

  • Experience with natural language processing tools and techniques

  • Experience with Kubernetes and/or Docker container environment

  • Ability to identify external data specifications for common data representations

  • Experience building monitoring and alerting mechanisms for data pipelines

  • Experience with search technologies (Solr, ElasticSearch, Lucene)


BTI360 is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status, or any other protected class. 

GeoPhy
  • New York, NY

We seek a Machine Learning Engineer to support development of a suite of valuation models and data products. This person would work alongside software engineers building the technology to manage our data, and the data scientists conducting statistical analysis and developing models. As a Machine Learning engineer, you would complement these efforts by demonstrating how ML can enhance approaches to data discovery, feature engineering, and predictive modelling.


What you'll be responsible for



  • Develop proof-of-concept ML models to solve problems for a variety of data and domains, which might include but are not limited to:


    • text extraction and classification from millions of pages of technical reports;

    • estimating building density from satellite images; or,

    • training ML models on semantically related public records.




  • Explain the effectiveness and assumptions of your approach and guide collaborative research and methodology development with appropriate technical rigor.

  • Build predictive models in production at scale.

  • Contribute to decisions about our technology stack, particularly as it relates to end-to-end ML model and data flow and automation.

  • Stay current with latest ML algorithms and methods and share knowledge with colleagues in data science and engineering and externally as appropriate.


What we're looking for



  • Critical professional skills include: a curiosity to discover new approaches to problem-solving, a drive to initiate ideas and collaborate with colleagues to see them through, and an ability to communicate technical material clearly.

  • Skills in Python or R (+Scala a bonus), including ML libraries (e.g. SKLearn, NumPy, SciPy), SQL, parallelization and tools for large scale computing (e.g. Spark, Hadoop), matrix algebra, and vectorization

  • Experience with at least one of the DL frameworks (e.g. PyTorch, Caffe, TensorFlow, Theano, Keras) and a perspective on what distinguishes them

  • Experience with supervised and unsupervised learning algorithms, including cluster analysis (e.g. k-means, density-based), regression and classification with shallow algorithms (e.g. decision trees, XGBoost, and various ensemble methods) and with DL algorithms (e.g. RNN w/ LSTM, CNN); a minimal sketch follows this list

  • Experience with advanced methods of ML model hyper-parameter tuning (e.g. Bayesian optimization)

  • Deep understanding of statistics and Bayesian inference, linear algebra (e.g. decomposition, image registration), vector calculus (e.g. gradients), time series analysis (e.g. Fourier Transform, ARIMA, Dynamic Time-Warping)

  • Proven track record of building production models (batch processing at minimum, online ML a bonus)

  • Experience in at least one of the following data domains: highly disparate public records, satellite images, text in various states of structure

  • Experience with remote computing and data management (e.g. AWS, GCP suite of tools)
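
As a rough illustration of the algorithm families listed above, a minimal scikit-learn sketch combining k-means cluster assignments with a gradient-boosted classifier (used here as a stand-in for XGBoost; the synthetic data and parameters are assumptions for the example):

    # Cluster the data, feed the cluster id in as an extra feature, then classify.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=500, n_features=8, random_state=0)

    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    X_aug = np.column_stack([X, clusters])          # cluster id as an extra feature

    X_train, X_test, y_train, y_test = train_test_split(X_aug, y, random_state=0)
    model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
    print(f"holdout accuracy: {model.score(X_test, y_test):.3f}")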


Bonus points for



  • Being a technical lead on a successful ML-based product

  • Doing applied research (at graduate school level or equivalent) with: 1) Geospatial analysis, 2) Image processing (e.g. denoising, image registration), 3) NLP, or 4) ML with semantic databases

  • Experience with streaming models or online ML

  • Authored technical publications or presented work in the field

  • Domain expertise in real estate or the built environment


What we give you



  • You will have the opportunity to accelerate our rapidly growing organisation.

  • We're a lean team, so your impact will be felt immediately.

  • Agile working environment with flexible working hours and location, career advancement, and competitive compensation package

  • GeoPhy is a family friendly company. We arrange social activities to help our employees and families become familiar with each other and our culture

  • Diverse, unique colleagues from every corner of the world

Cloudreach
  • Philadelphia, PA

Cloudreach is looking to add a key team member to tackle unique challenges as we grow our software products, which focus on optimizing computing performance, cost, and security, as well as enabling software modernization. As a key member of this team, your primary role will be to work directly with the R&D and engineering team as well as the product team. A successful candidate will have several years of engineering technical leadership experience with many of the specific tools listed below and enjoys the challenge of building state-of-the-art automated solutions for large-scale infrastructure management and monitoring.

The position requires deep knowledge of computing performance, including how to measure, model, and predict performance. The candidate must be able to lead the development of performance- and cost-optimization oriented products, which includes developing computational systems that can reliably measure, model, and predict performance. This role requires comprehensive experimentation to ensure that performance models are valid across all scenarios. Lastly, a successful candidate must be able to design intuitive yet comprehensive user interfaces that enable users to leverage deep knowledge of computing performance without having that knowledge themselves. While this is not a research position, it shares the research methodology of isolating specific questions and developing rigorous solutions to them.

That said, while a key differentiator is the ability to frame novel problems and develop solutions, there is also considerable work on more straightforward performance-oriented product development.

What will you do at Cloudreach?

    • Hands-on development using primarily but not limited to Java, Python, SQL, Matlab, Scala, Tensorflow, C/C++, Linux modules, eBPF, or whatever is needed.
    • Lead and mentor a growing group of analytics engineers
    • Explore and help the team explore new methods to gain insights into performance, cost, and applications
    • Work with the product team to explore ideas that maintain a market leadership
    • Work with the entire engineering team to provide overall direction as it relates to performance and application analytics

What do we look for?

    • Strong understanding of computing performance in the context of enterprise and cloud computing
    • Significant experience in developing and implementing computational algorithms
    • Experience with building reliable data acquisition and processing systems
    • Experience in several of the following
      • Comprehensive performance evaluation for computing systems such as GPU, databases, enterprise applications, operating systems, networking, virtualization, and HPC
      • Deep and highly robust system monitoring and time series analysis
      • Anomaly detection
      • Cyber-security
      • Machine learning techniques
      • Code analysis
        • static and dynamic analysis
        • code complexity analysis
      • Natural language processing
        • Corpus generation
        • Named entity detection
        • Ontology generation
        • Text classification

Since Cloudreach focuses on cloud computing, networking, operating systems, enterprise applications, code development, etc., extensive domain knowledge in these or related areas is critical. Successful applicants should be able to read research papers and implement proposed solutions; research experience is helpful, but not required. Consequently, successful applicants will likely have a Master's degree or a Ph.D.

Requirements:

    • Education: Ph.D. preferred, or Master's degree in Computer Science or a related area.
    • Experience: 8+ years experience in software and performance-oriented analytics development
    • Leadership: Experience leading multiple product development teams in planning, executing and delivering on enterprise software projects
    • Architecture: Strong software architecture background and experience with the ability to mentor team members
    • Experience with one or more of the IaaS providers is a must (AWS, Azure, GCE cloud etc.)
    • Track record of delivering analytics-based products from conception to deployment and customer enablement

What are our cloudy perks?

    • A MacBook Pro and smartphone.
    • Unique cloudy culture -- we work hard and play hard.
    • Uncapped holidays and your birthday off.
    • World-class training and career development opportunities through our own Cloudy University.
    • Centrally-located offices.
    • Fully stocked kitchen and team cloudy lunches, taking over a restaurant or two every Friday.
    • Office amenities like pool tables and Xbox on the big screen TV.
    • Working with a collaborative, social team, and growing your skills faster than you will anywhere else.
    • Full benefits and 401k match.
    • Kick-off events at cool locations throughout the country.
Arm
  • Manchester, UK

We are building a new team at Arm in the area of Actionable Insight and we are looking for an AI Technology Engineer to join the team in Manchester.


You will help research, define and build the technology assets needed to create new software products and make the business successful and you will focus on the implementation of advanced Artificial Intelligence, Machine Learning technologies and Analytical Data Science.


This is an opportunity to be part of a new greenfield capability within a mature and successful company in a truly cutting edge and exciting area of technology in a new and growing market. You will join a wider team which includes Cloud & AI Engineering capabilities and collaborate to deliver a full service ecosystem.


You will be a creative, positive and adaptable individual with an AI & Technology background and an awareness of the emerging market in actionable Artificial Intelligence technologies.


What will you be accountable for?



  • You will develop new AI & Technology capabilities at the cutting edge of applied Artificial Intelligence technology, both on-device and on-cloud.

  • You will help define the technology strategy and roadmap and be a champion within Arm for delivering on that strategy

  • You will understand the AI & Technology needs of real customers and develop those technologies within your team.

  • You will collaborate with the Cloud and AI Engineering teams to develop software and share your knowledge to take solutions from development into full productive deployment.

  • You will work with market leading customers and suppliers in the smartphone, consumer device and automotive industries.

  • You will be excited about developing new skills in associated technologies to develop and deploy the solutions in different environments.


What skills, experience and qualifications will you need?



  • You can demonstrate an understanding of significant and applicable technologies within IoT services; machine learning; computer vision; data science; client computing, smartphones and sensor devices; wireless technologies, security and cloud.

  • You will be able to prove and improve your knowledge of Deep Neural Networks (both training & application) and other machine learning approaches to solve real-world problems with Vision and Sound, up to and including motion / video & sensor fusion.

  • You have experience developing AI and ML models including DNN, CNN, RNN, LSTM and GRU

  • You will possess strong analytical and creative problem-solving abilities and be enthusiastic about solving challenges which are as-yet unsolved.

  • You will be excited about working globally with customers and internal & external partners.

  • You will be enthusiastic about learning about new technological developments in the AI field and bringing them to bear, and being instrumental in a team that will expand on those developments.


You will have experience integrating AI based solutions with the real world through different means, with a depth of experience in one or more of



  • Natural Language Processing (Speech)

  • Sound

  • Radar and other RF based sensors

  • Other, interesting and novel, non-vision based sensors


Are you excited by those responsibilities? Are you a fit for those requirements? If so, we'd love to hear from you.

Farfetch UK
  • London, UK

About the team:



We are a multidisciplinary team of Data Scientists and Software Engineers with a culture of empowerment, teamwork and fun. Our team is responsible for large-scale and complex machine learning projects, directly providing business-critical functionality to other teams and using the latest technologies in the field.



Working collaboratively as a team and with our business colleagues, both here in London and across our other locations, you’ll be shaping the technical direction of a critically important part of Farfetch. We are a team that surrounds ourselves with talented colleagues and we are looking for brilliant Software Engineers who are open to taking on plenty of new challenges.



What you’ll do:



Our team works with vast quantities of messy data, such as unstructured text and images collected from the internet, applying machine learning techniques, such as deep learning, natural language processing and computer vision, to transform it into a format that can be readily used within the business. As an Engineer within our team you will help to shape and deliver the engineering components of the services that our team provides to the business. This includes the following:




  • Work with Project Lead to help design and implement new or existing parts of the system architecture.

  • Work on surfacing the team’s output through the construction of ETLs, APIs and web interfaces (a minimal API sketch follows this list).

  • Work closely with the Data Scientists within the team to enable them to produce clean production quality code for their machine learning solutions.
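
As a rough illustration of surfacing model output through an API, a minimal Flask sketch (the route, SKU keys, and in-memory store are assumptions for the example and stand in for the team's real storage layer):

    # Serve stored model predictions for a product id over HTTP.
    from flask import Flask, abort, jsonify

    app = Flask(__name__)

    # Stand-in for the real storage layer (e.g. a table written by a PySpark job).
    PREDICTIONS = {
        "sku-123": {"category": "sneakers", "confidence": 0.94},
        "sku-456": {"category": "handbags", "confidence": 0.88},
    }

    @app.route("/predictions/<sku>")
    def get_prediction(sku: str):
        if sku not in PREDICTIONS:
            abort(404)
        return jsonify(sku=sku, **PREDICTIONS[sku])

    if __name__ == "__main__":
        app.run(port=8000)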



Who you are:



First and foremost, you’re passionate about solving complex, challenging and interesting business problems. You have solid professional experience with Python and its ecosystem, with a  thorough approach to testing.



To be successful in this role you have strong experience with:



  • Python 3

  • Web frameworks, such as Flask or Django.

  • Celery, Airflow, PySpark or other processing frameworks.

  • Docker

  • ElasticSearch, Solr or a similar technology.



Bonus points if you have experience with:



  • Web scraping frameworks, such as Scrapy.

  • Terraform, Packer

  • Google Cloud Platform, such as Google BigQuery or Google Cloud Storage.



About the department:



We are the beating heart of Farfetch, supporting the running of the business and exploring new and exciting technologies across web, mobile and instore to help us transform the industry. Split across three main offices - London, Porto and Lisbon - we are the fastest growing teams in the business. We're committed to turning the company into the leading multi-channel platform and are constantly looking for brilliant people who can help us shape tomorrow's customer experience.





We are committed to equality of opportunity for all employees. Applications from individuals are encouraged regardless of age, disability, sex, gender reassignment, sexual orientation, pregnancy and maternity, race, religion or belief and marriage and civil partnerships.