OnlyDataJobs.com

JPMorgan Chase & Co.
  • Houston, TX
Our Corporate Technology team relies on smart, driven people like you to develop applications and provide tech support for all our corporate functions across our network. Your efforts will touch lives all over the financial spectrum and across all our divisions: Global Finance, Corporate Treasury, Risk Management, Human Resources, Compliance, Legal, and within the Corporate Administrative Office. You'll be part of a team specifically built to meet and exceed our evolving technology needs, as well as our technology controls agenda.
As a Machine Learning Engineer, you will provide high quality technology solutions that address business needs by developing applications within mature technology environments. You will utilize mature (3rd or 4th Generation) programming methodologies and languages and adhere to coding standards, procedures and techniques while contributing to the technical code documentation.
You will participate in project planning sessions with project managers, business analysts and team members to analyze business requirements and outline the proposed IT solution. You will participate in design reviews and provide input to the design recommendations; incorporate security requirements into the design; provide input to information/data flow; and understand and comply with the Project Life Cycle Methodology in all planning steps. You will also adhere to IT Control Policies throughout design, development and testing, and incorporate Corporate Architectural Standards into application design specifications.
Additionally, you will document the detailed application specifications, translate technical requirements into programmed application modules and develop/enhance software application modules. You will participate in code reviews and ensure that all solutions are aligned to pre-defined architectural specifications; identify and troubleshoot application code-related issues; and review and provide feedback on the final user documentation.
Key Responsibilities
Provide leadership and direction for key machine learning initiatives in the Operational Risk domain
Act as a machine learning evangelist in the Operational Risk domain
Perform research and proofs of concept to determine ML/AI applicability to potential use cases
Mentor junior data scientists and team members new to machine learning
Display an efficient work style, with attention to detail, organization, and a strong sense of urgency
Design software and produce scalable, resilient technical designs
Create automated unit tests using flexible/open-source frameworks
Digest and understand business requirements, and design new modules/functionality that meet the needs of our business partners
Implement model reviews and a machine learning governance framework
Manage code quality for the total build effort
Participate in design reviews and provide input to the design recommendations
Essentials
  • Advanced degree in Math, Computer Science or another quantitative field
  • Three to five years' working experience in machine learning, preferably in natural language processing
  • Ability to work in a team as a self-directed contributor, with a proven track record of being detail-oriented, innovative, creative, and strategic
  • Strong problem solving and data analytical skills
  • Industry experience building end-to-end machine learning systems leveraging Python (scikit-learn, pandas, NumPy, TensorFlow, Keras, NLTK, Gensim, etc.) or other similar languages
  • Experience with a variety of machine learning algorithms (classification, clustering, deep learning, etc.) and natural language processing applications (topic modeling, sentiment analysis, etc.)
  • Experience with NLP techniques to transform unstructured text data into structured data (lemmatization, stemming, bag-of-words, word embeddings, etc.); a brief sketch follows this list
  • Experience visualizing/presenting data for stakeholders using Tableau or open-source Python packages such as matplotlib and seaborn
  • Familiarity with Hive/Impala for manipulating data and drawing insights from Hadoop
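For illustration only, a minimal sketch of the preprocessing steps named above (lemmatization followed by a bag-of-words representation), using NLTK and scikit-learn; the two sample sentences are invented:

    import nltk
    from nltk.stem import WordNetLemmatizer
    from sklearn.feature_extraction.text import CountVectorizer

    nltk.download("wordnet", quiet=True)  # lemma lookup data for the lemmatizer

    docs = [
        "The models were trained on texts.",
        "Training a model on a text corpus.",
    ]

    lemmatizer = WordNetLemmatizer()

    def normalize(doc):
        # Lowercase, strip trailing punctuation, and lemmatize each token.
        tokens = (tok.strip(".,") for tok in doc.lower().split())
        return " ".join(lemmatizer.lemmatize(tok) for tok in tokens)

    # Bag-of-words: each document becomes a vector of token counts.
    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(normalize(d) for d in docs)
    print(vectorizer.get_feature_names_out())
    print(X.toarray())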
Personal Specification
Demonstrate Continual Improvement in terms of Individual Performance
Strong communication skills
Bright and enthusiastic, self-directed
Excellent analytical and problem solving skills
When you work at JPMorgan Chase & Co., you're not just working at a global financial institution. You're an integral part of one of the world's biggest tech companies. In 14 technology hubs worldwide, our team of 40,000+ technologists design, build and deploy everything from enterprise technology initiatives to big data and mobile solutions, as well as innovations in electronic payments, cybersecurity, machine learning, and cloud development. Our $9.5B+ annual investment in technology enables us to hire people to create innovative solutions that will not only transform the financial services industry, but also change the world.
Epidemic Sound AB
  • Stockholm, Sweden

We are now looking for an experienced Machine Learning Specialist (you’re likely currently a Data Scientist, or perhaps an advanced Insight Analyst who’s had the opportunity to use Machine Learning in a commercial environment). 



Job description


The Machine Learning Specialist will report directly to the CTO, in a fresh new team which functions as a decentralised squad, delivering advanced analysis and machine learning to various departments throughout the company. You'll work alongside Data Engineers and Developers to deploy microservices solving many different business needs. The use cases range from Customer Lifetime Value and churn prediction to building fantastic recommender engines that further personalize Epidemic Sound's offering.


You will be working closely with the backend data team in developing robust, scalable algorithms. You will improve the personalization of the product by:



  • Analysing behaviours of visitors, identifying patterns and outliers which can indicate their likelihood to churn (a brief sketch of such a model follows this list).

  • Developing classification systems through feature extraction on music to identify type & ‘feel’ of any given content.

  • Creating recommender engines so that the music our users see first is relevant to them, based on their behaviours.

  • Contributing to the automation of previously manual tasks, by leveraging the classification systems you’ve contributed to building.

  • Consulting on appropriate implementation of algorithms in practice – and actively identifying new use cases that can help improve Epidemic Sound!
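As referenced in the first bullet, a minimal sketch of what a churn classifier might look like in scikit-learn; the features, labels and data here are invented stand-ins, not Epidemic Sound's actual signals:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    # Hypothetical behavioural features: sessions/week, downloads, tenure.
    X = rng.random((1000, 3))
    y = (X[:, 0] < 0.3).astype(int)  # toy rule standing in for real churn labels

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))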



What are we looking for?


We’re looking for a team member with a “no task is too small” mindset – we are at the beginning of our Machine Learning journey – so we need someone who thinks building something from scratch sounds exciting. 


It would be music to our ears if you have:



  • Deep understanding of machine learning (neural networks, deep learning, classification, regression)

  • Experience with machine learning in production

  • Experience with TensorFlow, Keras, PyTorch, scikit-learn, SciPy, NumPy, pandas or similar

  • Experience with ML projects in customer value or music information retrieval (MIR)

  • Fluency in Python programming and a passion for production-ready code

  • Experience from Google Cloud and/or AWS

  • MSc in a Quantitative or Computer Science based subject (Machine Learning, Statistics, Applied Mathematics)


Curious about our music? Find it on Spotify here → https://open.spotify.com/user/epidemicsound


Application


Do you want to be a part of our fantastic team? Please apply by clicking the link below.

Railinc
  • Headquarters: Cary, NC
  • URL: https://www.railinc.com/rportal/web/guest/home
Primary Accountability/Responsibility: Responsible for modeling complex business problems, discovering business insights and identifying opportunities through the use of statistical, algorithmic, data mining, machine learning, and visualization techniques. In addition to advanced data analytics skills, this role is also proficient at integrating and preparing large, varied datasets, and communicating results and recommendations based on the model/analysis to both technical and non-technical audiences. This role is also able to recommend the most effective data science approaches for various challenges, implement these approaches independently without guidance, and guide other data scientists in their efforts. Additionally, the lead data scientist will be able to manage data science teams, help drive the organization's technology vision, stay up to date with the state of technology, and help ensure the organization makes forward-looking strategic decisions in its data science approach.
Essential Functions
  • 35%: Conduct analytical research, develop prototypes and contribute to production-ready solutions. Business domains include but are not limited to:
    • Fleet utilization for railroads and equipment owners
    • Miles and time between car repair incidents
    • Railroad yard traffic predictions
    • Predictive modeling of maintenance, ETAs, etc.
    • Internal measurement development
    • Data quality evaluations
    • Data enrichment
    • Prototype reports to be included in production systems
  • 35%: Deliver analytical projects by:
    • Working with project managers to define schedules, deliver results to all customers and internal stakeholders, and manage expectations
    • Collaborating with IT resources to develop and optimize production solutions
    • Creating and documenting repeatable solutions for meeting ongoing customer needs
    • Communicating the results and methodology to internal and external stakeholders
  • 20%: Drive the organization's data science and technology vision through research on the state of the art in data science technologies to ensure forward-thinking strategic decisions and recommendations
  • 10%: Gather, interpret, and translate customer needs into business problems that can be solved via advanced analytics methodologies by:
    • Working with business stakeholders and/or facilitating data analysis opportunity discussions
    • Leading solution analysis, definition, and requirements gathering for data services
    • Prioritizing data analysis using rail industry priorities and business cases
  • Collaborate with other data scientists to keep the data analysis methodology repeatable and structured
  • Exercise judgment within generally defined practices and policies in selecting methods and techniques for obtaining solutions
  • Take the lead on analytical projects, and potentially lead the projects of other data scientists and analytical consultants
  • Work with your manager to finalize priorities and deadlines, and adjust and communicate as necessary
  • Key measures: customer feedback and delivery against commitments
Non-Essential Functions
  • Support and improve internal decision making
  • Develop measurement dashboards
  • Conduct and report industry trend and market analysis to meet industry needs
  • Perform other duties as assigned
Success Factors: Knowledge/Skills/Abilities Minimum Requirements
  • Superior analytical skills with working knowledge of basic statistical, predictive and optimization models
  • Experience in leading and managing data science teams
  • Strong programming proficiency and working experience (10+ years) in a subset of Python, R, Scala, Java (Python preferred)
  • Strong proficiency and experience (10+ years) with data science and machine learning software stacks, e.g. NumPy, pandas, scikit-learn for Python
  • Programming proficiency in Spark and MLlib (3+ years); a brief sketch follows this listing
  • Working experience with and understanding of large-scale data analysis systems, e.g. Hadoop or MPP databases (5+ years)
  • Proficiency and experience with SQL (10+ years)
  • Significant experience (10+ years) implementing machine learning and data science models, including through production
  • Strong theoretical and practical knowledge of machine learning models and algorithms (unsupervised and supervised), their use in applications, and their advantages and disadvantages
  • Strong knowledge of code-based data visualization tools (e.g. matplotlib) for data exploration and for presenting models and results to internal and external stakeholders
  • Experience with cloud-based systems and toolsets (3+ years)
  • Experience with and understanding of experiment design and evaluation
  • Knowledge of big data engineering toolsets a plus
  • Superior data preparation skills; able to access, transform and manipulate Railinc and external customer data in its base form
  • Superior problem-solving skills
  • Entrepreneurial mindset and business understanding
  • Up to date with the state of the art in data science technology and the related infrastructure and services space
  • Strong team management and leadership skills
  • Strong verbal and written communication skills
  • Ability to work effectively with clients, IT management, business management, project managers, and IT staff
  • Ability to manage multiple activities in a deadline-oriented environment; highly organized and flexible
  • Ability to work independently and jointly in unstructured environments in a self-directed way
  • Ability to take a leadership role on engagements and with customers
  • Strong teamwork skills and ability to work effectively with multiple internal customers
Additional Requirements (Education/Experience/Certifications)
  • Advanced degree (PhD or Master's) in an analytical or technical field (e.g. applied mathematics, statistics, physics, computer science, operations research, business, economics)
  • A minimum of 10 years of related work experience
  • Strong statistical analysis and methodology experience
  • Experience with analytics tools and big data platforms
  • Experience with business intelligence and analytics
Physical Requirements
  • Sedentary work: assignment involves sitting at a workstation (desk) most of the time (up to 8 hours) with only occasional walking and/or standing
  • Keyboarding: primarily using fingers for typing
  • Talking: expressing or communicating verbally through use of spoken words (accurately conveying detailed or important spoken instructions to others)
  • Hearing: ability to receive detailed information through oral communication and to make discriminations in sound
  • Visual: through close visual acuity, required to perform activities such as preparing and analyzing data and figures, transcribing, viewing a computer terminal, and extensive reading (with or without correction)
  • Environment: work is performed within an office setting, with no substantial exposure to adverse environmental conditions (i.e. extreme heat, cold, noise, etc.). Customer visits may be done at railroad facilities, which would require appropriate safety equipment.
  • Travel: Some travel may be required (up to 25%).
To apply: https://recruiting.ultipro.com/RAI1006RLNC/JobBoard/d0e5b848-54f5-44c4-8878-c03cf9954c79/OpportunityDetail?opportunityId=0bf87568-7bb1-4013-97e8-cc3e72cf8194
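For readers unfamiliar with the Spark/MLlib stack the listing above requires, here is a minimal sketch of an MLlib training pipeline; the file name, column names and label are hypothetical, not Railinc's actual data:

    from pyspark.sql import SparkSession
    from pyspark.ml import Pipeline
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.classification import LogisticRegression

    spark = SparkSession.builder.appName("repair-risk-sketch").getOrCreate()

    # Hypothetical CSV of car-repair records with a binary label column.
    df = spark.read.csv("repair_events.csv", header=True, inferSchema=True)

    assembler = VectorAssembler(
        inputCols=["miles_since_repair", "days_since_repair", "car_age"],
        outputCol="features",
    )
    lr = LogisticRegression(featuresCol="features", labelCol="needs_repair")

    train, test = df.randomSplit([0.8, 0.2], seed=42)
    model = Pipeline(stages=[assembler, lr]).fit(train)
    model.transform(test).select("needs_repair", "prediction").show(5)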
Facebook
  • Menlo Park, CA

Facebook's mission is to give people the power to build community and bring the world closer together. Through our family of apps and services, we're building a different kind of company that connects billions of people around the world, gives them ways to share what matters most to them, and helps bring people closer together. Whether we're creating new products or helping a small business expand its reach, people at Facebook are builders at heart. Our global teams are constantly iterating, solving problems, and working together to empower people around the world to build community and connect in meaningful ways. Together, we can help people build stronger communities - we're just getting started.


We are seeking a Data Scientist to join our AI Data Science and Analytics team, focusing on Video Understanding. Facebook is the second largest online video platform in the world. Video at its heart is a social experience, as people talk about and talk through video. Users upload their own videos. They go Live and broadcast to their friends. Pages upload tens of millions of hours a day. The mission of the Video Understanding team at Facebook is to create highly personalized video experiences for Facebook users. This means understanding what a video is about, as well as what a given user is interested in. This role is focused on better understanding the ecosystem of videos on Facebook: the types of videos we have (classification by content type and publisher type); the types of user interactions we have (substantive discourse versus platitudes versus shares); and the impact of video watching on our users, so that we can build brand new, personalized video experiences.
We are looking for strong Data Scientists to help the team build compelling, personalized experiences around Video. This role works closely with both product and engineering teams to help define and execute on opportunities to improve and expand our video understanding and recommendation systems. Successful candidates for this role will have a background in a quantitative or technical field, will have experience with personalization, video ecosystems and working with large data sets, and will have experience in influencing decision making across different teams through data.

Responsibilities

  • Apply your analytical skills to gain deep insights into the data and ML model performance, and be able to clearly present your results and help guide cross-functional partners
  • Partner closely with Engineering, Product and User Research teams to solve problems and identify trends and opportunities. This is a very cross-functional role
  • Influence the team's roadmap and decisions through presentation of data-based recommendations, and clearly communicate the state of the video understanding effort, experiment results, etc. to product teams
  • Partner with cross-functional teams to identify new opportunities requiring the use of modern analytical and modeling techniques
  • Effectively communicate insights and recommendations to upper management in support of strategic decision-making
  • Plan, and conduct as needed, end-to-end analyses, from data-requirement gathering to data processing and modeling (a brief sketch follows this list)
  • Own ongoing deliverables and communications
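As referenced above, a minimal sketch of the kind of experiment readout this role involves, using pandas; the file and column names are hypothetical:

    import pandas as pd

    # Hypothetical per-user results of a video experiment: one row per user,
    # with the experiment arm and minutes of video watched.
    df = pd.read_csv("experiment_results.csv")  # columns: user_id, arm, watch_minutes

    summary = df.groupby("arm")["watch_minutes"].agg(["mean", "count", "std"])
    lift = summary.loc["treatment", "mean"] / summary.loc["control", "mean"] - 1
    print(summary)
    print(f"Relative lift in watch time: {lift:.1%}")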

Minimum Qualifications

  • MS degree in a quantitative discipline (e.g., statistics, operations research, econometrics, computer science, applied mathematics, physics, electrical engineering, industrial engineering) or equivalent experience
  • 10+ years' experience doing quantitative analysis or statistical modeling
  • Knowledge of at least one modeling framework (e.g., scikit-learn, TensorFlow, SAS, R, MATLAB)
  • Experience influencing product strategy through data-centric presentations (to product, business, and other stakeholders)
  • Knowledge in at least one of the following areas: predictive modeling, machine learning, experimentation methods
  • Experience extracting and manipulating large datasets
  • Development experience in any scripting language (PHP, Python, Perl, etc.)
  • Experienced with packages such as NumPy, SciPy, pandas, scikit-learn, dplyr, ggplot2

Preferred Qualifications

  • 5+ years' experience leading technical teams
  • Experience with distributed computing (Hive/Hadoop)

Facebook is proud to be an Equal Opportunity and Affirmative Action employer. We do not discriminate based upon race, religion, color, national origin, sex (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender, gender identity, gender expression, transgender status, sexual stereotypes, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics. We also consider qualified applicants with criminal histories, consistent with applicable federal, state and local law.

Facebook is committed to providing reasonable accommodations for candidates with disabilities in our recruiting process. If you need any assistance or accommodations due to a disability, please let us know at accommodations-ext@fb.com.

Andiamo Partners
  • Philadelphia, PA
Overview
Job Description:
We are a 30-year-old privately held trading firm located just outside of Philadelphia, PA. Building virtually all of our own trading technology from scratch, we are leaders and innovators in high performance trading.
We are seeking a Scientific Programmer to develop and productionize tools for scaling up and out the processing of petabytes of data on a cluster with thousands of cores. In this role, you will be working on a team responsible for managing the data pipeline and the tools for generating rich data sets used by Equity, Futures, and Options strategies. This person will be building tools that analyze all data and signal feeds to aid in backtesting of strategies and new signal discovery. This is a highly visible team that works with traders and quantitative research strategists to provide key information used in trading.
In This Role, You Will
    • Build and maintain tools for the data processing pipeline, including numerical algorithms and dependency-based scheduling and monitoring (a brief scheduling sketch follows this list)
    • Write performant code to optimize components across the data generation and data access pipeline
    • Work with historical tick-by-tick data for US Equities, Futures, and Options markets to build in-house high performance analytics and simulation platforms
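As referenced in the first bullet, a minimal sketch of dependency-based scheduling using Python's standard-library graphlib; the task names and dependency graph are hypothetical:

    from graphlib import TopologicalSorter  # standard library, Python 3.9+

    # Hypothetical pipeline stages mapped to the stages they depend on.
    tasks = {
        "load_ticks": set(),
        "clean_ticks": {"load_ticks"},
        "build_signals": {"clean_ticks"},
        "run_backtest": {"build_signals"},
    }

    def run(name: str) -> None:
        print(f"running {name}")  # stand-in for real pipeline work

    # static_order() yields tasks in an order that respects every dependency.
    for name in TopologicalSorter(tasks).static_order():
        run(name)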

What We're Looking For
    • MS or PhD in the Applied Sciences or a related discipline, or its foreign equivalent, plus experience in numerical analysis and programming
    • Fluency in C++ and Python is preferred
    • Experience with the NumPy and SciPy scientific computing packages
    • Enthusiasm for working with data, especially large data sets, and a strong aesthetic for cleanliness and correctness in data
    • Visa sponsorship is available for this position

Join Our Client's Team
We are a global quantitative trading firm founded with an entrepreneurial mindset and a rigorous analytical approach to decision making.
We commit our own capital to trade financial products around the world. Building virtually all of our own trading technology from scratch, we are leaders and innovators in high performance, low latency trading. Our traders, quants, developers, and systems engineers work side-by-side to develop and implement our trading strategies. Each individual brings their unique expertise every day to help us make optimal decisions in the global financial markets.
Andiamo Is An Equal Opportunity Employer
Andiamo provides equal employment opportunities to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability or genetics. In addition to federal law requirements, Andiamo complies with applicable state and local laws governing nondiscrimination in employment in every location in which the company has facilities. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation and training.
Recruiter Name: Vishal Mehta
Lawrence Harvey
  • Austin, TX

Applied Machine Learning Engineer - Austin TX


Lawrence Harvey is working with a phenomenal client here in Austin that is transforming how enterprises execute on Artificial Intelligence and Machine Learning projects. It is a product-based company quickly making a name for itself, largely specializing in visualization and NLP. They are looking for a team lead to help meet the demand of their growing clientele.


As an Applied Machine Learning Engineer, you are responsible for implementing and maintaining AI capabilities in Alegion's AI Enablement Platform and related software products.

It is important to have extensive experience in evaluating, integrating and deploying machine learning algorithms in a SaaS software platform.



Requirements:   


  • 2+ years of experience integrating machine learning algorithms into cloud platforms, including resource provisioning, installation, scaling, and validation, as well as building, training, and monitoring the machine learning production models
  • 5+ years of experience in developing software in Java, C++/C, and/or Python
  • Experience with all or some of the following machine learning, deep learning, computer vision, image processing, and data/image analysis tools: TensorFlow, Keras, Caffe2, Torch/PyTorch, OpenCV/FastCV, NumPy, SciPy, and scikit-learn
  • Experience with data transformations, API wrappers and output formats used with machine learning algorithms
  • Hands-on, in-depth experience with AWS or other cloud infrastructure technologies



This company is truly changing the way data is processed and transformed within the visualization and NLP sphere. Be a part of something that is on the forefront of how we transform data.







Calabrio, Inc.
  • Minneapolis, MN
Rapid Prototype Developer
Minneapolis
It's fun to work in a company where people truly BELIEVE in what they are doing! We're committed to bringing passion and customer focus to the business.
We are looking for a passionate, driven software engineer to join our Rapid Prototype Team, with a focus on quickly developing new ideas with a fail-fast-and-iterate mentality.
As the primary developer on the team, you will work closely with our Machine Learning (ML) Engineers to develop the team's early-stage ideas into prototypes. These prototypes are aimed at getting cutting-edge features into the hands of potential users as soon as possible, allowing these features to mature and eventually integrate into the Calabrio suite. You will be in charge of wrapping early-stage ML features in APIs (a minimal sketch follows below), pulling data from various data sources, exposing features publicly and privately using web interfaces, building user interfaces, and capturing and storing user interactions. In this role, you will be unhindered by normal development processes and will have creative freedom to take your ideas and realize them as quickly as you develop them.
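A minimal sketch of wrapping an early-stage ML feature in a REST API with Flask (one of the modules listed below); the model file and route are hypothetical:

    from flask import Flask, jsonify, request
    import joblib

    app = Flask(__name__)
    model = joblib.load("prototype_model.joblib")  # assumed pre-trained model

    @app.route("/predict", methods=["POST"])
    def predict():
        # Expect a JSON body like {"features": [1.0, 2.0, 3.0]}.
        features = request.get_json()["features"]
        prediction = model.predict([features])[0]
        return jsonify({"prediction": str(prediction)})

    if __name__ == "__main__":
        app.run(port=5000)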
CORE EXPECTATIONS
    • Explore, prototype, and build new and innovative software applications
    • Collaborate with Machine Learning Engineers to transform concepts into internal and customer-facing prototypes
    • Write back-end and front-end code in the framework that best suits the task
    • Build tools to visualize complex data in an easily digestible manner, especially for non-technical users
    • Communicate clearly with customers, engineers, product owners, and executives alike
    • Think critically, analytically, and creatively

WHAT SKILLS ARE REQUIRED OF YOU TO APPLY?
    • Bachelor's degree (computer science, engineering, or a relevant field a plus)
    • 5+ years' experience writing production Python code
    • Demonstrated proficiency in REST APIs, data cleaning, data processing, SQL databases, and NoSQL solutions
    • Familiarity with Linux/Unix
    • Experience managing and deploying code to cloud computing environments such as AWS
    • Comfortable in a results-oriented environment, specifically when faced with uncertainty, rapid change, and projects going unfinished

WHAT SKILLS WOULD SET YOU APART?
    • Curiosity in machine learning and artificial intelligence
    • Familiarity with the following Python modules at the API level: NumPy, SciPy, gensim, spaCy, scikit-learn, TensorFlow, and Flask
    • Ability to build microservices in Docker

WHAT VALUES ARE IMPORTANT TO CALABRIO?
    • Collaboration amongst teams
    • Open communication across the company
    • Ambitious
    • Accountable
    • Customer Success!

Calabrio is revolutionizing the way enterprises engage their customers with Calabrio ONE®, a unified suite, including call recording, quality management, workforce management and voice-of-the-customer analytics, that records, captures and analyzes customer interactions to provide a single view of the customer, and improve the overall agent and customer experience. Calabrio ONE is easy to use, which empowers management to align activities and resources quickly with the demands of today's multichannel customer. The secure platform has a lower total cost of ownership and can be deployed and expanded on a public, private or hybrid cloud.
Calabrio has been a Star Tribune Top Workplace for 4 years in a row.
Find more at http://calabrio.com/ and follow @Calabrio on Twitter.
IBM
  • Austin, TX
The Role
We are seeking an architect to help ensure that data science applications and the most important learning frameworks leveraging the Python ecosystem offer industry-leading performance when run on the POWER microprocessor.

Your Impact
As part of the team that designed the #1 and #2 fastest supercomputers in the world, you will be part of a high-profile initiative to optimize performance across all layers of the Python AI software stack for Power. This work will include libraries, Python packages, and learning frameworks such as TensorFlow and PyTorch. This project provides opportunities to innovate in software optimization as well as to exploit bleeding-edge hardware architectures in upcoming processors.

Description
This role will involve identifying the libraries, frameworks and packages in the Python ecosystem that are most critical to machine learning/deep learning engineers and data scientists. Industry-standard benchmarks as well as open-source data sets and customer workloads will be used to stress this code in order to identify opportunities across the stack for performance improvement (a small benchmarking sketch follows below). Code will also be run on competitive platforms to understand relative areas of strength and weakness. Deliverables will include code implementations and improvements, tuning recommendations, whitepapers and sales collateral, and performance feedback to open source communities. Insights gained will be fed to hardware design teams to improve next-generation hardware.
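A small sketch of the kind of microbenchmark such performance work starts from, timing a NumPy operation; the workload is an arbitrary example, not an IBM benchmark:

    import timeit
    import numpy as np

    a = np.random.rand(2048, 2048).astype(np.float32)
    b = np.random.rand(2048, 2048).astype(np.float32)

    # Time a BLAS-backed matmul; take the best of several runs to reduce noise.
    best = min(timeit.repeat(lambda: a @ b, number=10, repeat=5)) / 10
    gflops = 2 * 2048**3 / best / 1e9  # ~2*n^3 floating-point ops per matmul
    print(f"matmul: {best * 1e3:.1f} ms/iter, ~{gflops:.0f} GFLOP/s")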

Key Qualifications
  • Prior experience working with the Python community and CPython implementation.
  • Deep understanding of underlying mathematical operations that drive ML and Neural Networks
  • Experience with prototyping, extending the architecture and optimizing to leverage multi-threading, SIMD, or other accelerators such as GPUs or FPGAs
  • Strong understanding of memory layout, micro-architecture implications and experience working with compilers and interpreters
  • Hands-on experience with projects utilizing machine learning and deep learning techniques with at least one of the following - TensorFlow, PyTorch, Caffe.
  • Strong skills in Python, C, C++.
  • Previous experience contributing to open source projects.
  • Ability to work in a team, network with people outside of the team and effectively communicate in written and verbal presentations is essential.
  • Experience implementing or optimizing key Python packages such as NumPy, pandas, SciPy would be an asset.


Required Professional and Technical Expertise
In this role, we require:

  • 5+ years in Python programming
  • Demonstrated experience implementing neural network or machine-learning algorithms
  • Experience with the internals of at least one of TensorFlow or PyTorch (performance optimizations, feature additions, bug fixes, etc.)
  • Knowledge of modern microprocessor design
  • Experience using acceleration technologies for data science applications (GPU, FPGA, SIMD, custom ASIC etc.)
  • Passion for continuous improvement in building knowledge base both technically and professionally
  • Minimum BS or MS degree in Computer Science, Computer Engineering or a related technical discipline, or equivalent experience.

Preferred Professional and Technical Expertise

  • Experience with multiple frameworks
  • Recognized status in a key open source community
  • In-depth knowledge of AI's application in industry

Drooms GmbH
  • Frankfurt am Main, Deutschland

We are looking for a Knowledge Engineer (f/m) who has experience with NLP and Machine Learning and wants to join our Semantic team in Frankfurt, Germany. Beyond the required skills and abilities, we are looking for a highly passionate, optimistic, and confident individual who works with equal dedication on whatever projects come along.


What you'll be doing



  • Apply Machine Learning and NLP techniques to develop high-end software solutions for the world’s leading real estate companies

  • Develop knowledge representations and vertical ontologies

  • Create and maintain linguistic resources (text corpora, dictionaries, rules, etc.)

  • Perform testing and validation of information

  • Utilize Knowledge Management’s best practices to develop solutions


What you need



  • Experience in Machine learning and NLP, especially with text processing

  • Confidence in one programming language (e.g. Python, Ruby, Java)

  • Good knowledge of programming libraries and frameworks for ML or NLP (e.g. NumPy, matplotlib, pandas, WEKA, DL4J, MLlib, TensorFlow, Keras)

  • Understanding of Semantic Web technologies stack

  • Comfortable working in an agile environment

  • Detail-oriented, structured and sociable team player

  • Proficient written and spoken English


We are looking forward to meeting you! Apply now!

TERAKI
  • Berlin, Germany
  • Salary: €50k - 65k

Teraki is a Berlin-based, tech-driven company enabling true mobility. We stand for innovation in the rapidly developing connected-car, self-driving and 3D-mapping world. Teraki provides data reduction and data processing solutions for Automotive (IoT) applications and enables the launch of new applications by reducing hardware footprint, latency and costs. We help our customers meet the challenges posed by the exploding amounts of sensor, video and 3D mapping data in connected vehicles.


In this role, you will contribute to the design and implementation of our backend system, which serves millions of cars and must be highly efficient and easy to maintain. You will work closely with other Backend Developers, DevOps Engineers, Software Engineers and Data Engineers.


Your Responsibilities 



  • Design and implement scalable and future-proof micro-services that use an event-driven, Kafka-based backbone and can serve millions of connected cars (a minimal consumer sketch follows this list).

  • Contribute to the building of fast, robust and scalable data-processing pipelines to run Machine Learning training and inferencing jobs on a distributed system. 

  • Develop and maintain packages and libraries used by Data Scientists and Research Scientists for the processing of telematics, video and point cloud data. 

  • Assess and evaluate technologies and tools to identify those that best fulfil our requirements and needs.

  • Work closely with our Software-, DevOps- and Data-Engineers to continuously implement and improve features, following an agile and test-driven approach. 

  • You and your team own the entire software development lifecycle, from planning to coding, testing and maintenance.

  • Peer review code to ensure best practices and standards are met while guiding the growth of less experienced developers. 
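As referenced in the first bullet, a minimal sketch of a Kafka consumer in an event-driven service, using the kafka-python client; the topic, server and field names are hypothetical:

    import json
    from kafka import KafkaConsumer  # pip install kafka-python

    consumer = KafkaConsumer(
        "vehicle-telemetry",                  # hypothetical topic name
        bootstrap_servers=["localhost:9092"],
        group_id="telemetry-processors",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )

    # Each message is one event on the backbone; hand it to downstream work
    # (e.g. a feature-extraction or ML inference job).
    for message in consumer:
        event = message.value
        print(f"car={event.get('car_id')} offset={message.offset}")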


Who we are looking for 



  • Background in Computer Science, Software Engineering or related. 

  • 3+ years of professional experience building modern platform services. 

  • You have proven, solid experience with Linux, Python, REST APIs, Kafka, AWS (basics), NumPy and pandas. 

  • You have significant experience in designing, scaling, debugging, and optimizing microservice based and event-driven systems. 

  • You are experienced with agile development methodologies and tools such as git, Jira and Confluence. You appreciate the importance of testing, software validation and clean code. 

  • You have some basic knowledge in Continuous Integration and Continuous Deployment. 

  • It’s a plus if you have some knowledge in PostgreSQL, Django, Dask, Docker or Kubernetes. 

  • “Can do” and above all “want to do” attitude. 

  • Motivated fast learner and problem solver who can work in a team as well as independently. 


What we offer 



  • A unique opportunity to actively contribute to future mobility challenges. 

  • To increase your know-how in state-of-the-art technologies: Data Analytics, Machine Learning and Embedded Development. 

  • Flat hierarchies and work in a small but highly motivated, multidisciplinary and multicultural team. We are an equal opportunity employer who values diversity. 

  • To work in a dynamic start-up environment in the heart of Berlin with the chance to play a big role in the success of Teraki. We do work that matters.


Do you like what you read? Then don’t wait, apply now! 

TERAKI
  • Berlin, Germany
  • Salary: €50k - 65k

Teraki is a Berlin based tech driven company enabling true mobility. We stand for innovation in the rapidly developing connected car, self-driving and 3D mapping world. Teraki provides data reduction and data processing solutions for Automotive (IoT) applications and enables the launch of new applications by reducing hardware footprint, latency and costs. We help our customers on the challenges that are posed by the exploding amounts of data in connected vehicles for all sensor, video and 3D mapping data. 


In this role, you will contribute to the design and implementation of our backend system serving millions of cars, being highly efficient and easy to maintain. You will closely work with other Backend developers, DevOps Engineers, Software Engineers and Data Engineers. 


Your Responsibilities 



  • Design and implement scalable and future-proof micro-services, that use an event-driven, Kafka-based backbone and can serve millions of connected cars. 

  • Contribute to the building of fast, robust and scalable data-processing pipelines to run Machine Learning training and inferencing jobs on a distributed system. 

  • Develop and maintain packages and libraries used by Data Scientists and Research Scientists for the processing of telematics, video and point cloud data. 

  • Assess and evaluate technologies and tools to identify those fulfilling best our requirements and needs. 

  • Work closely with our Software-, DevOps- and Data-Engineers to continuously implement and improve features, following an agile and test-driven approach. 

  • You and your team have the ownership over the entire software development lifecycle from planning to coding, testing and maintenance. 

  • Peer review code to ensure best practices and standards are met while guiding the growth of less experienced developers. 


Who we are looking for 



  • Background in Computer Science, Software Engineering or a related field.

  • 3+ years of professional experience building modern platform services.

  • You have proven, solid experience with Linux, Python, REST APIs, Kafka, AWS (basics), numpy and pandas (a minimal REST sketch follows this list).

  • You have significant experience designing, scaling, debugging and optimizing microservice-based, event-driven systems.

  • You are experienced with agile development methodologies and tools such as Git, Jira and Confluence, and you appreciate the importance of testing, software validation and clean code.

  • You have basic knowledge of Continuous Integration and Continuous Deployment (CI/CD).

  • It’s a plus if you have some knowledge of PostgreSQL, Django, Dask, Docker or Kubernetes.

  • A “can do” and, above all, a “want to do” attitude.

  • You are a motivated fast learner and problem solver who can work in a team as well as independently.
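
As a minimal illustration of the REST experience asked for above, here is a bare-bones Flask service; the endpoint and port are hypothetical and purely for illustration.

    from flask import Flask, jsonify

    app = Flask(__name__)

    # A single illustrative endpoint; a real service would expose
    # versioned resources backed by the event-driven pipeline.
    @app.route("/health")
    def health():
        return jsonify(status="ok")

    if __name__ == "__main__":
        app.run(port=8080)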


What we offer 



  • A unique opportunity to actively contribute to future mobility challenges. 

  • To increase your know-how in state-of-the-art technologies: Data Analytics, Machine Learning and Embedded Development. 

  • Flat hierarchies and work in a small but highly motivated, multidisciplinary and multicultural team. We are an equal opportunity employer that values diversity. 

  • To work in a dynamic start-up environment in the heart of Berlin with the chance to play a big role in the success of Teraki. We do work that matters.


Do you like what you read? Then don’t wait, apply now! 

Visa
  • Austin, TX
Company Description
Visa operates the world's largest retail electronic payments network and is one of the most recognized global financial services brands. Visa facilitates global commerce through the transfer of value and information among financial institutions, merchants, consumers, businesses and government entities. We offer a range of branded payment product platforms, which our financial institution clients use to develop and offer credit, charge, deferred debit, prepaid and cash access programs to cardholders. Visa's card platforms provide consumers, businesses, merchants and government entities with a secure, convenient and reliable way to pay and be paid in 170 countries and territories.
Job Description
At Visa University, our mission is to turn our learning data into insights and get a deep understanding of how people use our resources to impact the product, strategy and direction of Visa University. In order to help us achieve this we are looking for someone who can build and scale an efficient analytics data suite and also deliver impactful dashboards and visualizations to track strategic initiatives and enable self-service insight delivery. The Staff Software Engineer, Learning & Development Technology is an individual contributor role within Corporate IT in our Austin-based Technology Hub. In this role you will participate in design, development, and technology delivery projects with many leadership opportunities. Additionally, this position provides application administration and end-user support services. There will be significant collaboration with business partners, multiple Visa IT teams and third-party vendors. The portfolio includes SaaS and hosted packaged applications as well as multiple content providers such as Pathgather (Degreed), Cornerstone, Watershed, Pluralsight, Lynda, Safari, and many others.
The ideal candidate will bring energy and enthusiasm to evolve our learning platforms, be able to easily understand business goals/requirements and be forward thinking to identify opportunities that may be effectively resolved with technology solutions. We believe in leading by example, ownership with high standards and being curious and creative. We are looking for an expert in business intelligence, data visualization and analytics to join the Visa University family and help drive a data-first culture across learning.
Responsibilities
  • Engage with product managers, the design team and the student experience team in Visa University to ensure that the right information is available and accessible to study user behavior, build and track key metrics, understand product performance and fuel the analysis of experiments
  • Build lasting solutions and datasets that surface critical data and performance metrics and optimize products
  • Build and own the analytics layer of our data environment to make data standardized and easily accessible (a minimal metric-building sketch follows this list)
  • Design, build, maintain and iterate a suite of visual dashboards to track key metrics and enable self-service data discovery
  • Participate in technology project delivery activities such as business requirement collaboration, estimation, conceptual approach, design, development, test case preparation, unit/integration test execution, support process documentation, and status updates
  • Participate in vendor demos and technical deep-dive sessions for upcoming projects
  • Collaborate with, and mentor, data engineers to build efficient data pipelines and impactful visualizations
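
As a minimal sketch of the kind of metric-building this role involves, the following pandas snippet computes weekly active learners per content provider; the file name and column names are hypothetical, not Visa University's actual schema.

    import pandas as pd

    # Hypothetical learning-event extract -- column names are illustrative.
    events = pd.read_csv("learning_events.csv", parse_dates=["timestamp"])

    # One example of a standardized metric for a self-service dashboard:
    # weekly active learners per content provider.
    weekly_active = (
        events.groupby([pd.Grouper(key="timestamp", freq="W"), "provider"])["user_id"]
        .nunique()
        .rename("weekly_active_learners")
        .reset_index()
    )
    print(weekly_active.head())
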
Qualifications
  • Minimum 8 years of experience in a business intelligence, data analysis or data visualization role, and a degree in science, computer science, statistics, economics, mathematics, or similar
  • Significant experience in designing analytical data layers and in conducting ETL with very large and complex data sets
  • Expertise with Tableau Desktop software (techniques such as LOD calculations, calculated fields, table calculations, and dashboard actions)
  • Expert in data visualization
  • High level of ability with SQL and JSON
  • Experience with Python is a must, and experience with data science libraries is a plus (NumPy, Pandas, SciPy, Scikit-Learn, NLTK, deep learning with Keras)
  • Experience with Machine Learning algorithms (Linear Regression, Multiple Regression, Decision Trees, Random Forest, Logistic Regression, Naive Bayes, SVM, K-means, K-nearest neighbor, Hierarchical Clustering); a minimal example follows this list
  • Experience with HTML and JavaScript
  • Basic SFTP and encryption knowledge
  • Experience with Excel (VLOOKUPs, pivots, macros, etc.)
  • Ability to leverage HR systems such as Workday, Salesforce, etc., to execute the above responsibilities
  • Understanding of statistical analysis, quantitative aptitude and the ability to gather and interpret data and information
  • You have a strong business sense and are able to translate business problems into data-driven solutions with minimal oversight
  • You are a communicative person who values building strong relationships with colleagues and stakeholders, enjoys mentoring and teaching others, and can explain complex topics in simple terms
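
To make the algorithm list above concrete, here is a minimal scikit-learn sketch using one of the named algorithms (logistic regression) on synthetic stand-in data; nothing here reflects Visa's actual data or models.

    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Synthetic stand-in data -- purely illustrative.
    X, y = make_classification(n_samples=500, n_features=10, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # A standard pipeline: scale features, then fit the classifier.
    model = make_pipeline(StandardScaler(), LogisticRegression())
    model.fit(X_train, y_train)
    print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
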
Additional Information
All your information will be kept confidential according to EEO guidelines.
Job Number: REF15081Q
Brighter Brain
  • Atlanta, GA

Brighter Brain is seeking a skilled professional to serve as an internal resource for our consulting firm in the field of Data Science Development. Brighter Brain provides Fortune 500 clients throughout the United States with IT consultants in a wide-ranging technical sphere.

To fully support our incoming nationwide and international hires, we will be hiring a Senior Data Science SME (ML) with practical experience to relocate to Atlanta and coach/mentor our incoming classes of consultants. If you have a strong passion for the Data Science platform and are looking to join a seasoned team of IT professionals, this could be an advantageous next step.

Brighter Brain is an IT Management & Consulting firm providing a unique take on IT consulting. We currently offer expertise to US clients in the fields of Mobile Development (iOS and Android), Hadoop, Microsoft SharePoint, and Exchange/Office 365. We are currently seeking a highly skilled professional to serve as an internal resource for our company in the field of Data Science, with expertise in Machine Learning (ML).

The ideal candidate will be responsible for establishing our Data Science practice. Responsibilities include creating a comprehensive training program and training, mentoring, and supporting candidates as they progress towards building their careers in Data Science consulting. This position is based out of our head office in Atlanta, GA.


The Senior Data Science SME will take on the following responsibilities:

-       Design, develop and maintain Data Science training material, focused on ML; knowledge of DL, NN & NLP is a plus.

-       Interview potential candidates to ensure that they will be successful in the Data Science domain and training.

-       Train, guide and mentor junior to mid-level Data Science developers.

-       Prepare mock interviews to enhance the learning process provided by the company.

-       Prepare and support consultants for interviews for specific assignments involving development and implementation of Data Science.

-       Act as a primary resource for individuals working on a variety of projects throughout the US.

-       Interact with our Executive and Sales team to ensure that projects and employees are appropriately matched.

The ideal candidate will not only possess solid knowledge of the field, but must also be fluent in the following areas:

-       Hands-on expertise in applying Data Science and building machine learning and deep learning models

-       Statistics and data modeling experience

-       Strong understanding of data science

-       Understanding of Big Data

-       Understanding of AWS and/or Azure

-       Understanding of the differences between TensorFlow, MXNet, etc.

Skills Include:

  • Master’s degree in Computer Science or a mathematical field

  • 10+ years of professional experience in the IT industry, in the AI realm

  • Strong understanding of MongoDB, Scala, Node.js, AWS, & cognitive applications
  • Excellent knowledge of Python, Scala, JavaScript and its libraries, Node.js, R, MATLAB, C/C++, Lua, or any proficient AI language of choice
  • NoSQL databases, bot frameworks, data streaming, and integrating unstructured data; rules engines (e.g. Drools) and ESBs (e.g. MuleSoft)
  • Computer Vision, Recommendation Systems, Pattern Recognition, Large-Scale Data Mining or Artificial Intelligence, Neural Networks
  • Deep Learning frameworks like TensorFlow, Torch, Caffe, Theano, CNTK, plus scikit-learn, numpy, scipy
  • Working knowledge of ML techniques such as: Naïve Bayes Classification, Ordinary Least Squares Regression, Logistic Regression, Support Vector Machines, Ensemble Methods, Clustering Algorithms, Principal Component Analysis, Singular Value Decomposition, and Independent Component Analysis
  • Natural Language Processing (NLP) concepts such as topic modeling, intents, entities, and NLP frameworks such as SpaCy, NLTK, MeTA, gensim or other toolkits for Natural Language Understanding (NLU); a minimal topic-modeling sketch follows this list
  • Experience with data profiling, data cleansing, data wrangling/munging, ETL
  • Familiarity with Spark MLlib and Mahout, and the Google, Bing, and IBM Watson APIs
  • Hands-on experience training a variety of consultants
  • Analytical and problem-solving skills
  • Knowledge of the IoT space
  • Understanding of academic Data Science vs. corporate Data Science
  • Knowledge of the consulting/sales structure
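
As a minimal sketch of the topic-modeling concept listed above, here is an LDA example with gensim (one of the named NLP toolkits); the toy corpus is invented for illustration.

    from gensim import corpora, models

    # Tiny toy corpus of pre-tokenized documents -- purely illustrative.
    docs = [
        ["machine", "learning", "model", "training"],
        ["neural", "network", "deep", "learning"],
        ["natural", "language", "processing", "text"],
    ]
    dictionary = corpora.Dictionary(docs)
    corpus = [dictionary.doc2bow(doc) for doc in docs]

    # Fit a two-topic LDA model and print the discovered topics.
    lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, random_state=0)
    for topic_id, topic in lda.print_topics():
        print(topic_id, topic)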

Additional details about the position:

-       Able to relocate to Atlanta, GA (relocation package available)

-       Work schedule of 9 AM to 6 PM EST

Questions: Send your resume to Ansel Butler at Brighter Brain; make sure that there is a valid phone number and Skype ID either on the resume or in the body of the email.

Ansel Essic Butler

EMAIL: ANSEL.BUTLER@BRIGHTERBRAIN.COM

404 791 5128

SKYPE: ANSEL.BUTLER@OUTLOOK.COM

Senior Corporate Recruiter

Brighter Brain LLC.

1785 The Exchange, Suite 200

Atlanta, GA. 30339

Man AHL
  • London, UK

The Role


As a Quant Platform Developer at AHL you will be building the tools, frameworks, libraries and applications which power our Quantitative Research and Systematic Trading. This includes responsibility for the continued success of “Raptor”, our in-house Quant Platform; for next-generation Data Engineering; and for the evolution of our production Trading System as we continually expand the markets and types of assets we trade, and the styles in which we trade them. Your challenges will be varied and might involve building new high-performance data acquisition and processing pipelines, cluster-computing solutions, numerical algorithms, position management systems, visualisation and reporting tools, operational user interfaces, continuous build systems and other developer productivity tools.


The Team


Quant Platform Developers at AHL are all part of our broader technology team, members of a group of over sixty individuals representing eighteen nationalities. We have varied backgrounds including Computer Science, Mathematics, Physics, Engineering – even Classics – but what unifies us is a passion for technology and writing high-quality code.



Our developers are organised into small cross-functional teams, with our engineering roles broadly of two kinds: “Quant Platform Developers” otherwise known as our “Core Techs”, and “Quant Developers” which we often refer to as “Sector Techs”. We use the term “Sector Tech” because some of our teams are aligned with a particular asset class or market sector. People often rotate teams in order to learn more about our system, as well as find the position that best matches their interests.


Our Technology


Our systems almost all run on Linux, and most of our code is in Python, using the full scientific stack: numpy, scipy, pandas and scikit-learn, to name a few of the libraries we use extensively. We implement the systems that require the highest data throughput in Java. For storage, we rely heavily on MongoDB and Oracle.



We use Airflow for workflow management, Kafka for data pipelines, Bitbucket for source control, Jenkins for continuous integration, Grafana + Prometheus for metrics collection, ELK for log shipping and monitoring, Docker for containerisation, OpenStack for our private cloud, Ansible for architecture automation, and HipChat for internal communication. But our technology list is never static: we constantly evaluate new tools and libraries.
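
For a flavour of the workflow-management layer, here is a minimal Airflow DAG sketch; the DAG and task names are hypothetical, and the import path assumes a modern Airflow 2.x release rather than whatever version AHL actually runs.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def ingest():
        # Placeholder for a data-acquisition step.
        print("ingesting market data")

    def report():
        # Placeholder for a downstream reporting step.
        print("building reports")

    with DAG(
        dag_id="example_market_data",  # hypothetical DAG name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
        report_task = PythonOperator(task_id="report", python_callable=report)
        ingest_task >> report_task  # run reporting only after ingestion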


Working Here


AHL has a small-company, no-attitude feel. It is flat-structured, open, transparent and collaborative, and you will have plenty of opportunity to grow and have an enormous impact on what we do. We are actively engaged with the broader technology community.



  • We host and sponsor London’s PyData and Machine Learning Meetups

  • We open-source some of our technology. See https://github.com/manahl

  • We regularly talk at leading industry conferences, and tweet about relevant technology and how we’re using it. See @manahltech



We’re fortunate enough to have a fantastic open-plan office overlooking the River Thames, and continually strive to make our environment a great place in which to work.



  • We organise regular social events, everything from photography to climbing, karting and wine tasting, as well as monthly team lunches 

  • We have annual away days and off-sites for the whole team

  • We have a canteen with a daily allowance for breakfast and lunch, and an on-site bar for the evening 

  • As well as PCs and Macs, in our office you’ll also find numerous pieces of cool tech such as light cubes, 3D printers, guitars, ping-pong and table football, and a piano.



We offer competitive compensation, a generous holiday allowance, various health and other flexible benefits. We are also committed to continuous learning and development via coaching, mentoring, regular conference attendance and sponsoring academic and professional qualifications.


Technology and Business Skills


At AHL we strive to hire only the brightest, best and most highly skilled and passionate technologists.



Essential



  • Exceptional technology skills; recognised by your peers as an expert in your domain

  • A proponent of strong collaborative software engineering techniques and methods: agile development, continuous integration, code review, unit testing, refactoring and related approaches

  • Expert knowledge in one or more programming languages, preferably Python, Java and/or C/C++

  • Proficient on Linux platforms with knowledge of various scripting languages

  • Strong knowledge of one or more relevant database technologies e.g. Oracle, MongoDB

  • Proficient with a range of open source frameworks and development tools e.g. NumPy/SciPy/Pandas, Pyramid, AngularJS, React

  • Familiarity with a variety of programming styles (e.g. OO, functional) and in-depth knowledge of design patterns.



Advantageous



  • An excellent understanding of financial markets and instruments

  • Experience of front office software and/or trading systems development e.g. in a hedge fund or investment bank

  • Expertise in building distributed systems with service-based or event-driven architectures, and concurrent processing

  • A knowledge of modern practices for data engineering and stream processing

  • An understanding of financial market data collection and processing

  • Experience of web based development and visualisation technology for portraying large and complex data sets and relationships

  • Relevant mathematical knowledge e.g. statistics, asset pricing theory, optimisation algorithms.


Personal Attributes



  • Strong academic record and a degree with high mathematical and computing content e.g. Computer Science, Mathematics, Engineering or Physics from a leading university

  • Craftsman-like approach to building software; takes pride in engineering excellence and instils these values in others

  • Demonstrable passion for technology e.g. personal projects, open-source involvement

  • Intellectually robust with a keenly analytic approach to problem solving

  • Self-organised with the ability to effectively manage time across multiple projects and with competing business demands and priorities

  • Focused on delivering value to the business with relentless efforts to improve process

  • Strong interpersonal skills; able to establish and maintain a close working relationship with quantitative researchers, traders and senior business people alike

  • Confident communicator; able to argue a point concisely and deal positively with conflicting views.