OnlyDataJobs.com

Freeport-McMoRan
  • Phoenix, AZ

Provides management and leadership to the Big Data project teams. Directs initiatives in the Freeport-McMoRan Big Data program. Provides analytical direction, expertise and support for the Big Data program; this includes project leadership for initiatives, coordination with business subject matter experts and travel to mine sites. This is a global role that coordinates with site and corporate stakeholders to ensure global alignment on service and project delivery. The role also works with business operations management to ensure the program focuses on the areas most beneficial to the company.


  • Work closely with business, engineering and technology teams to develop solutions to data-intensive business problems
  • Supervise internal and external science teams
  • Perform quality control of deliverables
  • Prepare reports and presentations, and communicate with Executives
  • Provide thought leadership in algorithmic and process innovations, and creativity in solving unconventional problems
  • Use statistical and programming tools such as R and Python to analyze data and develop machine-learning models
  • Perform other duties as required


Minimum Qualifications


  • Bachelor's degree in an analytical field (statistics, mathematics, etc.) and eight (8) years of relevant work experience, OR
  • Master's degree in an analytical field (statistics, mathematics, etc.) and six (6) years of relevant work experience
  • Proven track record of collaborating with business partners to translate business problems and needs into data-based analytical solutions
  • Proficient in predictive modeling (see the illustrative sketch after this list):
    • Linear and logistic regression
    • Tree-based techniques (CART, Random Forest, Gradient Boosting)
    • Time-Series Analysis
    • Anomaly detection
    • Survival Analysis
  • Strong experience with SQL/Hive environments
  • Skilled with R and/or Python analysis environments
  • Experience with Big Data tools for machine learning, R, Hive, Python
  • Good communication skills
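
For illustration only, here is a minimal Python sketch of two of the predictive modeling techniques this posting lists (logistic regression and gradient boosting) on synthetic data; the use of scikit-learn is an assumption, since the posting only names R and Python.

    # Illustrative only: two of the techniques listed above, on synthetic data.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                        ("gradient boosting", GradientBoostingClassifier())]:
        model.fit(X_train, y_train)
        auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
        print(f"{name}: test AUC = {auc:.3f}")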


Preferred Qualifications


  • Doctorate degree in an analytical field
  • Willing and able to travel 20-30% or more


Criteria/Conditions


  • Ability to understand and apply verbal and written work and safety-related instructions and procedures given in English
  • Ability to communicate in English with respect to job assignments, job procedures, and applicable safety standards
  • Must be able to work in a potentially stressful environment
  • Position is in a busy, non-smoking office located in downtown Phoenix, AZ
  • Location requires mobility in an office environment; each floor is accessible by elevator
  • Occasionally work will be performed in a mine, outdoor or manufacturing plant setting
  • Must be able to frequently sit, stand and walk
  • Must be able to frequently lift and carry up to ten (10) pounds
  • Personal protective equipment is required when performing work in a mine, outdoor, manufacturing or plant environment, including hard hat, hearing protection, safety glasses, safety footwear, and as needed, respirator, rubber steel-toe boots, protective clothing, gloves and any other protective equipment as required
  • Freeport-McMoRan promotes a drug/alcohol-free work environment through the use of mandatory pre-employment drug testing and on-going random drug testing as allowed by applicable State laws


Freeport-McMoRan has reviewed the jobs at its various office and operating sites and determined that many of these jobs require employees to perform essential job functions that pose a direct threat to the safety or health of the employees performing these tasks or others. Accordingly, the Company has designated the following positions as safety-sensitive:


  • Site-based positions, or positions which require unescorted access to site-based operational areas, which are held by employees who are required to receive MSHA, OSHA, DOT, HAZWOPER and/or Hazard Recognition Training; or
  • Positions which are held by employees who operate equipment, machinery or motor vehicles in furtherance of performing the essential functions of their job duties, including operating motor vehicles while on Company business or travel (for this purpose motor vehicles includes Company owned or leased motor vehicles and personal motor vehicles used by employees in furtherance of Company business or while on Company travel); or
  • Positions which Freeport-McMoRan has designated as safety sensitive positions in the applicable job or position description and which upon further review continue to be designated as safety-sensitive based on an individualized assessment of the actual duties performed by a specifically identified employee.


Equal Opportunity Employer/Protected Veteran/Disability


Requisition ID
1900606 

Freeport-McMoRan
  • Phoenix, AZ

Supports the activities for all Freeport-McMoRan Big Data programs. Provides analytical support and expertise for the Big Data program; this includes coordination with business subject matter experts and travel to mine sites. The role will provide analyses and statistical models as part of Big Data projects, and may be the project lead on analytics initiatives. The role will also provide visualizations and descriptive results of the analysis. This will be a global role that will coordinate with site and corporate stakeholders to ensure alignment on project delivery.


    • Work closely with business, engineering and technology teams to analyze data-intensive business problems
    • Research and develop appropriate statistical methodology to translate these business problems into analytics solutions
    • Perform quality control of deliverables
    • Develop visualizations of results and prepare deliverable reports and presentations, and communicate with business partners
    • Provide thought leadership in algorithmic and process innovations, and creativity in solving unconventional problems
    • Develop, implement and maintain analytical solutions in the Big Data environment
    • Work with onshore and offshore resources to implement and maintain analytical solutions
    • Perform variable selection and other standard modeling tasks
    • Produce model performance metrics (see the sketch after this list)
    • Use statistical and programming tools such as R and Python to analyze data and develop machine-learning models
    • Perform other duties as requested
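
For illustration only, a minimal Python sketch of the variable selection and model performance metrics tasks mentioned above, again on synthetic data; scikit-learn and the specific selection method are assumptions, not part of the posting.

    # Illustrative only: simple variable selection plus standard performance metrics.
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1500, n_features=30, n_informative=5,
                               random_state=1)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

    selector = SelectKBest(f_classif, k=5).fit(X_tr, y_tr)   # keep the 5 strongest variables
    model = LogisticRegression(max_iter=1000).fit(selector.transform(X_tr), y_tr)

    y_pred = model.predict(selector.transform(X_te))
    print(classification_report(y_te, y_pred))               # precision/recall/F1 metrics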


Minimum Qualifications


  • Bachelor's degree in an analytical field (statistics, mathematics, etc.) and five (5) years of relevant work experience, OR
  • Master's degree in an analytical field (statistics, mathematics, etc.) and three (3) years of relevant work experience

  • Proven track record of collaborating with business partners to translate operational problems and needs into data-based analytical solutions

  • Proficient in predictive modeling:

    • Linear and logistic regression

    • Tree-based techniques (CART, Random Forest, Gradient Boosting)

    • Time-Series Analysis

    • Anomaly detection

    • Survival Analysis

  • Strong experience with SQL/Hive environments

  • Skilled with R and/or Python analysis environments

  • Experience with Big Data tools for machine learning, R, Hive, Python

  • Good communication skills


Preferred Qualifications


  • Master's degree in an analytical field
  • Willing and able to travel 20-30% or more


Criteria/Conditions


  • Ability to understand and apply verbal and written work and safety-related instructions and procedures given in English
  • Ability to communicate in English with respect to job assignments, job procedures, and applicable safety standards

  • Must be able to work in a potentially stressful environment

  • Position is in a busy, non-smoking office located in Phoenix, AZ

  • Location requires mobility in an office environment; each floor is accessible by elevator and internal staircase

  • Occasionally work may be performed in a mine, outdoor or manufacturing plant setting

  • Must be able to frequently sit, stand and walk

  • Must be able to frequently lift and carry up to ten (10) pounds

  • Personal protective equipment is required when performing work in a mine, outdoor, manufacturing or plant environment, including hard hat, hearing protection, safety glasses, safety footwear, and as needed, respirator, rubber steel-toe boots, protective clothing, gloves and any other protective equipment as required

  • Freeport-McMoRan promotes a drug/alcohol-free work environment through the use of mandatory pre-employment drug testing and on-going random drug testing as per applicable State laws


Freeport-McMoRan has reviewed the jobs at its various office and operating sites and determined that many of these jobs require employees to perform essential job functions that pose a direct threat to the safety or health of the employees performing these tasks or others. Accordingly, the Company has designated the following positions as safety-sensitive:


  • Site-based positions, or positions which require unescorted access to site-based operational areas, which are held by employees who are required to receive MSHA, OSHA, DOT, HAZWOPER and/or Hazard Recognition Training; or
  • Positions which are held by employees who operate equipment, machinery or motor vehicles in furtherance of performing the essential functions of their job duties, including operating motor vehicles while on Company business or travel (for this purpose motor vehicles includes Company owned or leased motor vehicles and personal motor vehicles used by employees in furtherance of Company business or while on Company travel); or
  • Positions which Freeport-McMoRan has designated as safety sensitive positions in the applicable job or position description and which upon further review continue to be designated as safety-sensitive based on an individualized assessment of the actual duties performed by a specifically identified employee.


Equal Opportunity Employer/Protected Veteran/Disability


Requisition ID
1900604 

Pandora
  • Atlanta, GA

Music and data are at the heart of Pandora. As a member of the User Engagement data science team, you will help us build models and design experiments that impact the listening experience of millions of people every day.  We are looking for enthusiastic data scientists with experience in machine learning, statistical modeling and analysis, combined with strong CS fundamentals and coding abilities. Scientists on our team partner with engineers, product managers, and other key stakeholders in the product and marketing teams. You'll interact regularly with senior leadership to directly guide and shape Pandora's efforts in one or more of the following areas:


  • Building ML models and improving the core recommendation system that helps serve music to millions of listeners

  • Developing new ways to model user behavior using cutting-edge techniques

  • Optimizing models to drive user growth and engagement

  • Exploring new opportunities to integrate SiriusXM and Pandora radio programming



Successful candidates will have outstanding communication skills, demonstrated ability to work effectively in a small team, and a natural sense of curiosity and drive to experiment. We welcome diverse perspectives and a collaborative spirit.


Relocation and visa programs available.


Requirements:

  • PhD in a quantitative field (for example: Computer Science, Machine Learning, Statistics, Biology, Neuroscience, Physics, or Mathematics)

  • Demonstrated background in machine learning and applied statistics

  • 2+ years of industry experience in a data science role

  • Strong Python programming skills

  • Experience with Hive or SQL databases

  • Experience implementing ML models at a large scale in a production environment

  • Experimental design and A/B testing
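
For illustration only, a minimal sketch of the kind of A/B-test analysis implied by the experimental design requirement above; the listener counts are invented and statsmodels is an assumed tool choice.

    # Illustrative only: two-proportion z-test on invented engagement counts.
    from statsmodels.stats.proportion import proportions_ztest

    conversions = [4210, 4580]     # users who engaged, control vs. treatment (invented)
    exposures = [50000, 50000]     # users shown each experience (invented)

    z_stat, p_value = proportions_ztest(conversions, exposures)
    print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
    # A small p-value suggests the engagement difference is unlikely to be chance.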


Plus Requirements:

  • Experience with the Hadoop technology stack
  • Experience with AWS (Amazon Web Services) or GCP (Google Cloud Platform) tools
  • Experience with deep learning APIs such as TensorFlow or PyTorch


Pandora is committed to diversity in its workforce. Pandora is an equal opportunity employer and considers qualified applicants without regard to gender, sexual orientation, gender identity, race, ethnicity, veteran or disability status. Women and people of color are encouraged to apply.

Pandora is also a VEVRAA federal contractor. Pandora requests priority referrals of protected veterans from each ESDS, as required by regulation.

If you believe you need a reasonable accommodation in order to search for a job opening or to apply for a position, please contact us by sending an email to disability@pandora.com. This email box is designed to assist job seekers who require a reasonable accommodation to the application process. A response to your request may take up to two business days.

In Your Email, Please Include The Following

The specific accommodation requested to complete the employment application process.

The location or office to which you would like to apply.

The subject of the email should read "Request for Reasonable Accommodation".

LPL Financial
  • San Diego, CA

As a Development Manager for Digital Experience Technology, you will be responsible for the strategy, analysis, design and implementation of modern, scalable, cloud-native digital and artificial intelligence solutions. This role will primarily lead the Intelligent Content Delivery program, with a focus on improving the content search engine and implementing a personalization engine and an AI-driven content ranking engine to deliver relevant, personalized, intelligent content across Web, Mobile and AI Chat channels. This role requires a successful and experienced digital, data and artificial intelligence practitioner, able to lead discussions with business and technology partners to identify business problems and deliver breakthrough solutions.

The successful candidate will have excellent verbal and written communication skills along with a demonstrated ability to mentor and manage a digital team. You must possess a unique blend of business and technical savvy; a big-picture vision, and the drive to make a vision a reality.


Key Responsibilities:
 

  • Define innovative Digital Content, AI and Automation offerings that solve business problems, with input from customers, business and product teams
  • Partner with business, product and technology teams to define problems, develop business cases, build prototypes and create new offerings
  • Evangelize and promote cloud-based micro-services architecture and the adoption of Digital Technology capabilities across the organization
  • Lead and own the technical design and development of knowledge graph, search engine, personalization engine and intelligent content engine platforms by applying digital technologies, machine learning and deep learning frameworks
  • Research, design and prototype robust and scalable models based on machine learning, data mining and statistical modeling to answer key business problems
  • Manage onsite and offshore development teams implementing products and platforms in Agile
  • Collaborate with business, product, enterprise architecture and cross-functional teams to ensure strategic and tactical goals of project efforts are met
  • Work collaboratively with QA and DevOps teams to adopt the CI/CD toolchain and develop automation
  • Work with technical teams to ensure overall support and stability of platforms and assist with troubleshooting when production incidents arise
  • Be a mentor and leader on the team and within the organization



Basic Qualifications:

  • Overall 10+ years of experience, with 6+ years of development experience implementing Digital and Artificial Intelligence (NPU-NLP-ML) platforms
  • Solid working experience with Python, R and knowledge graphs
  • Expertise in at least one AI-related framework (NLTK, spaCy, scikit-learn, TensorFlow)
  • Experience with cloud platforms and products including Amazon AWS, Lex bots, Lambda, Microsoft Azure or similar cloud technologies
  • Solid working experience implementing the Solr search engine, SQL, Elasticsearch and Neo4j
  • Experience in data analysis, modelling and reporting using Power BI or similar tools
  • Experience with Enterprise Content Management systems such as Adobe AEM or any enterprise CMS
  • Experience implementing knowledge graphs using Schema.org, Facebook Open Graph or Google AMP pages is an added advantage
  • Excellent collaboration and negotiation skills
  • Results-driven with a positive, can-do attitude
  • Experience implementing Intelligent Automation tools such as WorkFusion, UiPath or Automation Anywhere is an added advantage

Qualifications:

  • MS or PhD degree in Computer Science, Statistics, Mathematics, Data Science or any related field
  • Previous industry or research experience solving business problems by applying machine learning and deep learning algorithms
  • Must be a hands-on technologist with prior experience in a similar role
  • Good experience practicing and executing projects in Agile Scrum or SAFe iterative methodologies

iMoney Group
  • Kuala Lumpur, Malaysia
  • Salary: $84k - 96k

Reporting into the CEO, as iMoney’s Head of Data Science, you will be the guru for the full suite of data-related services supporting the organization, including reporting and business intelligence, data analytics, data science and the overall data infrastructure.



  • Unlock the full potential of the huge amounts of data that have been and are continuously being collected at iMoney

  • Craft the data vision and own, identify and implement the data analytics and data science roadmap for the iMoney Group across all business areas

  • Be the strategic leader and developmental coach for our current data team comprising two data analysts and build out additional data capabilities within the iMoney Group

  • Partner with the different business units and leverage data analytics, insights and science to drive all aspects of the customer conversion funnel including marketing channel attribution and optimization, onsite and offline (call-centre) user behavior and conversion, recommendation engines and product matching, customer segmentation, predictive analysis and propensity modelling

  • Utilize best in class practices with respect to data analytics, visualization, reporting dashboards and data science modelling

  • Establish iMoney as the market leader in the field of data analytics and innovative data science

  • Collaborate with the technology and product teams in continuously enhancing and delivering a robust, efficient and scalable data collection, structuring and warehousing infrastructure


Requirements:



  • Passionate about data and its ability to drive high business impact and growth

  • 10 years of experience in the field of data analytics and data science, including at least 3 years in a leadership role at a scale-up stage digital consumer business such as e-commerce, online lending platforms, digital banks, online financial marketplaces or similar

  • Hands-on experience in any of the following tools: R, Python, KNIME, SAS, SPSS

  • Clear understanding of databases and extensive knowledge of SQL, AWS Redshift, Hadoop, Hive, Teradata, Google BigQuery

  • Experience in implementing and leveraging Tableau for business reporting and intelligence

  • Expertise in applying advanced predictive statistical techniques to develop regression, time-series and segmentation models (see the sketch after this list). Exposure to design of experiments or neural networks.

  • Responsibility and Attention to Detail - take responsibility for delivery of precise and accurate business intelligence, data analytics and insights to tight timescales and work to resolve problems when they occur

  • Project management skills - ability to scope out and implement larger data related projects as per business requirements including clear understanding of resource and timing requirements

  • People leadership skills - coaching, inspiring, career counselling, mentoring and capability development of team members and peers

  • Excellent stakeholder management, communication and presentation skills – fluent in English, breaking down complex problems with data-driven solutions and having a service-orientated mindset 
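
For illustration only, a minimal customer segmentation sketch of the kind named in the requirements above, on invented data; scikit-learn and the feature choices are assumptions, not part of the posting.

    # Illustrative only: k-means segmentation on invented customer features.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # hypothetical features: monthly visits, products compared, applications made
    customers = rng.gamma(shape=2.0, scale=3.0, size=(1000, 3))

    scaled = StandardScaler().fit_transform(customers)
    segments = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scaled)
    print(np.bincount(segments))   # number of customers in each segment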

Huntech USA LLC
  • San Diego, CA

Great opportunity to work with the leader in the semiconductor industry, which unveiled the world's first 7-nanometer PC platform, created from the ground up for the next generation of personal computing by bringing new features with thin and light designs, allowing for new form factors in the always-on, always-connected category. It features the new octa-core CPU, the fastest CPU ever designed and built, with a larger cache than previous compute platforms, faster multi-tasking and increased productivity for users, disrupting the performance expectations of current thin, light and fanless PC designs. This platform is currently sampling to customers and is expected to begin shipping in commercial devices in Q3 of 2019.


Staff Data Analyst

You will study the performance of the Global Engineering Grid/design workflows across the engineering grid and provide insights and effective analytics in support of the Grid 2.0 program. You will conduct research, design statistical studies and analyze data in support of the Grid 2.0 program. This job will challenge you to dive deep into the engineering grid/design flow world and understand the unique challenges of operating an engineering grid at a scale unrivaled in the industry. You should have experience working in an EDA or manufacturing environment and be comfortable working in an environment where problems are not always well-defined.


Responsibilities:

  • Identify and pursue opportunities to improve the efficiency of the global engineering grid and design workflows.
  • Develop systems to ingest, analyze, and take automated action across real-time feeds of high-volume data.
  • Research and implement new analytics approaches and the effective deployment of machine learning/data modeling to solve business problems; identify patterns and trends from large, high-dimensional data sets and manipulate data into digestible and actionable reports.
  • Make business recommendations (e.g. cost-benefit, experiment analysis) with effective presentations of findings to multiple levels of stakeholders through visual displays of quantitative information.
  • Plan effectively to set priorities and manage projects; identify roadblocks and come up with technical options.


Leverage your 8+ years of experience articulating business questions and using mathematical techniques to arrive at an answer using available data. 3-4 years of advanced Tableau experience is a must. Experience translating analysis results into business recommendations. Experience with statistical software (e.g., R, Python, MATLAB, pandas, Scala) and database languages like SQL. Experience with data warehousing concepts (Hadoop, MapR) and visualization tools (e.g. QlikView, Tableau, Angular, ThoughtSpot). Strong business acumen, critical thinking ability, and attention to detail.


A background in data science, applied mathematics, or computational science and a history of solving difficult problems using a scientific approach, with an MS or BS degree in a quantitative discipline (e.g., Statistics, Applied Mathematics, Operations Research, Computer Science, Electrical Engineering) and an understanding of how to design scientific studies. You should be familiar with the state of the art in machine learning/data modeling/forecasting and optimization techniques in a big data environment.



Data Analytics Software Test Engineer

As a member of the Corporate Engineering Services Group software test team, you will be responsible for testing various cutting edge data analytics products and solutions. You will be working with a dynamic engineering team to develop test plans, execute test plans, automate test cases, and troubleshoot and resolve issues.


Leverage your 1+ years of experience in the following:

  • Testing and systems validation for commercial software systems.
  • Testing of systems deployed in AWS Cloud.
  • Knowledge of SQL and databases.
  • Developing and implementing software and systems test plans.
  • Test automation development using Python or Java.
  • Strong problem solving and troubleshooting skills.
  • Experience in testing web-based and Android applications.
  • Familiar with Qualcomm QXDM and APEX tools.
  • Knowledge of software development in Python.
  • Strong written and oral communication skills
  • Working knowledge of JIRA and GitHub is preferred.


Education:

  • Required: Bachelor's, Computer Engineering and/or Computer Networks & Systems and/or Computer Science and/or Electrical Engineering
  • Preferred: Master's, Computer Engineering and/or Computer Networks & Systems and/or Computer Science and/or Electrical Engineering or equivalent experience


Interested? Please send a resume to our Founder & CEO, Raj Dadlani at raj@huntech.com and he will respond to interested candidates within 24 hours of resume receipt. We are dealing with a highly motivated hiring manager and shortlisting viable candidates by February 22, 2019.

MRE Consulting, Ltd.
  • Houston, TX

Candidates for this U.S. position must be a U.S. citizen or national, or an alien admitted as permanent resident, refugee, asylee or temporary resident under 8 U.S.C. 1160(a) or 1255(a) (1). Individuals with temporary visas such as A, B, C, D, E, F, G, H, I, J, L, M, NATO, O, P, Q, R or TN or who need sponsorship for work authorization in the United States now or in the future, are not eligible for hire.


Our client is seeking to hire an Enterprise Data Architect. The position reports to the VP IT. The Data Architect is responsible for providing a standard common business vocabulary across all applications and data elements, expressing and defining strategic data requirements, outlining high level integrated designs to meet the various business unit requirements, and aligning with the overall enterprise strategy and related business architecture.


Essential Duties & Responsibilities:
Provide insight and strategies for changing database storage and utilization requirements for the company and provide direction on potential solutions
Assist in the definition and implementation of a federated data model consisting of a mixture of multi-cloud and on premises environments to support operations and business strategies
Assist in managing vendor cloud environments and multi-cloud database connectivity.
Analyze structural data requirements for new/existing applications and platforms
Submit reports to management that outline the changing data needs of the company and develop related solutions
Align database implementation methods to make sure they support company policies and any external regulations
Interpret data, analyze results and provide ongoing reporting and support
Implement data collection systems and other strategies that optimize efficiency and data quality
Acquire available data sources and maintain data systems
Identify, analyze, and interpret trends or patterns in data sets
Scrub data as needed, review reports, printouts, and performance indicators to identify inconsistencies
Develop database design and architecture documentation for the management and executive teams
Monitor various database systems to confirm optimal performance standards are met
Contribute to content updates within resource portals and other operational needs
Assist in presentations and interpretations of analytical findings and actively participate in discussions of results, internally and externally
Help maintain the integrity and security of the company database
Ensure transactional activities are processed in accordance with standard operating procedures. The employee will be on call 24 hours a day, 7 days per week.


Qualifications
Minimum of 10+ years of experience.
Proven work experience as a Data Architect, Data Scientist, or similar role
In-depth understanding of database structure principles
Strong knowledge of data mining and segmentation techniques
Expertise in MS SQL and other database platforms
Familiarity with data visualization tools
Experience with formal Enterprise Architecture tools (like BiZZdesign)
Experience in managing cloud-based environments
Aptitude regarding data models, data mining, and cloud-based applications
Advanced analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy
Adept at report writing and presenting findings
Proficiency in systems support and monitoring
Experience with complex data structures in the Oil and Gas Industry a plus


Education 
A bachelor's degree in Computer Science, Math, Statistics, or a related quantitative field is required.


Travel Requirements
The percentage of travel anticipated for this position is 10-20%, including overnight extended stays.


All qualified candidates should apply by providing a current Word resume and denoting skill set experience as it relates to this requirement.

Brighter Brain
  • Atlanta, GA

Brighter Brain is seeking a skilled professional to serve as an internal resource for our consulting firm in the field of Data Science Development. Brighter Brain provides Fortune 500 clients throughout the United States with IT consultants in a wide-ranging technical sphere.

In order to fully maintain our incoming nationwide and international hires, we will be hiring a Senior Data Science SME (ML) with practical experience to relocate to Atlanta and coach/mentor our incoming classes of consultants. If you have a strong passion for the Data Science platform and are looking to join a seasoned team of IT professionals, this could be an advantageous next step.

Brighter Brain is an IT Management & Consulting firm providing a unique take on IT consulting. We currently offer expertise to US clients in the fields of Mobile Development (iOS and Android), Hadoop, Microsoft SharePoint, and Exchange/Office 365. We are currently seeking a highly skilled professional to serve as an internal resource for our company in the field of Data Science, with expertise in Machine Learning (ML).

The ideal candidate will be responsible for establishing our Data Science practice. The responsibilities include creation of a comprehensive training program and training, mentoring, and supporting ideal candidates as they progress towards building their career in Data Science Consulting. This position is based out of our head office in Atlanta, GA.

If you have a strong passion for Data Science and are looking to join a seasoned team of IT professionals, this could be an advantageous next step.

The Senior Data Science SME will take on the following responsibilities:

-       Design, develop and maintain Data Science training material, focused around ML. Knowledge of DL, NN & NLP is a plus.

-       Interview potential candidates to ensure that they will be successful in the Data Science domain and training.

-       Train, guide and mentor junior to mid-level Data Science developers.

-       Prepare mock interviews to enhance the learning process provided by the company.

-       Prepare and support consultants for interviews for specific assignments involving development and implementation of Data Science.

-       Act as a primary resource for individuals working on a variety of projects throughout the US.

-       Interact with our Executive and Sales team to ensure that projects and employees are appropriately matched.

The ideal candidate will not only possess solid knowledge of the realm, but must also have fluency in the following areas:

-       Hands-on expertise in using Data Science and building machine learning models and Deep learning models

-       Statistics and data modeling experience

-       Strong understanding of data sciences

-       Understanding of Big Data

-       Understanding of AWS and/or Azure

-       Understand the differences between TensorFlow, MXNet, etc.

Skills Include:

  • Master's degree in the Computer Science or mathematics fields
  • 10+ years of professional experience in the IT industry, in the AI realm
  • Strong understanding of MongoDB, Scala, Node.js, AWS, & Cognitive applications
  • Excellent knowledge in Python, Scala, JavaScript and its libraries, Node.js, R and MATLAB, C/C++, Lua or any proficient AI language of choice
  • NoSQL databases, bot frameworks, data streaming and integrating unstructured data; rules engines (e.g. Drools), ESBs (e.g. MuleSoft)
  • Computer Vision, Recommendation Systems, Pattern Recognition, Large Scale Data Mining or Artificial Intelligence, Neural Networks
  • Deep Learning frameworks like TensorFlow, Torch, Caffe, Theano, CNTK, scikit-learn, numpy, scipy
  • Working knowledge of ML techniques such as: Naïve Bayes Classification, Ordinary Least Squares Regression, Logistic Regression, Support Vector Machines, Ensemble Methods, Clustering Algorithms, Principal Component Analysis, Singular Value Decomposition, and Independent Component Analysis
  • Natural Language Processing (NLP) concepts such as topic modeling, intents, entities, and NLP frameworks such as spaCy, NLTK, MeTA, gensim or other toolkits for Natural Language Understanding (NLU)
  • Experience with data profiling, data cleansing, data wrangling/munging, ETL
  • Familiarity with Spark MLlib, Mahout, and the Google, Bing, and IBM Watson APIs
  • Hands-on experience as needed with training a variety of consultants
  • Analytical and problem-solving skills
  • Knowledge of the IoT space
  • Understand Academic Data Science vs Corporate Data Science
  • Knowledge of the Consulting/Sales structure

Additional details about the position:

-       Able to relocate to Atlanta, GA (relocation package available)

-       Work schedule of 9 AM to 6 PM EST

Questions: Send your resume to Ansel Butler at Brighter Brain; make sure that there is a valid phone number and Skype ID either on the resume, or in the body of the email.

Ansel Essic Butler

EMAIL: ANSEL.BUTLER@BRIGHTERBRAIN.COM

404 791 5128

SKYPE: ANSEL.BUTLER@OUTLOOK.COM

Senior Corporate Recruiter

Brighter Brain LLC.

1785 The Exchange, Suite 200

Atlanta, GA. 30339

Expedia, Inc.
  • Bellevue, WA

What is the first thing you do while planning your travel? Do you want to work on a team that helps travelers like you go places and make our world more connected?

Expedia Flights team is the traffic powerhouse for the Expedia group and our flights shopping platform is one of the largest in the world serving over 150 million queries a day and powering some of the strongest brands in the industry like Orbitz, Expedia, Travelocity, Wotif, Hotwire and ebookers. 

Our technology operations are global, with representation in US, Mexico, Australia and India.


What makes Flights technology unique?



  • We are one of the few companies in the world that develop a proprietary flight search engine which is used by millions of users every single day

  • We are moving one of the world’s biggest flights platforms to AWS

  • We handle several hundred thousand booking transactions daily and connect with all the major GDS partners you can think of in the world

  • We collect terabytes of flight data and are actively looking to use ML to show the right content to our customers


Expedia is looking for an extraordinary Distinguished Engineer to join the Flight Search Team.  Best Fare Search, Expedia’s proprietary flight search and pricing engine, performs complex manipulations on massive and highly volatile datasets to power airline flight shopping for millions of customers every single day.


You will have the opportunity to understand and shape the marketplace. This role will pursue extremely hard problems, craft solutions and make design decisions which can have a large impact across the company. The systems you design and implement will be expected to meet the levels of scalability and robustness needed for this high-volume and high-visibility product.


Bring your programming smarts, problem solving skills, and passion for software engineering and join us as we solidify and grow our position as the leaders in the travel industry.


What you’ll do: 



  • Lead, influence, and be a contributor across our entire technology team while acting as an area expert for your team and flight search services

  • Primary designer and architect for the continued evolution of Best Fare Search and flight search services for Expedia Group

  • Design for high-performance, highly scalable, and reliable server applications in our data center and the cloud

  • Produce production quality code and have a strong eye for the operational aspects of the platform such as performance tuning, monitoring, and fault-tolerance

  • Design, interpret, analyze and work with large amounts of data to identify issues and patterns

  • Contribute to advancing the team’s design methodology and quality programming practices

  • Technical ownership of critical flight search systems and services from inception through operating in production


Who you are: 



  • Functional Expertise

  • At least 15 years of industry experience in a variety of contexts, during which you’ve built remarkably scalable, robust, and fault-tolerant systems

  • Expertise in solving large scale flight search problems a significant plus

  • Exceptional coding skills in C#, C++ or Java and proficiency with XML and SQL

  • Experience working in a cloud or virtual environment

  • Expertise with continuous integration/delivery and leveraging a dev ops mindset

  • Previous experience delivering data insights by querying datasets in a big data environment (Hadoop, SQL, AWS Aurora, S3, etc.) and performing real-time streaming analytics

  • Production focus: previous history of being hands on in solving critical production issues that affect our valued customers and drive those insights back into the product in true dev ops style

  • Knowledge of airline and/or global distribution system (GDS) preferred


People Leadership

  • Inspiring and approachable as a leader
  • Create an environment where people can realize their full potential
  • Be humble and lead with open, candid relationships
  • Inspire peripheral relationships across Expedia Group
  • Passionate about engaging and developing talent; attract, develop, engage and retain talented individuals with a compelling, unifying vision that steers and motivates
  • Strong people skills and ability to successfully lead up, down, and across the organization
  • Demonstrated ability to mentor and grow more junior developers into strong, leading engineers
  • Proven capacity to establish trusted, effective relationships across diverse sets of partners


Additional Competencies



  • Natural bar-raiser: curious and passionate, with a desire to continuously learn more, which you use to understand basic business operations and the organizational levers that drive profitable growth

  • Bias to action, being familiar with methods and approaches needed to get things done in a collaborative, lean and fast-moving environment

  • Respond effectively to complex and ambiguous problems and situations

  • Lead mostly with questions rather than opinions, thriving on the opportunity to own, innovate, create, and constantly re-evaluate

  • Comfortable making recommendations across competing and equally critical business needs

  • Simplify, clearly and succinctly convey complex information and ideas to individuals at all levels of the organization

  • Motivated by goal achievement and continuous improvement, with the enthusiasm and drive to motivate your team and the wider organization



Why join us:
Expedia Group recognizes our success is dependent on the success of our people.  We are the world's travel platform, made up of the most knowledgeable, passionate, and creative people in our business.  Our brands recognize the power of travel to break down barriers and make people's lives better – that responsibility inspires us to be the place where exceptional people want to do their best work, and to provide them the tools to do so. 


Whether you're applying to work in engineering or customer support, marketing or lodging supply, at Expedia Group we act as one team, working towards a common goal; to bring the world within reach.  We relentlessly strive for better, but not at the cost of the customer.  We act with humility and optimism, respecting ideas big and small.  We value diversity and voices of all volumes. We are a global organization but keep our feet on the ground, so we can act fast and stay simple.  Our teams also have the chance to give back on a local level and make a difference through our corporate social responsibility program, Expedia Cares.


If you have a hunger to make a difference with one of the most loved consumer brands in the world and to work in the dynamic travel industry, this is the job for you.


Our family of travel brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Egencia®, trivago®, HomeAway®, Orbitz®, Travelocity®, Wotif®, lastminute.com.au®, ebookers®, CheapTickets®, Hotwire®, Classic Vacations®, Expedia® Media Solutions, CarRentals.com™, Expedia Local Expert®, Expedia® CruiseShipCenters®, SilverRail Technologies, Inc., ALICE and Traveldoo®.



Expedia is committed to creating an inclusive work environment with a diverse workforce.   All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.  This employer participates in E-Verify. The employer will provide the Social Security Administration (SSA) and, if necessary, the Department of Homeland Security (DHS) with information from each new employee's I-9 to confirm work authorization.

AIRBUS
  • Blagnac, France

Description of the job



Vacancies for 3 Data Scientists (m/f) have arisen within Airbus Commercial Aircraft in Toulouse. You will join the PLM Systems & Integration Tests team within IM Develop department.  



The IM Develop organization is established to ensure Product Life Cycle Management (PLM) support and services as requested by Programmes, CoE and CoC. The department is the home within Airbus for leading the development, implementation, maintenance and support of PLM for all Airbus programs, in line with the corporate strategy.



Within the frame of its Digital Design, Manufacturing & Services (DDMS) project, Airbus is undergoing a significant digital transformation to benefit from the latest advances in new technologies and targets a major efficiency breakthrough across the program and product lifecycle. It will be enabled by a set of innovative concepts such as model based system engineering, modular product lines, digital continuity and concurrent co-design of the product, its industrial setup and operability features.



As a Data Scientist (m/f), you will be integrated into a team of the IM Develop department and appointed to dedicated missions. You will work in an international environment where you will be able to develop in-depth knowledge of local specificities: engineering, manufacturing, costing, etc.



Tasks & accountabilities



Your main tasks and responsibilities will be to:




  • Analyze large amounts of information to discover trends and patterns, build predictive models, implement cost models and machine-learning algorithms based on technical data and DMU models.

  • Combine models through ensemble modelling (see the sketch after this list)

  • Present information using data visualization techniques

  • Propose solutions and strategies to business challenges

  • Implement feature extraction by analyzing CAD models and the engineering Bill of Material

  • Collaborate with engineering and costing (FCC) to implement new costing models in Python

  • Design and propose new short/medium- and long-term forecasting methods

  • Consolidate, compare and enlarge the data required for the various types of modelling

  • Attend technical events/conferences and reinforce Data Science skills within Airbus
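
For illustration only, a minimal sketch of combining models through ensemble modelling, as named in the task list above, on synthetic data; scikit-learn's VotingRegressor is an assumed tool choice, not something the posting specifies.

    # Illustrative only: a simple two-model ensemble on synthetic regression data.
    from sklearn.datasets import make_regression
    from sklearn.ensemble import GradientBoostingRegressor, VotingRegressor
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    X, y = make_regression(n_samples=1000, n_features=15, noise=10.0, random_state=0)

    ensemble = VotingRegressor([
        ("ridge", Ridge(alpha=1.0)),
        ("gbr", GradientBoostingRegressor(random_state=0)),
    ])  # predictions are averaged across the two base models

    score = cross_val_score(ensemble, X, y, cv=5, scoring="r2").mean()
    print(f"mean cross-validated R^2: {score:.3f}")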




Required skills



We are looking for candidates with the following skills and experience:




  • Strong knowledge of Python development in the frame of industrial projects

  • Experience in data mining & machine-learning

  • Knowledge of Scala, Java or C++; familiarity with R and SQL is an asset

  • Experience using business intelligence tools

  • Analytical mindset

  • Strong math skills (e.g. statistics, algebra)

  • Problem-solving aptitude

  • Excellent communication and presentation skills

  • PLM knowledge and 3D CAD programming would be a plus

  • French & English: advanced level

Citizens Advice
  • London, UK
  • Salary: £40k - 45k

As a Database engineer in the DevOps team here at Citizens Advice you will help us develop and implement our data strategy. You will have the opportunity to work with both core database technologies and big data solutions.


Past


Starting from scratch, we have built a deep tech-stack with AWS services at its core. We created a new CRM system, migrated a huge amount of data to AWS Aurora PG and used AWS RDS to run some of our business critical databases.


You will have gained a solid background and in-depth knowledge of AWS RDS and SQL/administration against DBMSs such as PostgreSQL / MySQL / SQL Server and Dynamo / Aurora. You will have dealt with data warehousing, ETL, DB mirroring/replication, and DB security mechanisms and techniques.


Present


We use AWS RDS including Aurora as the standard DB implementation for our applications. We parse data in S3 using Spark jobs and we are planning to implement a data lake solution in AWS.
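
For illustration only, a minimal PySpark sketch of the parse-data-in-S3-with-Spark workflow described above; the bucket names, paths and fields are invented, and S3 access assumes the cluster already has an S3 connector (e.g. hadoop-aws) configured.

    # Illustrative only: read raw JSON from S3, clean it, and write partitioned Parquet.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("parse-raw-events").getOrCreate()

    # hypothetical raw bucket and layout
    raw = spark.read.json("s3a://example-raw-bucket/events/2019/02/*.json")

    cleaned = (raw
               .filter(F.col("event_type").isNotNull())
               .withColumn("event_date", F.to_date("event_timestamp")))

    cleaned.write.mode("overwrite").partitionBy("event_date") \
           .parquet("s3a://example-curated-bucket/events/")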


Our tools and technologies include:



  • Postgres on AWS RDS

  • SQL Server for our Data Warehouse

  • Liquibase for managing the DW schema

  • Jenkins 2 for task automation

  • Spark / Parquet / AWS Glue for parsing raw data

  • Docker / docker-compose for local testing


You will be developing, supporting and maintaining automation tools to drive database, reporting and maintenance tasks.


As part of our internal engineering platform offering, R&D time will give you the opportunity to develop POC solutions to integrate with the rest of the business.


Future


You will seek continuous improvement and implement solutions to help Citizens Advice deliver digital products better and quicker.


You will be helping us implement a data lake solution to improve operations and to offer innovative services.


You will have dedicated investment time at Citizens Advice to learn new skills, technologies, research topics or work on tools that make this possible.

Intercontinental Exchange
  • Atlanta, GA
Job Purpose
The Data Analytics team is seeking a dynamic, self-motivated Data Scientist who is able to work independently on data analysis, data mining, report development and customer requirement gathering.
Responsibilities
  • Applies data analysis and data modeling techniques, based upon a detailed understanding of the corporate information requirements, to establish, modify, or maintain data structures and their associated components
  • Participates in the development and maintenance of corporate data standards
  • Supports stakeholders and business users to define data and analytic requirements
  • Works with the business to identify additional internal and external data sources to bring into the data environment and mesh with existing data
  • Storyboard, create, and publish standard reports, data visualizations, analysis and presentations
  • Develop and implement workflows using Alteryx and/or R
  • Develop and implement various operational and sales Tableau dashboards
Knowledge And Experience
  • Bachelor's degree in statistics/engineering/math/quantitative analytics/economics/finance or a related quantitative discipline required
  • Master's in engineering/physics/statistics/economics/math/science preferred
  • 1+ years of experience with data science techniques and real-world application experience
  • 2+ years of experience supporting the development of analytics solutions leveraging tools like Tableau Desktop and Tableau Online
  • 1+ years of experience working with SQL, developing complex SQL queries, and leveraging SQL in Tableau
  • 1+ years of experience in Alteryx, and R coding
  • Deep understanding of Data Governance and Data Modeling
  • Ability to actualize requirements
  • Advanced written and oral communication skills with the ability to summarize findings and present in a clear, concise manner to peers, management, and others
Additional Information
    • Job Type: Standard
    • Schedule: Full-time
Comcast
  • Philadelphia, PA

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.

As a Data Science Engineer in Comcastdx, you will research, model, develop, support data pipelines and deliver insights for key strategic initiatives. You will develop or utilize complex programmatic and quantitative methods to find patterns and relationships in data sets; lead statistical modeling, or other data-driven problem-solving analysis to address novel or abstract business operation questions; and incorporate insights and findings into a range of products.

Assist in design and development of collection and enrichment components focused on quality, timeliness, scale and reliability. Work on real-time data stores and a massive historical data store using best-of-breed and industry leading technology.

Responsibilities:

-Develop and support data pipelines

-Analyze massive amounts of data, both in real-time and batch processing, utilizing Spark, Kafka, and AWS technologies such as Kinesis, S3, Elasticsearch, and Lambda

-Create detailed write-ups of the processes used, logic applied, and methodologies used for creation, validation, analysis, and visualizations. Write-ups shall occur initially, within a week of when a process is created, and be updated in writing when changes occur.

-Prototype ideas for new ML/AI tools, products and services

-Centralize data collection and synthesis, including survey data, enabling strategic and predictive analytics to guide business decisions

-Provide expert and professional data analysis to implement effective and innovative solutions meshing disparate data types to discover insights and trends.

-Employ rigorous continuous delivery practices managed under an agile software development approach

-Support DevOps practices to deploy and operate our systems

-Automate and streamline our operations and processes

-Troubleshoot and resolve issues in our development, test and production environments

Here are some of the specific technologies and concepts we use:

-Spark Core and Spark Streaming (see the sketch after this list)

-Machine learning techniques and algorithms

-Java, Scala, Python, R

-Artificial Intelligence

-AWS services including EMR, S3, Lambda, ElasticSearch

-Predictive Analytics

-Tableau, Kibana

-Git, Maven, Jenkins

-Linux

-Kafka

-Hadoop (HDFS, YARN)
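
For illustration only, a minimal PySpark Structured Streaming sketch combining the Spark Streaming and Kafka items listed above; the topic, broker address and schema are invented, and the spark-sql-kafka package is assumed to be on the classpath.

    # Illustrative only: count device statuses per 5-minute window from a Kafka topic.
    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.types import StringType, StructField, StructType, TimestampType

    spark = SparkSession.builder.appName("stream-demo").getOrCreate()

    schema = StructType([StructField("device_id", StringType()),
                         StructField("event_time", TimestampType()),
                         StructField("status", StringType())])

    events = (spark.readStream.format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")   # invented broker
              .option("subscribe", "device-events")               # invented topic
              .load()
              .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
              .select("e.*"))

    counts = events.groupBy(F.window("event_time", "5 minutes"), "status").count()

    query = counts.writeStream.outputMode("complete").format("console").start()
    query.awaitTermination()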

Skills & Requirements:

-5-8 years of Java experience, Scala and Python experience a plus

-3+ years of experience as an analyst, data scientist, or related quantitative role.

-3+ years of relevant quantitative and qualitative research and analytics experience. Solid knowledge of statistical techniques.

-Bachelor's in Statistics, Math, Engineering, Computer Science or a related discipline. Master's Degree preferred.

-Experience in software development of large-scale distributed systems including proven track record of delivering backend systems that participate in a complex ecosystem

-Experience with more advanced modeling techniques (eg ML.)

-Distinctive problem solving and analysis skills and impeccable business judgement.

-Experience working with imperfect data sets that, at times, will require improvements to process, definition and collection

-Experience with real-time data pipelines and components including Kafka, Spark Streaming

-Proficient in Unix/Linux environments

-Test-driven development/test automation, continuous integration, and deployment automation

-Excellent communicator, able to analyze and clearly articulate complex issues and technologies understandably and engagingly

-Team player is a must

-Great design and problem-solving skills

-Adaptable, proactive and willing to take ownership

-Attention to detail and high level of commitment

-Thrives in a fast-paced agile environment

About Comcastdx:

Comcastdx is a results-driven big data engineering team responsible for delivery of the multi-tenant data infrastructure and platforms necessary to support our data-driven culture and organization. dx has an overarching objective to gather, organize, and make sense of Comcast data with the intention to reveal business and operational insight, discover actionable intelligence, enable experimentation, empower users, and delight our stakeholders. Members of the dx team define and leverage industry best practices, work on large-scale data problems, design and develop resilient and highly robust distributed data organizing and processing systems and pipelines, as well as research, engineer, and apply data science and machine intelligence disciplines.

Comcast is an EOE/Veterans/Disabled/LGBT employer

Comcast
  • Philadelphia, PA

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.

As a Data Science Engineer in Comcastdx, you will research, model, develop, support data pipelines and deliver insights for key strategic initiatives. You will develop or utilize complex programmatic and quantitative methods to find patterns and relationships in data sets; lead statistical modeling, or other data-driven problem-solving analysis to address novel or abstract business operation questions; and incorporate insights and findings into a range of products.

Assist in design and development of collection and enrichment components focused on quality, timeliness, scale and reliability. Work on real-time data stores and a massive historical data store using best-of-breed and industry leading technology.

Responsibilities:

-Develop and support data pipelines

-Analyze massive amounts of data, both in real-time and batch processing, utilizing Spark, Kafka, and AWS technologies such as Kinesis, S3, Elasticsearch, and Lambda

-Create detailed write-ups of the processes used, logic applied, and methodologies used for creation, validation, analysis, and visualizations. Write-ups shall occur initially, within a week of when a process is created, and be updated in writing when changes occur.

-Prototype ideas for new ML/AI tools, products and services

-Centralize data collection and synthesis, including survey data, enabling strategic and predictive analytics to guide business decisions

-Provide expert and professional data analysis to implement effective and innovative solutions meshing disparate data types to discover insights and trends.

-Employ rigorous continuous delivery practices managed under an agile software development approach

-Support DevOps practices to deploy and operate our systems

-Automate and streamline our operations and processes

-Troubleshoot and resolve issues in our development, test and production environments

Here are some of the specific technologies and concepts we use:

-Spark Core and Spark Streaming

-Machine learning techniques and algorithms

-Java, Scala, Python, R

-Artificial Intelligence

-AWS services including EMR, S3, Lambda, Elasticsearch

-Predictive Analytics

-Tableau, Kibana

-Git, Maven, Jenkins

-Linux

-Kafka

-Hadoop (HDFS, YARN)

Skills & Requirements:

-3-5 years of Java experience; Scala and Python experience a plus

-2+ years of experience as an analyst, data scientist, or related quantitative role.

-2+ years of relevant quantitative and qualitative research and analytics experience. Solid knowledge of statistical techniques.

-Bachelor's in Statistics, Math, Engineering, Computer Science, or a related discipline. Master's degree preferred.

-Experience in software development of large-scale distributed systems including proven track record of delivering backend systems that participate in a complex ecosystem

-Experience with more advanced modeling techniques (e.g., machine learning)

-Distinctive problem-solving and analysis skills and impeccable business judgment

-Experience working with imperfect data sets that, at times, will require improvements to process, definition and collection

-Experience with real-time data pipelines and components including Kafka, Spark Streaming

-Proficient in Unix/Linux environments

-Test-driven development/test automation, continuous integration, and deployment automation

-Excellent communicator, able to analyze and clearly articulate complex issues and technologies understandably and engagingly

-Team player is a must

-Great design and problem-solving skills

-Adaptable, proactive and willing to take ownership

-Attention to detail and high level of commitment

-Thrives in a fast-paced agile environment

About Comcast dx:

Comcast dx is a results-driven big data engineering team responsible for delivering the multi-tenant data infrastructure and platforms necessary to support our data-driven culture and organization. dx has an overarching objective to gather, organize, and make sense of Comcast data with the intention of revealing business and operational insight, discovering actionable intelligence, enabling experimentation, empowering users, and delighting our stakeholders. Members of the dx team define and leverage industry best practices, work on large-scale data problems, design and develop resilient and highly robust distributed data organizing and processing systems and pipelines, and research, engineer, and apply data science and machine intelligence disciplines.

Comcast is an EOE/Veterans/Disabled/LGBT employer

Impetus
  • Phoenix, AZ

      Multiple Positions | Multiple Locations: Phoenix, AZ / Richmond, VA / Tampa, FL

      Employment Type: Full time or Contract


      As a Big Data Engineer, you will have the opportunity to make a significant impact, both to our organization and to those of our Fortune 500 clients. You will work directly with clients at the intersection of business and technology. You will leverage your experience with Hadoop and software engineering to help our clients use insights gleaned from their data to drive value.


      You will also be given substantial opportunities to develop and expand your technical skillset with emerging Big Data technologies so you can continually innovate, learn, and hit the gas pedal on your career.



Required:
  • 4+ years of IT experience
  • Very good experience with Hadoop, Hive, and Spark batch processing (streaming experience is good to have)
  • Experience with at least one NoSQL database (HBase or Cassandra) is good to have
  • Experience with Java/J2EE & Web Services; Scala/Python is good to have
  • AWS experience (ETL implementation with AWS on Hadoop) is good to have
  • Writing utilities/programs to enhance product capabilities and fulfill specific customer requirements
  • Learning new technologies/solutions to solve customer problems
  • Providing feedback/learnings to the product team


Soft Skills:

    A team player who understands the roles and responsibilities of all the team members and facilitates a one-team culture
    Strong communication skills, both verbal and written
    Quick learner who can work independently on assigned tasks after initial hand-holding
Migo
  • Taipei, Taiwan

  • Responsibility 

    • Collaborate with data scientists to phase statistical, predictive machine learning and AI models into production at scale and continuously optimize performance.

    • Design, build, optimize, launch and support new and existing data models and ETL processes in production based on data products and stakeholder needs.

    • Define and manage SLA and accuracy for all data sets in allocated areas of ownership.

    • Design and continuously improve data infrastructure; identify infrastructure issues and drive them to resolution.

    • Support software development team to build and maintain data collectors in Migo application ecosystem based on data warehouse and analytics user requirements.





  • Basic Qualification:

    • Bachelor's degree in Computer Science, Information Management or related field.

    • 2+ years hands-on experience in the data warehouse space, custom ETL design, implementation and maintenance (a minimal ETL sketch follows this qualifications list).

    • 2+ years hands-on experience in SQL or similar languages and development experience in at least one scripting language (Python preferred).

    • Strong data architecture, data modeling, schema design and effective project management skills.

    • Excellent communication skills and proven experience in leading data-driven projects from definition through interpretation and execution.

    • Experience with large data sets and data profiling techniques.

    • Ability to initiate and drive projects, and communicate data warehouse plans to internal clients/stakeholders.
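
As a point of reference only, the following is a minimal custom ETL sketch in Python of the kind described above: rows are extracted from a CSV file, normalized, and loaded into a SQLite table. The file name, column names, and database are illustrative assumptions, not Migo systems.

    # Extract rows from a CSV file, normalize them, and load them into SQLite.
    import csv
    import sqlite3

    def extract(path):
        with open(path, newline="") as f:
            yield from csv.DictReader(f)

    def transform(rows):
        for row in rows:
            # Normalize types and silently drop malformed records.
            try:
                yield (row["order_id"], float(row["amount"]), row["country"].upper())
            except (KeyError, ValueError):
                continue

    def load(records, db_path="warehouse.db"):
        con = sqlite3.connect(db_path)
        con.execute("CREATE TABLE IF NOT EXISTS orders "
                    "(order_id TEXT, amount REAL, country TEXT)")
        con.executemany("INSERT INTO orders VALUES (?, ?, ?)", records)
        con.commit()
        con.close()

    if __name__ == "__main__":
        load(transform(extract("orders.csv")))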





  • Preferred Qualification:

    • Experience with big data and distributed computing technology such as Hive, Spark, Presto, Parquet

    • Experience building and maintaining production level data lake with Hadoop Cluster or AWS S3.

    • Experience with batch processing and streaming data pipeline/architecture design patterns such as lambda architecture or kappa architecture.








Hydrogen Group
  • Austin, TX

Data Scientist
x2 Roles
Permanent
Austin, TX
Remote + Flex Hours


Our client is a very well-funded venture capital start-up run by an experienced team of technical entrepreneurs based in Austin, TX. Having successfully raised over $10 million in Series A funding, they are looking to expand their existing Data Science and Analytics team by adding 2 new members, with plans to grow total headcount to 10 by the summer.

The successful candidate will focus on working with the client company's data across multiple sources and developing algorithms and models that will be used to improve performance.


Requirements:

  • The ability to solve problems that no one else has solved before in a start-up environment; in return you will be given flexible working hours and the ability to work remotely as you see fit.
  • Background in machine learning, statistics, or both; PhD and advanced academic qualifications welcome but not essential.
  • Ability to work in R, Python, and TensorFlow; RStudio preferred but not essential


Interviews taking place from the 21st onwards, with offers made ASAP


Elev8 Hire Solutions
  • Atlanta, GA

Jr. Data Scientist


Our client in the Midtown area is looking for a Jr. Data Scientist with a passion for machine learning, who knows the hows and whys of algorithms, and who is excited about the fraud industry. You'll be a pivotal piece of the Atlanta/US team in the development and application of adaptive, real-time analytical modeling algorithms. So if that gets you excited, apply!


Role Expectations:


  • End-to-end processing and modeling of large customer data sets.
  • Working with customers to understand the opportunities and constraints of their existing data in the context of machine learning and predictive modeling.
  • Develop statistical models and algorithms for integration with the company's product.
  • Apply analytical theory to real-world problems on large and dynamic datasets.
  • Produce materials to feed analytic results back to customers (reports, presentations, visualizations).
  • Providing input into future data science strategy and product development.
  • Working with development teams to support and enhance the analytical infrastructure.
  • Work with the QA team to advise on effective analytical testing.
  • Evaluate and improve the analytical results on live systems.
  • Develop an understanding of the industry data structures and processes.


Team working with:


  • Currently 6 other Data Scientists local to Atlanta; the rest of the team (10+) is in Cambridge
  • 130 people in the entire company


Top skills required:


  • Degree-level qualification with good mathematical background and knowledge of statistics.
  • Professional experience using Random Forests and other machine learning algorithms; development skills with C or Python (a minimal example follows this list)
  • First-hand experience of putting Data Storage into production
  • Experience in implementing statistical models and analytical algorithms in software.
  • Practical experience of the handling and mining of large, diverse, data sets.
  • Must have a USA work visa or Passport.
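
For illustration only, here is a minimal random forest sketch in Python on synthetic, imbalanced data; the dataset and parameters are placeholders and do not reflect the client's actual fraud models.

    # Train a random forest on synthetic, imbalanced data and report ROC AUC.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    # Synthetic stand-in for transaction features with a rare positive class.
    X, y = make_classification(n_samples=5000, n_features=20,
                               weights=[0.97, 0.03], random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, stratify=y, random_state=0)

    model = RandomForestClassifier(n_estimators=200,
                                   class_weight="balanced", random_state=0)
    model.fit(X_train, y_train)

    scores = model.predict_proba(X_test)[:, 1]
    print("ROC AUC:", roc_auc_score(y_test, scores))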


Nice to have:


  • Ph.D. or other postgraduate qualification would be an extreme advantage
  • An indication of how relevant technologies have been used (not just a list).
  • Attention to grammatical detail, layout and presentation.


Benefits:


  • Regular bonus scheme
  • 20 days annual leave
  • Healthcare package
  • Free Friday lunches
  • Regular social outings
  • Fridge and cupboards packed full of edible treats
  • Annual summer social and Christmas dinner
118118Money
  • Austin, TX

Seeking an individual with a keen eye for good design combined with the ability to communicate those designs through informative design artifacts. Candidates should be familiar with an Agile development process (and understand its limitations), and be able to mediate between product/business needs and developer architectural needs. They should be ready to get their hands dirty coding complex pieces of the overall architecture.

We are .NET Core on the backend, Angular 2 on a mobile web front-end, and native on Android and iOS. We host our code across AWS and on-premises VMs, and use various data backends (SQL Server, Oracle, Mongo).

Very important is interest in (and hopefully, experience with) modern big data pipelines and machine learning. Experience with streaming platforms feeding Apache Spark jobs that train machine learning models would be music to our ears. Financial platforms generate massive amounts of data, and re-architecting aspects of our microservices to support that will be a key responsibility.
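
As a rough sketch of that kind of pipeline (not our actual architecture), the example below trains a simple Spark ML pipeline on transaction data previously landed from a streaming platform; the S3 paths, column names, and label are illustrative assumptions.

    # Train a simple Spark ML pipeline on transactions landed from a stream.
    from pyspark.sql import SparkSession
    from pyspark.ml import Pipeline
    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.classification import LogisticRegression

    spark = SparkSession.builder.appName("credit-risk-training").getOrCreate()

    # Historical transactions written by an upstream streaming ingestion job.
    txns = spark.read.parquet("s3a://example-bucket/transactions/")

    assembler = VectorAssembler(inputCols=["amount", "balance", "tenure_days"],
                                outputCol="features")
    lr = LogisticRegression(featuresCol="features", labelCol="defaulted")

    model = Pipeline(stages=[assembler, lr]).fit(txns)
    model.write().overwrite().save("s3a://example-bucket/models/credit_risk")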

118118 Money is a private financial services company with R&D headquartered in Austin along highway 360, in front of the Bull Creek Nature preserve. We have offices around the world, so the candidate should be open to occasional travel abroad. The atmosphere is casual, and has a startup feel. You will see your software creations deployed quickly.

Responsibilities

    • Help us to build a big data pipeline and add machine learning capability to more areas of our platform.
    • Manage code from development through deployment, including support and maintenance.
    • Perform code reviews, assist and coach more junior developers to adhere to proper design patterns.
    • Build fault-tolerant distributed systems.

Requirements

    • Expertise in .NET, C#, HTML5, CSS3, Javascript
    • Experience with some flavor of ASP.NET MVC
    • Experience with SQL Server
    • Expertise in the design of elegant and intuitive REST APIs.
    • Cloud development experience (Amazon, Azure, etc)
    • Keen understanding of security principles as they pertain to service design.
    • Expertise in object-oriented design principles.

Desired

    • Machine Learning experience
    • Mobile development experience
    • Kafka / message streaming experience
    • Apache Spark experience
    • Knowledge of the ins and outs of Docker containers
    • Experience with MongoDB
FCA Fiat Chrysler Automobiles
  • Detroit, MI

Fiat Chrysler Automobiles is looking to fill the full-time position of a Data Scientist. This position is responsible for delivering insights to the commercial functions in which FCA operates.


The Data Scientist is a role in the Business Analytics & Data Services (BA) department and reports through the CIO. They will play a pivotal role in the planning, execution and delivery of data science and machine learning-based projects. The bulk of the work will be in the areas of data exploration and preparation, data collection and integration, machine learning (ML) and statistical modelling, and data pipelining and deployment.

The newly hired data scientist will be a key interface between the ICT Sales & Marketing team, the Business and the BA team. Candidates need to be very much self-driven, curious and creative.

Primary Responsibilities:

    • Problem Analysis and Project Management:
      • Guide and inspire the organization about the business potential and strategy of artificial intelligence (AI)/data science
      • Identify data-driven/ML business opportunities
      • Collaborate across the business to understand IT and business constraints
      • Prioritize, scope and manage data science projects and the corresponding key performance indicators (KPIs) for success
    • Data Exploration and Preparation:
      • Apply statistical analysis and visualization techniques to various data, such as hierarchical clustering, T-distributed Stochastic Neighbor Embedding (t-SNE), and principal components analysis (PCA) (a brief sketch follows this responsibilities list)
      • Generate and test hypotheses about the underlying mechanics of the business process.
      • Network with domain experts to better understand the business mechanics that generated the data.
    • Data Collection and Integration:
      • Understand new data sources and process pipelines. Catalog and document their use in solving business problems.
      • Create data pipelines and assets that enable greater efficiency and repeatability of data science activities.
    • Machine Learning and Statistical Modelling:
      • Apply various ML and advanced analytics techniques to perform classification or prediction tasks
      • Integrate domain knowledge into the ML solution; for example, from an understanding of financial risk, customer journey, quality prediction, sales, marketing
      • Testing of ML models, such as cross-validation, A/B testing, bias and fairness
    • Operationalization:
      • Collaborate with ML operations (MLOps), data engineers, and IT to evaluate and implement ML deployment options
      • (Help to) integrate model performance management tools into the current business infrastructure
      • (Help to) implement champion/challenger test (A/B tests) on production systems
      • Continuously monitor execution and health of production ML models
      • Establish best practices around ML production infrastructure
    • Other Responsibilities:
      • Train other business and IT staff on basic data science principles and techniques
      • Train peers on specialist data science topics
      • Promote collaboration with the data science COE within the organization.
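
For reference only, here is a brief Python sketch of the exploration techniques named in the responsibilities above (PCA, t-SNE, and hierarchical clustering), run on a public sample dataset standing in for business data.

    # Reduce a feature table with PCA, embed it with t-SNE, and group the rows
    # with hierarchical clustering; the iris data is only a stand-in.
    import numpy as np
    from sklearn.datasets import load_iris
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.manifold import TSNE
    from scipy.cluster.hierarchy import linkage, fcluster

    X = StandardScaler().fit_transform(load_iris().data)

    X_pca = PCA(n_components=2).fit_transform(X)      # linear projection
    X_tsne = TSNE(n_components=2, perplexity=30,
                  random_state=0).fit_transform(X)    # nonlinear embedding

    clusters = fcluster(linkage(X, method="ward"), t=3, criterion="maxclust")
    print("cluster sizes:", np.bincount(clusters)[1:])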

Basic Qualifications:

    • A bachelor's degree in computer science, data science, operations research, statistics, applied mathematics, or a related quantitative field is required. Alternate experience and education in equivalent areas, such as economics, engineering or physics, is acceptable. Experience in more than one area is strongly preferred.
    • Candidates should have three to six years of relevant project experience in successfully launching, planning, and executing data science projects, preferably in the domains of automotive or customer behavior prediction.
    • Coding knowledge and experience in several languages: for example, R, Python, SQL, Java, C++, etc.
    • Experience working across multiple deployment environments, including cloud, on-premises and hybrid, and multiple operating systems, and with containerization techniques such as Docker, Kubernetes, AWS Elastic Container Service, and others.
    • Experience with distributed data/computing and database tools: MapReduce, Hadoop, Hive, Kafka, MySQL, Postgres, DB2 or Greenplum, etc.
    • All candidates must be self-driven, curious and creative.
    • They must demonstrate the ability to work in diverse, cross-functional teams.
    • Should be confident, energetic self-starters, with strong moderation and communication skills.

Preferred Qualifications:

    • A master's degree or PhD in statistics, ML, computer science, or the natural sciences (especially physics) or any engineering discipline, or equivalent.
    • Experience in one or more of the following commercial/open-source data discovery/analysis platforms: RStudio, Spark, KNIME, RapidMiner, Alteryx, Dataiku, H2O, SAS Enterprise Miner (SAS EM) and/or SAS Visual Data Mining and Machine Learning, Microsoft AzureML, IBM Watson Studio or SPSS Modeler, Amazon SageMaker, Google Cloud ML, SAP Predictive Analytics.
    • Knowledge and experience in statistical and data mining techniques: generalized linear model (GLM)/regression, random forest, boosting, trees, text mining, hierarchical clustering, deep learning, convolutional neural network (CNN), recurrent neural network (RNN), T-distributed Stochastic Neighbor Embedding (t-SNE), graph analysis, etc.
    • A specialization in text analytics, image recognition, graph analysis or other specialized ML techniques such as deep learning, etc., is preferred.
    • Ideally, the candidates are adept in agile methodologies and well-versed in applying DevOps/MLOps methods to the construction of ML and data science pipelines.
    • Knowledge of industry standard BA tools, including Cognos, QlikView, Business Objects, and other tools that could be used for enterprise solutions
    • Should exhibit superior presentation skills, including storytelling and other techniques, to guide, inspire, and explain analytics capabilities and techniques to the organization.