OnlyDataJobs.com

RM Dayton Analytics
  • Dallas, TX
Job Summary & Responsibilities

RM Dayton Analytics' premier retail client has an immediate opening for a Web Analytics & Measurements Analyst.

Overview:

 

The Analytics & Measurement Specialist will identify data-driven insights, inform test design, and measure test results for various pilot test initiatives.
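For a concrete flavor of the measurement work described above, here is a minimal, hypothetical sketch of evaluating a two-variant test with a two-proportion z-test. The posting lists SQL, SAS, R, or Python; Python with SciPy is assumed here, and the conversion counts are invented for illustration.

from math import sqrt
from scipy.stats import norm

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - norm.cdf(abs(z)))
    return z, p_value

# Invented results: control converts 420/10,000, variant 495/10,000.
z, p = two_proportion_ztest(420, 10_000, 495, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 would suggest a real lift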

 

Responsibilities:

  • Propose, execute, and communicate discovery analytics to identify new tests or inform test design/sizing
  • Determine relevant metrics to effectively measure the performance of each test
  • Evaluate test performance against KPIs (including deep dives/segment cuts) according to the measurement playbook, help apply learnings to future tests, and assist with making scaling decisions
  • Assist in sizing the annualized impact of winning tests
  • Receive discovery analysis requests and turn them into concrete analysis plans
  • Perform relevant analytics to support test execution results and deep dives
  • Provide accurate and timely reporting of the entire portfolio of active test results

Requirements:

  • At least 2 years of experience in a business analytics role, preferably in testing or web analytics
  • Experience with A/B, multivariate and/or incremental test design and implementation
  • Experience in data manipulation and analysis using SQL, SAS, R, or Python
  • Bachelor's degree in a quantitative discipline (Statistics, Applied Mathematics, Economics, etc.) or sufficient on-the-job use of related skills
  • Exceptional standards for quality and strong attention to detail
  • Experience working in an Agile/Scrum environment
  • Exposure to big data tools such as Hadoop/Hive a plus
  • Experience developing various types of predictive models with a targeted result of increasing revenue is a plus

Equal Opportunity Employer. All qualified applicants will receive consideration for employment and will not be discriminated against based on race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability, age, pregnancy, genetic information or any other consideration prohibited by law or contract.

JPMorgan Chase & Co.
  • Houston, TX
Our Corporate Technology team relies on smart, driven people like you to develop applications and provide tech support for all our corporate functions across our network. Your efforts will touch lives all over the financial spectrum and across all our divisions: Global Finance, Corporate Treasury, Risk Management, Human Resources, Compliance, Legal, and within the Corporate Administrative Office. You'll be part of a team specifically built to meet and exceed our evolving technology needs, as well as our technology controls agenda.
As a Machine Learning Engineer, you will provide high quality technology solutions that address business needs by developing applications within mature technology environments. You will utilize mature (3rd or 4th Generation) programming methodologies and languages and adhere to coding standards, procedures and techniques while contributing to the technical code documentation.
You will participate in project planning sessions with project managers, business analysts and team members to analyze business requirements and outline the proposed IT solution. You will participate in design reviews and provide input to the design recommendations; incorporate security requirements into design; provide input to information/data flow; and understand and comply with the Project Life Cycle Methodology in all planning steps. You will also adhere to IT Control Policies throughout design, development and testing and incorporate Corporate Architectural Standards into application design specifications.
Additionally, you will document the detailed application specifications, translate technical requirements into programmed application modules and develop/enhance software application modules. You will participate in code reviews and ensure that all solutions are aligned to pre-defined architectural specifications; identify/troubleshoot application code-related issues; and review and provide feedback on the final user documentation.
Key Responsibilities
Provide leadership and direction for the key machine learning initiatives in the Operational Risk domain
Act as a machine learning evangelist in the Operational Risk domain
Perform research and proof of concepts to determine ML/AI applicability to potential use cases
Mentor junior data scientists and team members new to machine learning
Display an efficient work style with attention to detail, organization, and a strong sense of urgency
Design software and produce scalable and resilient technical designs
Create automated unit tests using flexible/open-source frameworks
Digest and understand business requirements, and design new modules/functionality which meet the needs of our business partners
Implement model reviews and machine learning governance framework
Manage code quality for total build effort.
Participate in design reviews and provide input to the design recommendations
Essentials
  • Advanced degree in Math, Computer Science or another quantitative field
  • Three to five years working experience in machine learning, preferably natural language processing
  • Ability to work in a team as a self-directed contributor with a proven track record of being detail-oriented, innovative, creative, and strategic
  • Strong problem solving and data analytical skills
  • Industry experience building end-to-end Machine Learning systems leveraging Python (Scikit-Learn, Pandas, Numpy, Tensorflow, Keras, NLTK, Gensim et al.) or other similar languages
  • Experience with a variety of machine learning algorithms (classification, clustering, deep learning et al.) and natural language processing applications (topic modeling, sentiment analysis et al.)
  • Experience with NLP techniques to transform unstructured text data to structured data (lemmatization, stemming, bag-of-words, word embedding et al.); a toy sketch follows this list
  • Experience visualizing/presenting data for stakeholders using Tableau, or open-source Python packages such as matplotlib, seaborn et al.
  • Familiar with Hive/Impala to manipulate data and draw insights from Hadoop
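As a toy illustration of the stemming and bag-of-words techniques referenced above, a minimal sketch using NLTK and scikit-learn (scikit-learn 1.0+ assumed for get_feature_names_out; the sample sentences are invented):

from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "Operational risk losses were flagged by the monitoring system",
    "The system flags risky operational events for review",
]

# Stem each token so inflected forms ("flags"/"flagged") share one feature.
stemmer = PorterStemmer()
def stem_tokens(text):
    return " ".join(stemmer.stem(tok) for tok in text.lower().split())

vectorizer = CountVectorizer()              # bag-of-words counts
X = vectorizer.fit_transform(stem_tokens(d) for d in docs)
print(vectorizer.get_feature_names_out())   # learned vocabulary
print(X.toarray())                          # document-term matrix
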
Personal Specification
Demonstrate Continual Improvement in terms of Individual Performance
Strong communication skills
Bright and enthusiastic, self-directed
Excellent analytical and problem solving skills
When you work at JPMorgan Chase & Co., you're not just working at a global financial institution. You're an integral part of one of the world's biggest tech companies. In 14 technology hubs worldwide, our team of 40,000+ technologists design, build and deploy everything from enterprise technology initiatives to big data and mobile solutions, as well as innovations in electronic payments, cybersecurity, machine learning, and cloud development. Our $9.5B+ annual investment in technology enables us to hire people to create innovative solutions that will not only transform the financial services industry, but also change the world.
Varen Technologies
  • Annapolis Junction, MD
  • Salary: $90k - 180k

Varen Technologies is seeking an experienced and flexible Cloud Software Engineer to augment the existing platform team for a large analytic cloud repository. A successful candidate for this position has experience working with large Hadoop and Accumulo based clusters and a familiarity with open-source technologies. Additional knowledge of Linux OS development, Prometheus, Grafana, Kafka and CentOS would benefit the candidate.


The Platform Team developers build/package/patch the components (typically open-source products) and put initial monitoring in place to ensure the component is up and running, if applicable. The platform team builds subject matter expertise and the integration team installs. Ideal candidates would have familiarity with open-source products and be willing/able to learn new technologies.


REQUIRED EXPERIENCE:



  • 5 years of experience in Software Engineering

  • 4 years of experience developing software with high level languages such as Java, C, C++

  • Demonstrated experience working with open-source (NoSQL) products that support highly distributed, massively parallel computation needs such as HBase, CloudBase/Accumulo, BigTable, etc.

  • Demonstrated work experience with the MapReduce programming model and technologies such as Hadoop, Hive, Pig, etc. (a toy illustration of the model follows this list)
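To make the MapReduce requirement concrete, here is a minimal, framework-free Python sketch of the three phases (map, shuffle, reduce) using a word count. It illustrates only the programming model, not Hadoop itself; the records are invented.

from collections import defaultdict

def map_phase(record):
    # Emit (key, 1) pairs, one per word: the "map" step.
    for word in record.split():
        yield word.lower(), 1

def shuffle(pairs):
    # Group values by key, as the framework does between map and reduce.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Aggregate each key's values: the "reduce" step.
    return key, sum(values)

records = ["big data big clusters", "accumulo stores big data"]
pairs = (pair for rec in records for pair in map_phase(rec))
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts)  # {'big': 3, 'data': 2, ...}
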


DESIRED EXPERIENCE:



  • Hadoop/Cloud Developer Certification

  • Experience developing and deploying analytics within a heterogeneous schema environment.

  • Experience designing and developing automated analytic software, techniques, and algorithms.


 CLEARANCE REQUIREMENT:



  • TS/SCI w/ Polygraph

Epidemic Sound AB
  • Stockholm, Sweden

We are now looking for an experienced Machine Learning Specialist (you’re likely currently a Data Scientist, or perhaps an advanced Insight Analyst who’s had the opportunity to use Machine Learning in a commercial environment). 



Job description


As a Machine Learning Specialist you will report directly to the CTO, in a fresh new team which functions as a decentralised squad, delivering advanced analysis and machine learning to various departments throughout the company. You’ll work alongside Data Engineers and Developers to deploy microservices solving many different business needs. The use cases range from Customer Lifetime Value and churn prediction to building fantastic recommender engines to further personalize Epidemic Sound’s offering.


You will be working closely with the backend data team in developing robust, scalable algorithms. You will improve the personalization of the product by:



  • Analysing behaviours of visitors, identifying patterns and outliers which can indicate their likelihood to churn (a minimal sketch follows this list).

  • Developing classification systems through feature extraction on music to identify type & ‘feel’ of any given content.

  • Creating recommender engines so that the music our users see first is relevant to them based on their behaviours.

  • Contributing to the automation of previously manual tasks, by leveraging the classification systems you’ve contributed to building.

  • Consulting on appropriate implementation of algorithms in practice – and actively identifying new use cases that can help improve Epidemic Sound!
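As a hedged, minimal sketch of the churn-prediction bullet above: a scikit-learn classifier trained on synthetic behavioural features. Every feature, label, and number is invented for illustration.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Invented features: sessions per week, tracks previewed, days since last login.
X = rng.normal(size=(1_000, 3))
# Synthetic churn label loosely driven by the third feature.
y = (X[:, 2] + rng.normal(scale=0.5, size=1_000) > 0.8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingClassifier().fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
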



What are we looking for?


We’re looking for a team member with a “no task is too small” mindset – we are at the beginning of our Machine Learning journey – so we need someone who thinks building something from scratch sounds exciting. 


It would be music to our ears if you have:



  • Deep understanding of machine learning (neural networks, deep learning, classification, regression)

  • Experience with machine learning in production

  • Experience with TensorFlow, Keras, PyTorch, scikit-learn, SciPy, NumPy, pandas, or similar

  • Experience with ML projects in customer value or music information retrieval (MIR)

  • Fluency in Python programming and a passion for production-ready code

  • Experience with Google Cloud and/or AWS

  • MSc in a Quantitative or Computer Science based subject (Machine Learning, Statistics, Applied Mathematics)


Curious about our music? Find our music on Spotify here → https://open.spotify.com/user/epidemicsound


Application


Do you want to be a part of our fantastic team? Please apply by clicking the link below.

HelloFresh
  • Berlin, Germany

At HelloFresh, we want to change the way people eat. Over the past 5 years we've seen this mission spread beyond our wildest dreams. So, how did we do it? Our weekly recipe boxes full of exciting recipes and lovingly sourced, fresh ingredients have blossomed into a community of inspired, energised home cooks that expands across the globe. Now we're the fastest growing company in Europe, active and growing in 9 different countries across 3 continents.


Our story started in Berlin. As Europe’s tech hub, and the home of our global headquarters, it’s a dynamic, progressive environment where innovation is nurtured and promoted. Since we started, we’ve worked exceptionally hard and we’ve received almost US$ 300 million in investment which together have allowed us to create an award winning product.


As a member of HelloTech you’ll be exposed to a modern technology stack and a slick cross functional agile team setup. We have developed a refined product and provide scalability on a global level. Join our HelloTech team and help us to build a fresh food global champion!



About the job


At HelloFresh, Data Science is an interdisciplinary team that designs, implements, and maintains state-of-the-art machine learning models to automate and optimize marketing, operations, logistics, and customer experience. With our rapid company expansion, we are currently looking for a Technical Lead Data Scientist who is excited about turning innovative ideas into powerful data-driven products. Working closely with the Data Scientists and Machine Learning Engineers, you will help bring these ideas to reality and improve the experience of millions of customers around the globe.


To succeed in this role, you’ll need a hunger for discovering information in massive amounts of data, a bias toward experimentation, and fluency in machine learning techniques and software engineering best practices.



As a Technical Lead Data Scientist you will be responsible for



  • Deriving and maintaining a technical vision for end-to-end machine learning pipelines (a brief pipeline sketch follows this list).

  • Playing a key role in ensuring the ongoing alignment and standards of our technology vision for the data science team.

  • Leading by example: translating business problems into quantitative terms, productionizing models, and presenting results in a clear and effective manner through visualization and prototyping.

  • Collaborating with cross-functional teams to discover innovative ways of leveraging vast repositories of user generated data.

  • Turning prototypes into production ready implementation.

  • Measuring performance over time, tuning, and inspecting models.

  • Sharing data solutions and intel through Open Source and promoting a data driven culture within a large international organization.
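As one hedged illustration of the pipeline bullet above: a minimal scikit-learn Pipeline that couples preprocessing to a model, so the fitted artifact can be versioned and deployed as a single unit. The data and step names are synthetic.

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X = np.random.rand(200, 4)           # synthetic feature matrix
y = (X[:, 0] > 0.5).astype(int)      # synthetic target

# Preprocessing and model travel together: fit once, deploy one object.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression()),
])
pipeline.fit(X, y)
print(pipeline.predict(X[:5]))
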



Who we’re looking for



  • Significant experience in tech industry.

  • Excellent educational background: MSc in Computer Science, Mathematics, Physics, Computational Linguistics, or a similar quantitative field.

  • Great engineering abilities and the capacity to design resilient distributed systems that are flexible and scalable.

  • A strong understanding of the state of the art in machine learning methods and in depth techniques for practical application.

  • Great coding skills in Python and knowledge of data libraries such as sklearn and pandas. Knowledge of big data frameworks such as Spark is a plus.

  • Exceptional written and verbal communication skills, with an ability to listen and show empathy.

  • A self-organized individual, with excellent focus and prioritization of workload using business data and metrics.

  • A passionate leader with a proven track record of mentoring team members.



What we offer



  • HelloFresh is a place that lets you implement your own ideas and contribute to our open source repositories.

  • The opportunity to get into one of the most intellectually demanding roles at one of the largest technology companies in Europe.

  • Cutting edge technology, allowing you to work with state-of-the-art tools and software solutions.

  • Competitive compensation and plenty of room for personal growth.

  • Great international exposure and team atmosphere.

  • Work in a modern, spacious office in the heart of Berlin with excellent transport links and employee perks.

U.S. Bank
  • Minneapolis, MN

U.S. Bank is currently seeking three Agile Studio Analysts and two Executive Insights Analysts. Descriptions are as follows:


U.S. Bank is currently seeking an Agile Studio Analyst, who will report to an Analytics Manager. This individual will provide advanced analytics and support within U.S. Bank's Agile Studios, and he/she will feel at ease querying databases, integrating data from multiple/disparate sources, conducting sophisticated quantitative analyses, and presenting their findings as clear, actionable insights and recommendations that enable revenue growth, channel adoption, customer acquisition, and retention in an Agile team environment. The ideal candidate will be a motivated professional who is passionate about data and data visualization. He/she must have strong technical and analytical skills, with the ability to understand the business goals/needs, have experience with Agile methodologies and/or processes, and be committed to delivering data-driven insights that lead to actionable recommendations.

This individual will also work to build strong and effective relationships across ERA, the Agile Studios, and the enterprise to determine analytic needs, predict trends, and ensure business questions are being answered in the most appropriate manner, in service of acquiring, deepening, and retaining precious U.S. Bank customer and client relationships.

He/she will also possess a strong willingness to learn and teach and will aid in leading change across a matrixed organization and driving employee engagement. He/she will simplify complexity by successfully utilizing data and analytics tools and techniques to enable better, more customer-centered decisions. This role will be an expert in advanced analytical tools and statistical techniques to separate signal from noise to derive insights that will help answer key business questions.

Basic Qualifications

  • Bachelors degree in a quantitative field such as applied mathematics, economics, statistics, engineering or related field required, or equivalent, relevant work experience
  • Five to seven years of experience in analytics, demonstrating a progressive increase in responsibility and scope, including:  
    • Expert user of advanced analytics tools and techniques to perform analysis and interpret data; examples include SQL, SAS, Python, Excel, Alteryx, etc.
    • Expert user of analytics data visualization tools, such as Tableau
    • Expert in analytic storytelling and presentation, including PowerPoint and other tools
    • Demonstrated project management, organization skills, and Agile experience

Preferred Qualifications

  • Advanced degree attainment
  • Experience in financial services with knowledge of products, customers, and transaction and interaction data, including source systems
  • Demonstrated understanding of statistical analyses methodologies
  • Ability to work and thrive in a collaborative work environment, as well as independently, to drive results
  • Impeccable attention to detail, while being comfortable with data ambiguity
  • Proven track record of driving analytics into actionable business outcomes
  • Demonstrated ability to influence change across a complex organization


U.S. Bank is currently seeking an Executive Insights Analyst, who will report to an Analytics Manager. This individual will provide advanced analytics and support for the Executive Insights team, and he/she will feel at ease querying databases, integrating data from multiple/disparate sources, conducting sophisticated quantitative analyses, and presenting their findings to executives as clear, actionable insights and recommendations that enable revenue growth, channel adoption, customer acquisition, and retention. The ideal candidate will be a motivated professional who is passionate about data and data visualization. He/she must have strong technical and analytical skills, with the ability to understand the business goals, needs, and strategy, have experience interacting with executive-level employees, and be committed to delivering data-driven insights that lead to actionable recommendations.

This individual will also work to build strong and effective relationships across ERA, the management committee, and the enterprise to determine analytic needs and definitions, predict trends, and ensure business questions are being answered swiftly and in the most appropriate manner, in service of acquiring, deepening, and retaining precious U.S. Bank customer and client relationships.

He/she will also possess a strong willingness to learn and teach and will aid in leading change across a matrixed organization and driving employee engagement. He/she will simplify complexity by successfully utilizing data and analytics tools and techniques to enable better, more customer-centered decisions. This role will be an expert in advanced analytical tools and statistical techniques to separate signal from noise to derive insights that will help answer key business questions.

Basic Qualifications

  • Bachelors degree in a quantitative field such as applied mathematics, economics, statistics, engineering or related field required, or equivalent, relevant work experience
  • Five to seven years of experience in analytics, demonstrating a progressive increase in responsibility and scope, including:  
    • Expert user of advanced analytics tools and techniques to perform analysis and interpret data; examples include SQL, SAS, Python, Excel, Alteryx, etc.
    • Expert user of analytics data visualization tools, such as Tableau
    • Expert in analytic storytelling and presentation, including PowerPoint and other tools
    • Demonstrated project management and organization skills

Preferred Qualifications

  • Advanced degree attainment
  • Experience in financial services with knowledge of products, customers, and transaction and interaction data, including source systems
  • Demonstrated understanding of statistical analyses methodologies
  • Ability to work and thrive in a collaborative work environment, as well as independently, to drive results
  • Impeccable attention to detail, while being comfortable with data ambiguity
  • Proven track record of driving analytics into actionable business outcomes
  • Demonstrated ability to influence change across a complex organization
Indeed - Tokyo, JP
  • Tokyo, Japan

Your job.



The role of Data Science at Indeed is to follow the data. We log, analyze, visualize, and model terabytes of job search data. Our Data Scientists build and implement machine learning models to make timely decisions. Each of us is a mixture of a statistician, scientist, machine learning expert, and engineer. We have a passion for building and improving Internet-scale products. We seek to understand human behavior via careful experimentation and analysis, to “help people get jobs”.

You're someone who wants to see the impact of your work making a difference every day. You understand how to use data to make decisions and how to train others to do so. You find passion in the craft and are constantly seeking improvement and better ways to solve tough problems. You produce the highest quality Data Science solutions and lead others to do the same.


You understand that the best managers serve their teams by removing roadblocks and giving individual contributors autonomy and ownership. You have high standards and will take pride in Indeed like we do as well as push us to be better. You have delivered challenging technical solutions at scale. You have led Data Science or engineering teams, and earned the respect of talented practitioners. You are equally happy talking about deep learning and statistical inference, as you are brainstorming about practical experimental design and technology career development. You love being in the mix technically while providing leadership to your teams.


About you.


Requirements   



  • Significant prior success as a Data Scientist working on challenging problems at scale

  • 5+ years of industrial Data Science experience, with expertise in machine learning and statistical modeling

  • The ability to guide a team to achieve important goals together

  • Full-stack experience in data collection, aggregation, analysis, visualization, productionization, and monitoring of Data Science products

  • Strong desire to solve tough problems with scientific rigour at scale

  • An understanding of the value derived from getting results early and iterating

  • Strong ability to coach Data Scientists, helping them improve their skills and grow their careers

  • Ph.D. or M.S. in a quantitative field such as Computer Science, Operations Research, Statistics, Econometrics or Mathematics

  • Passion to answer Product/Engineering questions with data

  • Proficiency with the English language 


We get excited about candidates who



  • Can do small data modeling work: R, Python, Julia, Octave

  • Can do big data modeling work: Hadoop, Pig, Scala, Spark

  • Can fish for data: SQL, Pandas, MongoDB

  • Can deploy Data Science solutions: Java, Python, C++

  • Can communicate concisely and persuasively with engineers and product managers



Indeed provides a variety of benefits that help us focus on our mission of helping people get jobs.


View our bounty of perks: http://indeedhi.re/IndeedBenefits  

IBM
  • Austin, TX

IBM Global Business Services: Join a Leader. Consult with us. IBM Global Business Services helps top-tier clients solve their most complex business and technical issues. As the Advanced Analytics Leader, you will deliver innovative business consulting, business process design, systems integration, and application design and management to leading commercial and public-sector organizations in 17 industries worldwide. With access to resources that only a global leader can provide, as a consultant you will learn valuable skills, gain access to a vast and diverse network of talented professionals, and enjoy unparalleled career, training, and educational opportunities.
 

As the Advanced Analytics Leader you will have an analytics background with in-depth knowledge of SAP HANA, big data/Hadoop, and machine learning. The responsibilities include delivery on consulting engagements, sales activities, and thought leadership. You will also have strong leadership acumen, an ability to operate in positions requiring significant self-direction and motivation, and a proven track record in consultative selling of analytics solutions to senior business and IT leaders. You will be empowered to manage multiple priorities, capable of developing strong relationships at assigned accounts, and must be able to:
 

  • Lead and manage data science projects. Support machine learning offerings and be a thought leader in machine learning initiatives across analytics
  • Have a proven track record of drawing insights from data and translating those insights into tangible business outcomes
  • Ability to implement new technologies with cutting-edge machine learning and statistical modeling techniques
  • Establish and maintain deal focused trusted relationships with clients and partners to scope, solution, propose, close and deliver complex projects
  • Identify, validate, and qualify opportunities and help close them on an as-needed basis. Maintain a strong pipeline of opportunities and progress them during the quarter
  • Recruit, motivate, mentor and develop team members

Bottom Line? We outthink ordinary. Discover what you can do at IBM.


Required Professional and Technical Expertise :

  • At least 5 years' experience in professional services consulting and sales at a national or global management consulting firm
  • At least 3 years' experience leading and delivering SAP HANA solutions: calculation view modeling, PAL, SQL scripting, and performance tuning
  • At least 3 years' experience working on full life cycle implementation projects with SAP
  • Strong understanding of SAP data (ERP/CRM/APO) across modules (SD, MM, PP, FI) and SAP processes (O2C, R2R, P2P)
  • At least 2 years' experience working on data science projects with a variety of machine learning and data mining techniques (clustering, decision tree learning, GLM, Bayesian modeling, artificial neural networks, etc.)
  • Expertise using statistical computing languages (R, Python, HANA PAL, etc.) to manipulate data and draw insights from large data sets (a brief pandas example follows this list)
  • Knowledge of working with HANA Studio/Web IDE with calculation views, stored procedures, flowgraphs, and XS applications
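As a small, hypothetical illustration of the statistical-language bullet above (Python/pandas shown; the records and column names are invented):

import pandas as pd

# Invented order records for illustration only.
orders = pd.DataFrame({
    "region": ["NA", "NA", "EU", "EU", "APAC"],
    "module": ["SD", "MM", "SD", "FI", "SD"],
    "value":  [1200.0, 340.0, 980.0, 150.0, 760.0],
})

# Aggregate order value by region and rank regions by total.
summary = (orders.groupby("region", as_index=False)["value"]
                 .sum()
                 .sort_values("value", ascending=False))
print(summary)
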

Preferred Professional and Technical Expertise :

  • At least 5 years' experience in applying predictive analytic methodologies in a commercial environment
  • At least 5 years' experience in professional SAP services consulting and sales at a national or global management consulting firm
  • At least 5 years' experience working with SAP HANA solutions in areas of calculation view modeling, PAL, SQL scripting, and performance tuning
  • At least 5 years' experience working on full life cycle implementation projects with SAP
  • Strong understanding of SAP data (ERP/CRM/APO) across modules (SD, MM, PP, FI) and SAP processes (O2C, R2R, P2P)
  • At least 5 years' deployment experience on data science projects with a variety of machine learning and data mining techniques (clustering, decision tree learning, artificial neural networks, etc.)
  • Master's in Mathematics, Statistics, Computer Science, or a similar degree

BENEFITS
Health Insurance. Paid time off. Corporate Holidays. Sick leave. Family planning. Financial Guidance. Competitive 401K. Training and Learning. We continue to expand our benefits and programs, offering some of the best support, guidance and coverage for a diverse employee population.
  • http://www-01.ibm.com/employment/us/benefits/
  • https://www-03.ibm.com/press/us/en/pressrelease/50744.wss
 
CAREER GROWTH
Our goal is to be essential to the world, which starts with our people. Company-wide, we kicked off an internal talent strategy program called Go Organic. At our core, we are committed to believing in and investing in our workforce through:
 
  • Skill development: helping our employees grow their foundational skills
  • Finding the dream job at IBM: navigating our company, with the potential for many careers, by channeling an employee's strengths and career aspirations
  • Diversity of people: Diversity of thought driving collective innovation
 
In 2015, Go Organic filled approximately 50% of our open positions with internal talent who were promoted into the role.


CORPORATE CITIZENSHIP
With an employee population of 375,000 in over 170 countries, amazingly we connect, collaborate, and care. IBMers drive a corporate culture of shared responsibility. We love grand challenges and everyday improvements for our company and for the world. We care about each other, our clients, and the communities we live, work, and play in!
  • http://www.ibm.com/ibm/responsibility/initiatives.html
  • http://www.ibm.com/ibm/responsibility/corporateservicecorps
State Farm
  • Dallas, TX

WHAT ARE THE DUTIES AND RESPONSIBILITIES OF THIS POSITION?

    • Performs improved visual representation of data to allow clearer communication, viewer engagement, and faster/better decision-making
    • Investigates, recommends, and initiates acquisition of new data resources from internal and external sources
    • Works with IT teams to support data collection, integration, and retention requirements based on business need
    • Identifies critical and emerging technologies that will support and extend quantitative analytic capabilities
    • Manages work efforts which require the use of sophisticated project planning techniques
    • Applies complex principles, theories, and concepts in a specific field to provide solutions to a wide range of difficult problems
    • Develops and maintains an effective network of both scientific and business contacts/knowledge, obtaining relevant information and intelligence around the market and emergent opportunities
    • Contributes data to State Farm's internal and external publications, writes articles for leading journals, and participates in academic and industry conferences
    • Collaborates with business subject matter experts to select relevant sources of information
    • Develops breadth of knowledge in programming (R, Python); descriptive, inferential, and experimental design statistics; advanced mathematics; and database functionality (SQL, Hadoop)
    • Develops expertise with multiple machine learning algorithms and data science techniques, such as exploratory data analysis, generative and discriminative predictive modeling, graph theory, recommender systems, text analytics, computer vision, deep learning, optimization, and validation
    • Develops expertise with State Farm datasets, data repositories, and data movement processes
    • Assists on projects/requests and may lead specific tasks within the project scope
    • Prepares and manipulates data for use in development of statistical models
    • Develops a fundamental understanding of insurance and financial services operations and uses this knowledge in decision making

Additional Details:


For over 95 years, data has been key to State Farm. As a member of our data science team in the Enterprise Data & Analytics department under our Chief Data & Analytics Officer, you will work across the organization to solve business problems and help achieve business strategies. You will employ sophisticated statistical approaches and state-of-the-art technology. You will build and refine our tools/techniques and engage with internal stakeholders across the organization to improve our products & services.


Implementing solutions is critical for success. You will do problem identification, solution proposal & presentation to a wide variety of management & technical audiences. This challenging career requires you to work on multiple concurrent projects in a community setting, developing yourself and others, and advancing data science both at State Farm and externally.


Skills & Professional Experience

  • Develop hypotheses, design experiments, and test feasibility of proposed actions to determine probable outcomes using a variety of tools and technologies
  • Master's or other advanced degree, or five years' experience in an analytical field such as data science, quantitative marketing, statistics, operations research, management science, industrial engineering, economics, etc., or equivalent practical experience preferred
  • Experience with SQL, Python, R, Java, SAS, or MapReduce, Spark
  • Experience with unstructured data sets: text analytics, image recognition, etc.
  • Experience working with numerous large data sets/data warehouses and the ability to pull from such data sets using relevant programs and coding, including files, RDBMS, and Hadoop-based storage systems
  • Knowledge of machine learning methods including at least one of the following: time series analysis, hierarchical Bayes, or learning techniques such as decision trees, boosting, and random forests (a small random forest sketch follows this list)
  • Excellent communication skills and the ability to manage multiple diverse stakeholders across businesses and leadership levels
  • Exercise sound judgment to diagnose and resolve problems within area of expertise
  • Familiarity with CI/CD development methods, Git, and Docker a plus
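A minimal, hypothetical sketch of one of the learning techniques named above: a random forest on synthetic data, with scikit-learn assumed and every number invented.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 6))            # synthetic claim-like features
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # synthetic outcome label

forest = RandomForestClassifier(n_estimators=200, random_state=1)
print("CV accuracy:", cross_val_score(forest, X, y, cv=5).mean())
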


Multiple location opportunity. Locations offered are: Atlanta, GA, Bloomington, IL, Dallas, TX and Phoenix, AZ


Remote work option is not available.


There is no sponsorship for an employment visa for the position at this time.


Competencies desired:
Critical Thinking
Leadership
Initiative
Resourcefulness
Relationship Building
Eliassen Group
  • Atlanta, GA

Machine Learning Engineer

$120-135k+ annual compensation plus bonus potential (flexible for the right person)


We are seeking an experienced Machine Learning Engineer to join our Next Generation software team building an innovative platform that will be utilized worldwide. In this position you will play a pivotal role in innovating new, globally consumed B2B software products.


Qualifications: 

  • Theoretical and practical understanding of data mining and machine learning techniques such as GLM/regression, random forests, boosting, trees, text mining, deep learning, social network analysis, optimization, NLP, probabilistic inference, information retrieval, and recommendation systems.
  • Strong coding and debugging skills in one or more of the following technologies: Java, Python, PySpark.ML, R, H2O, SparklyR, pandas, scikit-learn, Spark MLlib


Please forward resumes to Steve Fritsch (sfritsch@eliassen.com)

Grubhub
  • Philadelphia, PA

More About the Role:

Grubhub is looking for an experienced Director of Data Engineering to build and lead all our Data Warehouse and Data Pipeline efforts. You will be heading multiple teams of passionate and skilled database engineers responsible for building and owning batch and streaming data pipelines that process tens of terabytes of data daily and support all of the analytics, business intelligence, data science, and reporting data needs across the organization.

Grubhub's big data platforms are cutting edge and built primarily around a few core technologies: AWS EMR, Hive, Cassandra, S3, and Redshift for data storage; Apache Spark, Python, and Scala for data processing; the Presto query engine; and Azkaban for workflow management. You will also interact with several other technologies in the ecosystem, primarily from acquisitions, partners, or affiliates, including MySQL, Postgres, SQL Server, Salesforce, etc.
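For flavor, a hedged PySpark sketch of the kind of batch aggregation step such a platform might run. The bucket paths, schema, and column names are invented for illustration and are not Grubhub's actual pipelines; running it requires a Spark environment with S3 access.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-orders").getOrCreate()

# Hypothetical S3 input; path and schema are illustrative only.
orders = spark.read.json("s3://example-bucket/orders/2024-01-01/")

# Aggregate order counts and revenue per restaurant for the day.
daily = (orders
         .groupBy("restaurant_id")
         .agg(F.count("*").alias("orders"),
              F.sum("order_total").alias("revenue")))

daily.write.mode("overwrite").parquet("s3://example-bucket/marts/daily_orders/")
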


The Director of Web Engineering is responsible for the overall planning, organizing, and execution of the consumer-facing web applications. This includes directing all engineering efforts to meet product requirements, support, monitoring, and iterative improvement of existing web applications, strategic development of new application solutions when necessary and the high-level architecture and detailed design on the implementation and delivery of web application components. The role requires proven experience in planning, designing, developing and deploying performant, scalable and resilient web application systems.


Some Challenges Youll Tackle :

  • Understand and manage the data needs of different stakeholders across multiple business verticals including Finance, Marketing, Logistics, Product, etc. Develop the vision and map strategy to provide proactive solutions and enable stakeholders to extract insights and value from data.
  • Understand end to end data management interactions and dependencies across complex data pipelines and data transformation and how they impact business decisions
  • Design best practices for big data processing, data modeling, and warehouse development throughout the company
  • Translate from technical to business, and vice versa. You need to be able to speak with the least technically-minded client (internal or external) and make technology make sense to them. Then turn around and do it the other way
  • Evaluate new technologies and solutions to solve business problems

What's Actually Required:

  • 5+ years in a leadership/management capacity around data engineering, building data warehouses, data marts, and data pipelines
  • Experience managing multiple stakeholders and managing a team through multiple team leads
  • Experience with big data architectures and data modeling to efficiently process large volumes of data
  • Experience developing large data processing pipelines on Apache Spark.
  • Experience with Python or Scala programming languages
  • SQL is virtually as effortless as your native spoken language
  • Background in ETL and data processing, know how to transform data to meet business goals
  • Experience running agile projects
  • Excellent communication, adaptability and collaboration skills

 And Of Course, Perks! 

  • Flexible PTO. It's true, no strings attached and all the time you need to recharge.
  • Better Benefits. Get quality insurance, flex-spending accounts, retirement options and commuter perks.  
  • Free Food. Kitchens are stocked and free Grubhub each week.
  • Stock Up. All of our employees are owners; in fact, they're granted Restricted Stock Units, which means we're all in it to win it.
  • Casual Culture. Catch rays on the rooftop or get comfy on a couch and get to know your coworkers, because work should be a place you want to be.

Grubhub is an equal opportunity employer. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, and other legally protected characteristics. The EEO is the Law poster is available here: DOL Poster. If you are applying for a job in the U.S. and need a reasonable accommodation for any part of the employment process, please send an e-mail to talentacquisition@grubhub.com and let us know the nature of your request and contact information. Please note that only those inquiries concerning a request for reasonable accommodation will be responded to from this e-mail address.

Vaco - San Diego
  • San Diego, CA

Senior Backend Engineer- Context Awareness Team




About Us:


We are a platform for today's busy families, bringing them closer together by helping them better sync, communicate with and protect the people they care about most.


Our mobile app provides millions of families in over 140 countries with services such as private location sharing, location history, drive details, crash detection, roadside assistance and help alerts through our free and paid membership subscription.


Founded in 2008, we are based in San Francisco with offices in San Diego, Las Vegas and Ft. Lauderdale.


We have raised over $100M from investors such as Bessemer Venture Partners, DCM, Fontinalis Partners, BMW iVentures, Allstate, Bullpen Capital, Founders Fund (FF Angel), Launch Capital, Kapor Capital, and 500 Startups.




About the Context Awareness Backend role:


* Work closely within a cross-platform team which provides up-to-date and real-time location and driving information to the families which use our app


* Build and support an engine for collecting, processing, and storing tens of thousands of location saves per second


* Build and support the systems which collect, process, and store millions of drives daily


* Maintain and improve the systems which alert users in real-time when a vehicular collision occurs


* Research, prototype, and build new systems to provide location context to users, potentially using machine learning


* Ensure that our APIs are able to process millions of requests per minute, looking for ways to scale us up by 5x over the next few years


* Be a very active contributor to our diverse codebase; we have a lot of Java, are growing in Scala, and have legacy systems in PHP, Python, and Golang


* Engage with feature developers to ensure code is written with performance, scale, and maintainability in mind


* Use automation tools as often as possible, and develop and improve these tools


* Handle 4.5 billion daily API calls comfortably






You're that someone with these relevant skills:


* Proficient in JVM languages. This team uses primarily Java (Spring Boot) and Scala (Akka, Lagom); deep knowledge of either is required, of both is great


* Familiarity with PHP, Python, and/or Go (to maintain/convert our legacy code bases) is a plus


* Excellent understanding of data stores, distributed systems, data modeling and their associated pitfalls.


* Several years' experience with microservices


* Experience with the AWS environment and its various tools


* Agile software development experience


* Ability to work in a cross-functional team


* A desire to bring innovative solutions to the challenges of scaling the API and platform




Some of the things you'll do:


* Build new services in Java and/or Scala


* Break up legacy monoliths in PHP and Python into Java and/or Scala microservices


* Design new systems


* Conduct technical and code reviews and approve pull requests


* Take specs and translate them into reality




Successful candidates will have:


* Minimum 5 years of relevant experience required


* Strong attention to detail


* A commitment to the importance of craftsmanship and incremental delivery


* Comfort with the uncertainty of working with new technologies


* Strong and clear communication skills


* Ability to work effectively with remote teammates


* A sense of humor and the ability not to take yourself too seriously






Perks:


* Competitive pay and benefits


* Medical, dental and vision insurance plans


* 401k plan


* $200/month Quality of Life perk


* Whatever makes you stronger makes us stronger. We buy you the things you need to improve yourself and get your job done.

Blue Chip Talent
  • Ann Arbor, MI

Summary:
This position is part of the business intelligence team and will be responsible for BI and analytics projects, with ample opportunity to innovate by incorporating advanced descriptive and predictive insights into BI deliverables. As a dynamic and effective BI team member, you will liaise with the business across all domains to understand its growing analytic data needs and develop and operationalize solutions with business impact. The ideal candidate will leverage their knowledge of business intelligence tools, statistical and data mining techniques, data warehousing, and SQL to find innovative solutions utilizing data from multiple sources. We are looking for a strong team member who can communicate with the business as well as IT.

GENERAL RESPONSIBILITIES

  • Drive the utilization of new data sources for impactful business insights
  • Condense large data sets into clear and concise observations and recommendations
  • Design and develop BI/Analytics solutions to facilitate end user experience and impact the business
  • Generate new ideas and execute on solutions
  • Demonstrate expert knowledge of at least one analytics tool
  • Understand and apply advanced techniques for data extraction, cleaning and preparation tasks
  • Understand dimensional modeling and data warehouse concepts
  • Able to clearly articulate findings and answers through effective data visualization approaches
  • Work with stakeholders to determine analytics data requirements and implement solutions to provide actionable business insight
  • Serve as a mentor for other team members
  • Perform BI on-call duties
  • Regularly share best practices to help develop others
  • Able to effectively communicate with multiple stakeholders across the organization
  • Able to work in a team-oriented, collaborative, fast-paced and diverse environment
  • 5+ years of experience implementing and supporting BI/Analytics solutions
  • 5+ years of experience with databases and query languages, particularly SQL
  • 3+ years of experience with statistical tools such as R and SAS
  • 5+ years of experience delivering BI/DW solutions within an Agile construct
  • Experience working with large data sets; experience with distributed computing tools (Hadoop, Map/Reduce, Hive, Spark) and other emerging technologies is highly desired
  • Familiarity with statistical methods used in descriptive and inferential statistics
  • Knowledge of statistical modeling and machine learning is preferable
  • Fluency with programming language such as Python
Elev8 Hire Solutions
  • Atlanta, GA

Sr. Python Developer/Data Scientist

We are an AI parts inventory optimization software that makes it easy for enterprises to manage their maintenance and repair operations (MRO). We've raised millions in venture funding from both Silicon Valley and Boston and are growing fast. Are you a driven software engineer interested in being a part of that growth?

Role Expectations/Responsibilities:

  • Write clean, maintainable, thoroughly tested code
  • Participate in product, design, and code reviews
  • QA and ship code daily
  • Identify, incorporate, and communicate best practices

Required Skills:

  • 10+ years of software engineering experience
  • Proficient in Python
  • Proficient in SciPy and scikit-learn
  • Experience with a Python testing framework
  • Proficient in neural network theory
  • Proficient in general machine learning algorithms
  • Proficient in using TensorFlow
  • Proficient in using Keras
  • Proficient in architecting, testing, optimizing, and deploying deep learning models (a minimal Keras sketch follows this list)
  • Competency in Git
  • Experience with data structures, algorithm design, problem-solving, and complexity analysis
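As a hedged, minimal sketch of the TensorFlow/Keras items above: a tiny binary classifier on synthetic data. The architecture, data, and numbers are invented for illustration.

import numpy as np
import tensorflow as tf

# Synthetic stand-in data: 256 rows of 20 invented features and a binary target.
X = np.random.rand(256, 20).astype("float32")
y = (X[:, 0] > 0.5).astype("float32")

# A small feed-forward network built with the Keras Sequential API.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print(model.evaluate(X, y, verbose=0))  # [loss, accuracy]
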

Nice to have:

  • Experience in a startup environment
  • Experience working on a small team with high visibility
  • Ability to handle a high volume of projects

Benefits:

  • Health/dental/vision coverage
  • 15 Days PTO
  • Option for 1 day remote
Booster Fuels
  • Dallas, TX

About Booster

Do you want to impact the lives of nearly all Americans by eliminating the errand of going to the gas station? Booster has redesigned the infrastructure to deliver gas and replaced the gas station with an app. Our technology and operational expertise enable Booster to deliver fresh gas at the same price as traditional gas stations. This is a difficult problem to master and we are making it happen. Every day, we solve incredibly hard problems to create an experience for our customers that is absolutely magical.


Booster is powered by data. We are creating the best way for consumers and businesses to get gas by applying data, algorithms and machine learning to problems in logistics, retail, personalization, pricing and more.


Are you interested in working at the intersection of applied quantitative research, engineering development, and data science? If so, this is the job for you.


About the Role

The Booster Operations Team is looking for a passionate and solution-oriented Operations Data Analyst to develop and implement analytical solutions that deepen our collective understanding of customers, influence innovation, and deliver actionable insights to accelerate the growth and efficiency of Booster operations.


This position requires deep quantitative domain expertise along with the ability to manage internal and external stakeholder teams. You will be responsible for delivering key insights to the operations team, developing core KPIs and metrics, partnering to design optimized schedules and routes, balancing supply and demand, and analyzing and providing guidance on process stability and quality, with a keen focus on data quality and completeness.


You must be passionate about and undaunted by an operations engine with trucks, drivers, complex supply chains, and rapid growth. Successful candidates will be entrepreneurial, discontent with the status quo, and obsessed with improving anything they touch.


What You Will Be Doing

  • Delight customers

  • Work on a dynamic and constantly changing delivery platform

  • Ensure customer orders are planned efficiently and delivered on time

  • Come up with scalable solutions to continuously evolving logistics problems

  • Build predictive models

  • Forecast supply and demand

  • Optimize delivery network and dispatch operations

  • Forecast consumer demand

  • Estimate time of arrival

  • Partner with Operations and Technology leaders to build data-centric solutions to business impact

  • Help us realize a data-informed culture and teach people how to Think Like A Scientist

  • Find other exciting problems to solve


Requirements

  • Bachelor's degree or higher in Operations Research, Applied Mathematics or a related field

  • Comfortable with research methodologies to address abstract business and product problems with utmost precision, while staying grounded in common sense. Makes complex problems simple.

  • 4+ years of strong quantitative industry experience (2+ years with a PhD)

  • Expertise in mathematical optimization and implementing tailored solution approaches (a toy example follows this list)

  • Experience in data mining, predictive modeling and statistical analysis

  • Solid common sense and business acumen

  • Writing production applications using Python, R, and SQL

  • Superior analytical skills and a strong sense of ownership in your work

  • Self-motivated drive to build, launch and iterate on products

  • Comfortable giving definition to ambiguous problems, can do this independently with limited guidance
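A tiny, hypothetical example of the optimization bullet above: allocating a limited budget of truck-hours across two delivery zones with a linear program in SciPy. Every number is invented for illustration.

from scipy.optimize import linprog

# Maximize 30*x1 + 24*x2 deliveries; linprog minimizes, so negate.
c = [-30, -24]
# One shared truck-hour budget: x1 + x2 <= 40 hours.
A_ub = [[1, 1]]
b_ub = [40]
# Per-zone demand caps expressed as bounds on hours.
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 30), (0, 25)])
print(res.x, -res.fun)  # optimal hours per zone and total deliveries
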


Benefits

  • Stay healthy: 100% employer paid medical, dental, vision, disability and life insurance coverage

  • Refuel: open vacation policy, take the time you need when you need it

  • Monthly team building events (yoga, safari adventures, wine blending, paint nights and bocce tournaments to name a few)

  • Early Stock Options at a fast-growing startup with a strong VC backing


Individuals seeking employment at Booster are considered without regard to race, religion, color, national origin, ancestry, sex, gender, gender identity, gender expression, sexual orientation, marital status, age, physical or mental disability or medical condition (except where physical fitness is a valid occupational qualification), genetic information, veteran status, or any other consideration made unlawful by federal, state, or local laws.


Booster does not discriminate against employees or applicants because they have inquired about, discussed, or disclosed their own pay or the pay of another employee or applicant.


Disclaimer: The above statements are intended to describe the general nature and level of work being performed by associates assigned to this classification. They are not to be construed as an exhaustive list of all responsibilities, duties, and skills required of personnel so classified. All personnel may be required to perform duties outside of their normal responsibilities from time to time, as needed.


Booster doesn't accept unsolicited agency resumes and won't pay fees to any third-party agency or firm that doesn't have a signed agreement with Booster.

Levatas
  • Palm Beach Gardens, FL

Levatas is an AI solutions company. We help our partners understand and deploy artificial intelligence solutions across their enterprise using Data Science, Machine Learning, Predictive Analytics, Automation, and Natural Language Processing.



Our ability to create the future of business is only as strong as the smart, creative people who make up our team. We believe that doing the best work of our lives shouldn't come at the expense of happiness and balance, which is why we're consistently recognized as a best place to work and top company culture.



Levatas is seeking a qualified Senior Software Engineer to join our Technology team at Levatas headquarters. We're looking for someone who's ready to do the best work of their career, architecting, designing, and implementing software solutions, with a particular focus on cross-disciplinary problem solving and collaborative development for our growing base of first-class business clients.



Core responsibilities:




  • Design, develop, and maintain software solutions that meet clients' business requirements

  • Develop and document project deliverables such as requirement specifications, proposed solutions, detailed designs, project plans, system impact analysis and proof of concepts

  • Program, test, build, integrate, and implement web-based multi-tier applications of varying complexity

  • Analyze and fully understand project requirements to formulate and implement programmatic solutions that efficiently and effectively address clients' requirements

  • Integrate applications by designing database architecture and server scripting; studying and establishing connectivity with network systems, search engines, and information servers

  • Use engineering principles to conduct technical investigations involved with the modification of material, component, or process specifications and requirements

  • Advise, mentor, train or assist engineers and developers at other skill levels, as needed, to ensure timely releases of high quality code

  • Provide technical consulting for projects and production-system issues.



The following are profile items that interest us:




  • 2-5 years' experience in Amazon Web Services (AWS) native serverless application development

  • Experience working with various AWS services such as EC2, ECS, EBS, S3, Glacier, SNS, SQS, IAM, Auto Scaling, OpsWorks, Route 53, VPC, CloudFront, Direct Connect, CloudTrail, and CloudWatch, and building CI/CD on AWS using CodeCommit, CodeBuild, CodeDeploy, and CodePipeline.

  • 3-5 years' Node.js development experience, preferably writing Lambda functions in AWS (a minimal handler sketch follows this list).

  • 5 years of full-time work experience in software engineering, information technology, or related domains.

  • An unlimited passion for software development

  • Willing to work across the stack to tackle technical challenges anywhere in the system.

  • Interest in working in a cross-functional team that touches many of the core systems and user flows of our customers.

  • Demonstrable experience in compile-time languages such as .NET C#, Java, Swift, or Kotlin.

  • Demonstrable experience working with relational and NoSQL database technologies

  • Demonstrable experience building web applications with HTML5, JavaScript, and CSS, using JavaScript frameworks like AngularJS, VueJS, or ReactJS

  • Knowledge and understanding of data science, machine learning, TensorFlow (or a similar platform like Keras), and Python a huge plus

  • Knowledge and understanding of data transformation processes, ETL, etc. a plus.

  • Experience with designing and building large scale production systems or features.

  • Ability to leverage and integrate with third party APIs.

  • Experience with SOA (Service Oriented Architecture) designs a plus.

  • Advanced analytical thinking; experienced with making product decisions based on data and A/B testing.

  • Exposure to architectural patterns of a large, high-scale web application.

  • Strong interpersonal and communication skills

  • Experience working with Scrum or a similar Agile management process
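
The posting prefers Node.js for Lambda work; purely as an illustrative sketch, here is a minimal handler in Python, which AWS Lambda also supports (the route and payload shape are hypothetical):

    import json

    def lambda_handler(event, context):
        # Minimal API Gateway proxy-integration handler: echo a greeting.
        params = event.get("queryStringParameters") or {}
        name = params.get("name", "world")
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"message": f"Hello, {name}"}),
        }

In a serverless setup like the one described, a function of this shape would typically be deployed through the CodeCommit/CodeBuild/CodeDeploy/CodePipeline chain named above.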



This role is based in Palm Beach County, Florida.

Perficient, Inc.
  • Phoenix, AZ
At Perficient you'll deliver mission-critical technology and business solutions to Fortune 500 companies and some of the most recognized brands on the planet. And you'll do it with cutting-edge technologies, thanks to our close partnerships with the world's biggest vendors. Our network of offices across North America, as well as locations in India and China, will give you the opportunity to spread your wings, too.
We're proud to be publicly recognized as a Top Workplace year after year. This is due, in no small part, to our entrepreneurial attitude and collaborative spirit that sets us apart and keeps our colleagues impassioned, driven, and fulfilled.
Perficient Data Solutions is looking for an experienced Hadoop Administrator with experience administering Cloudera on AWS. This position is located in Boston; however, the candidate can be located in any well-connected city. Perficient is on a mission to help enterprises take advantage of modern data and analytics architectures, tools, and patterns to improve business operations and better engage customers. This is an excellent opportunity for the right individual to assist Perficient and its customers in growing the capabilities necessary to improve care through better use of data and information, and in the process take their career to the next level.
Job Overview
The Hadoop System Administrator (SA) is responsible for effective provisioning, installation/configuration, operation, and maintenance of systems hardware and software and related infrastructure to enable Hadoop and analytics on Big Data. This individual participates in technical research and development to enable continuing innovation within the infrastructure. This individual ensures that system hardware, operating systems, software systems, and related procedures adhere to organizational values, enabling staff, volunteers, and Partners.
This individual will assist project teams with technical issues in the Initiation and Planning phases of our standard Project Management Methodology. These activities include the definition of needs, benefits, and technical strategy; research & development within the project life-cycle; technical analysis and design; and support of operations staff in executing, testing and rolling-out the solutions. Participation on projects is focused on smoothing the transition of projects from development staff to production staff by performing operations activities within the project life-cycle.
This individual is accountable for the following systems: Linux and Windows systems that support GIS infrastructure, and Linux, Windows, and application systems that support Asset Management. Responsibilities on these systems include SA engineering and provisioning, operations and support, maintenance, and research and development to ensure continual innovation.
Responsibilities
  • Provide end-to-end vision and hands-on experience with Cloudera and AWS platforms, especially best practices around Hive and HBase
  • Automate common administrative tasks in Cloudera and AWS
  • Troubleshoot and develop on Hadoop technologies including HDFS, Kafka, Hive, Pig, Flume, HBase, Spark, and Impala, plus Hadoop ETL development via tools such as ODI for Big Data and APIs to extract data from source systems; troubleshoot AWS technologies such as EMR, EC2, S3, and CloudFormation
  • Translate, load, and present disparate datasets in multiple formats and from multiple sources, including JSON, Avro, text files, Kafka queues, and log data (an illustrative ingestion sketch follows this list)
  • Administer Cloudera clusters on AWS: services, security, scalability, configuration, availability, and access
  • Lead workshops with multiple teams to define data ingestion, validation, transformation, data engineering, and data modeling
  • Performance-tune Hive and HBase jobs
  • Design and develop open-source platform components using Spark, Sqoop, Java, Oozie, Kafka, Python, and related tools (a plus)
  • Lead capacity-planning and requirements-gathering phases, including estimating, developing, testing, architecting, and delivering complex projects
  • Participate in and lead design sessions, demos, prototype sessions, and testing and training workshops with business users and other IT associates
  • Contribute to thought capital by creating executive presentations and architecture documents, and present them to executives
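As a hedged, illustrative sketch of the ingestion work above (not a Perficient deliverable; the bucket, column, and table names are hypothetical), a minimal PySpark job landing raw JSON in a partitioned Hive table might look like:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (SparkSession.builder
             .appName("json-ingest-demo")
             .enableHiveSupport()
             .getOrCreate())

    # Read raw JSON events (bucket/path are hypothetical)
    events = spark.read.json("s3a://example-bucket/raw/events/")

    # Derive a partition column from an assumed "ts" timestamp field,
    # then persist as a partitioned Hive table
    (events
     .withColumn("event_date", F.to_date(F.col("ts")))
     .write.mode("overwrite")
     .partitionBy("event_date")
     .saveAsTable("analytics.events"))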
Qualifications
  • 3+ years of Hadoop administration
  • Cloudera and AWS certifications are strongly desired.
  • Bachelor's degree, with a technical major, such as engineering or computer science.
  • Four to six years of Linux/Unix system administration experience.
  • Ability to travel up to 50 percent, preferred.
Perficient full-time employees receive complete and competitive benefits. We offer a collaborative work environment, competitive compensation, generous work/life opportunities and an outstanding benefits package that includes paid time off plus holidays. In addition, all colleagues are eligible for a number of rewards and recognition programs including billable bonus opportunities. Encouraging a healthy work/life balance and providing our colleagues great benefits are just part of what makes Perficient a great place to work.
More About Perficient
Perficient is the leading digital transformation consulting firm serving Global 2000 and enterprise customers throughout North America. With unparalleled information technology, management consulting and creative capabilities, Perficient and its Perficient Digital agency deliver vision, execution and value with outstanding digital experience, business optimization and industry solutions.
Our work enables clients to improve productivity and competitiveness; grow and strengthen relationships with customers, suppliers and partners; and reduce costs. Perficient's professionals serve clients from a network of offices across North America and offshore locations in India and China. Traded on the Nasdaq Global Select Market, Perficient is a member of the Russell 2000 index and the S&P SmallCap 600 index.
Perficient is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law.
Disclaimer: The above statements are not intended to be a complete statement of job content, rather to act as a guide to the essential functions performed by the employee assigned to this classification. Management retains the discretion to add or change the duties of the position at any time.
Select work authorization questions to ask when applicants apply
  • Are you legally authorized to work in the United States?
  • Will you now, or in the future, require sponsorship for employment visa status (e.g. H-1B visa status)?
The Wellcome Trust Sanger Institute
  • Cambridge, UK


Salary £36,737 to £44,451 (dependent on experience) plus excellent benefits. Fixed-term for 3 years.

The Wellcome Sanger Institute is seeking an experienced Bioinformatician to provide computational analysis for the new international Cancer Dependency Map consortium, and for other projects engaged in the analysis of data from genome-editing and functional-genomics screens, in collaboration with Open Targets.

You will join the Cancer Dependency Map Analytics team, actively interacting with the Cancer Dependency Map consortium, whose broad goal is to identify vulnerabilities and dependencies that could be exploited therapeutically in every cancer cell.

You will implement and use new computational pipelines for pre-processing and quality-control assessment of data from genome-editing screens, adapted to individual project requirements. This will include extending existing software and writing, documenting, and maintaining code packages in public/internal repositories. We encourage applications from candidates with a background in genomic data curation and familiarity with the management of data from large-scale in-vitro drug/functional-genomic screens.
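
As a minimal, hedged sketch of the pre-processing step for a genome-editing (e.g. CRISPR) screen, illustrative only and with made-up counts, per-guide log2 fold-changes between the plasmid library and a post-selection sample can be computed like this:

    import numpy as np
    import pandas as pd

    # Hypothetical sgRNA read counts: plasmid library vs. day-14 sample
    counts = pd.DataFrame({
        "sgRNA":   ["g1", "g2", "g3", "g4"],
        "gene":    ["BRAF", "BRAF", "CTRL", "CTRL"],
        "plasmid": [1200, 950, 1100, 1000],
        "day14":   [150, 90, 1050, 980],
    })

    # Normalise to reads-per-million, add a pseudocount, take log2 fold-change
    for col in ("plasmid", "day14"):
        counts[col + "_rpm"] = counts[col] / counts[col].sum() * 1e6
    counts["lfc"] = np.log2((counts["day14_rpm"] + 0.5)
                            / (counts["plasmid_rpm"] + 0.5))

    # Strongly depleted guides suggest the targeted gene is a dependency
    print(counts[["sgRNA", "gene", "lfc"]])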

Finally, you will interact with Open Targets partners and collaborators, and with web development teams, to coordinate the flow of data and results into the public domain.

This is an exciting opportunity to work at one of the world's leading genomic centres, at the forefront of genomic research. You will have access to Sanger's computational resources, including a 15,000+ core compute cluster, the largest in life-science research in Europe, and multiple petabytes of high-speed cluster file systems.

We are part of a dynamic and collaborative environment at the Genome Campus and, although we seek someone who can work independently, you will have the opportunity to interact with researchers across many programmes at the Institute.

Essential Skills

  • PhD in a relevant subject area (Physics, Computer Science, Engineering, Statistics, Mathematics, Computational Biology, Bioinformatics)
  • Full working proficiency in a scripting language (e.g. R, Python, Perl)
  • Full working proficiency with software versioning systems (e.g. Git, GitHub, SVN)
  • Previous experience in creating, documenting, and maintaining finished software
  • Previous experience with implementing omics-data analysis pipelines
  • Basic knowledge of statistics and combinatorics
  • Full working proficiency in UNIX/Linux
  • Ability to communicate ideas and results effectively
  • Ability to work independently and organise own workload


Ideal Skills

  • Ability to devise novel quantitative models, drawing on relevant mathematics-heavy literature
  • Experience formulating real-world problems as statistical models and applying them to real data
  • Basic knowledge of genomics and molecular biology
  • Previous experience with data from high throughput assays
  • Previous experience with data from genetic screens
  • Full working proficiency in a compiled language (e.g. C, C++, Java, Fortran)


Other information



Open Targets is a pioneering public-private initiative between GlaxoSmithKline (GSK), Biogen, Takeda, Celgene, Sanofi, EMBL-EBI (European Bioinformatics Institute) and the WSI (Wellcome Sanger Institute), located on the Wellcome Genome Campus in Hinxton, near Cambridge, UK.

Open Targets aims to generate evidence on the biological validity of therapeutic targets and provide an initial assessment of the likely effectiveness of pharmacological intervention on these targets, using genome-scale experiments and analysis. Open Targets aims to provide an R&D framework that applies to all aspects of human disease, to improve the success rate for discovering new medicines and share its data openly in the interests of accelerating drug discovery.

Our Campus: Set over 125 acres, the stunning and dynamic Wellcome Genome Campus is the biggest aggregate concentration of people in the world working on the common theme of Genomes and BioData. It brings together a diverse and exceptional scientific community, committed to delivering life-changing science with the reach, scale and imagination to pursue some of humanity's greatest challenges.

Our Benefits: Our employees have access to a comprehensive range of benefits and facilities.

Genome Research Limited hold an Athena SWAN Bronze Award and will consider all individuals without discrimination and are committed to creating an inclusive environment for all employees, where everyone can thrive.

Please include a covering letter and CV with your application. Closing date: 21st April 2019

Contact for informal enquiries