OnlyDataJobs.com

118118Money
  • Austin, TX

Seeking an individual with a keen eye for good design and the ability to communicate those designs through informative design artifacts. Candidates should be familiar with an Agile development process (and understand its limitations) and able to mediate between product/business needs and developers' architectural needs. They should also be ready to get their hands dirty coding complex pieces of the overall architecture.

We are .NET Core on the backend, Angular 2 on a mobile web front-end, and native on Android and iOS. We host our code across AWS and on-premises VMs, and use various data backends (SQL Server, Oracle, Mongo).

Interest in (and ideally experience with) modern big data pipelines and machine learning is very important. Experience with streaming platforms feeding Apache Spark jobs that train machine learning models would be music to our ears. Financial platforms generate massive amounts of data, and re-architecting aspects of our microservices to support that will be a key responsibility.
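In miniature, the stream-feeds-model pattern described above looks like the sketch below. Plain Python stands in for Kafka and Spark here; the event stream, features, and learning rate are invented for illustration, not part of 118118 Money's actual stack.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class OnlineLogisticModel:
    """Tiny online learner standing in for a streaming ML training job."""
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.lr = lr

    def partial_fit(self, x, y):
        # One SGD step per event, the way a stream consumer would update.
        p = sigmoid(sum(wi * xi for wi, xi in zip(self.w, x)))
        g = p - y
        self.w = [wi - self.lr * g * xi for wi, xi in zip(self.w, x)]

    def predict(self, x):
        return sigmoid(sum(wi * xi for wi, xi in zip(self.w, x)))

random.seed(0)
model = OnlineLogisticModel(n_features=2)
# Simulated event stream: the label is 1 whenever the first feature is positive.
for _ in range(2000):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    y = 1 if x[0] > 0 else 0
    model.partial_fit(x, y)

print(model.predict([0.9, 0.0]))   # near 1: positive first feature
print(model.predict([-0.9, 0.0]))  # near 0: negative first feature
```

A real pipeline would replace the simulated loop with a Kafka consumer and the toy learner with a Spark ML job, but the shape (consume event, update model, serve predictions) is the same.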

118118 Money is a private financial services company with R&D headquartered in Austin along highway 360, in front of the Bull Creek Nature preserve. We have offices around the world, so the candidate should be open to occasional travel abroad. The atmosphere is casual, and has a startup feel. You will see your software creations deployed quickly.

Responsibilities

    • Help us to build a big data pipeline and add machine learning capability to more areas of our platform.
    • Manage code from development through deployment, including support and maintenance.
    • Perform code reviews, assist and coach more junior developers to adhere to proper design patterns.
    • Build fault-tolerant distributed systems.

Requirements

    • Expertise in .NET, C#, HTML5, CSS3, Javascript
    • Experience with some flavor of ASP.NET MVC
    • Experience with SQL Server
    • Expertise in the design of elegant and intuitive REST APIs.
    • Cloud development experience (Amazon, Azure, etc)
    • Keen understanding of security principles as they pertain to service design.
    • Expertise in object-oriented design principles.

Desired

    • Machine Learning experience
    • Mobile development experience
    • Kafka / message streaming experience
    • Apache Spark experience
    • Knowledge of the ins and outs of Docker containers
    • Experience with MongoDB
FCA Fiat Chrysler Automobiles
  • Detroit, MI

Fiat Chrysler Automobiles is looking to fill the full-time position of a Data Scientist. This position is responsible for delivering insights to the commercial functions in which FCA operates.


The Data Scientist is a role in the Business Analytics & Data Services (BA) department and reports through the CIO. They will play a pivotal role in the planning, execution, and delivery of data science and machine learning-based projects. The bulk of the work will be in the areas of data exploration and preparation, data collection and integration, machine learning (ML) and statistical modelling, and data pipelining and deployment.

The newly hired data scientist will be a key interface between the ICT Sales & Marketing team, the Business and the BA team. Candidates need to be very much self-driven, curious and creative.

Primary Responsibilities:

    • Problem Analysis and Project Management:
      • Guide and inspire the organization about the business potential and strategy of artificial intelligence (AI)/data science
      • Identify data-driven/ML business opportunities
      • Collaborate across the business to understand IT and business constraints
      • Prioritize, scope and manage data science projects and the corresponding key performance indicators (KPIs) for success
    • Data Exploration and Preparation:
      • Apply statistical analysis and visualization techniques to various data, such as hierarchical clustering, T-distributed Stochastic Neighbor Embedding (t-SNE), principal components analysis (PCA)
      • Generate and test hypotheses about the underlying mechanics of the business process.
      • Network with domain experts to better understand the business mechanics that generated the data.
    • Data Collection and Integration:
      • Understand new data sources and process pipelines. Catalog and document their use in solving business problems.
      • Create data pipelines and assets that enable more efficient and repeatable data science activities.
    • Machine Learning and Statistical Modelling:
      • Apply various ML and advanced analytics techniques to perform classification or prediction tasks
      • Integrate domain knowledge into the ML solution; for example, from an understanding of financial risk, customer journey, quality prediction, sales, marketing
      • Test ML models using techniques such as cross-validation and A/B testing, and assess bias and fairness
    • Operationalization:
      • Collaborate with ML operations (MLOps), data engineers, and IT to evaluate and implement ML deployment options
      • (Help to) integrate model performance management tools into the current business infrastructure
      • (Help to) implement champion/challenger test (A/B tests) on production systems
      • Continuously monitor execution and health of production ML models
      • Establish best practices around ML production infrastructure
    • Other Responsibilities:
      • Train other business and IT staff on basic data science principles and techniques
      • Train peers on specialist data science topics
      • Promote collaboration with the data science COE within the organization.
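The model-testing responsibility above mentions cross-validation. A minimal pure-Python sketch of k-fold splitting and scoring follows; the labels and the trivial majority-class baseline are invented for illustration only.

```python
import random

def k_fold_indices(n, k, seed=42):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        test = folds[i]
        train = [j for fold in folds[:i] + folds[i + 1:] for j in fold]
        yield train, test

# Score a trivial majority-class baseline with 5-fold cross-validation.
labels = [0, 0, 0, 1, 0, 1, 0, 0, 1, 0]
scores = []
for train, test in k_fold_indices(len(labels), k=5):
    train_labels = [labels[j] for j in train]
    majority = max(set(train_labels), key=train_labels.count)
    acc = sum(labels[j] == majority for j in test) / len(test)
    scores.append(acc)

mean_acc = sum(scores) / len(scores)
print(len(scores), mean_acc)
```

In practice a library such as scikit-learn provides this machinery, but the principle is the same: every observation is held out exactly once, and the mean held-out score estimates generalization.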

Basic Qualifications:

    • A bachelor's degree in computer science, data science, operations research, statistics, applied mathematics, or a related quantitative field is required; alternate experience and education in equivalent areas such as economics, engineering, or physics is acceptable. Experience in more than one area is strongly preferred.
    • Candidates should have three to six years of relevant project experience successfully launching, planning, and executing data science projects, preferably in the automotive or customer behavior prediction domains.
    • Coding knowledge and experience in several languages: for example, R, Python, SQL, Java, C++, etc.
    • Experience working across multiple deployment environments (cloud, on-premises, and hybrid) and multiple operating systems, and with containerization techniques such as Docker, Kubernetes, AWS Elastic Container Service, and others.
    • Experience with distributed data/computing and database tools: MapReduce, Hadoop, Hive, Kafka, MySQL, Postgres, DB2 or Greenplum, etc.
    • All candidates must be self-driven, curious and creative.
    • They must demonstrate the ability to work in diverse, cross-functional teams.
    • Should be confident, energetic self-starters, with strong moderation and communication skills.

Preferred Qualifications:

    • A master's degree or PhD in statistics, ML, computer science, or the natural sciences (especially physics or an engineering discipline), or equivalent.
    • Experience in one or more of the following commercial/open-source data discovery/analysis platforms: RStudio, Spark, KNIME, RapidMiner, Alteryx, Dataiku, H2O, SAS Enterprise Miner (SAS EM) and/or SAS Visual Data Mining and Machine Learning, Microsoft AzureML, IBM Watson Studio or SPSS Modeler, Amazon SageMaker, Google Cloud ML, SAP Predictive Analytics.
    • Knowledge and experience in statistical and data mining techniques: generalized linear model (GLM)/regression, random forest, boosting, trees, text mining, hierarchical clustering, deep learning, convolutional neural network (CNN), recurrent neural network (RNN), T-distributed Stochastic Neighbor Embedding (t-SNE), graph analysis, etc.
    • A specialization in text analytics, image recognition, graph analysis or other specialized ML techniques such as deep learning, etc., is preferred.
    • Ideally, the candidates are adept in agile methodologies and well-versed in applying DevOps/MLOps methods to the construction of ML and data science pipelines.
    • Knowledge of industry standard BA tools, including Cognos, QlikView, Business Objects, and other tools that could be used for enterprise solutions
    • Should exhibit superior presentation skills, including storytelling and other techniques to guide and inspire and explain analytics capabilities and techniques to the organization.
FlixBus
  • Berlin, Germany

Your Tasks – Paint the world green



  • Holistic cloud-based infrastructure automation

  • Distributed data processing clusters as well as data streaming platforms based on Kafka, Flink and Spark

  • Microservice platforms based on Docker

  • Development infrastructure and QA automation

  • Continuous Integration/Delivery/Deployment


Your Profile – Ready to hop on board



  • Experience in building and operating complex infrastructure

  • Expert-level: Linux, System Administration

  • Experience with Cloud Services, Expert-Level with either AWS or GCP  

  • Experience with server and operating-system-level virtualization is a strong plus, in particular practical experience with Docker and cluster technologies like Kubernetes, AWS ECS, OpenShift

  • Mindset: "Automate Everything", "Infrastructure as Code", "Pipelines as Code", "Everything as Code"

  • Hands-on experience with "Infrastructure as Code" tools: Terraform, CloudFormation, Packer

  • Experience with provisioning/configuration management tools (Ansible, Chef, Puppet, Salt)

  • Experience designing, building and integrating systems for instrumentation, metrics/log collection, and monitoring: CloudWatch, Prometheus, Grafana, DataDog, ELK

  • At least basic knowledge in designing and implementing Service Level Agreements

  • Solid knowledge of Network and general Security Engineering

  • At least basic experience with systems and approaches for Test, Build and Deployment automation (CI/CD): Jenkins, TravisCI, Bamboo

  • At least basic hands-on DBA experience, experience with data backup and recovery

  • Experience with JVM-based build automation is a plus: Maven, Gradle, Nexus, JFrog Artifactory

Recursion
  • Salt Lake City, UT

At Recursion we combine experimental biology, automation, and artificial intelligence to quickly and efficiently identify treatments for human diseases. We’re transforming drug discovery into a data science problem and to do that we’re building a platform for rapid biological experimentation, data generation, automated analysis, model training, and prediction.


THE PROBLEMS YOU’LL SOLVE


As a Machine Learning Engineer, you'll report to the VP of Engineering and will work with others on the data, machine learning, and engineering teams to build the infrastructure and systems to enable both ML prototyping and production grade deployment of ML solutions that lift our drug discovery platform to new levels of effectiveness and efficiency. We are looking for experienced Machine Learning Engineers who value experimentation and the rigorous use of the scientific method, high collaboration across multiple functions, and intense curiosity driving them to keep our systems cutting edge. In this role you will:



  • Build, scale, and operate compute clusters for deep learning. You'll be a part of a team responsible for the ML infrastructure, whether that be large-scale on-prem GPU clusters or cloud-based TPU pods.

  • Create a world-class ML research platform. You'll work with Data Scientists, ML Researchers, and Systems and Data Engineers to create an ML platform that allows them to efficiently prepare hundreds of terabytes of data for training and processing, train cutting-edge deep learning models, backtest them on thousands of past experiments, and deploy working solutions to production. Examples of ML platforms like this are Uber's Michelangelo and Facebook's FBLearner Flow.

  • Be a mentor to peers. You will share your technical knowledge and experiences, resulting in an increase in their productivity and effectiveness.


THE EXPERIENCE YOU’LL NEED



  • An ability to be resourceful and collaborative in order to complete large projects. You’ll be working cross-functionally to build these systems and must always have the end (internal) user in mind.

  • Experience implementing, training, and evaluating deep learning models using modern ML frameworks through collaboration with others, reading of ML papers, or primary research.

  • A demonstration of accelerating ML research efforts through improved systems, processes and frameworks.

  • A track record of learning new technologies as needed to get things done. Our current tech stack uses Python and the pydata libraries, TensorFlow, Keras, Kubernetes + Docker, Big Query, and other cloud services provided by Google Cloud Platform.

  • Biology background is not necessary, but intellectual curiosity is a must!


THE PERKS YOU’LL ENJOY



  • Coverage of health, vision, and dental insurance premiums (in most cases 100%)

  • 401(k) with generous matching (immediate vesting)

  • Stock option grants

  • Two one-week paid company closures (summer and winter) in addition to flexible, generous vacation/sick leave

  • Commuter benefit and vehicle parking to ease your commute

  • Complimentary chef-prepared lunches and well-stocked snack bars

  • Generous paid parental leave for birth, non-birth, and adoptive parents

  • Fully-paid gym membership to Metro Fitness, located just feet away from our new headquarters

  • Gleaming new 100,000 square foot headquarters complete with a 70-foot climbing wall, showers, lockers, and bike parking



WHAT WE DO


We have raised over $80M to apply machine learning to one of the most unique datasets in existence - over a petabyte of imaging data spanning more than 10 billion cells treated with hundreds of thousands of different biological and chemical perturbations, generated in our own labs - in order to find treatments for hundreds of diseases. Our long term mission is to decode biology to radically improve lives and we want to understand biology so well that we can fix most things that go wrong in our bodies. Our data scientists, machine learning researchers and engineers work on some of the most challenging and interesting problems in computational drug discovery, and collaborate with some of the brightest minds in the deep learning community (Yoshua Bengio is one of our advisors), who help our machine learning team design novel ways of tackling these problems.



Recursion is an equal opportunity employer and complies with all applicable federal, state, and local fair employment practices laws. Recursion strictly prohibits and does not tolerate discrimination against applicants because of race, color, religion, creed, national origin or ancestry, ethnicity, sex, pregnancy, gender (including gender nonconformity and status as a transgender individual), age, physical or mental disability, citizenship, past, current, or prospective service in the uniformed services, or any other characteristic protected under applicable federal, state, or local law.

The HT Group
  • Austin, TX

Full Stack Engineer, Java/Scala Direct Hire Austin

Do you have a track record of building both internal- and external-facing software services in a dynamic environment? Are you passionate about introducing disruptive and innovative software solutions for the shipping and logistics industry? Are you ready to deliver immediate impact with the software you create?

We are looking for Full Stack Engineers to craft, implement and deploy new features, services, platforms, and products. If you are curious, driven, and naturally explore how to build elegant and creative solutions to complex technical challenges, this may be the right fit for you. If you value a sense of community and shared commitment, you'll collaborate closely with others in a full-stack role to ship software that delivers immediate and continuous business value. Are you up for the challenge?

Tech Tools:

  • Application stack runs entirely on Docker, frontend and backend
  • Infrastructure is 100% Amazon Web Services and we use AWS services whenever possible. Current examples: EC2, Elastic Container Service (Docker), Kinesis, SQS, Lambda, and Redshift
  • Java and Scala are the languages of choice for long-lived backend services
  • Python for tooling and data science
  • Postgres is the SQL database of choice
  • Actively migrating to a modern JavaScript-centric frontend built on Node, React/Relay, and GraphQL as some of our core UI technologies

Responsibilities:

  • Build both internal and external REST/JSON services running on our 100% Docker-based application stack or within AWS Lambda
  • Build data pipelines around event-based and streaming-based AWS services and application features
  • Write deployment, monitoring, and internal tooling to operate our software with as much efficiency as we build it
  • Share ownership of all facets of software delivery, including development, operations, and test
  • Mentor junior members of the team and coach them to be even better at what they do

Requirements:

  • Embrace the AWS + DevOps philosophy and believe this is an innovative approach to creating and deploying products and technical solutions that require software engineers to be truly full-stack
  • Have high-quality standards, pay attention to details, and love writing beautiful, well-designed and tested code that can stand the test of time
  • Have built high-quality software, solved technical problems at scale and believe in shipping software iteratively and often
  • Proficient in and have delivered software in Java, Scala, and possibly other JVM languages
  • Developed a strong command of Computer Science fundamentals
GrubHub Seamless
  • New York, NY

Got a taste for something new?

We’re Grubhub, the nation’s leading online and mobile food ordering company. Since 2004 we’ve been connecting hungry diners to the local restaurants they love. We’re moving eating forward with no signs of slowing down.

With more than 90,000 restaurants and over 15.6 million diners across 1,700 U.S. cities and London, we’re delivering like never before. Incredible tech is our bread and butter, but amazing people are our secret ingredient. Rigorously analytical and customer-obsessed, our employees develop the fresh ideas and brilliant programs that keep our brands going and growing.

Long story short, keeping our people happy, challenged and well-fed is priority one. Interested? Let’s talk. We’re eager to show you what we bring to the table.

About the Opportunity: 

Senior Site Reliability Engineers are embedded in Big Data-specific dev teams to focus on the operational aspects of our services, and our SREs run their respective products and services from conception to continuous operation.  We're looking for engineers who want to be a part of developing, maintaining, and scaling infrastructure software. If you enjoy focusing on reliability, performance, capacity planning, and automating everything, you'd probably like this position.





Some Challenges You’ll Tackle





TOOLS OUR SRE TEAM WORKS WITH:



  • Python – our primary infrastructure language

  • Cassandra

  • Docker (in production!)

  • Splunk, Spark, Hadoop, and PrestoDB

  • AWS

  • Python and Fabric for automation and our CD pipeline

  • Jenkins for builds and task execution

  • Linux (CentOS and Ubuntu)

  • DataDog for metrics and alerting

  • Puppet
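Much of the alerting glue in a stack like the one above (DataDog for metrics, Python for automation) is small scripts. Here is an illustrative threshold-check sketch; the metric names and limits are invented for the example, not Grubhub's actual tooling.

```python
def check_health(metrics, max_error_rate=0.01, max_p99_ms=500):
    """Return a list of alert strings for any metric out of bounds.

    `metrics` is a dict such as {"error_rate": 0.002, "p99_ms": 310},
    of the kind a monitoring agent might report.
    """
    alerts = []
    if metrics.get("error_rate", 0.0) > max_error_rate:
        alerts.append(
            f"error_rate {metrics['error_rate']:.3f} above {max_error_rate}")
    if metrics.get("p99_ms", 0) > max_p99_ms:
        alerts.append(
            f"p99 latency {metrics['p99_ms']}ms above {max_p99_ms}ms")
    return alerts

print(check_health({"error_rate": 0.002, "p99_ms": 310}))  # healthy: []
print(check_health({"error_rate": 0.05, "p99_ms": 800}))   # two alerts
```

Real deployments push these checks into the monitoring platform itself (DataDog monitors, Prometheus alert rules); the point is only that SRE alerting logic reduces to explicit, testable thresholds.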





You Should Have






  • Experience in AWS services like Kinesis, IAM, EMR, Redshift, and S3

  • Experience managing Linux systems

  • Configuration management tool experiences like Puppet, Chef, or Ansible

  • Continuous integration, testing, and deployment using Git, Jenkins, Jenkins DSL

  • Exceptional communication and troubleshooting skills.


NICE TO HAVE:



  • Python or Java / Scala development experience

  • Bonus points for deploying/operating large-ish Hadoop clusters in AWS/GCP and use of EMR, DC/OS, Dataproc.

  • Experience in Streaming data platforms, (Spark streaming, Kafka)

  • Experience developing solutions leveraging Docker

SafetyCulture
  • Surry Hills, Australia
  • Salary: A$120k - 140k

The Role



  • Be an integral member of the team responsible for designing, implementing, and maintaining a distributed big-data system built on high-quality components (Kafka, EMR + Spark, Akka, etc).

  • Embrace the challenge of dealing with big data on a daily basis (Kafka, RDS, Redshift, S3, Athena, Hadoop/HBase), perform data ETL, and build tools for proper data ingestion from multiple data sources.

  • Collaborate closely with data infrastructure engineers and data analysts across different teams to find bottlenecks and solve problems

  • Design, implement and maintain the heterogeneous data processing platform to automate the execution and management of data-related jobs and pipelines

  • Implement automated data workflows in collaboration with data analysts, and continue to maintain and improve the system in line with growth

  • Collaborate with Software Engineers on application events, ensuring the right data can be extracted

  • Contribute to resources management for computation and capacity planning

  • Dive deep into code and constantly innovate
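The ETL responsibilities above follow the classic extract/transform/load shape. A tiny self-contained sketch, with SQLite standing in for a warehouse like Redshift and the event data invented for illustration:

```python
import csv
import io
import sqlite3

# Extract: raw events (in the real pipeline these would come from Kafka or S3).
raw = io.StringIO("user_id,action,ms\nu1,open,120\nu2,open,95\nu1,save,30\n")

# Transform: parse and aggregate total time per user.
totals = {}
for row in csv.DictReader(raw):
    totals[row["user_id"]] = totals.get(row["user_id"], 0) + int(row["ms"])

# Load: write the aggregates into a warehouse table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE usage (user_id TEXT PRIMARY KEY, total_ms INTEGER)")
db.executemany("INSERT INTO usage VALUES (?, ?)", totals.items())
result = db.execute(
    "SELECT total_ms FROM usage WHERE user_id = 'u1'").fetchone()[0]
print(result)
```

At production scale each stage becomes a managed service (S3 extraction, Spark transforms, Redshift loads), but keeping the three stages separable, as above, is what makes a pipeline testable and repeatable.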


Requirements



  • Experience with AWS data technologies (EC2, EMR, S3, Redshift, ECS, Data Pipeline, etc) and infrastructure.

  • Working knowledge in big data frameworks such as Apache Spark, Kafka, Zookeeper, Hadoop, Flink, Storm, etc

  • Rich experience with Linux and database systems

  • Experience with relational and NoSQL databases, query optimization, and data modelling

  • Familiar with one or more of the following: Scala/Java, SQL, Python, Shell, Golang, R, etc

  • Experience with container technologies (Docker, k8s), Agile development, DevOps and CI tools.

  • Excellent problem-solving skills

  • Excellent verbal and written communication skills 

State Farm
  • Atlanta, GA

WHAT ARE THE DUTIES AND RESPONSIBILITIES OF THIS POSITION?

    • Performs improved visual representation of data to allow clearer communication, viewer engagement and faster/better decision-making
    • Investigates, recommends, and initiates acquisition of new data resources from internal and external sources
    • Works with IT teams to support data collection, integration, and retention requirements based on business need
    • Identifies critical and emerging technologies that will support and extend quantitative analytic capabilities
    • Manages work efforts which require the use of sophisticated project planning techniques
    • Applies a wide application of complex principles, theories and concepts in a specific field to provide solutions to a wide range of difficult problems
    • Develops and maintains an effective network of both scientific and business contacts/knowledge obtaining relevant information and intelligence around the market and emergent opportunities
    • Contributes data to State Farm's internal and external publications, writes articles for leading journals and participates in academic and industry conferences
    • Collaborates with business subject matter experts to select relevant sources of information
    • Develop breadth of knowledge in programming (R, Python), Descriptive, Inferential, and Experimental Design statistics, advanced mathematics, and database functionality (SQL, Hadoop)
    • Develop expertise with multiple machine learning algorithms and data science techniques, such as exploratory data analysis, generative and discriminative predictive modeling, graph theory, recommender systems, text analytics, computer vision, deep learning, optimization and validation
    • Develop expertise with State Farm datasets, data repositories, and data movement processes
    • Assists on projects/requests and may lead specific tasks within the project scope
    • Prepares and manipulates data for use in development of statistical models
    • Develops fundamental understanding of insurance and financial services operations and uses this knowledge in decision making


Additional Details:

For over 95 years, data has been key to State Farm.  As a member of our data science team with the Enterprise Data & Analytics department under our Chief Data & Analytics Officer, you will work across the organization to solve business problems and help achieve business strategies.  You will employ sophisticated, statistical approaches and state of the art technology.  You will build and refine our tools/techniques and engage w/internal stakeholders across the organization to improve our products & services.


Implementing solutions is critical for success. You will do problem identification, solution proposal & presentation to a wide variety of management & technical audiences. This challenging career requires you to work on multiple concurrent projects in a community setting, developing yourself and others, and advancing data science both at State Farm and externally.


Skills & Professional Experience

·        Develop hypotheses, design experiments, and test feasibility of proposed actions to determine probable outcomes using a variety of tools & technologies

·        Master's or other advanced degree, or five years' experience in an analytical field such as data science, quantitative marketing, statistics, operations research, management science, industrial engineering, economics, etc., or equivalent practical experience, preferred.

·        Experience with SQL, Python, R, Java, SAS or MapReduce, Spark

·        Experience with unstructured data sets: text analytics, image recognition etc.

·        Experience working w/numerous large data sets/data warehouses & ability to pull from such data sets using relevant programs & coding including files, RDBMS & Hadoop based storage systems

·        Knowledge in machine learning methods including at least one of the following: Time series analysis, Hierarchical Bayes; or learning techniques such as Decision Trees, Boosting, Random Forests.

·        Excellent communication skills and the ability to manage multiple diverse stakeholders across businesses & leadership levels.

·        Exercise sound judgment to diagnose & resolve problems within area of expertise

·        Familiarity with CI/CD development methods, Git and Docker a plus
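The tree-based learners listed above (Decision Trees, Boosting, Random Forests) all build on a one-split "stump". A minimal sketch of fitting one, on toy data invented for illustration:

```python
def best_stump(xs, ys):
    """Find the threshold on a single feature minimizing misclassifications.

    A stump like this is the weak learner inside boosting, and the
    single-split special case of the trees grown in a random forest.
    """
    best = None
    for t in sorted(set(xs)):
        # Rule: predict 1 when x >= t, else 0; count how often that is wrong.
        errs = sum((x >= t) != y for x, y in zip(xs, ys))
        if best is None or errs < best[1]:
            best = (t, errs)
    return best

# Toy data: small feature values are class 0, large values are class 1.
xs = [1, 2, 3, 10, 11, 12]
ys = [0, 0, 0, 1, 1, 1]
threshold, errors = best_stump(xs, ys)
print(threshold, errors)  # 10 0
```

Boosting repeats this fit on reweighted data and combines the stumps; random forests grow deeper trees on bootstrapped samples. Both reduce to many applications of this threshold search.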


Multiple location opportunity. Locations offered are: Atlanta, GA, Bloomington, IL, Dallas, TX and Phoenix, AZ


Remote work option is not available.


There is no sponsorship for an employment visa for the position at this time.


Competencies desired:
Critical Thinking
Leadership
Initiative
Resourcefulness
Relationship Building
State Farm
  • Dallas, TX

WHAT ARE THE DUTIES AND RESPONSIBILITIES OF THIS POSITION?

    • Performs improved visual representation of data to allow clearer communication, viewer engagement and faster/better decision-making
    • Investigates, recommends, and initiates acquisition of new data resources from internal and external sources
    • Works with IT teams to support data collection, integration, and retention requirements based on business need
    • Identifies critical and emerging technologies that will support and extend quantitative analytic capabilities
    • Manages work efforts which require the use of sophisticated project planning techniques
    • Applies a wide application of complex principles, theories and concepts in a specific field to provide solutions to a wide range of difficult problems
    • Develops and maintains an effective network of both scientific and business contacts/knowledge obtaining relevant information and intelligence around the market and emergent opportunities
    • Contributes data to State Farm's internal and external publications, writes articles for leading journals and participates in academic and industry conferences
    • Collaborates with business subject matter experts to select relevant sources of information
    • Develop breadth of knowledge in programming (R, Python), Descriptive, Inferential, and Experimental Design statistics, advanced mathematics, and database functionality (SQL, Hadoop)
    • Develop expertise with multiple machine learning algorithms and data science techniques, such as exploratory data analysis, generative and discriminative predictive modeling, graph theory, recommender systems, text analytics, computer vision, deep learning, optimization and validation
    • Develop expertise with State Farm datasets, data repositories, and data movement processes
    • Assists on projects/requests and may lead specific tasks within the project scope
    • Prepares and manipulates data for use in development of statistical models
    • Develops fundamental understanding of insurance and financial services operations and uses this knowledge in decision making


Additional Details:

For over 95 years, data has been key to State Farm.  As a member of our data science team with the Enterprise Data & Analytics department under our Chief Data & Analytics Officer, you will work across the organization to solve business problems and help achieve business strategies.  You will employ sophisticated, statistical approaches and state of the art technology.  You will build and refine our tools/techniques and engage w/internal stakeholders across the organization to improve our products & services.


Implementing solutions is critical for success. You will do problem identification, solution proposal & presentation to a wide variety of management & technical audiences. This challenging career requires you to work on multiple concurrent projects in a community setting, developing yourself and others, and advancing data science both at State Farm and externally.


Skills & Professional Experience

·        Develop hypotheses, design experiments, and test feasibility of proposed actions to determine probable outcomes using a variety of tools & technologies

·        Master's or other advanced degree, or five years' experience in an analytical field such as data science, quantitative marketing, statistics, operations research, management science, industrial engineering, economics, etc., or equivalent practical experience preferred.

·        Experience with SQL, Python, R, Java, SAS, MapReduce, or Spark

·        Experience with unstructured data sets: text analytics, image recognition, etc.

·        Experience working with numerous large data sets/data warehouses and the ability to pull from such data sets using relevant programs and coding, including files, RDBMS, and Hadoop-based storage systems

·        Knowledge of machine learning methods, including at least one of the following: time series analysis, hierarchical Bayes, or learning techniques such as decision trees, boosting, and random forests

·        Excellent communication skills and the ability to manage multiple diverse stakeholders across businesses and leadership levels

·        Exercise sound judgment to diagnose and resolve problems within area of expertise

·        Familiarity with CI/CD development methods, Git and Docker a plus
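
As a toy illustration of the ensemble methods named above (bagging and random forests), here is a stdlib-only sketch of bootstrap-trained decision stumps combined by majority vote. All data, names, and thresholds are invented for illustration; this is not State Farm code.

```python
import random

def train_stump(points):
    """Pick the threshold on x that best separates binary labels."""
    best = None
    for x, _ in points:
        preds = [(1 if px >= x else 0) for px, _ in points]
        acc = sum(p == y for p, (_, y) in zip(preds, points)) / len(points)
        if best is None or acc > best[1]:
            best = (x, acc)
    return best[0]

def forest_predict(stumps, x):
    """Majority vote across stumps, as in a (very simplified) random forest."""
    votes = sum(1 if x >= t else 0 for t in stumps)
    return 1 if votes * 2 >= len(stumps) else 0

random.seed(0)
data = [(x, 1 if x >= 5 else 0) for x in range(10)]  # toy labeled data
stumps = []
for _ in range(25):
    sample = [random.choice(data) for _ in data]  # bootstrap resample
    stumps.append(train_stump(sample))
pred = forest_predict(stumps, 7)
```

Real work would of course use a library implementation (e.g., scikit-learn's RandomForestClassifier), but the bootstrap-and-vote structure is the same.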


Multiple location opportunity. Locations offered are: Atlanta, GA; Bloomington, IL; Dallas, TX; and Phoenix, AZ.


Remote work option is not available.


There is no sponsorship for an employment visa for the position at this time.


Competencies desired:
Critical Thinking
Leadership
Initiative
Resourcefulness
Relationship Building
Riccione Resources
  • Dallas, TX

Sr. Data Engineer Hadoop, Spark, Data Pipelines, Growing Company

One of our clients is looking for a Sr. Data Engineer in the Fort Worth, TX area! Build your data expertise with projects centering on large Data Warehouses and new data models! Think outside the box to solve challenging problems! Thrive in the variety of technologies you will use in this role!

Why should I apply here?

    • Culture built on creativity and respect for engineering expertise
    • Nominated as one of the Best Places to Work in DFW
    • Entrepreneurial environment, growing portfolio and revenue stream
    • One of the fastest growing mid-size tech companies in DFW
    • Executive management with past successes in building firms
    • Leader of its technology niche, setting the standards
    • A robust, fast-paced work environment
    • Great technical challenges for top-notch engineers
    • Potential for career growth, emphasis on work/life balance
    • A remodeled office with a bistro, lounge, and foosball

What will I be doing?

    • Building data expertise and owning data quality for the transfer pipelines that you create to transform and move data to the company's large Data Warehouse
    • Architecting, constructing, and launching new data models that provide intuitive analytics to customers
    • Designing and developing new systems and tools to enable clients to optimize and track advertising campaigns
    • Using your expert skills across a number of platforms and tools such as Ruby, SQL, Linux shell scripting, Git, and Chef
    • Working across multiple teams in high visibility roles and owning the solution end-to-end
    • Providing support for existing production systems
    • Broadly influencing the company's clients and internal analysts

What skills/experiences do I need?

    • B.S. or M.S. degree in Computer Science or a related technical field
    • 5+ years of experience working with Hadoop and Spark
    • 5+ years of experience with Python or Ruby development
    • 5+ years of experience with efficient SQL (Postgres, Vertica, Oracle, etc.)
    • 5+ years of experience building and supporting applications on Linux-based systems
    • Background in engineering Spark data pipelines
    • Understanding of distributed systems

What will make my résumé stand out?

    • Ability to customize an ETL or ELT
    • Experience building an actual data warehouse schema
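
The ETL/ELT customization mentioned above can be sketched in miniature with the standard library alone. The schema, field names, and data below are invented; a real pipeline would target the company's actual warehouse rather than in-memory SQLite.

```python
import csv, io, sqlite3

RAW = """user_id,amount,currency
1,10.50,USD
2,8.00,usd
3,,USD
"""

def extract(text):
    """Extract: parse raw CSV into dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: normalize currency codes, drop rows with missing amounts."""
    out = []
    for r in rows:
        if not r["amount"]:
            continue
        out.append((int(r["user_id"]), float(r["amount"]), r["currency"].upper()))
    return out

def load(rows):
    """Load: insert into a warehouse table and run a sanity-check query."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE payments (user_id INT, amount REAL, currency TEXT)")
    db.executemany("INSERT INTO payments VALUES (?, ?, ?)", rows)
    (total,) = db.execute("SELECT SUM(amount) FROM payments").fetchone()
    return total

total = load(transform(extract(RAW)))
print(total)
```

The same extract/transform/load split scales up to Spark jobs feeding Postgres or Vertica; only the plumbing changes.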

Location: Fort Worth, TX

Citizenship: U.S. citizens and those authorized to work in the U.S. are encouraged to apply. This company is currently unable to provide sponsorship (e.g., H1B).

Salary: $115k to $130k + 401(k) match

---------------------------------------------------


~SW1317~

Gravity IT Resources
  • Miami, FL

Overview of Position:

We are undertaking an ambitious digital transformation across Sales, Service, Marketing, and eCommerce. We are looking for a web data analytics wizard with prior experience in digital data preparation, discovery, and predictive analytics.

The data scientist/web analyst will work with external partners, digital business partners, enterprise analytics, and the technology team to strategically plan and develop datasets, measure web analytics, and execute on predictive and prescriptive use cases. The role demands the ability to (1) learn quickly, (2) work in a fast-paced, team-driven environment, (3) manage multiple efforts simultaneously, (4) use large datasets and models to test the effectiveness of different courses of action, (5) promote data-driven decision making throughout the organization, and (6) define and measure the success of the capabilities we provide the organization.


Primary Duties and Responsibilities

    • Analyze data captured through Google Analytics and develop meaningful, actionable insights on digital behavior.
    • Put together a customer 360 data frame by connecting CRM Sales, Service, and Marketing cloud data with Commerce web behavior data, and wrangle the data into a usable form.
    • Use predictive modelling to increase and optimize customer experiences across online & offline channels.
    • Evaluate customer experience and conversions to provide insights & tactical recommendations for web optimization.
    • Execute on digital predictive use cases and collaborate with enterprise analytics team to ensure use of best tools and methodologies.
    • Lead support for enterprise voice of customer feedback analytics.
    • Enhance and maintain digital data library and definitions.

Minimum Qualifications

  • Bachelor's degree in Statistics, Computer Science, Marketing, Engineering, or equivalent
  • 3 or more years of working experience in building predictive models.
  • Experience in Google Analytics or similar web behavior tracking tools is required.
  • Experience in R is a must, with working knowledge of connecting to multiple data sources such as Amazon Redshift, Salesforce, Google Analytics, etc.
  • Working knowledge of machine learning algorithms such as Random Forest, K-means, Apriori, Support Vector Machines, etc.
  • Experience in A/B testing or multivariate testing.
  • Experience in media tracking tags and pixels, UTM, and custom tracking methods.
  • Microsoft Office Excel & PowerPoint (advanced).
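
The A/B testing mentioned above commonly reduces to a two-proportion z-test on conversion counts. A stdlib-only sketch follows; the counts are invented for illustration, and a real analysis would also consider test design, power, and multiple comparisons.

```python
from math import sqrt, erf

def ab_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test; returns (z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)            # pooled conversion rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # pooled standard error
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# hypothetical example: variant B converts 2.6% vs control's 2.0%
z, p = ab_z_test(200, 10000, 260, 10000)
```

With 10,000 users per arm, this difference is statistically significant at the usual 5% level.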

Preferred Qualifications

  • Master's degree in Statistics or equivalent.
  • Google Analytics 360 experience/certification.
  • SQL workbench, Postgres.
  • Alteryx experience is a plus.
  • Tableau experience is a plus.
  • Experience in HTML, JavaScript.
  • Experience in SAP analytics cloud or SAP desktop predictive tool is a plus
Signify Health
  • Dallas, TX

Position Overview:

Signify Health is looking for a savvy Data Engineer to join our growing team of deep learning specialists. This position is responsible for evolving and optimizing data and data pipeline architectures, as well as optimizing data flow and collection for cross-functional teams. The Data Engineer will support software developers, database architects, data analysts, and data scientists. The ideal candidate is self-directed, passionate about optimizing data, and comfortable supporting the data-wrangling needs of multiple teams, systems, and products.

If you enjoy providing expert-level IT technical services, including the direction, evaluation, selection, configuration, implementation, and integration of new and existing technologies and tools, while working closely with IT team members, data scientists, and data engineers to build our next generation of AI-driven solutions, we will give you the opportunity to grow personally and professionally in a dynamic environment. Our projects are built on cooperation and teamwork, and you will find yourself working together with other talented, passionate, and dedicated team members, all working towards a shared goal.

Essential Job Responsibilities:

  • Assemble large, complex data sets that meet functional / non-functional business requirements
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing data models for greater scalability, etc.
  • Leverage Azure for extraction, transformation, and loading of data from a wide variety of data sources in support of AI/ML Initiatives
  • Design and implement high performance data pipelines for distributed systems and data analytics for deep learning teams
  • Create tool-chains for analytics and data scientist team members that assist them in building and optimizing AI workflows
  • Work with data and machine learning experts to strive for greater functionality in our data and model life cycle management capabilities
  • Communicate results and ideas to key decision makers in a concise manner
  • Comply with applicable legal requirements, standards, policies, and procedures, including but not limited to compliance requirements and HIPAA.


Qualifications:

Education/Licensing Requirements:
  • High school diploma or equivalent.
  • Bachelor's degree in Computer Science, Electrical Engineering, Statistics, Informatics, Information Systems, or another quantitative field, or equivalent work experience.


Experience Requirements:
  • 5+ years of experience in a Data Engineer role.
  • Experience using the following software/tools preferred:
    • Experience with big data tools: Hadoop, Spark, Kafka, etc.
    • Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
    • Experience with AWS or Azure cloud services.
    • Experience with stream-processing systems: Storm, Spark-Streaming, etc.
    • Experience with object-oriented/object function scripting languages: Python, Java, C#, etc.
  • Strong work ethic, able to work both collaboratively and independently without much direct supervision, and solid problem-solving skills
  • Must have strong communication skills (written and verbal) and good one-on-one interpersonal skills.
  • Advanced working SQL knowledge and experience working with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases
  • A successful history of manipulating, processing, and extracting value from large disconnected datasets.
  • Working knowledge of message queuing, stream processing, and highly scalable big data stores.
  • 2 years of experience in data modeling, ETL development, and data warehousing
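
One way to picture the stream-processing idea behind the tools listed above (Storm, Spark Streaming, etc.) is a sliding-window aggregation over a stream of events. The stdlib sketch below is purely illustrative; the event names and window size are invented.

```python
from collections import deque

class SlidingWindowCounter:
    """Count events per key over the last `window` seconds of processing time."""
    def __init__(self, window):
        self.window = window
        self.events = deque()   # (timestamp, key), oldest first
        self.counts = {}

    def add(self, ts, key):
        self.events.append((ts, key))
        self.counts[key] = self.counts.get(key, 0) + 1
        self._evict(ts)

    def _evict(self, now):
        # drop events that have aged out of the window
        while self.events and self.events[0][0] <= now - self.window:
            _, old_key = self.events.popleft()
            self.counts[old_key] -= 1
            if self.counts[old_key] == 0:
                del self.counts[old_key]

w = SlidingWindowCounter(window=60)
w.add(0, "login")
w.add(10, "login")
w.add(70, "click")   # the two logins have aged out of the 60s window by now
```

Production systems layer the same idea over a durable message queue (Kafka, etc.) with watermarks for late data, but the window-and-evict core is unchanged.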
 

Essential Skills:

  • Fluently speak, read, and write English
  • Fantastic motivator and leader of teams with a demonstrated track record of mentoring and developing staff members
  • Strong point of view on who to hire and why
  • Passion for solving complex system and data challenges and desire to thrive in a constantly innovating and changing environment
  • Excellent interpersonal skills, including teamwork and negotiation
  • Excellent leadership skills
  • Superior analytical abilities, problem solving skills, technical judgment, risk assessment abilities and negotiation skills
  • Proven ability to prioritize and multi-task
  • Advanced skills in MS Office

Essential Values:

  • In Leadership: Do what's right, even if it's tough
  • In Collaboration: Leverage our collective genius, be a team
  • In Transparency: Be real
  • In Accountability: Recognize that if it is to be, it's up to me
  • In Passion: Show commitment in heart and mind
  • In Advocacy: Earn trust and business
  • In Quality: Ensure what we do, we do well
Working Conditions:
  • Fast-paced environment
  • Requires working at a desk and use of a telephone and computer
  • Normal sight and hearing ability
  • Use office equipment and machinery effectively
  • Ability to ambulate to various parts of the building
  • Ability to bend, stoop
  • Work effectively with frequent interruptions
  • May require occasional overtime to meet project deadlines
  • Lifting requirements of
Ultra Tendency
  • Riga, Latvia

You are a developer who loves to take a look at infrastructure as well? You are a systems engineer who likes to write code? Ultra Tendency is looking for you!


Your Responsibilities:



  • Support our customers and development teams in transitioning capabilities from development and testing to operations

  • Deploy and extend large-scale server clusters for our clients

  • Fine-tune and optimize our clusters to process millions of records every day 

  • Learn something new every day and enjoy solving complex problems


Job Requirements:



  • You know Linux like the back of your hand

  • You love to automate all the things – SaltStack, Ansible, Terraform and Puppet are your daily business

  • You can write code in Python, Java, Ruby or similar languages.

  • You are driven by high quality standards and attention to detail

  • Understanding of the Hadoop ecosystem and knowledge of Docker is a plus


We offer:



  • Work with our open-source Big Data gurus, such as our Apache HBase committer and Release Manager

  • Work in the open-source community and become a contributor. Learn from open-source enthusiasts whom you will find nowhere else in Germany!

  • Work in an English-speaking, international environment

  • Work with cutting edge equipment and tools

Comcast
  • Englewood, CO

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.

Job Summary:

Software engineering skills combined with the demands of a high volume, highly-visible analytics platform make this an exciting challenge for the right candidate.

Are you passionate about digital media, entertainment, and software services? Do you like big challenges and working within a highly motivated team environment?

As a software engineer in the Data Experience (DX) team, you will research, develop, support, and deploy solutions in real-time distributed computing architectures. The DX big data team is a fast-moving team of world-class experts who are innovating in providing user-driven, self-service tools for making sense of, and making decisions with, high volumes of data. We are a team that thrives on big challenges, results, quality, and agility.

Who does the data engineer work with?

Big Data software engineering is a diverse collection of professionals who work with a variety of teams, ranging from other software engineering teams whose software integrates with analytics services, to service delivery engineers who provide support for our product, to testers, operational stakeholders with all manner of information needs, and executives who rely on big data for data-backed decisioning.

What are some interesting problems you'll be working on?

Develop systems capable of processing millions of events per second and multi-billions of events per day, providing both a real-time and historical view into the operation of our wide-array of systems. Design collection and enrichment system components for quality, timeliness, scale and reliability. Work on high-performance real-time data stores and a massive historical data store using best-of-breed and industry-leading technology.
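
The collection-and-enrichment flow described above can be sketched in miniature: join each raw event with reference data, then aggregate in (near) real time. The field names and lookup data below are invented; at Comcast's scale this would run over Kafka/Spark rather than in-process generators.

```python
def enrich(events, device_lookup):
    """Join each raw event with reference data; unknown devices pass through tagged."""
    for ev in events:
        meta = device_lookup.get(ev["device_id"], {"model": "unknown"})
        yield {**ev, **meta}   # merged event keeps raw fields plus metadata

def hourly_counts(events):
    """Aggregation step: events per (hour bucket, device model)."""
    counts = {}
    for ev in events:
        key = (ev["ts"] // 3600, ev["model"])
        counts[key] = counts.get(key, 0) + 1
    return counts

devices = {"d1": {"model": "X1"}}   # hypothetical reference data
raw = [{"device_id": "d1", "ts": 3600}, {"device_id": "d2", "ts": 3700}]
result = hourly_counts(enrich(raw, devices))
print(result)
```

Designing where the enrichment lookup lives (in-memory cache, broadcast join, external store) is exactly the kind of quality/timeliness/scale trade-off the role describes.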

Where can you make an impact?

Comcast DX is building the core components needed to drive the next generation of data platforms and data processing capability. Running this infrastructure, identifying trouble spots, and optimizing the overall user experience is a challenge that can only be met with a robust big data architecture capable of providing insights that would otherwise be drowned in an ocean of data.

Success in this role is best enabled by a broad mix of skills and interests ranging from traditional distributed systems software engineering prowess to the multidisciplinary field of data science.

Responsibilities:

  • Develop solutions to big data problems utilizing common tools found in the ecosystem.
  • Develop solutions to real-time and offline event collecting from various systems.
  • Develop, maintain, and perform analysis within a real-time architecture supporting large amounts of data from various sources.
  • Analyze massive amounts of data and help drive prototype ideas for new tools and products.
  • Design, build and support APIs and services that are exposed to other internal teams
  • Employ rigorous continuous delivery practices managed under an agile software development approach
  • Ensure a quality transition to production and solid production operation of the software

Skills & Requirements:

  • 5+ years programming experience
  • Bachelors or Masters in Computer Science, Statistics or related discipline
  • Experience in software development of large-scale distributed systems including proven track record of delivering backend systems that participate in a complex ecosystem.
  • Experience working on big data platforms in the cloud or on traditional Hadoop platforms
  • AWS Core: Kinesis, IAM, S3/Glacier, Glue, DynamoDB, SQS, Step Functions, Lambda, API Gateway, Cognito, EMR, RDS/Aurora, CloudFormation, CloudWatch
  • Languages: Python, Scala/Java
  • Spark: batch, streaming, ML; performance tuning at scale
  • Hadoop: Hive, HiveQL, YARN, Pig, Sqoop, Ranger
  • Real-time streaming: Kafka, Kinesis
  • Data file formats: Avro, Parquet, JSON, ORC, CSV, XML
  • NoSQL / SQL
  • Microservice development
  • RESTful API development
  • CI/CD pipelines: Jenkins / GoCD; AWS CodeCommit, CodeBuild, CodeDeploy, CodePipeline
  • Containers: Docker / Kubernetes; AWS Lambda, Fargate, EKS
  • Analytics: Presto / Athena, QuickSight, Tableau
  • Test-driven development/test automation, continuous integration, and deployment automation
  • Enjoy working with data: data analysis, data quality, reporting, and visualization
  • Good communicator, able to analyze and clearly articulate complex issues and technologies in an understandable and engaging way.
  • Great design and problem-solving skills, with a strong bias for architecting at scale.
  • Adaptable, proactive, and willing to take ownership.
  • Keen attention to detail and a high level of commitment.
  • Good understanding of any of: advanced mathematics, statistics, and probability.
  • Experience and comfort working in agile/iterative development and delivery environments: requirements change quickly and our team needs to constantly adapt to moving targets.

About Comcast DX (Data Experience):

Data Experience (DX) is a results-driven data platform research and engineering team responsible for the delivery of multi-tenant data infrastructure and platforms necessary to support our data-driven culture and organization. The mission of DX is to gather, organize, and make sense of Comcast data, and to make it universally accessible to empower, enable, and transform Comcast into an insight-driven organization. Members of the DX team define and leverage industry best practices, work on extremely large-scale data problems, design and develop resilient and highly robust distributed data organizing and processing systems and pipelines, and research, engineer, and apply data science and machine intelligence disciplines.

Comcast is an EOE/Veterans/Disabled/LGBT employer

GTN Technical Staffing and Consulting
  • Dallas, TX

This position requires a broad array of programming skills and experience as well as the desire to learn and grow in an entrepreneurial environment. You will be responsible for creating and developing client onboarding, product provisioning and real-time analytics software services. You will work closely with other members of the architecture team to make strategic decisions about product development, devops and future technology choices.

The ideal candidate should:

  • Demonstrate a proven track record of rapidly building, delivering, and maintaining complex software products.
  • Possess excellent communication skills.
  • Have high integrity.
  • Embrace learning and have a thirst for knowledge.
  • Enjoy solving problems and finding generic solutions to challenging situations.
  • Enjoy being a mentor for junior team members.
  • Be self-motivated to innovate and develop cutting-edge technology functionality.
  • Be able to rapidly learn new frameworks.
  • Be responsible for creating and implementing core product architecture.
  • Be comfortable developing frontend and backend solutions.

This position reports directly to the CTO.

Required experience:

7+ years of hands-on experience with:

  • AWS (EC2, Lambda, ECS)
  • Docker/Kubernetes
  • 3+ years programming in Scala
  • 3+ years programming in Node.js
  • ES6/modern JavaScript
  • Microservices

Preferred experience:

  • Mongodb
  • SBT
  • Serverless Computing

The ideal candidate will:

  • Possess excellent communication and organization skills
  • Embrace learning and have a thirst for knowledge
  • Rapidly learn new technological paradigms
  • Understand and program in multiple programming languages
  • Enjoy being part of a growing team
  • Self-motivated team player

Benefits

  • Medical / Dental / Vision Insurance
  • 401(k)
  • Competitive compensation
  • Work with leaders in the industry
  • Opportunities to learn and grow every day
  • Play a meaningful role on a successful team
Awair
  • San Francisco, CA
  • Salary: $80k - 180k

Us?


 We are looking for an experienced backend software engineer to join a stellar team working on our software platform that brings health and safety into the world of the smart home and real estate. You will be a key contributor to our growth in launching new SaaS products as well as scaling our software infrastructure to meet the growing demands for our monitoring products and control solutions in the market.


Working directly with our Director of Software Engineering and the rest of our software engineering team, you will be collaborating with our frontend, firmware and hardware engineers as well as designers to build the best indoor environment management platform there is!


Based in our office in SoMa, San Francisco, we value software craftspeople and foster a friendly collaborative atmosphere.


You?


You love system architecture. You love getting things done. You’re super smart and picky about finding the right role. You’re experienced, but you also welcome the toughest challenges and like to learn new things.


Responsibilities



  • Build robust and scalable software in Scala

  • Design services and system architecture with other team members

  • Help improve the code quality through writing unit tests, automation and performing code reviews

  • Participate in brainstorming sessions and contribute ideas to our technology, algorithms and products

  • Work with the product teams to understand end-user requirements, formulate use cases, and then translate that into a pragmatic and effective technical solution

  • Dive into difficult problems and successfully deliver results on schedule


Requirements



  • 2+ years of recent hands-on coding and software design (preferably in Scala)

  • Extensive experience with building production-grade applications and services (preferably in Scala)

  • Must have cloud experience in production (Google Cloud experience preferred)

  • Good understanding of Kubernetes and Docker and experience of using them in production

Awair
  • San Francisco, CA
  • Salary: $90k - 160k

Us?


 We are looking for an experienced backend software engineer to join a stellar team working on our software platform that brings health and safety into the world of the smart home and real estate. You will be a key contributor to our growth in launching new SaaS products as well as scaling our software infrastructure to meet the growing demands for our monitoring products and control solutions in the market.


Working directly with our Director of Software Engineering and the rest of our software engineering team, you will be collaborating with our frontend, firmware and hardware engineers as well as designers to build the best indoor environment management platform there is!


Based in our office in SoMa, San Francisco, we value software craftspeople and foster a friendly collaborative atmosphere.


You?


You love system architecture. You love getting things done. You’re super smart and picky about finding the right role. You’re experienced, but you also welcome the toughest challenges and like to learn new things.


Responsibilities



  • Build robust and scalable software in Scala

  • Design services and system architecture with other team members

  • Help improve the code quality through writing unit tests, automation and performing code reviews

  • Participate in brainstorming sessions and contribute ideas to our technology, algorithms and products

  • Work with the product teams to understand end-user requirements, formulate use cases, and then translate that into a pragmatic and effective technical solution

  • Dive into difficult problems and successfully deliver results on schedule


Requirements



  • 2+ years of recent hands-on coding and software design (preferably in Scala)

  • Extensive experience with building production-grade applications and services (preferably in Scala)

  • Must have cloud experience in production (Google Cloud experience preferred)

  • Good understanding of Kubernetes and Docker and experience of using them in production

Booz Allen Hamilton - Tagged
  • San Diego, CA
Job Description
Job Number: R0042382
Data Scientist, Mid
The Challenge
Are you excited at the prospect of unlocking the secrets held by a data set? Are you fascinated by the possibilities presented by machine learning, artificial intelligence advances, and IoT? In an increasingly connected world, massive amounts of structured and unstructured data open up new opportunities. As a data scientist, you can turn these complex data sets into useful information to solve global challenges. Across private and public sectors, from fraud detection to cancer research to national intelligence, you know the answers are in the data.
We have an opportunity for you to use your analytical skills to improve the DoD and federal agencies. You'll work closely with your customer to understand their questions and needs, then dig into their data-rich environment to find the pieces of their information puzzle. You'll develop algorithms, write scripts, build predictive analytics, use automation, and apply machine learning to turn disparate data points into objective answers to help our nation's services and leaders make data-driven decisions. You'll provide your customer with a deep understanding of their data, what it all means, and how they can use it. Join us as we use data science for good in the DoD and federal agencies.
Empower change with us.
Build Your Career
At Booz Allen, we know the power of data science and machine intelligence, and we're dedicated to helping you grow as a data scientist. When you join Booz Allen, you can expect:
  • access to online and onsite training in data analysis and presentation methodologies, and tools like Hortonworks, Docker, Tableau, Splunk, and other open source and emerging tools
  • a chance to change the world with the Data Science Bowl, the world's premier data science for social good competition
  • participation in partnerships with data science leaders, like our partnership with NVIDIA to deliver Deep Learning Institute (DLI) training to the federal government
You'll have access to a wealth of training resources through our Analytics University, an online learning portal specifically geared towards data science and analytics skills, where you can access more than 5,000 functional and technical courses, certifications, and books. Build your technical skills through hands-on training on the latest tools and state-of-the-art tech from our in-house experts. Pursuing certifications? Take advantage of our tuition assistance, on-site bootcamps, certification training, academic programs, vendor relationships, and a network of professionals who can give you helpful tips. We'll help you develop the career you want as you chart your own course for success.
You Have
  • Experience with one or more statistical analytical programming languages, including Python or R
  • Experience with source control and dependency management software, including Git or Maven
  • Experience with using relational databases, including MySQL
  • Experience with identifying analytic insight in data, developing visualizations, and presenting findings to stakeholders
  • Knowledge of object-oriented programming, including Java and C++
  • Knowledge of various machine learning algorithms and their designs, capabilities, and limitations
  • Knowledge of statistical analysis techniques
  • Ability to build complex extraction, transformation, and loading (ETL) pipelines to clean and fuse data together
  • Ability to obtain a security clearance
  • BA or BS degree
Nice If You Have
  • Experience with designing and implementing custom machine learning algorithms
  • Experience with graph algorithms and semantic Web
  • Experience with designing and setting up relational databases
  • Experience with Big Data computing environments, including Hadoop
  • Experience with Navy mission systems
  • MA degree in Mathematics, CS, or a related quantitative field
Clearance
Applicants selected will be subject to a security investigation and may need to meet eligibility requirements for access to classified information.
We're an EOE that empowers our people, no matter their race, color, religion, sex, gender identity, sexual orientation, national origin, disability, or veteran status, to fearlessly drive change.
CJ1, GD13, MPPC, SIG2017
HelloFresh US
  • New York, NY

HelloFresh is hiring a Data Scientist to join our Supply Chain Analytics Team! In this exciting role, you will develop cutting edge insights using a wealth of data about our suppliers, ingredients, operations, and customers to improve the customer experience, drive operational efficiencies and build new supply chain capabilities. To succeed in this role, you’ll need to have a genuine interest in using data and analytic techniques to solve real business challenges, and a keen interest to make a big impact on a fast-growing organization.


You will...



  • Own the development and deployment of quantitative models to make routine and strategic operational decisions to plan the fulfillment of orders and identify the supply chain capabilities we need to build to continue succeeding in the business

  • Solve complex optimization problems with linear programming techniques

  • Collaborate across operational functions (e.g. supply chain planning, logistics, procurement, production, etc) to identify and prioritize projects

  • Communicate results and recommendations to stakeholders in a business-oriented manner, with clear guidelines that can be implemented across functions in the supply chain

  • Work with complex datasets across various platforms to perform descriptive, prescriptive, predictive, and exploratory analyses
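Since the role centers on linear programming, the kind of fulfillment problem described above can be sketched with `scipy.optimize.linprog`. This is an illustrative toy only; the two-supplier costs, capacities, and demand figure are invented, not taken from the posting.

```python
from scipy.optimize import linprog

# Toy fulfillment problem (invented numbers): two suppliers with per-unit
# costs 2 and 3 and capacity 8 each must jointly fulfill demand for 10 units
# at minimum total cost.
c = [2, 3]                  # objective: minimize 2*x1 + 3*x2
A_ub = [[-1, -1]]           # demand x1 + x2 >= 10, rewritten as -x1 - x2 <= -10
b_ub = [-10]
bounds = [(0, 8), (0, 8)]   # per-supplier capacity limits

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
# Optimal plan: take all 8 units from the cheaper supplier, 2 from the other.
```

Solvers like CPLEX or AMPL handle the same formulation at industrial scale; the modeling step (objective, constraints, bounds) is identical.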


At a minimum, you have...



  • Advanced degree in Statistics, Economics, Applied Mathematics, Computer Science, Data Science, Engineering or a related field

  • 2 - 5 years’ experience delivering analytical solutions to complex business problems

  • Knowledge of linear programming optimization techniques (familiarity with software like CPLEX, AMPL, etc is a plus)

  • Fluency in managing and analyzing large data sets with advanced tools such as R and Python

  • Experience extracting and transforming data from structured databases such as MySQL and PostgreSQL


You are...



  • Results-oriented - You love transforming data into meaningful outcomes

  • Gritty - When you encounter obstacles you find solutions, not excuses

  • Intellectually curious – You love to understand why things are the way they are, how things work, and challenge the status quo

  • A team player – You favor team victories over individual success

  • A structured problem solver – You possess strong organizational skills and consistently demonstrate a methodical approach to all your work

  • Agile – You thrive in fast-paced and dynamic environments and are comfortable working autonomously

  • A critical thinker – You use logic to identify opportunities, evaluate alternatives, and synthesize and present critical information to solve complex problems



Our team is diverse, high-performing and international, helping us to create a truly inspiring work environment in which you will thrive!


It is the policy of HelloFresh not to discriminate against any employee or applicant for employment because of race, color, religion, sex, sexual orientation, gender identity, national origin, age, marital status, genetic information, disability or because he or she is a protected veteran.

Computer Staff
  • Fort Worth, TX

We have been retained by our client, located in south Fort Worth, Texas, to deliver a Risk Modeler on a regular full-time basis. We prefer SAS experience, but will interview candidates with R, SPSS, WPS, MATLAB, or similar statistical-package experience if the candidate comes from the financial loan credit risk analysis industry. Enjoy all the resources of a big company with none of the problems that small companies have; this company has doubled in size in three years. We have a keen interest in finding a business-minded statistical modeling candidate with some credit risk experience to build statistical models within the marketing and direct mail areas of financial services and lending. We are seeking a candidate with statistical modeling and data analysis skills who is interested in creating better ways to solve problems in order to increase loan originations, decrease loan defaults, and more. Our client is in the business of finding prospective borrowers and originating, providing, servicing, and processing loans and collecting loan payments. The team works with third-party data vendors, credit reporting agencies, and data service providers on data augmentation, address standardization, fraud detection, decision sciences, and analytics, and this position includes creating statistical models. They support one of the largest, if not the largest, decision-management operations in the US.


We require experience with statistical analysis tools such as SAS, MATLAB, R, WPS, SPSS, or Python if used for statistical analysis. This is a statistical modeling, risk modeling, model building, decision science, data analysis, and statistical analysis role requiring SQL and/or SQL Server experience and the critical thinking skills to solve problems. We prefer candidates with experience in data analysis, SQL queries, joins (left, inner, outer, right), and reporting from data warehouses with tools such as Tableau, Cognos, Looker, or Business Objects. We prefer candidates with financial and loan experience, especially knowledge of loan originations, borrower profiles and demographics, modeling loan defaults, and statistical measures such as the Gini coefficient and the Kolmogorov-Smirnov (K-S) test for credit scoring and default prediction.
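For illustration only (not the client's actual methodology), the two scorecard metrics named above can be computed in a few lines of plain Python. The defaulter/non-defaulter score samples are made up.

```python
def ks_statistic(bad_scores, good_scores):
    """Maximum vertical gap between the two empirical CDFs (the K-S statistic)."""
    thresholds = sorted(set(bad_scores) | set(good_scores))
    def cdf(xs, t):
        return sum(x <= t for x in xs) / len(xs)
    return max(abs(cdf(bad_scores, t) - cdf(good_scores, t)) for t in thresholds)

def gini_coefficient(bad_scores, good_scores):
    """Gini = 2*AUC - 1, where AUC = P(good score > bad score); ties count half."""
    wins = sum((g > b) + 0.5 * (g == b) for g in good_scores for b in bad_scores)
    auc = wins / (len(good_scores) * len(bad_scores))
    return 2 * auc - 1

# Hypothetical credit scores: defaulters ("bads") tend to score lower.
bads = [300, 420, 480, 520]
goods = [500, 560, 610, 700]
```

A higher K-S statistic or Gini coefficient means the scorecard separates defaulters from non-defaulters more cleanly; in practice these would be run against holdout samples from the loan book.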


Above all, critical thinking, statistical modeling, and math/statistics skills are needed to fulfill the tasks of this very interesting and important role, which includes growing your skills within a small risk/modeling team. Take on challenges in the creation and use of statistical models. There is no Hadoop, NoSQL, machine learning, or artificial intelligence in this position; it is not a "big data" role. Your role is to create and use statistical models: build models for direct mail in the financial lending space to reach the right customers with the right profiles, demographics, and credit ratings. Take credit risk, credit analysis, and loan data and build a new model, validate the existing model, or recalibrate or rebuild it completely. The models deliver answers to questions within these areas of financial lending: risk analysis, credit analysis, direct marketing, direct mail, and defaults. Expect logistic regression in SAS or Knowledge Studio, and some light use of Looker as the B.I. tool on top of SQL Server data. Deliver solutions and improvements that help the business be more profitable. Seek answers to questions and solutions to problems. Create models. Dig into the data. Explore data sources and find opportunities to improve the business, working within the boundaries of defaults and loan values to drive better models into place. Use critical thinking to solve problems.
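The core technique here is logistic regression (done in SAS or Knowledge Studio in this role). As a language-neutral sketch of what those tools fit under the hood, here is a tiny from-scratch version using batch gradient descent on an invented one-feature default-flag dataset.

```python
import math

def sigmoid(z):
    # Numerically safe logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

def fit_logistic(xs, ys, lr=0.1, epochs=10000):
    """Fit p(default) = sigmoid(w*x + b) by batch gradient descent on log-loss."""
    w = b = 0.0
    n = len(xs)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for x, y in zip(xs, ys):
            err = sigmoid(w * x + b) - y   # gradient of log-loss w.r.t. the logit
            grad_w += err * x
            grad_b += err
        w -= lr * grad_w / n
        b -= lr * grad_b / n
    return w, b

# Invented data: one risk feature, binary default flag.
xs = [1, 2, 3, 4, 5, 6]
ys = [0, 0, 0, 1, 1, 1]
w, b = fit_logistic(xs, ys)
```

Production scorecards add many features, regularization, and binning/weight-of-evidence transforms, but the fitted object is the same: a weighted sum of borrower attributes pushed through the logistic function to give a default probability.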


Answer questions or solve problems such as:

Which statistical models are needed to answer risk analysis and credit analysis questions?

Which customer profiles have the best demographics or credit risk for loans, so direct mail marketing pieces can be targeted to them?

Why are loan defaults increasing or decreasing? What is impacting the increase or decrease of loan defaults?  



Required Skills

Bachelor's degree in Statistics, Finance, Economics, Management Information Systems, Math, Quantitative Business Analysis, Analytics, or another related math, science, or finance field. Some loan/lending business domain work experience.

Master's degree preferred, but not required.

Critical thinking skills.

Must have SQL skills (any database: SQL Server, MS Access, Oracle, PostgreSQL) and the ability to write queries and joins (inner, left, right, outer). SQL Server is highly preferred.
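As a quick refresher on the join types listed (the table names and rows below are invented), Python's built-in `sqlite3` module is enough to demonstrate the difference between an inner and a left join:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
    CREATE TABLE loans (loan_id INTEGER PRIMARY KEY, borrower_id INTEGER, amount REAL);
    CREATE TABLE defaults (loan_id INTEGER, days_past_due INTEGER);
    INSERT INTO loans VALUES (1, 101, 5000), (2, 102, 7500), (3, 103, 3000);
    INSERT INTO defaults VALUES (2, 90);
""")

# INNER JOIN: only rows present in both tables (i.e., loans that defaulted).
inner = cur.execute("""
    SELECT l.loan_id, l.amount, d.days_past_due
    FROM loans l
    INNER JOIN defaults d ON d.loan_id = l.loan_id
""").fetchall()

# LEFT JOIN: every loan, with NULL (None) where no default record exists.
left = cur.execute("""
    SELECT l.loan_id, d.days_past_due
    FROM loans l
    LEFT JOIN defaults d ON d.loan_id = l.loan_id
    ORDER BY l.loan_id
""").fetchall()
```

A RIGHT JOIN is the mirror image (all rows from the right table), and a full OUTER JOIN keeps unmatched rows from both sides.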

Any statistical analysis systems/packages experience, including statistical modeling experience, and excellent math skills: SAS, MATLAB, R, WPS, SPSS, or Python if used for statistical analysis. Must have significant statistical modeling skills and experience.



Preferred Skills:
Loan Credit Analysis highly preferred.   SAS highly preferred.
Experience with Tableau, Cognos, Business Objects, Looker, or similar data warehouse reporting and slicing-and-dicing tools; creating reports from data warehouse data. SQL Server SSAS, but only to pull reports. Direct marketing, direct mail marketing, and loan/lending to somewhat higher-risk borrowers.



Employment Type:   Regular Full-Time

Salary Range: $85,000 to $130,000 / year

Benefits: health, medical, dental, and vision coverage cost the employee only about $100 per month.
401k with 4% matching after 1 year, bonus structure, paid vacation, paid holidays, and paid sick days.

Relocation assistance can be provided for a very well-qualified candidate. Local candidates are preferred.

Location: Fort Worth, Texas
(area south of downtown Fort Worth, Texas)

Immigration: US citizens and those authorized to work in the US are encouraged to apply. We are unable to sponsor H-1B candidates at this time.

Please apply with your resume (MS Word format preferred) or your LinkedIn profile via the buttons at the bottom of this job posting page:

http://www.computerstaff.com/?jobIdDescription=314  


Please call 817-424-1411 or send a text to 817-601-7238 to inquire or to follow up on your resume application. We recommend you call and leave a message, or at least send a text with your name. Thank you for your attention and efforts.