OnlyDataJobs.com

Riccione Resources
  • Dallas, TX

Sr. Data Engineer Hadoop, Spark, Data Pipelines, Growing Company

One of our clients is looking for a Sr. Data Engineer in the Fort Worth, TX area! Build your data expertise with projects centering on large Data Warehouses and new data models! Think outside the box to solve challenging problems! Thrive in the variety of technologies you will use in this role!

Why should I apply here?

    • Culture built on creativity and respect for engineering expertise
    • Nominated as one of the Best Places to Work in DFW
    • Entrepreneurial environment, growing portfolio and revenue stream
    • One of the fastest growing mid-size tech companies in DFW
    • Executive management with past successes in building firms
    • Leader of its technology niche, setting the standards
    • A robust, fast-paced work environment
    • Great technical challenges for top-notch engineers
    • Potential for career growth, emphasis on work/life balance
    • A remodeled office with a bistro, lounge, and foosball

What will I be doing?

    • Building data expertise and owning data quality for the transfer pipelines that you create to transform and move data to the company's large Data Warehouse
    • Architecting, constructing, and launching new data models that provide intuitive analytics to customers
    • Designing and developing new systems and tools to enable clients to optimize and track advertising campaigns
    • Using your expert skills across a number of platforms and tools such as Ruby, SQL, Linux shell scripting, Git, and Chef
    • Working across multiple teams in high visibility roles and owning the solution end-to-end
    • Providing support for existing production systems
    • Broadly influencing the company's clients and internal analysts

What skills/experiences do I need?

    • B.S. or M.S. degree in Computer Science or a related technical field
    • 5+ years of experience working with Hadoop and Spark
    • 5+ years of experience with Python or Ruby development
    • 5+ years of experience with efficient SQL (Postgres, Vertica, Oracle, etc.)
    • 5+ years of experience building and supporting applications on Linux-based systems
    • Background in engineering Spark data pipelines
    • Understanding of distributed systems

What will make my résumé stand out?

    • Ability to customize an ETL or ELT process
    • Experience building an actual data warehouse schema

Location: Fort Worth, TX

Citizenship: U.S. citizens and those authorized to work in the U.S. are encouraged to apply. This company is currently unable to provide sponsorship (e.g., H-1B).

Salary: $115k - $130k + 401(k) match

---------------------------------------------------


~SW1317~

Gravity IT Resources
  • Miami, FL

Overview of Position:

We are undertaking an ambitious digital transformation across Sales, Service, Marketing, and eCommerce. We are looking for a web data analytics wizard with prior experience in digital data preparation, discovery, and predictive analytics.

The data scientist/web analyst will work with external partners, digital business partners, enterprise analytics, and the technology team to strategically plan and develop datasets, measure web analytics, and execute on predictive and prescriptive use cases. The role demands the ability to (1) learn quickly, (2) work in a fast-paced, team-driven environment, (3) manage multiple efforts simultaneously, (4) use large datasets and models to test the effectiveness of different courses of action, (5) promote data-driven decision making throughout the organization, and (6) define and measure the success of the capabilities we provide the organization.


Primary Duties and Responsibilities

    • Analyze data captured through Google Analytics and develop meaningful, actionable insights on digital behavior.
    • Put together a customer 360 data frame by connecting CRM Sales, Service, and Marketing cloud data with Commerce web behavior data, and wrangle the data into a usable form.
    • Use predictive modelling to increase and optimize customer experiences across online & offline channels.
    • Evaluate customer experience and conversions to provide insights & tactical recommendations for web optimization
    • Execute on digital predictive use cases and collaborate with enterprise analytics team to ensure use of best tools and methodologies.
    • Lead support for enterprise voice of customer feedback analytics.
    • Enhance and maintain digital data library and definitions.

Minimum Qualifications

  • Bachelor's degree in Statistics, Computer Science, Marketing, Engineering, or equivalent
  • 3 years or more of working experience in building predictive models.
  • Experience in Google Analytics or similar web behavior tracking tools is required.
  • Experience in R is a must, with working knowledge of connecting to multiple data sources such as Amazon Redshift, Salesforce, Google Analytics, etc.
  • Working knowledge of machine learning algorithms such as Random Forest, K-means, Apriori, Support Vector Machines, etc.
  • Experience in A/B testing or multivariate testing.
  • Experience in media tracking tags and pixels, UTM, and custom tracking methods.
  • Microsoft Office Excel & PPT (advanced).
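
As a rough illustration of the A/B testing experience called for above, here is a minimal two-proportion z-test sketch for comparing conversion rates between a control and a variant; the sample counts are invented:

```python
# Minimal A/B test sketch: two-proportion z-test on conversion rates,
# assuming independent samples. All counts below are made up.
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for H0: rate_a == rate_b."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant B converts 260/2000 visitors vs. control A at 200/2000
z, p = two_proportion_z(200, 2000, 260, 2000)
print(round(z, 2), round(p, 4))
```

In practice a tool like Google Analytics or a testing platform reports this for you, but the underlying comparison is this simple.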

Preferred Qualifications

  • Master's degree in Statistics or equivalent.
  • Google Analytics 360 experience/certification.
  • SQL workbench, Postgres.
  • Alteryx experience is a plus.
  • Tableau experience is a plus.
  • Experience in HTML, JavaScript.
  • Experience in SAP Analytics Cloud or SAP's desktop predictive tool is a plus

Signify Health
  • Dallas, TX

Position Overview:

Signify Health is looking for a savvy Data Engineer to join our growing team of deep learning specialists. This position would be responsible for evolving and optimizing data and data pipeline architectures, as well as optimizing data flow and collection for cross-functional teams. The Data Engineer will support software developers, database architects, data analysts, and data scientists. The ideal candidate would be self-directed, passionate about optimizing data, and comfortable supporting the data wrangling needs of multiple teams, systems, and products.

If you enjoy providing expert-level IT technical services, including the direction, evaluation, selection, configuration, implementation, and integration of new and existing technologies and tools, while working closely with IT team members, data scientists, and data engineers to build our next generation of AI-driven solutions, we will give you the opportunity to grow personally and professionally in a dynamic environment. Our projects are built on cooperation and teamwork, and you will find yourself working together with other talented, passionate, and dedicated team members, all working towards a shared goal.

Essential Job Responsibilities:

  • Assemble large, complex data sets that meet functional / non-functional business requirements
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing data models for greater scalability, etc.
  • Leverage Azure for extraction, transformation, and loading of data from a wide variety of data sources in support of AI/ML Initiatives
  • Design and implement high performance data pipelines for distributed systems and data analytics for deep learning teams
  • Create tool-chains for analytics and data scientist team members that assist them in building and optimizing AI workflows
  • Work with data and machine learning experts to strive for greater functionality in our data and model life cycle management capabilities
  • Communicate results and ideas to key decision makers in a concise manner
  • Comply with applicable legal requirements, standards, policies, and procedures, including, but not limited to, compliance requirements and HIPAA.


Qualifications:

Education/Licensing Requirements:
  • High school diploma or equivalent.
  • Bachelor's degree in Computer Science, Electrical Engineering, Statistics, Informatics, Information Systems, or another quantitative field, or equivalent work experience.


Experience Requirements:
  • 5+ years of experience in a Data Engineer role.
  • Experience using the following software/tools preferred:
    • Experience with big data tools: Hadoop, Spark, Kafka, etc.
    • Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
    • Experience with AWS or Azure cloud services.
    • Experience with stream-processing systems: Storm, Spark-Streaming, etc.
    • Experience with object-oriented/object function scripting languages: Python, Java, C#, etc.
  • Strong work ethic, able to work both collaboratively and independently without much direct supervision, and solid problem-solving skills
  • Must have strong communication skills (written and verbal) and good one-on-one interpersonal skills.
  • Advanced working SQL knowledge, experience with relational databases and query authoring (SQL), and working familiarity with a variety of databases
  • A successful history of manipulating, processing and extracting value from large disconnected datasets.
  • Working knowledge of message queuing, stream processing, and highly scalable big data stores.
  • 2 years of experience in data modeling, ETL development, and data warehousing
 

Essential Skills:

  • Fluently speak, read, and write English
  • Fantastic motivator and leader of teams with a demonstrated track record of mentoring and developing staff members
  • Strong point of view on who to hire and why
  • Passion for solving complex system and data challenges and desire to thrive in a constantly innovating and changing environment
  • Excellent interpersonal skills, including teamwork and negotiation
  • Excellent leadership skills
  • Superior analytical abilities, problem solving skills, technical judgment, risk assessment abilities and negotiation skills
  • Proven ability to prioritize and multi-task
  • Advanced skills in MS Office

Essential Values:

  • In Leadership: Do what's right, even if it's tough
  • In Collaboration: Leverage our collective genius, be a team
  • In Transparency: Be real
  • In Accountability: Recognize that if it is to be, it's up to me
  • In Passion: Show commitment in heart and mind
  • In Advocacy: Earn trust and business
  • In Quality: Ensure what we do, we do well

Working Conditions:
  • Fast-paced environment
  • Requires working at a desk and use of a telephone and computer
  • Normal sight and hearing ability
  • Use office equipment and machinery effectively
  • Ability to ambulate to various parts of the building
  • Ability to bend, stoop
  • Work effectively with frequent interruptions
  • May require occasional overtime to meet project deadlines
  • Lifting requirements of

Visa
  • Austin, TX
Company Description
Common Purpose, Uncommon Opportunity. Everyone at Visa works with one goal in mind: making sure that Visa is the best way to pay and be paid, for everyone everywhere. This is our global vision and the common purpose that unites the entire Visa team. As a global payments technology company, tech is at the heart of what we do: our VisaNet network processes over 13,000 transactions per second for people and businesses around the world, enabling them to use digital currency instead of cash and checks. We are also global advocates for financial inclusion, working with partners around the world to help those who lack access to financial services join the global economy. Visa's sponsorships, including the Olympics and FIFA World Cup, celebrate teamwork, diversity, and excellence throughout the world. If you have a passion to make a difference in the lives of people around the world, Visa offers an uncommon opportunity to build a strong, thriving career. Visa is fueled by our team of talented employees who continuously raise the bar on delivering the convenience and security of digital currency to people all over the world. Join our team and find out how Visa is everywhere you want to be.
Job Description
The ideal candidate will be responsible for the following:
  • Perform Hadoop Administration on Production Hadoop clusters
  • Perform Tuning and Increase Operational efficiency on a continuous basis
  • Monitor the health of the platforms, generate performance reports, and provide continuous improvements
  • Work closely with development, engineering, and operations teams on key deliverables, ensuring production scalability and stability
  • Develop and enhance platform best practices
  • Ensure the Hadoop platform can effectively meet performance & SLA requirements
  • Responsible for support of Hadoop Production environment which includes Hive, YARN, Spark, Impala, Kafka, SOLR, Oozie, Sentry, Encryption, Hbase, etc.
  • Perform optimization capacity planning of a large multi-tenant cluster
Qualifications
  • Minimum 3 years of work experience in maintaining and optimizing Hadoop clusters, resolving issues, and supporting business users and batch workloads
  • Experience configuring and setting up Hadoop clusters, and providing support for aggregation, lookup, and fact table creation criteria
  • MapReduce tuning, DataNode and NameNode recovery, etc.
  • Experience in Linux/Unix OS services, administration, and shell/awk scripting
  • Experience building scalable Hadoop applications
  • Experience in Core Java and Hadoop (MapReduce, Hive, Pig, HDFS, HCatalog, ZooKeeper, and Oozie)
  • Hands-on experience in SQL (Oracle) and NoSQL databases (HBase/Cassandra/MongoDB)
  • Excellent oral and written communication and presentation skills; analytical and problem-solving skills
  • Self-driven, Ability to work independently and as part of a team with proven track record developing and launching products at scale
  • Minimum of a four-year technical degree required
  • Experience on Cloudera distribution preferred
  • Hands-on Experience as a Linux Sys Admin is a plus
  • Knowledge on Spark and Kafka is a plus.
Additional Information
All your information will be kept confidential according to EEO guidelines.
Job Number: REF15232V

Awair
  • San Francisco, CA
  • Salary: $80k - 180k

Us?


 We are looking for an experienced backend software engineer to join a stellar team working on our software platform that brings health and safety into the world of the smart home and real estate. You will be a key contributor to our growth in launching new SaaS products as well as scaling our software infrastructure to meet the growing demands for our monitoring products and control solutions in the market.


Working directly with our Director of Software Engineering and the rest of our software engineering team, you will be collaborating with our frontend, firmware and hardware engineers as well as designers to build the best indoor environment management platform there is!


Based in our office in SoMa, San Francisco, we value software craftspeople and foster a friendly collaborative atmosphere.


You?


You love system architecture. You love getting things done. You’re super smart and picky about finding the right role. You’re experienced, but you also welcome the toughest challenges and like to learn new things.


Responsibilities



  • Build robust and scalable software in Scala

  • Design services and system architecture with other team members

  • Help improve the code quality through writing unit tests, automation and performing code reviews

  • Participate in brainstorming sessions and contribute ideas to our technology, algorithms and products

  • Work with the product teams to understand end-user requirements, formulate use cases, and then translate that into a pragmatic and effective technical solution

  • Dive into difficult problems and successfully deliver results on schedule


Requirements



  • 2+ years of recent hands-on coding and software design (preferably in Scala)

  • Extensive experience with building production-grade applications and services (preferably in Scala)

  • Must have cloud experience in production (Google Cloud experience preferred)

  • Good understanding of Kubernetes and Docker, and experience using them in production

Awair
  • San Francisco, CA
  • Salary: $90k - 160k

Us?


 We are looking for an experienced backend software engineer to join a stellar team working on our software platform that brings health and safety into the world of the smart home and real estate. You will be a key contributor to our growth in launching new SaaS products as well as scaling our software infrastructure to meet the growing demands for our monitoring products and control solutions in the market.


Working directly with our Director of Software Engineering and the rest of our software engineering team, you will be collaborating with our frontend, firmware and hardware engineers as well as designers to build the best indoor environment management platform there is!


Based in our office in SoMa, San Francisco, we value software craftspeople and foster a friendly collaborative atmosphere.


You?


You love system architecture. You love getting things done. You’re super smart and picky about finding the right role. You’re experienced, but you also welcome the toughest challenges and like to learn new things.


Responsibilities



  • Build robust and scalable software in Scala

  • Design services and system architecture with other team members

  • Help improve the code quality through writing unit tests, automation and performing code reviews

  • Participate in brainstorming sessions and contribute ideas to our technology, algorithms and products

  • Work with the product teams to understand end-user requirements, formulate use cases, and then translate that into a pragmatic and effective technical solution

  • Dive into difficult problems and successfully deliver results on schedule


Requirements



  • 2+ years of recent hands-on coding and software design (preferably in Scala)

  • Extensive experience with building production-grade applications and services (preferably in Scala)

  • Must have cloud experience in production (Google Cloud experience preferred)

  • Good understanding of Kubernetes and Docker, and experience using them in production

Hulu
  • Santa Monica, CA

WHAT YOU’LL DO



  • Build robust and scalable micro-services

  • End to end ownership of backend services: Ideate, review design, build, code-review, test, load-test, launch, monitor performance

  • Identify opportunities to optimize ad delivery algorithm – measure and monitor ad-break utilization for ad count and ad duration.

  • Work with product team to translate requirements into well-defined technical implementation

  • Define technical and operational KPIs to measure ad delivery health

  • Build Functional and Qualitative Test frameworks for ad server

  • Challenge our team and software to be even better


WHAT TO BRING



  • BS or MS in Computer Science/Engineering

  • 7+ years of relevant software engineering experience

  • Strong analytical skills

  • Strong programming (Java/C#/C++ or other related programming languages) and scripting skills

  • Great communication, collaboration skills and a strong teamwork ethic

  • Strive for excellence


NICE-TO-HAVES



  • Experience with non-relational database technologies (MongoDB, Cassandra, DynamoDB)

  • Experience with Redis and/or MemCache

  • Experience with Apache Kafka and/or Kinesis

  • AWS

  • Big Data technologies and data warehouses – Spark, Hadoop, Redshift

HelloFresh US
  • New York, NY

HelloFresh is hiring a Data Scientist to join our Supply Chain Analytics Team! In this exciting role, you will develop cutting edge insights using a wealth of data about our suppliers, ingredients, operations, and customers to improve the customer experience, drive operational efficiencies and build new supply chain capabilities. To succeed in this role, you’ll need to have a genuine interest in using data and analytic techniques to solve real business challenges, and a keen interest to make a big impact on a fast-growing organization.


You will...



  • Own the development and deployment of quantitative models to make routine and strategic operational decisions to plan the fulfillment of orders and identify the supply chain capabilities we need to build to continue succeeding in the business

  • Solve complex optimization problems with linear programming techniques

  • Collaborate across operational functions (e.g. supply chain planning, logistics, procurement, production, etc) to identify and prioritize projects

  • Communicate results and recommendations to stakeholders in a business oriented manner with clear guidelines which can be implemented across functions in the supply chain

  • Work with complex datasets across various platforms to perform descriptive, prescriptive, predictive, and exploratory analyses
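
The linear programming work described above can be illustrated with a toy order-fulfillment problem; this is a minimal sketch using SciPy's `linprog`, and every capacity, demand, and cost figure is invented:

```python
# Toy instance of the kind of LP used in fulfillment planning: ship orders
# from two fulfillment centers (FCs) to two regions at minimum cost.
# All numbers are illustrative; requires SciPy.
from scipy.optimize import linprog

# Decision variables x = [FC1->R1, FC1->R2, FC2->R1, FC2->R2] (units shipped)
cost = [4, 6, 5, 3]                    # cost per unit on each lane

# Supply constraints (<=): each FC ships at most its capacity
A_ub = [[1, 1, 0, 0],                  # FC1 capacity
        [0, 0, 1, 1]]                  # FC2 capacity
b_ub = [80, 70]

# Demand constraints (==): each region receives exactly its demand
A_eq = [[1, 0, 1, 0],                  # region 1 demand
        [0, 1, 0, 1]]                  # region 2 demand
b_eq = [60, 50]

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4, method="highs")
print(res.x, res.fun)
```

Solvers like CPLEX or AMPL handle far larger instances, but the formulation (objective, supply, and demand constraints) is the same shape.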


At a minimum, you have...



  • Advanced degree in Statistics, Economics, Applied Mathematics, Computer Science, Data Science, Engineering or a related field

  • 2 - 5 years’ experience delivering analytical solutions to complex business problems

  • Knowledge of linear programming optimization techniques (familiarity with software like CPLEX, AMPL, etc is a plus)

  • Fluency in managing and analyzing large data sets with advanced tools such as R, Python, etc.

  • Experience extracting and transforming data from structured databases such as MySQL, PostgreSQL, etc.


You are...



  • Results-oriented - You love transforming data into meaningful outcomes

  • Gritty - When you encounter obstacles you find solutions, not excuses

  • Intellectually curious – You love to understand why things are the way they are, how things work, and challenge the status quo

  • A team player – You favor team victories over individual success

  • A structured problem solver – You possess strong organizational skills and consistently demonstrate a methodical approach to all your work

  • Agile – You thrive in fast-paced and dynamic environments and are comfortable working autonomously

  • A critical thinker – You use logic to identify opportunities, evaluate alternatives, and synthesize and present critical information to solve complex problems



Our team is diverse, high-performing and international, helping us to create a truly inspiring work environment in which you will thrive!


It is the policy of HelloFresh not to discriminate against any employee or applicant for employment because of race, color, religion, sex, sexual orientation, gender identity, national origin, age, marital status, genetic information, disability or because he or she is a protected veteran.

Zalando SE
  • Berlin, Germany

ABOUT THE TEAM


Department: Digital Experience


Reports to: Engineering Lead, Fundamentals


Team Size: <10


Recruiter Name, Email: cristiana.martins@zalando.de



As a Full Stack Engineer in the Media and Translation team you will be responsible for the development and operation of the Zalando Media Services and Translation tooling. You will be responsible for developing and testing new features, as well as optimizing and monitoring the existing platform. In addition, you will work closely with System Architects and Designers to prepare migration and optimization plans.



WHERE YOUR EXPERTISE IS NEEDED



  • Create highly scalable solutions and own the entire development cycle - from architecture design to testing, implementation and maintenance

  • Migration of existing services towards a centralized solution

  • Own your code and decide on the technologies and tools to deliver

  • Operate large-scale applications on cloud (AWS or Kubernetes), based on a microservices architecture



WHAT WE'RE LOOKING FOR



  • Deep knowledge of Frontend and Backend development (knowledge in GoLang is a plus)

  • Pragmatic and curious, find elegant solutions to solve a given customer problem

  • Experience in software development, involving designing and developing large-scale, distributed software applications, tools, systems and services

  • Knowledge of microservices and cloud architectures as well as experience with cloud platforms, preferably Kubernetes

  • Knowledge in architecture / design methods and patterns, data and API specifications, quality assurance and testing methods   


PERKS AT WORK



  • Culture of trust, empowerment and constructive feedback, open source commitment, meetups, game nights, 70+ internal technical and fun guilds, knowledge sharing through tech talks, internal tech academy and blogs, product demos, parties & events

  • Competitive salary, employee share shop, 40% Zalando shopping discount, discounts from external partners, centrally located offices, public transport discounts, municipality services, great IT equipment, flexible working times, additional holidays and volunteering time off, free beverages and fruits, diverse sports and health offerings

  • Extensive onboarding, mentoring and personal development opportunities and an international team of experts

  • Relocation assistance for internationals, PME family service and parent & child rooms* (*available in select locations)


We celebrate diversity and are committed to building teams that represent a variety of backgrounds, perspectives and skills. All employment is decided on the basis of qualifications, merit and business need.


ABOUT ZALANDO


Zalando is Europe’s leading online platform for fashion, connecting customers, brands and partners across 17 markets. We drive digital solutions for fashion, logistics, advertising and research, bringing head-to-toe fashion to more than 23 million active customers through diverse skill-sets, interests and languages our teams choose to use.



The Fundamentals team plays a central role in shaping the future of the frontend of all our digital experiences at Zalando. We work with other teams in Digital Experience, providing them with the infrastructure and frameworks to build a first-class, highly engaging, and personalised shopping experience throughout all our digital premises.


Please note that all the applications must be completed using the online form - we do not accept applications via email.

Computer Staff
  • Fort Worth, TX

We have been retained by our client, located in Fort Worth, Texas (south Fort Worth area), to deliver a Risk Modeler on a regular full-time basis. We prefer SAS experience, but are interviewing candidates with R, SPSS, WPS, MATLAB, or similar statistical package experience if the candidate has experience in the financial loan credit risk analysis industry. Enjoy all the resources of a big company, with none of the problems that small companies have. This company has doubled in size in 3 years. We have a keen interest in finding a business-minded statistical modeling candidate with some credit risk experience to build statistical models within the marketing and direct mail areas of financial services, lending, and loans. We are seeking a candidate with statistical modeling and data analysis skills, interested in creating better ways to solve problems in order to increase loan originations, decrease loan defaults, and more. Our client is in business to find prospective borrowers, originate loans, provide loans, service loans, process loans, and collect loan payments. The team works with third-party data vendors, credit reporting agencies, and data service providers on data augmentation, address standardization, fraud detection, decision sciences, and analytics, and this position includes the creation of statistical models. They support one of the largest, if not the largest, decision management profiles in the US.


We require experience with statistical analysis tools such as SAS, MATLAB, R, WPS, SPSS, or Python for statistical analysis. This is a statistical modeling, risk modeling, model building, decision science, data analysis, and statistical analysis role requiring SQL and/or SQL Server experience and critical thinking skills to solve problems. We prefer candidates with experience in data analysis, SQL queries, joins (left, inner, outer, right), and reporting from data warehouses with tools such as Tableau, Cognos, Looker, and Business Objects. We prefer candidates with financial and loan experience, especially knowledge of loan originations, borrower profiles or demographics, modeling loan defaults, and statistical analysis, e.g., Gini coefficients and the K-S (Kolmogorov-Smirnov) test for credit scoring and default prediction and modeling.


However, critical thinking, statistical modeling, and math/statistics skills are what is primarily needed to fulfill the tasks of this very interesting and important role, which includes growing your skills within this small risk/modeling team. Take on challenges in the creation and use of statistical models. There is no use for Hadoop or any NoSQL databases in this position; this is not a "big data" role, and no machine learning or artificial intelligence is needed. Your role is to create and use statistical models. Create statistical models for direct mail in the financial lending space to reach the right customers with the right profiles, demographics, credit ratings, etc. Take credit risk, credit analysis, and loan data and build a new model, or validate, recalibrate, or completely rebuild an existing one. The models are focused on delivering answers to questions or solutions to problems within these areas of financial loan lending: risk analysis, credit analysis, direct marketing, direct mail, and defaults. Expect logistic regression in SAS or Knowledge Studio, and some light use of Looker as the B.I. tool on top of SQL Server data. Deliver solutions or ways for this business to make improvements in these areas and help the business be more profitable. Seek answers to questions and solutions to problems. Create models. Dig into the data. Explore and find opportunities to improve the business. You are expected to work within the boundaries of defaults and loan values, and help drive the business with ideas and data sources to get better models in place. Use critical thinking to solve problems.
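
As a rough illustration of the model-evaluation metrics this posting names (the K-S statistic and Gini coefficient for credit scoring), here is a self-contained sketch; the score lists are entirely made up:

```python
# Illustrative sketch: the K-S statistic and Gini coefficient used to judge
# how well a credit score separates defaulters from non-defaulters.
# Higher is better for both. All scores below are invented.
def ks_statistic(bad_scores, good_scores):
    """Max vertical gap between the empirical CDFs of the two groups."""
    cuts = sorted(set(bad_scores) | set(good_scores))
    ks = 0.0
    for c in cuts:
        cdf_bad = sum(s <= c for s in bad_scores) / len(bad_scores)
        cdf_good = sum(s <= c for s in good_scores) / len(good_scores)
        ks = max(ks, abs(cdf_bad - cdf_good))
    return ks

def gini(bad_scores, good_scores):
    """Gini = 2*AUC - 1, with AUC estimated by pairwise comparisons."""
    wins = sum((g > b) + 0.5 * (g == b)
               for g in good_scores for b in bad_scores)
    auc = wins / (len(good_scores) * len(bad_scores))
    return 2 * auc - 1

bad = [520, 540, 560, 600, 610]    # scores of borrowers who defaulted
good = [580, 620, 650, 680, 700]   # scores of borrowers who repaid
print(ks_statistic(bad, good), gini(bad, good))
```

In SAS the same numbers typically come out of PROC LOGISTIC fit statistics; the point is only what the metrics measure.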


Answer questions or solve problems such as:

What are the statistical models needed to produce the answers to solve risk analysis and credit analysis problems?

Which customer profiles have the best demographics or credit risk for loans, and should receive direct mail items as direct marketing pieces?

Why are loan defaults increasing or decreasing? What is impacting the increase or decrease of loan defaults?  



Required Skills

Bachelor's degree in Statistics, Finance, Economics, Management Information Systems, Math, Quantitative Business Analysis, Analytics, or any other related math, science, or finance degree. Some loan/lending business domain work experience.

Master's degree preferred, but not required.

Critical thinking skills.

Must have SQL skills (any database: SQL Server, MS Access, Oracle, PostgreSQL) and the ability to write queries and joins (inner, left, right, outer). SQL Server is highly preferred.
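
As a minimal illustration of the join types listed above, here is a self-contained sketch using SQLite; the tables and values are invented:

```python
# Minimal sketch of inner vs. left joins on invented loan data,
# using SQLite so the example is self-contained.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE loans (loan_id INTEGER, borrower_id INTEGER, amount REAL);
    CREATE TABLE defaults (loan_id INTEGER, days_past_due INTEGER);
    INSERT INTO loans VALUES (1, 101, 5000), (2, 102, 7500), (3, 103, 3000);
    INSERT INTO defaults VALUES (2, 90);
""")

# INNER JOIN: only loans that actually defaulted
inner = conn.execute("""
    SELECT l.loan_id, d.days_past_due
    FROM loans l JOIN defaults d ON l.loan_id = d.loan_id
    ORDER BY l.loan_id
""").fetchall()

# LEFT JOIN: every loan, with NULL where there is no default record
left = conn.execute("""
    SELECT l.loan_id, d.days_past_due
    FROM loans l LEFT JOIN defaults d ON l.loan_id = d.loan_id
    ORDER BY l.loan_id
""").fetchall()

print(inner)   # the one defaulted loan
print(left)    # all three loans, non-defaulters with NULL
```

The same queries run unchanged (modulo dialect details) on SQL Server, Oracle, or PostgreSQL.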

Any statistical analysis systems/packages experience, including statistical modeling experience, and excellent math skills: SAS, MATLAB, R, WPS, SPSS, or Python if used for statistical analysis. Must have significant statistical modeling skills and experience.



Preferred Skills:
Loan credit analysis highly preferred. SAS highly preferred.
Experience with Tableau, Cognos, Business Objects, Looker, or similar data warehouse reporting tools; creating reports from data warehouse data. SQL Server SSAS, but only to pull reports. Direct marketing, direct mail marketing, and loan/lending to somewhat higher-risk borrowers.



Employment Type:   Regular Full-Time

Salary Range: $85,000 to $130,000 / year

Benefits: health, medical, dental, and vision coverage that costs the employee only about $100 per month.
401k 4% matching after 1 year, Bonus structure, paid vacation, paid holidays, paid sick days.

Relocation assistance can be provided for a very well qualified candidate. Local candidates are preferred.

Location: Fort Worth, Texas
(area south of downtown Fort Worth, Texas)

Immigration: US citizens and those authorized to work in the US are encouraged to apply. We are unable to sponsor H-1B candidates at this time.

Please apply with your resume (MS Word format preferred) or with your LinkedIn profile via the buttons at the bottom of this job posting page:

http://www.computerstaff.com/?jobIdDescription=314  


Please call 817-424-1411 or send a text to 817-601-7238 to inquire or to follow up on your application. We recommend you call to leave a message, or at least send a text with your name. Thank you for your attention and efforts.

Georgia-Pacific
  • Atlanta, GA

GP is moving into the second full year of operation of our Collaboration and Support Center (CSC) and is looking for a leader at the CSC, located in midtown Atlanta in the Tech Square Labs building. We are seeking an experienced and motivated director with a technical background and operating knowledge to lead the ongoing operation of a centralized support center for GP operating locations. The CSC is a location where GP employees, suppliers, and subject matter experts are co-located to focus on improving our manufacturing operations through remote monitoring, problem diagnosis, and optimization.

The Role: 

The CSC team will provide a platform to bring remote monitoring, diagnosis and optimization to operating facilities across GP. Internal and external resources located at the CSC will focus on early recognition of problems and scalable problem solving across our manufacturing sites.

The CSC team will use the best available technology and the initial focus will be primarily on opportunities in the areas of Asset Health, Process Safety, and Process Optimization.

The primary responsibilities of the Collaboration and Support Center Leader will be to: 

    • Lead a group of professionals (30-40 initially) to deliver value by supporting operating locations through use cases that, when successful, move into ongoing operational support functions
    • Provide coaching, guidance, and support to the cross-functional CSC team to ensure strong collaboration internally and with external locations
    • Carry and bring to life the vision for the CSC inside the team and into operations
    • Structure the day to day operation of the team, prioritizing the daily focus for the CSC team, interacting with manufacturing sites and subject matter experts, and managing contractors toward measurable contributions
    • Build solid communication with sites by engaging with them to ensure a clear understanding of their needs and how the Tech Square CSC can help meet them
    • Develop a pipeline of talent for the Tech Square location that evolves with the scope and maturity of the CSC operation
    • Manage vendor / supplier partners in the CSC to align objectives and priorities  
     

What You Will Need

Basic Qualifications:

  • Bachelor of Science or technical degree required

  • 5+ years of experience managing a group of professionals

  • 3+ years of experience in an operations environment

  • Basic financial analysis skills for opportunity evaluation and marginal analysis

  • Experience with process tools (PI or similar)
  • Willing and able to travel to sites to build relationships with operations and to understand process - 20%

What Will Put You Ahead?

Preferred Qualifications: 


  • BS Degree in Chemical Engineering, Mechanical Engineering, or Electrical Engineering

  • Experience working with business partners and stakeholders

  • Experience working in or managing an integrated operations center

  • Familiarity with analytic tools, python, R, SAS
  • Understanding of process control fundamentals (DCS, PLC)

  • Understanding of Mechanical reliability fundamentals

PsiKick
  • Ann Arbor, MI

About the Role

PsiKick is looking for a hands-on, experienced data scientist who will help us discover the insights hidden in vast amounts of data collected by PsiKick's revolutionary batteryless sensor networks. These new data streams offer the opportunity for new and improved insights unique to PsiKick. Your primary focus will be cleaning and analyzing the data streams produced by PsiKick's sensors, then defining and implementing algorithms to extract insights from the data for automated industrial monitoring spanning industrial health, safety, and energy efficiency. This will include data cleaning, data organization, statistical analysis, data mining, recommending algorithms or developing new ones, verification and cost/benefit analysis of different approaches, and working with software engineers to implement these in PsiKick's cloud platform and/or sensor network hardware.

The successful candidate will have:

-         5+ years' experience in a data science/analyst or statistician role working with real data sets, preferably from sensor-driven applications.

-         Bachelors or Masters degree in a quantitative discipline (Engineering, Computer Science, Statistics, etc)

-         Proficient at cleaning and managing large, real data sets. Visualizing data. Statistical analysis and probability. Finding opportunities for extracting insights from data. Industrial monitoring experience a plus.

-         Must have strong experience using a variety of data mining tools, building and implementing algorithms, and creating simulations

-         Broad experience working with a diverse set of algorithms for streaming data

-         Passion for finding hidden solutions in large sets of data

-         Proficient in exploratory data analysis (EDA) using data tools such as Python, R, SQL to efficiently clean data and implement algorithms operating on large data sets

-         Ability to draw conclusions from data and present those conclusions in a clear and concise way

-         Identify and evaluate metrics such as robustness, accuracy, and cost-benefit analysis

-         Knowledge of machine learning techniques
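
As a rough illustration of the streaming-data work described above, a single-pass statistic such as Welford's online mean/variance is a common building block for monitoring sensor streams without storing the full history. This is a generic sketch, not PsiKick's actual pipeline; the anomaly threshold `k` is an arbitrary choice:

```python
import math

class RunningStats:
    """Welford's online algorithm: single-pass mean/variance for a sensor stream."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def variance(self):
        # Sample variance; undefined below two observations.
        return self.m2 / (self.n - 1) if self.n > 1 else 0.0

    def is_anomaly(self, x, k=3.0):
        """Flag a reading more than k standard deviations from the running mean."""
        if self.n < 2:
            return False
        return abs(x - self.mean) > k * math.sqrt(self.variance())
```

Because each reading updates the statistics in O(1) time and memory, the same idea scales from a single sensor to thousands of concurrent streams.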

Apporchid Inc
  • Philadelphia, PA

Java Technical Lead

Job description:

Experienced Java/J2EE technical lead with proven expertise in implementing and managing enterprise-scale Hadoop architectures and environments. Set up a highly available App Orchid Java product platform in AWS with industry-standard security frameworks. Collaborates with application developers to support, manage, and enhance tactical roadmaps that support large, highly visible product environment deployments.

Roles and Responsibilities:

  • Work with Solution Architects and Business leaders to understand the architectural roadmaps that support and fulfill business strategy.
  • Lead and design custom solutions on our App Orchid Product Platform
  • Act as a Tech Lead and Engineer mentoring colleagues with less experience
  • Collaborate with a high-performing, forward-focused team and engage with Product Owner(s) and business stakeholders
  • Enable and influence the timely and successful delivery of business data capabilities and/or technology objectives
  • Opportunity to expand your communication, analytical, interpersonal, and organization capabilities
  • Experience working in a fast paced environment driving business outcomes leveraging Agile to its fullest
  • Enhance your entrepreneurial mindset network opportunity and influencing outcomes
  • A supportive environment that fosters a can-do attitude and offers opportunity for growth and advancement based on consistently demonstrated performance.
  • Expertise in system administration and programming skills. Storage, performance tuning and capacity management of Big Data.
  • Good understanding of Hadoop eco system such as HDFS, YARN, Map Reduce, HBase, Spark, and Hive.
  • Experience in setup of SSL and integration with Active Directory.
  • Good exposure to CI/CD
  • Oversee technical deliverables for invest and maintenance projects through the software development life cycle, including validating the completeness of estimates, quality and accuracy of technical designs, build and implementation.
  • Proactively address technical issues and risks that could impact project schedule and/or budget
  • Work closely with stakeholders to design and document automation solutions that align with the business needs and also consistent with the architectural vision.
  • Facilitate continuity between Sourcing Partners, other IT Groups and Enterprise Architecture.
  • Work closely with the architecture team to ensure that the technical solution designs and implementation are consistent with the architectural vision, as well as to drive the business through technical innovation through the use of newly identified and leading technologies.
  • Own and drive adoption of DevOps tools and best practices (including conducting (automated) code reviews, reducing/eliminating technical debt, and delivering vulnerability free code) across the application portfolio.

Qualifications

  • Bachelor's degree or equivalent work experience
  • Eight to ten years (or more) of experience as a Java/J2EE technical lead or senior developer in a large production environment.
  • A deep understanding of Big Data,  Java, Elastic Search, Kibana, Postgresql, TestNG, Gradle
  • Good verbal and written communication skills
  • Demonstrated experience in working on large projects or small teams
  • Working knowledge of Red Hat Linux and Windows operating systems
  • Expert knowledge in Java programming language, SQL and microservices  
  • Good understanding of Cloud technologies, especially AWS stack
  • At least 8 years' experience developing and implementing applications

Desired Skills and Experience

  • Proficient with Java development
  • Ability to quickly learn new technologies and enable/train other analysts
  • Ability to work independently as well as in a team environment on moderate to highly complex issues
  • High technical aptitude and demonstrated progression of technical skills - continuous improvement
  • Ability to automate software/application installations and configurations hosted on Linux servers.
TomTom
  • Amsterdam, Netherlands
At TomTom…

You’ll move the world forward. Every day, we create the most innovative mapping and location technologies to shape tomorrow’s mobility for the better.


We are proud to be one team of more than 5,000 unique, curious, passionate problem-solvers spread across the world. We bring out the best in each other. And together, we help the automotive industry, businesses, developers, drivers, citizens, and cities move towards a safe, autonomous world that is free of congestion and emissions.

What you’ll do



  • Develop distributed systems that are secure, scalable and highly available

  • Write Akka actors, actors everywhere! and join the “scale the system 10x” challenge

  • Develop automated tests, no QAs in the team

  • Treat infrastructure as code, think AWS CloudFormation

  • Own the production environment. You wrote it, you deploy it.



What you’ll need



  • Knowledge or interest in the following: Scala, Akka, NoSQL (Riak), AWS or Azure.

  • Developing the code and working closely with other departments where needed (think Agile/XP/Scrum!)

  • Helping to plan sprints and being involved in the refinement of backlogs


  • Store and share all your relevant and precious pictures and experience in the cloud





Meet your team
NavCloud is a personal data service used by millions of TomTom and automotive customers. It allows users to seamlessly synchronize their data between their mobile apps and cars (via the dashboard or personal navigation devices). NavCloud uses conflict-free replicated data types (CRDTs) to ensure that synchronized data never conflicts. We use technologies like Riak (a NoSQL key-value database inspired by the Dynamo paper), RabbitMQ, Redis, and Scala/Akka. We are a DevOps team with a flat hierarchy: each member has the right to deploy to production and is responsible for the health of the service. We choose the technologies and design the service architecture ourselves.
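
NavCloud's actual CRDT implementations aren't described here, but the convergence idea can be illustrated with the simplest CRDT, a grow-only counter (G-Counter): each replica increments only its own slot, and merging takes the element-wise maximum, so replicas converge no matter how often or in what order they sync. A minimal Python sketch:

```python
class GCounter:
    """Grow-only counter CRDT: one slot per replica; merge is element-wise max."""
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # replica_id -> count observed from that replica

    def increment(self, n=1):
        # A replica only ever increments its own slot.
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + n

    def value(self):
        # The counter's value is the sum over all known replicas.
        return sum(self.counts.values())

    def merge(self, other):
        # Element-wise max: commutative, associative, and idempotent.
        for rid, c in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), c)
```

Because merge is commutative, associative, and idempotent, repeated or out-of-order synchronization between a phone app and a car can never produce a conflict, which is the property the team relies on.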



Achieve more

We are self-starters who play well with others. Every day, we solve new problems with creativity, meet new people and learn rapidly at our offices around the world. We will invest in your growth and are committed to supporting you. In everything we do, we’re guided by six values: We care, putting our heart into what we do; we build trust (you can count on us); we create – driven to make a difference; we are confident, but don’t boast; we keep it simple, since life is complex enough; and we have fun because life’s too short to be boring. 

After you apply

Our recruitment team will work hard to give you a meaningful experience throughout the process, no matter the outcome. Your application will be screened closely and you can rest assured that all follow-up actions will be thorough, from assessments and interviews through your onboarding.

TomTom is an equal opportunity employer

We celebrate diversity, thrive on each other’s differences and are committed to creating an inclusive environment at our offices around the world. Naturally, we do not discriminate against any employee or job applicant because of race, religion, color, sexual orientation, gender, gender identity or expression, marital status, disability, national origin, genetics, or age.

Ready to move the world forward?
 



Cortical.io
  • Vienna, Austria
  • Salary: €55k - 65k

Cortical.io is a young entrepreneurial company focusing on natural language understanding (NLU). We use our patented technology in combination with sophisticated AI approaches to address problems others have failed to solve. Over the last couple of years, we have built an impressive client portfolio of global Fortune 100 companies. At this point, we are looking to expand our team at our headquarters in Vienna.


The ideal candidates for this position have some experience in Java product development, machine learning, and/or natural language processing (NLP). If you are keen on learning new technologies and want to contribute to software solutions that solve challenging NLP problems, then you should send us your application!


Basic requirements



  • Good working proficiency in written and spoken English

  • European Union citizenship or authorization to work in Austria


What you’ll be working on



  • Applying state-of-the-art NLP and machine-learning techniques to develop semantic solutions for large international businesses

  • Developing our core NLU technology, e.g. our semantic search engine or classification module


What you must have



  • A bachelor’s degree in computer science, AI, machine learning, information retrieval, or NLP

  • Two or more years’ experience as a Java software engineer

  • Basic knowledge of NLP

  • Professional experience in Java product development

  • Practical experience with


    • Integration tools (Maven, Jenkins, Git)

    • Software testing and code reviews

    • Common technologies such as Spring, REST, JSON, NoSQL, Docker, IntelliJ

    • UNIX-style environment


  • Good communication skills to interact with technical and business people at all levels of customers’ organizations


It would be great if you also had



  • A master’s degree in computer science, AI, machine learning, information retrieval, or NLP

  • Professional experience in NLP, machine learning, and/or information retrieval

  • Experience with other programming languages, e.g. Scala, Python, Shell scripting

  • Practical experience with NLP frameworks


What you’ll benefit from



  • A competitive salary

  • 25 vacation days a year, 13 public holidays, and the Austrian national health insurance and pension plan

  • A relaxed, diverse, and friendly work environment in a pleasant office with flexible working arrangements

  • The option of telecommuting occasionally

  • The satisfaction of engineering successful machine-learning solutions where competing technologies have come up short

  • Joining an expanding company that is already working with many big-name clients

Zalando SE
  • Berlin, Germany

ABOUT THE TEAM


Department: Retail Operations, Team Retail Centre


Reports to: Engineering Lead


Team Size: >10



As a Full Stack Engineer in Team Retail Centre, you will build the central component that connects all department components and ensures that Zalando’s store has the right product available at the right time. You will build and own all centralized services for the department, and your systems will be used by 1000+ internal users and 2000+ suppliers every day.



WHERE YOUR EXPERTISE IS NEEDED 



  • Meet business needs while maintaining scalable architecture and keeping dependable consistency of business objects

  • Own the whole development cycle - from architecture design to implementation and testing and as well maintenance of our products

  • Work together with the developers, UX, product-management and other teams to provide the best experience for our customers

  • Collaborate with other technology and business teams to ensure that our solutions are well integrated in the larger context of Zalando.





WHAT WE’RE LOOKING FOR



  • 4+ years of experience in dedicated frontend projects with modern frameworks like React/Preact. Experience in ECMAScript, HTML5, SCSS and modern stack like Babel, Webpack

  • 4+ years of experience in Java (with no fear of Scala or Kotlin) and with relevant frameworks (e.g., Spring/Play)

  • Ability to write well formatted, structured and clean code and comfortably work with backend and DevOps technologies such as NodeJS 8+, Docker, Kubernetes

  • Enthusiasm for direct and frequent interaction with the users and interest in discovering opportunities collaborating closely with UX, Product and other colleagues



WHAT WE OFFER



  • Culture of trust, empowerment and constructive feedback, open source commitment, meetups, 100+ internal guilds, knowledge sharing through tech talks, internal tech academy and blogs, product demos, parties & events

  • Competitive salary, employee share shop, 40% Zalando shopping discount, discounts from external partners, centrally located offices, public transport discounts, municipality services, great IT equipment, flexible working times, additional holidays and volunteering time off, free beverages and fruits, diverse sports and health offerings

  • Extensive onboarding, mentoring and personal development opportunities and access to an international team of experts

  • Relocation assistance for internationals, PME family service and parent & child rooms* (*available only in select locations)


We celebrate diversity and are committed to building teams that represent a variety of backgrounds, perspectives and skills. All employment is decided on the basis of qualifications, merit and business need.


ABOUT ZALANDO


Zalando is Europe’s leading online platform for fashion, connecting customers, brands and partners across 17 markets. We drive digital solutions for fashion, logistics, advertising and research, bringing head-to-toe fashion to more than 23 million active customers through diverse skill-sets, interests and languages our teams choose to use.


At Retail Operations, we provide the technological backbone that drives our core retail business on the supplier-facing side. We build state-of-the-art B2B & enterprise systems that connect Zalando seamlessly with our suppliers - from onboarding product catalogues to creating the best online multimedia content, and from aligning purchase quantities to scheduling deliveries into our warehouses. To our 2000+ suppliers, we offer easy ways to integrate and collaborate with Zalando - through our supplier portal, standard EDI and web-based REST APIs. To our 1000+ internal users we offer highly automated processes that allow them to focus on everything machines cannot do on their own. To do this, we build state-of-the-art systems using a variety of technologies, such as HTML/JS, Java, Scala, Kotlin and Python - all as cloud-based microservices.


For more information about Retail Operations Business Unit, click here.


Please note that all applications must be completed using the online form - we do not accept applications via e-mail.

Elan Partners
  • Dallas, TX

Senior OIPA Configurator - Remote

Remote Direct Hire Position
Unable to sponsor
No third party firms
Local Dallas/Ft Worth candidates preferred


Job Description

Our client is conducting a very large implementation of Oracle's OIPA policy administration system. Those with experience across multiple business work streams are ideal. The OIPA Configurator will work directly with members of the business and technical teams to gather the business and technical requirements.


Requirements:

  • LOMA certification and knowledge of typical insurance business functions, including: underwriting and policy issue, payment processing, month anniversary, anniversary, disbursement, withdrawal, correspondence, non-forfeiture, lapse processing, state reporting, billing, policy servicing, product set-up, health claims processing, and life claims processing
  • Working knowledge of OIPA Versions V8 and V9
  • Knowledge of Oracle SOA and BPM will be an asset
  • Understanding of the OIPA configuration language, or a strong set of traditional programming skills (C, C#, Java etc.)
  • Comprehensive working knowledge of database concepts and a strong understanding of SQL query building and optimization
  • Bachelor's degree in actuarial science, math, statistics, business, finance, computer science, or another related field is preferred


Responsibilities:

  • Implement the business rules specific to insurance and/or annuity products. These business rules can be purely calculation driven or can be more procedure driven.
  • Become a specialist in the field of Life Insurance and Annuity processing for Policy Administration
  • Demonstrate strong analytical and problem solving skills
  • Design complex functional components, formulas and workflow
  • Development of processes for integration with internal and other 3rd party systems
  • Provide quality review of other team members work
  • Design logical queries to facilitate calculations, reporting and complex formulas
  • Identify, recommend and implement standardization and consolidation opportunities as the system is implemented across the Torchmark affiliates
  • Train and mentor other team members
  • Proactively assess short/long term design implications that would impact effective and efficient operations of the overall OIPA systems work streams
  • Ability to self-start, multi-task and manage conflicting priorities
  • Critical and creative thinking skills with a proven ability to look at a problem and think through possible solutions.
  • Ability to translate complex concepts for a variety of business and technical audiences
  • Experience with and understanding of change control methodologies.
  • Strong analytical and structured problem solving skills
  • Excellent written and verbal communication skills with project team and client resources
  • Ability to learn quickly in a fast paced environment
GE Capital
  • Ann Arbor, MI
  • ***Please Note: This Role is in Van Buren, MI (30 minutes drive from Ann Arbor)


Role Summary

Serve as analytics & visualization developer to build innovative solutions to support a broad range of analysis and outcomes. Partner with teams to create wing-to-wing transactional views, trends and anomalies leveraging GE's data lake. Look for new ways to harness the data we have for insights and actionable outcomes. 

 
In This Role, You Will

Essential Responsibilities: 


  • Develop Spotfire reports utilizing advanced data visualization techniques and related SQL.
  • Leverage Treasury Data Lake and data virtualization technologies (Denodo) to deliver new capabilities on tablet and mobile platforms.
  • Work on an agile team using Rally to quickly prototype and iterate on ideas
  • Lead the research and evaluation of emerging technology, industry and market trends to assist in project development and/or operational support activities
  • Technical analysis working with PostgreSQL & AWS native services
  • Partner with business teams to define requirements & user stories
  • Building and implementing analytical models with R and Python


Qualifications/Requirements
  • Bachelors degree from an accredited university or college in Computer Science or Information Systems
  • One or more years' experience in the design and development of data-centric applications leveraging data from enterprise data warehouses.


Eligibility Requirements:

  • Legal authorization to work in the U.S. is required. We will not sponsor individuals for employment visas, now or in the future, for this job


Technical Expertise

Desired Characteristics:
  • 1+ years' experience with BI visualization and/or reporting tools (expert-level knowledge of modern BI platforms like Spotfire, Qlik, Tableau, etc.); a data and reporting guru.
  • Experience with web technologies such as ASP, HTML, and CSS; integration of these with data visualization tools (e.g. extensions) a plus
  • Experience with scripting languages like Java Script, Python etc.
  • Exposure to advanced analytic & data science applications
  • Excellent BI application development skills, as demonstrated by having led, designed and implemented successful web and mobile projects
  • Ability to clearly articulate creative ideas to senior leaders
  • Ability to guide and direct technical team members through the SDLC
  • Ability to hit tight deadlines and work under pressure
  • SAP and/or Oracle ERP systems exposure a plus
  • Passion for learning new technologies and eagerness to collaborate with other creative minds
  • Strong desire for exploring, evaluating and understanding new technologies
Splice Machine
  • Atlanta, GA

Splice Machine, an AI predictive platform startup company, is looking for a Solutions Architect with experience working with complex distributed systems and large data sets using Spark and Hadoop.  Work from anywhere in the US.

Splice Machine's predictive platform solution helps companies turn their Big Data into actionable business decisions. Our predictive platform eliminates the complexity of integrating the multiple compute engines and databases necessary to power next-generation enterprise predictive AI and Machine Learning applications.

Some of our use-cases include:

  • At a leading credit card company, Splice Machine powers a customer service application that returns sub-20ms record lookups on 7 PB of data
  • At a Fortune 50 bank, Splice Machine is replacing a leading RDBMS and data warehouse with one platform in a customer profitability application
  • At an RFID tag company, Splice Machine is replacing a complex architecture for a retail IoT solution
  • At a leading financial service company, Splice Machine powers an enterprise data hub for 10,000 users
  • At a leading healthcare solution provider, Splice Machine powers a predictive application to learn models and use them to save lives in hospitals

Splice Machine's CEO and Co-Founder, Monte Zweben, is a serial entrepreneur in AI, selling his first company, Red Pepper, to PeopleSoft/Oracle for $225M and taking his second company, Blue Martini, through one of the largest IPOs of the early 2000s ($2.9B). He started Splice Machine to disrupt the $30 billion traditional database market with the first open-source dual-engine database and predictive platform to power Big Data, AI, and Machine Learning applications.

Splice Machine has recruited a team of legendary Big Data advisors including, Roger Bamford, Father of Oracle RAC, Michael Franklin, former Director of AMPLab at UC Berkeley, Ken Rudin, Head of Growth and Analytics for Google Search, Andy Pavlo, Assistant Professor of Computer Science at Carnegie Mellon University and Ray Lane, former COO of Oracle, to collaborate with the Splice Machine team as we blaze new trails in Big Data.

Solution Architect

About You

  • You have implemented several large (40-50 node) Hadoop projects and have demonstrated successful outcomes.
  • You take pride in working to understand, quantify and verify the business needs of customers and their specific use cases, translating these needs into big data, DB, or ML capabilities.
  • You are comfortable engaging both business and engineering leadership, team leads and individual contributors to drive successful business outcomes.
  • Your project leadership style emphasizes collaboration and follow-through.
  • You are very technical and are accustomed to working with architects, developers, project managers, and C-level experts to ensure the best implementation practices and use of the product.

About What You'll Work On

  • Build customers' trust by maintaining a deep understanding of our solutions and their business.
  • Speak with customers about Splice Machine's most relevant features/functionality for their specific business needs.
  • Manage all post-sales technical activity, working on a cross-functional team of Splice Machine and customer resources for solution implementation.
  • Ensure that a plan covering deployment, change management, and adoption is in place for each customer and communicated to all contributors.
  • Act as the voice of the customer: provide internal feedback on how Splice Machine can better serve our customers, and work closely with Product and Engineering to identify and track new feature and enhancement requests.
  • Help Sales identify new business opportunities in other departments within the customer's organization.
  • Increase customer retention and renewals by conducting regular check-in calls and quarterly business reviews that drive renewals, upsells, adoption, and customer references.

Requirements

  • Expertise in Cloudera and/or Hortonworks Hadoop solutions
  • 7+ years of experience architecting complex database and big data solutions for enterprise software
  • Experience working in a complex, multi-functional environment
  • Hands-on experience with SQL, Java, and database tuning
  • Experience with scalable, highly available distributed systems
  • B.S./B.A. in Computer Science or equivalent work experience

Our people enjoy access to the best tools available, an open and collaborative work environment, and a supportive culture that inspires them to do their very best. We offer great salaries, generous equity, employee and family health coverage, flexible time off, and an environment that gives you the flexibility to seize moments of inspiration, among other perks.

We encourage you to learn more about working here!