OnlyDataJobs.com

R1 RCM
  • Salt Lake City, UT

Healthcare is at an inflection point. Businesses are quickly evolving, and new technologies are reshaping the healthcare experience. We are R1 - a revenue cycle management company that is passionate about simplifying the patient experience, removing the paperwork hassle and demystifying financial obligations. Our success enables our healthcare clients to focus on what matters most - providing excellent clinical care.

Great people make great companies and we are looking for a great Lead Software Engineer to join our team in Murray, UT. Our approach to building software is disciplined and quality-focused, with an emphasis on creativity, craftsmanship and commitment. We are looking for smart, quality-minded individuals who want to be part of a high-functioning, dynamic team. We believe in treating people fairly and your compensation should reflect that. Bring your passion for software engineering and help us disrupt ourselves as we build the next generation of healthcare revenue cycle management products and platforms. Now is the right time to join R1!

We are seeking a highly experienced Lead Platform Engineer to join our team. The Lead Platform Engineer will be responsible for building and maintaining a real-time, scalable, and resilient platform for product teams and developers. This role will perform and supervise the design, development and implementation of platform services, tools and frameworks. You will work with other software architects, software engineers, quality engineers, and other team members to design and build platform services. You will also provide technical mentorship to software engineers/developers and related groups.


Responsibilities:


  • Design and develop software solutions with an engineering mindset
  • Ensure SOLID principles and standard design patterns are applied to system architectures and implementations across the organization
  • Act as a technical subject matter expert: help fellow engineers, demonstrate technical expertise and engage in solving problems
  • Collaborate with stakeholders to help set and document technical standards
  • Evaluate, understand and recommend new technologies, languages or development practices that would benefit the organization
  • Participate in and/or lead technical design sessions to formulate designs that minimize maintenance, maximize code reuse and minimize testing time

Required Qualifications:


  • 8+ years of experience in building scalable, highly available, distributed solutions and services
  • 4+ years of experience in middleware technologies: Enterprise Service Bus (ESB), Message Queuing (MQ), Routing, Service Orchestration, Integration, Security, API Management, Gateways
  • Significant experience in RESTful API architectures, specifications and implementations
  • Working knowledge of progressive development processes like scrum, XP, Kanban, TDD, BDD and continuous delivery using Jenkins
  • Significant experience working with most of the following technologies/languages: Java, C#, SQL, Python, Ruby, PowerShell, .NET/Core, WebAPI, Web Sockets, Swagger, JSON, REST, GIT
  • Hands-on experience with microservices architecture, Kubernetes and Docker
  • Familiarity with the Software AG webMethods middleware platform is a plus
  • Conceptual understanding of cloud platforms, Big Data and machine learning is a major plus
  • Knowledge of the healthcare revenue cycle, EMRs, practice management systems, FHIR, HL7 and HIPAA is a major plus


Desired Qualifications:


  • Strong sense of ownership and accountability for delivering well designed, high quality enterprise software on schedule
  • Prolific learner, willing to refactor your understanding of emerging patterns, practices and processes as much as you refactor your code
  • Ability to articulate and illustrate software complexities to others (both technical and non-technical audiences)
  • Friendly attitude and available to mentor others, communicating what you know in an encouraging and humble way
  • Continuous Learner


Working in an evolving healthcare setting, we use our shared expertise to deliver innovative solutions.  Our fast-growing team has opportunities to learn and grow through rewarding interactions, collaboration and the freedom to explore professional interests.


Our associates are given valuable opportunities to contribute, to innovate and to create meaningful work that makes an impact in the communities we serve around the world. We also offer a culture of excellence that drives customer success and improves patient care. We believe in giving back to the community and offer a competitive benefits package. To learn more, visit: r1rcm.com

WorldLink US
  • Dallas, TX

Business Analyst

Dallas, TX

Full time, direct hire position

Seeking a bright, motivated individual with a unique, wide range of skills and the ability to process large data sets while communicating findings clearly and concisely.

Responsibilities

  • Analyze data from a myriad of sources and generate valuable insights
  • Interface with our sales team and clients to discuss issues related to data availability and customer targeting
  • Execute marketing list processing for mail, email and integrated multi-channel campaigns
  • Assist in development of tools to optimize and automate internal systems and processes
  • Assist in conceptualization and maintenance of business intelligence tools

Requirements

  • Bachelor's degree in math, economics, statistics or a related quantitative field
  • An ability to deal and thrive with imperfect, mixed, varied and inconsistent data from multiple sources
  • Must possess a rigorous, disciplined analytical approach, as well as dynamic, abstract problem-solving skills (get to the answer via both inspiration and perspiration)
  • Proven ability to work in a fast-paced environment and to meet changing deadlines / priorities on multiple simultaneous projects
  • Extensive experience writing queries for large, complex data sets in SQL (MySQL, PostgreSQL, Oracle, other SQL/RDBMS)
  • Highly proficient with Excel (or an alternate spreadsheet application like OpenOffice Calc) including macros, pivot tables, vlookups, charts and graphs
  • Solid knowledge of statistics and the ability to perform analysis proficiently in R, SAS or SPSS
  • Strong interpersonal skills as a team leader and team player
  • Self-learning attitude, constantly pushing towards new opportunities, approaches, ideas and perspectives
  • Bonus points for experience with high-level dynamic programming languages: Python, Ruby, Perl, Lisp or PHP

  **No visa sponsorship available

Acxiom
  • Austin, TX
As a Senior Hadoop Administrator, you will assist leadership on projects related to Big Data technologies and provide software development support for client research projects. You will analyze the latest Big Data analytics technologies and their innovative applications in both business intelligence analysis and new service offerings, and bring these insights and best practices to Acxiom's Big Data projects. You are able to benchmark systems, analyze system bottlenecks and propose solutions to eliminate them. You will develop a highly scalable and extensible Big Data platform that enables collection, storage, modeling, and analysis of massive data sets from numerous channels. You are also a self-starter, able to continuously evaluate new technologies, innovate and deliver solutions for business-critical applications.


What you will do:


  • Responsible for implementation and ongoing administration of Hadoop infrastructure
  • Provide technical leadership and collaboration with engineering organization, develop key deliverables for Data Platform Strategy - Scalability, optimization, operations, availability, roadmap.
  • Lead the platform architecture and drive it to the next level of effectiveness to support current and future requirements
  • Cluster maintenance as well as creation and removal of nodes using tools like Cloudera Manager Enterprise, etc.
  • Performance tuning of Hadoop clusters and Hadoop MapReduce routines
  • Screen Hadoop cluster job performances and capacity planning
  • Help optimize and integrate new infrastructure via continuous integration methodologies (DevOps, Chef)
  • Manage and review Hadoop log files with the help of log management technologies (ELK)
  • Provide top-level technical help desk support for the application developers
  • Diligently teaming with the infrastructure, network, database, application and business intelligence teams to guarantee high data quality, availability and security
  • Collaborating with application teams to perform Hadoop updates, patches, version upgrades when required
  • Mentor Hadoop engineers and administrators
  • Work with vendor support teams on support tasks


Do you have?


  • Bachelor's degree in related field of study, or equivalent experience
  • 6+ years of Big Data Administration Experience
  • Extensive knowledge and Hands-on Experience of Hadoop based data manipulation/storage technologies like HDFS, MapReduce, Yarn, Spark/Kafka, HBASE, HIVE, Pig, Impala, R and Sentry/Ranger/Knox
  • Experience in capacity planning, cluster designing and deployment, troubleshooting and performance tuning
  • Experience supporting Data Science teams and Analytics teams on complex code deployment, debugging and performance optimization problems
  • Great operational expertise such as excellent troubleshooting skills, understanding of system's capacity, bottlenecks, core resource utilizations (CPU, OS, Storage, and Networks)
  • Experience in Hadoop cluster migrations or upgrades
  • Strong scripting skills in Perl, Python, shell scripting, and/or Ruby on Rails
  • Linux/SAN administration skills and RDBMS/ETL knowledge
  • Good experience with Cloudera, Hortonworks, and/or MapR versions along with monitoring/alerting tools (Nagios, Ganglia, Zenoss, Cloudera Manager)
  • Strong problem solving and critical thinking skills
  • Excellent verbal and written communication skills


What will set you apart:


  • Solid understanding and hands-on experience of Big Data on private/public cloud technologies(AWS/GCP/Azure)
  • DevOps experience (CHEF, Puppet and Ansible)
  • Strong knowledge of JAVA/J2EE and other web technologies

 
Zendesk
  • Dublin, Ireland

Zendesk is going through a period of rapid growth at the moment and we are looking for Software Engineers at all levels :)


We are trying to grow into a billion-dollar business and nearly double our global headcount in the next few years. The Engineering team is the heartbeat of Zendesk and instrumental in helping us achieve our company goals.


We pride ourselves on our culture, so you will work with like-minded people and have lots of fun while doing it!


Our main tech stack includes Ruby, Java and Scala, to name a few. We are using top technologies and want you to add value and help us build our engineering platforms.


From Software Engineer to Director of Product, we are recruiting! If you would like to talk to me about any opportunities, please drop me a message.


A list of available roles is below:


https://www.zendesk.com/jobs/dublin/

Limelight Networks
  • Phoenix, AZ

Job Purpose:

The Sr. Data Services Engineer assists in maintaining the operational aspects of Limelight Networks platforms, provides guidance to the Operations group and acts as an escalation point for advanced troubleshooting of systems issues. The Sr. Data Services Engineer assists in the execution of tactical and strategic operational infrastructure initiatives by building and managing complex computing systems and processes that facilitate the introduction of new products and services while allowing existing services to scale.


Qualifications: Experience and Education (minimums)

  • Bachelor's degree or equivalent experience.
  • 2+ years of experience working with MySQL (or other databases such as MongoDB, Cassandra, Hadoop, etc.) in a large-scale enterprise environment.
  • 2+ years of Linux systems administration experience.
  • 2+ years of version control and shell scripting experience, plus one or more scripting languages including Python, Perl, Ruby and PHP.
  • 2+ years with configuration management systems such as Puppet, Chef or Salt.
  • Experience with MySQL HA/clustering solutions; Corosync, Pacemaker and DRBD preferred.
  • Experience supporting open-source messaging solutions such as RabbitMQ or ActiveMQ preferred.

Knowledge, Skills & Abilities

  • Collaborative in a fast-paced environment while providing exceptional visibility to management and end-to-end ownership of incidents, projects and tasks.
  • Ability to implement and maintain complex datastores.
  • Knowledge of configuration management and release engineering processes and methodologies.
  • Excellent coordination, planning and written and verbal communication skills.
  • Knowledge of the Agile project management methodologies preferred.
  • Knowledge of a NoSQL/Big Data platform; Hadoop, MongoDB or Cassandra preferred.
  • Ability to participate in a 24/7 on call rotation.
  • Ability to travel when necessary.

Essential Functions:

  • Develop and maintain core competencies of the team in accordance with applicable architectures and standards.
  • Participate in capacity management of services and systems.
  • Maintain plans, processes and procedures necessary for the proper deployment and operation of systems and services.
  • Identify gaps in the operation of products and services and drive enhancements.
  • Evaluate release processes and tools to find areas for improvement.
  • Contribute to the release and change management process by collaborating with the developers and other Engineering groups.
  • Participate in development meetings and implement required changes to the operational architecture, standards, processes or procedures and ensure they are in place prior to release (e.g., monitoring, documentation and metrics).
  • Maintain a positive demeanor and a high level of professionalism at all times.
  • Implement proactive monitoring capabilities that ensure minimal disruption to the user community including: early failure detection mechanisms, log monitoring, session tracing and data capture to aid in the troubleshooting process.
  • Implement HA and DR capabilities to support business requirements.
  • Troubleshoot and investigate database related issues.
  • Maintain migration plans and data refresh mechanisms to keep environments current and in sync with production.
  • Implement backup and recovery procedures utilizing various methods to provide flexible data recovery capabilities.
  • Work with management and security team to assist in implementing and enforcing security policies.
  • Create and manage user and security profiles ensuring application security policies and procedures are followed.

VidMob
  • Pittsfield, MA
  • Salary: $65k - 85k

Who We’re Seeking


VidMob’s Ads Integration Engineer is a highly technical position that works with our strategic ad platform partners, integrating their APIs into the VidMob platform. You enjoy digging into complex campaign management and reporting frameworks allowing you to build elegant and scalable integrations. Your experience with ad tech makes you the expert on a team when talking about metrics, dimensions, KPIs, campaigns, squads, and formats.


What You’ll Do


You will engage with some of the world's largest companies to extend their campaign and media performance API offerings through the VidMob platform, building tools and automation to pull and report on data and keeping those integrations up to date. You'll work closely with our data engineers to maximize our data pipelines and write clear documentation so our front-end engineers can quickly build features around each integration.


This position is full time and is based in Pittsfield, MA.


Responsibilities



  • Define and implement API integrations with our strategic partners

  • Work closely with our strategic partners staying up to date on product changes

  • Be an ads/reporting integration technical expert, and have a strategic influence on partners and internal teams at VidMob

  • Support VidMob engineering efforts in other areas as needed


Minimum Qualifications



  • 3+ years of previous experience as a software engineer

  • Ad Tech experience a must with a strong understanding of campaign management tools across the major platforms (Facebook, Google, Snapchat, Twitter, etc)

  • Experience with DSPs a plus

  • Solid software development skills, with experience building software in Java

  • Additional experience in (at least one) Python, PHP, C/C++, Ruby, or Scala

  • Excellent communication skills including experience presenting to technical and business audiences

  • BA/BS in Computer Science or equivalent degree/experience

Riccione Resources
  • Dallas, TX

Sr. Data Engineer - Hadoop, Spark, Data Pipelines, Growing Company

One of our clients is looking for a Sr. Data Engineer in the Fort Worth, TX area! Build your data expertise with projects centering on large Data Warehouses and new data models! Think outside the box to solve challenging problems! Thrive in the variety of technologies you will use in this role!

Why should I apply here?

    • Culture built on creativity and respect for engineering expertise
    • Nominated as one of the Best Places to Work in DFW
    • Entrepreneurial environment, growing portfolio and revenue stream
    • One of the fastest growing mid-size tech companies in DFW
    • Executive management with past successes in building firms
    • Leader of its technology niche, setting the standards
    • A robust, fast-paced work environment
    • Great technical challenges for top-notch engineers
    • Potential for career growth, emphasis on work/life balance
    • A remodeled office with a bistro, lounge, and foosball

What will I be doing?

    • Building data expertise and owning data quality for the transfer pipelines that you create to transform and move data to the company's large Data Warehouse
    • Architecting, constructing, and launching new data models that provide intuitive analytics to customers
    • Designing and developing new systems and tools to enable clients to optimize and track advertising campaigns
    • Using your expert skills across a number of platforms and tools such as Ruby, SQL, Linux shell scripting, Git, and Chef
    • Working across multiple teams in high visibility roles and owning the solution end-to-end
    • Providing support for existing production systems
    • Broadly influencing the company's clients and internal analysts

What skills/experiences do I need?

    • B.S. or M.S. degree in Computer Science or a related technical field
    • 5+ years of experience working with Hadoop and Spark
    • 5+ years of experience with Python or Ruby development
    • 5+ years of experience with efficient SQL (Postgres, Vertica, Oracle, etc.)
    • 5+ years of experience building and supporting applications on Linux-based systems
    • Background in engineering Spark data pipelines
    • Understanding of distributed systems

What will make my résumé stand out?

    • Ability to customize an ETL or ELT
    • Experience building an actual data warehouse schema

Location: Fort Worth, TX

Citizenship: U.S. citizens and those authorized to work in the U.S. are encouraged to apply. This company is currently unable to provide sponsorship (e.g., H1B).

Salary: $115k-$130k + 401(k) match

---------------------------------------------------


~SW1317~

Ultra Tendency
  • Riga, Latvia

Are you a developer who loves to take a look at infrastructure as well? Are you a systems engineer who likes to write code? Ultra Tendency is looking for you!


Your Responsibilities:



  • Support our customers and development teams in transitioning capabilities from development and testing to operations

  • Deploy and extend large-scale server clusters for our clients

  • Fine-tune and optimize our clusters to process millions of records every day 

  • Learn something new every day and enjoy solving complex problems


Job Requirements:



  • You know Linux like the back of your hand

  • You love to automate all the things – SaltStack, Ansible, Terraform and Puppet are your daily business

  • You can write code in Python, Java, Ruby or similar languages.

  • You are driven by high quality standards and attention to detail

  • Understanding of the Hadoop ecosystem and knowledge of Docker is a plus


We offer:



  • Work with our open-source Big Data gurus, such as our Apache HBase committer and Release Manager

  • Work in the open-source community and become a contributor. Learn from open-source enthusiasts you will find nowhere else in Germany!

  • Work in an English-speaking, international environment

  • Work with cutting edge equipment and tools

Hulu
  • Santa Monica, CA

WHAT YOU’LL DO



  • Build elegant systems that are robust and scalable

  • Challenge our team and software to be even better

  • Use a mix of technologies including Scala, Ruby, Python, and AngularJS


WHAT TO BRING



  • BS or MS in Computer Science/Engineering

  • 5+ years of relevant software engineering experience

  • Strong programming (Java/C#/C++ or other related programming languages) and scripting skills

  • Great communication, collaboration skills and a strong teamwork ethic

  • Strive for excellence


NICE-TO-HAVES



  • Experience with both statically typed languages and dynamic languages

  • Experience with relational (Oracle, MySQL) and non-relational database technologies (MongoDB, Cassandra, DynamoDB)

Acxiom
  • Austin, TX
As a Hadoop Administrator, you will assist leadership on projects related to Big Data technologies and provide software development support for client research projects. You will analyze the latest Big Data analytics technologies and their innovative applications in both business intelligence analysis and new service offerings, and bring these insights and best practices to Acxiom's Big Data projects. You must be able to benchmark systems, analyze system bottlenecks and propose solutions to eliminate them. You will develop a highly scalable and extensible Big Data platform that enables collection, storage, modeling, and analysis of massive data sets from numerous channels. You must be a self-starter who continuously evaluates new technologies, innovates and delivers solutions for business-critical applications.


 

What you will do:


  • Responsible for implementation and ongoing administration of Hadoop infrastructure
  • Provide technical leadership and collaboration with engineering organization, develop key deliverables for Data Platform Strategy - Scalability, optimization, operations, availability, roadmap.
  • Own the platform architecture and drive it to the next level of effectiveness to support current and future requirements
  • Cluster maintenance as well as creation and removal of nodes using tools like Cloudera Manager Enterprise, etc.
  • Performance tuning of Hadoop clusters and Hadoop MapReduce routines
  • Screen Hadoop cluster job performances and capacity planning
  • Help optimize and integrate new infrastructure via continuous integration methodologies (DevOps, Chef)
  • Manage and review Hadoop log files with the help of log management technologies (ELK)
  • Provide top-level technical help desk support for the application developers
  • Diligently teaming with the infrastructure, network, database, application and business intelligence teams to guarantee high data quality, availability and security
  • Collaborating with application teams to perform Hadoop updates, patches, version upgrades when required
  • Mentor Hadoop engineers and administrators
  • Work with Vendor support teams on support tasks


Do you have?


  • Bachelor's degree in related field of study, or equivalent experience
  • 3+ years of Big Data Administration experience
  • Extensive knowledge of Hadoop based data manipulation/storage technologies such as HDFS, MapReduce, Yarn, HBASE, HIVE, Pig, Impala and Sentry
  • Experience in capacity planning, cluster designing and deployment, troubleshooting and performance tuning
  • Great operational expertise such as good troubleshooting skills, understanding of system's capacity, bottlenecks, basics of memory, CPU, OS, storage, and networks
  • Experience in Hadoop cluster migrations or upgrades
  • Strong Linux/SAN administration skills and RDBMS/ETL knowledge
  • Good experience with Cloudera, Hortonworks and/or MapR versions along with monitoring/alerting tools (Nagios, Ganglia, Zenoss, Cloudera Manager)
  • Scripting skills in Perl, Python, Shell Scripting, and/or Ruby on Rails
  • Knowledge of JAVA/J2EE and other web technologies
  • Understanding of On-premise and Cloud network architectures
  • DevOps experience is a great plus (CHEF, Puppet and Ansible)
  • Excellent verbal and written communication skills


 

Mercedes-Benz USA
  • Atlanta, GA

Job Overview

Mercedes-Benz USA is recruiting a Big Data Architect, a newly created position within the Information Technology Infrastructure Department. This position is responsible for refining and creating the next step in technology for our organization. In this role you will act as the contact person and agile enabler for all questions regarding new IT infrastructure and services in the context of Big Data solutions.

Responsibilities

  • Leverage sophisticated Big Data technologies into current and future business applications

  • Lead infrastructure projects for the implementation of new Big Data solutions

  • Design and implement modern, scalable data center architectures (on premise, hybrid or cloud) that meet the requirements of our business partners

  • Ensure the architecture is optimized for large dataset acquisition, analysis, storage, cleansing, transformation and reclamation

  • Create the requirements analysis, the platform selection and the design of the technical architecture

  • Develop IT infrastructure roadmaps and implement strategies around data science initiatives

  • Lead the research and evaluation of emerging technologies, industry and market trends to assist in project development and operational support actives

  • Work closely together with the application teams to exceed our business partners' expectations

Qualifications 

Education

Bachelor's degree (accredited school) with emphasis in:

Computer/Information Science

Information Technology

Engineering

Management Information System (MIS)

Must have 5-7 years of experience in the following:

  • Architecture, design, implementation, operation and maintenance of Big Data solutions

  • Hands-on experience with major Big Data technologies and frameworks including Hadoop, MapReduce, Pig, Hive, HBase, Oozie, Mahout, Flume, ZooKeeper, MongoDB, and Cassandra.

  • Experience with Big Data solutions deployed in large cloud computing infrastructures such as AWS, GCE and Azure

  • Strong knowledge of programming and scripting languages such as Java, Linux, PHP, Ruby and Python

  • Big Data query tools such as Pig, Hive and Impala

  • Project Management Skills:

  • Ability to develop plans/projects from conceptualization to implementation

  • Ability to organize workflow and direct tasks as well as document milestones and ROIs and resolve problems

Proven experience with the following:

  • Open source software such as Hadoop and Red Hat

  • Shell scripting

  • Servers, storage, networking, and data archival/backup solutions

  • Industry knowledge and experience in areas such as Software Defined Networking (SDN), IT infrastructure and systems security, and cloud or network systems management

Additional Skills
Focus on problem resolution and troubleshooting
Knowledge of hardware capabilities and software interfaces and applications
Ability to produce quality digital assets/products
 
EEO Statement
Mercedes-Benz USA is committed to fostering an inclusive environment that appreciates and leverages the diversity of our team. We provide equal employment opportunity (EEO) to all qualified applicants and employees without regard to race, color, ethnicity, gender, age, national origin, religion, marital status, veteran status, physical or other disability, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local law.

BigCommerce
  • Austin, TX

BigCommerce is disrupting the e-commerce industry as the SaaS leader for fast-growing, mid-market businesses. We enable our customers to build intuitive and engaging stores to support every stage of their growth.

Be a leader on our Ecosystem Team that powers our billing, partner, & identity platforms. You'll be working with team members to extend our products and integrate with a broad array of external services. BigCommerce offers a heavily collaborative environment, helping you expand your skill set and take ideas from inception to delivery. This role requires balancing: driving our aggressive product roadmap, improving the performance and stability of our system, introducing engineering best practices into the organization, and leading/mentoring other engineers.

What you'll do

  • Use Ruby, Rails, gRPC, JavaScript, RabbitMQ, Docker, Resque, MySQL, Redis, and a slew of other technologies to help power our platform
  • Help design/architect/execute the building of services for the BigCommerce platform.
  • Build highly-available, distributed systems
  • Build integrations with 3rd party SOAP/REST APIs that can span multiple codesets/services, fail gracefully, and be highly extensible
  • Coach team towards code that is performant, fault-tolerant, maintainable, testable, and concise
  • Drive our technical roadmap and direction of our stack
  • Encourage innovation and foster an environment of continuous improvement
  • Collaborate with stakeholders and other teams to promote communication & coordination
  • Support and coach 4-6 members of your team
  • Foster a culture that is open, positive, energized and innovative


Who You Are

  • 5-7+ years of experience in building systems using at least two different languages: Ruby/Rails (required), Scala, Java, PHP, Python, Node, etc. We currently primarily use Ruby, PHP, and Scala
  • 2+ years managing, driving and retaining a high-performance team.
  • Experience building integrations with 3rd party SOAP/REST API providers that can span multiple code sets, fail gracefully and be highly extendable.
  • Knowledge of TDD, BDD, DDD
  • Nice to Have: Experience with OAuth and/or SAML workflows and permissions. 
  • Nice to Have: DevOps experience, GCP experience, and/or Docker or other containerization technologies
  • Desire to work in a collaborative, open environment on an Agile team
  • Highly proactive and results-oriented with excellent critical thinking skills
  • Excited to learn new technologies
  • Experience with ecommerce, distributed queuing systems, and SaaS platforms highly desirable


Diversity & Inclusion at BigCommerce


We have the opportunity to build not only a great business, but a great company, with soul. Our beliefs and commitment to diversity and inclusion are a central part of achieving that.

Our dedication to diversity and inclusion is grounded in two things: a moral belief in the dignity, value, and potential of every individual, and a practical belief that diverse, inclusive teams will create the best outcomes for our customers, partners, employees, and company. We welcome everyone to be a part of our journey


Perks & Benefits


    • An amazing company culture that doesn't just talk values, but lives them
    • Our Think Big Program encourages and rewards employee-led innovation
    • Employees are empowered to go above and beyond their daily duties to act on ideas that help our customers and/or improve the BigCommerce platform
    • Competitive compensation packages and meaningful stock grants for every employee
    • Comprehensive health insurance coverage that starts on day one
    • A free online store to help you live out your entrepreneurial dreams
    • Time off for volunteering and employee-driven charity events

Gecko Robotics
  • Pittsburgh, PA
What is Gecko Portal?


Gecko Portal is a rapidly growing team at Gecko Robotics that is focused on transforming the company’s core product from the ground up by creating industry-leading tools for industrial asset management using the latest in modern Web technologies, machine learning, and advanced data visualization. We aim to leverage the tremendous amount of real-world data that Gecko Robotics has collected to generate predictive, never-before-seen insights and radically transform the way that plant owners and managers manage their industrial assets. The Gecko Portal product involves all levels of the stack, from 3D data visualization tools to dynamic React.js frontend applications powered by a backend that processes millions of data points, serves as a platform for comprehensive asset data management, and generates ML-driven analytics. We are looking for talented engineers who are comfortable with rapid iteration and development to join us in building Web applications at scale. 


What is Backend at Gecko Robotics?


Backend engineers on the Gecko Portal team create the core software services that power all of our internal and customer-facing applications, from data validation and signal processing platforms to customer-facing data visualization and inspection data management applications.  We use a wide range of modern languages and technologies such as Python (Bokeh, Django, Flask, Pandas), Javascript (React, Redux, D3, Kepler), Google Cloud Platform (Cloud Storage, Cloud Functions, BigQuery), Redis, PostgreSQL, and Docker. Some of our technical areas of focus include signal processing, real-time video processing / object detection and classification, computer vision, and machine learning / predictive analytics.


Minimum Requirements:



  • 2+ years of experience building backend software applications

  • Bachelor’s and/or Master’s degree in Computer Science, or equivalent experience

  • Demonstrated ability in writing performant, scalable code

  • Dedication to test-driven development and designing production-ready systems

  • Deep understanding of a backend web development framework (Django, Flask, Node.js, Spring, Ruby, or Laravel)

  • Familiarity with Computer Science fundamentals and ability to apply those fundamentals to code

  • Awareness of best practices for scaling backend architectures and databases


Additional requirements/responsibilities:



  • Design, implement, and verify new features in a continuous integration environment

  • Interact with frontend engineers and data analysts to design robust web application APIs

  • Lead design and code reviews

  • Use critical thinking skills to debug problems and develop solutions to challenging technical problems

  • Interact with other engineers from multiple disciplines in a team environment

  • Develop tests to ensure the integrity and availability of the application

  • Provide and review technical documentation

Tekberry
  • Atlanta, GA

Title: R&D DevOps Specialist
City: Atlanta
State: GA
ZIP: 30308
Job Type: Contract
Hours: 40
Job Code: EB-1473668477

Tekberry is looking for a highly qualified and motivated R&D DevOps Specialist to work on-site with our client, a Fortune-1000 electronics company in Atlanta, GA.


This is a contract position that will see the ideal candidate working alongside industry-leading talent in a world-class environment.

Job Description:

    • Join a fun, hardworking team that is dedicated to building a system that is always reliable and available to our client's customers.
    • Mature and refactor a software release infrastructure to support an Agile development environment.
    • Strong technical consulting depth; able to evaluate and influence out-of-the-box vs. customized solutions as applicable.
    • Eloquent articulator of industry trends, thoughts and your own ideas.
    • Constantly on the lookout for cost-effective solutions.


Qualifications:

    • Bachelor's degree or higher in Computer Science, Computer Engineering or Electrical Engineering. A Master's degree is preferred.
    • Scripting (Bash/Python/Ruby/Perl/PHP).
    • Continuous integration/deployment in AWS and cloud environments using Jenkins.
    • Familiarity with the Atlassian toolchain: Jira, Bitbucket, Git, etc.
    • C/C++ development using Visual Studio, gcc and CMake.
    • Experience in Systems Administration & understanding of various Operating Systems platforms & technologies:
      o Windows
      o Linux
      o Web Services (IIS, Apache, tomcat) (optional)
      o Application monitoring & performance tuning


The work must be done on-site, so telecommuting will not be possible. Please submit your resume with salary requirements. Principals only; no third parties or off-shore companies. No phone calls please.

As a W2 employee you will have access to health and 401k benefits.

Tekberry Inc. is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability or any other protected categories under all applicable laws.

HCL Technologies: Products & Platforms Division
  • Atlanta, GA

***Are you craving a great career opportunity to utilize and build upon your present skill sets? Are you a Software Engineer experienced in the development of distributed systems & services running on managed Kubernetes, using functional programming with Scala/Java? Then let's explore this great & unique opportunity for full-time employment (not a consultant or contracting role) together***


***We would be very happy with your interest and desire to pursue this opportunity. If interested in this role, please email Andy at andrew.rucker@hcl.com and request the prequalifying questionnaire to complete as the first step***


We are only accepting work authorization status of either Green Card or US Citizen for this role. This is for a full-time, direct hire, on-site position only.


We are seeking a talented Software Engineer to join a team in the development of distributed systems and services running on managed Kubernetes. Our role is with a major open source contributor in Scala, Spark, Cassandra, and other technologies. 


Your initial role will entail developing applications, tools, and Kubernetes operators using Scala, Java, Akka, Cassandra, Go, Helm, Go templating, and other heavily leveraged open-source technologies. You will work with new and emerging technologies, deploy to managed Kubernetes, and learn new tools and techniques.
   
For this role you will need a high degree of proficiency in the following:

  
  • Experience with functional programming using Scala/Java
  • Proficiency in Docker, Kubernetes, and Helm
  • Strong understanding of data structures and algorithms
  • Strong understanding of Unix primitives and regexes
  • Nice to have: other languages like Python, Go, Go templating, Groovy/Jenkins, Ruby
  • Desire to learn new technologies and languages
  • General knowledge of everyday developer tools
  • Interest in and knowledge of emerging tools and technologies
  • Ability to convey information concisely and clearly
  • Ability to work closely and effectively with peer developers
  • Solid understanding of parallel and concurrent programming
  • Bachelor's degree or higher in Computer Science, or comparable work experience
  • Participation in software design, development, and code reviews
  • High level of proficiency with Computer Science/Software Engineering knowledge and contribution to the technical skills growth of other team members

trivago N.V.
  • Düsseldorf, Germany



As part of the Hotel Search pillar, you are shaping the future of trivago's main product to help our users to find the ideal hotel. You will join one of our agile engineering teams of smart, motivated and curious people and work on our AWS platform to deliver the best and most relevant data to our millions of users and other trivago teams by utilizing modern technologies like Apache Kafka, Kinesis, Lambda, Hadoop and Redshift.



We are looking for an entrepreneurial problem solver who is happy to trade perfectionism with pragmatism, execution and outcome. Someone who believes that factual proof, not seniority, determines which path to take. And, most importantly, someone who is not only willing, but embraces small-scale failures in an innovative and fast-paced environment as a path to large-scale success. Does this sound like you? Then read on to see what trivago has to offer!



What you'll do:



  • Be a part of our Visual Content team to contribute to the development of our current platform.

  • Build scalable and high-performance data pipelines in our AWS cloud infrastructure.

  • Use streaming data technologies to create top-notch features for our millions of users.

  • Collaborate with stakeholders to map business requirements into technical solutions.



What you'll need:



  • 3+ years experience in Software Engineering.

  • A bachelor's degree in Computer Science, or (in case you're self-taught) an outstanding Github profile.

  • Experience developing with Python (our primary language), Java, Node.js or Ruby.

  • Experience working with highly scalable systems, preferably in the cloud (hands-on AWS experience would be a big plus).

  • Good knowledge of relational Databases and SQL dialects/implementations such as MySQL, Postgres or MSSQL.

  • You speak English (our company language) fluently.



What we'd love you to have:



  • You are an AWS Certified Developer, DevOps Engineer or Solutions Architect.

  • You have knowledge/hands-on experience in distributed systems, ETL, big data analytics or streaming data technologies.

  • You have a proven track record of delivering complex, high-impact tech products to a large user base.

  • You are a clear communicator, who knows how to present complex topics in plain English to non-technical peers and stakeholders.



Life at trivago is:



  • A unique culture with a strong sense of community and an agile, international work environment.

  • The opportunity for self-driven individuals to have a direct impact on the business.

  • The freedom to embrace small-scale failures as a path to large-scale success.

  • The belief that factual proof is the driving force behind all decisions and determines the way forward.

  • The chance to develop personally and professionally due to a strong feedback culture and access to training and workshops.

  • Flexibility for all employees to contribute value and maintain a healthy work-life balance.

  • A state-of-the-art campus with world-class ergonomics that supports your health and happiness, 30+ sports and a multi-cuisine cafeteria to satisfy your inner foodie.

  • To find out more about life at trivago follow us on social media @lifeattrivago.

  • To learn more about tech at trivago, check out our blog: https://tech.trivago.com/



Additional information:



  • trivago N.V. is an equal opportunity employer. Applications from individuals with disabilities are welcome.

trivago N.V.
  • Düsseldorf, Germany
As part of the Hotel Search team, you will shape the future of trivago's main product to help our users to find the ideal hotel. You will join one of our agile engineering teams of smart, motivated and curious people and work on our AWS platform to deliver the best and most relevant data to our millions of users and other trivago teams by utilising modern technologies like Apache Kafka, Kinesis, Redshift and Hadoop.

We’re looking for an entrepreneurial-minded problem solver, who is happy to trade perfectionism with pragmatism, execution and outcome. Someone who believes that factual proof, not seniority, determines which path to take. And, most importantly, someone who is not only willing, but embraces small-scale failures in an innovative and fast-paced environment as a path to large-scale success.

Does this sound like you? Then read on to see what trivago has to offer!


What you'll do:

  • Be a part of trivago’s Hotel Search team and actively contribute to a product that is used by millions of people each day.

  • Craft and build scalable applications for our data platform on AWS that can handle billions of events, using technologies like Apache Kafka, Redshift, EC2, Lambda, Kinesis and Hadoop.

  • Take ownership and contribute your own ideas on how to bring the best and most relevant data to our users so they can make a profound decision on their next hotel booking.

  • Work closely with stakeholders and other teams to map business requirements into technical solutions that have a real impact.

  • Be an ambassador for cloud technologies, spread the word, and share knowledge among your fellow colleagues in peer exchanges, guild meetings and meet-ups.



What you'll need:



  • 3+ years of experience in (agile) Software Engineering

  • A Bachelor's degree in Computer Science, or (in case you're self-taught) an outstanding Github profile.

  • Experience in one or more of the following programming languages: Python (preferably), Node.js, Ruby, Java

  • Experience working with highly scalable systems, preferably in the cloud (hands-on AWS experience would be a big plus)

  • Good knowledge of relational Databases and SQL dialects/implementations such as MySQL, Postgres or MSSQL.

  • You speak English (our company language) fluently.



What we'd love you to have:



  • You are an AWS Certified Developer, DevOps Engineer or Solutions Architect.

  • You have knowledge/hands-on experience in distributed systems, ETL, big data analytics or streaming data technologies.

  • You have a proven track record of delivering complex, high-impact tech products to a large user base.

  • You are a clear communicator, who knows how to present complex topics in plain English to non-technical peers and stakeholders. 



What you can expect from life at trivago:



  • Growth: We help you grow as trivago grows through support for personal and professional development, constant new challenges, regular peer-feedback, mentorship and world-class training. 

  • Autonomy: Every talent has the ability to make an impact independently by driving topics thanks to our strong entrepreneurial mindset, our horizontal workflow and self-determined working hours.

  • International environment: Our agile, international culture and environment with talents from 50+ nations encourages mutual trust and creates a safe space to discuss openly and act freely. 

  • Collaborative spaces: Our state-of-the-art campus in Düsseldorf offers interactive spaces where we can easily collaborate, exchange ideas, take a break and workout together. 

  • Relocation: We offer our international talents support with relocation costs, work permit and visa questions, free language classes, subsidies for public transport, flat search, company pension and insurance.

  • Tech Culture: A conference budget plus internal workshops, guilds, meet-ups and academies, as well as a MacBook Pro 15", a 30" wide-screen display, an adjustable desk, and a pair of Bose QuietComfort headphones for your time in the zone.



Additional information:



  • trivago N.V. is an equal opportunity employer. Applications from individuals with disabilities are welcome.

  • To find out more about life at trivago follow us on social media @lifeattrivago.

  • To learn more about tech at trivago, check out our blog: https://tech.trivago.com/

snabBus
  • Köln, Germany

What you will do and learn



  • Work alongside our CTO and take responsibility for the company and your own learning journey

  • Manage own projects from writing high-quality code to documentation and testing

  • CI, deployment and integration of additional AWS services into our infrastructure to further utilize analytical and optimization potential 


What you will do will depend heavily on your skills, experience, and personal development goals.


What you should bring to the table



  • Most of all, curiosity and personal fit with current team 

  • Experience in the AWS ecosystem (EB, EC2, Lambda, Infrastructure, Data pipelines)

  • Experience in Ruby on Rails and React (comparable technologies also qualify)

  • Ideally, experience with machine learning, Python and SageMaker


We believe in talent and curiosity. However, relevant experience in at least some of these areas is a big plus.

Acxiom
  • Austin, TX
As an Enterprise Big Data Administrator, you will assist leadership on projects related to Big Data technologies and provide software development support for client research projects. You will analyze the latest Big Data analytics technologies and their innovative applications in both business intelligence analysis and new service offerings, and bring these insights and best practices to Acxiom's Big Data projects. You must be able to benchmark systems, analyze system bottlenecks and propose solutions to eliminate them. You will develop a highly scalable and extensible Big Data platform that enables collection, storage, modeling, and analysis of massive data sets from numerous channels. You must be a self-starter who continuously evaluates new technologies, innovates and delivers solutions for business-critical applications.


What you will do:


  • Responsible for implementation and ongoing administration of Hadoop infrastructure
  • Provide technical leadership and collaboration with engineering organization, develop key deliverables for Data Platform Strategy - Scalability, optimization, operations, availability, roadmap.
  • Own the platform architecture and drive it to the next level of effectiveness to support current and future requirements
  • Cluster maintenance as well as creation and removal of nodes using tools like Cloudera Manager Enterprise, etc.
  • Performance tuning of Hadoop clusters and Hadoop MapReduce routines
  • Screen Hadoop cluster job performances and capacity planning
  • Help optimize and integrate new infrastructure via continuous integration methodologies (DevOps, Chef)
  • Manage and review Hadoop log files with the help of log management technologies (ELK)
  • Provide top-level technical help desk support for the application developers
  • Diligently teaming with the infrastructure, network, database, application and business intelligence teams to guarantee high data quality, availability and security
  • Collaborating with application teams to perform Hadoop updates, patches, version upgrades when required
  • Mentor Hadoop engineers and administrators
  • Work with Vendor support teams on support tasks


What you will need:


  • Extensive knowledge of Hadoop-based data manipulation/storage technologies such as HDFS, MapReduce, Yarn, HBASE, HIVE, Pig, Impala and Sentry
  • 3+ years of Big Data Administration experience
  • Experience in Capacity Planning, Cluster Designing and Deployment, Troubleshooting and Performance Tuning
  • Great operational expertise such as good troubleshooting skills, understanding of system's capacity, bottlenecks, basics of memory, CPU, OS, storage, and networks
  • Experience in Hadoop Cluster migrations or Upgrades
  • Strong Linux/SAN Administration skills and RDBMS/ETL knowledge
  • DevOps experience is a great plus (CHEF, Puppet and Ansible)
  • Good experience with Cloudera, Hortonworks and/or MapR versions along with monitoring/alerting tools (Nagios, Ganglia, Zenoss, Cloudera Manager)
  • Strong scripting skills in Perl / Python / Shell Scripting / Ruby on Rails
  • Strong knowledge of JAVA/J2EE and other web technologies
  • Solid understanding of on-premise and cloud network architectures
  • Excellent verbal and Written Communication Skills

Acxiom
  • Austin, TX
Integrates Acxiom and third-party software to create solutions to business problems defined by specific business requirements. Draws upon technical and data processing knowledge to solve moderately complex marketing and data warehousing problems on very large volumes of data. An Enterprise Big Data Administrator may configure or tune Acxiom or third-party software, develop automation to tie components together, develop custom application code to accommodate specific business requirements or collaborate with others to add extensions to existing Acxiom software. An Enterprise Big Data Administrator may also be responsible for new development, ongoing maintenance, support, and optimization working closely with accounts teams who are utilizing the solution.


What you will do:


  • Responsible for implementation and ongoing administration of Hadoop infrastructure
  • Provide technical leadership and collaboration with engineering organization, develop key deliverables for Data Platform Strategy - Scalability, optimization, operations, availability, roadmap.
  • Own the platform architecture and drive it to the next level of effectiveness to support current and future requirements
  • Performance tuning of Hadoop clusters and MapReduce routines
  • Screen Hadoop cluster job performances and capacity planning
  • Help optimize and integrate new infrastructure via continuous integration methodologies (DevOps, Chef)
  • Manage and review Hadoop log files with the help of log management technologies (ELK)
  • Provide top-level technical help desk support for the application developers
  • Work on tickets related to various Hadoop/Bigdata services which include HDFS, MapReduce, Yarn, Hive, Sqoop, Storm, Spark, Kafka, HBase, Kerberos, Ranger, Knox
  • Perform monitoring of various services of the Hadoop cluster and fix user issues and make necessary corrections to the cluster to meet the cluster uptime required
  • Diligently teaming with the infrastructure, network, database, application and business intelligence teams to guarantee high data quality, availability and security
  • Collaborating with application teams to perform Hadoop updates, patches, version upgrades when required
  • Mentor Hadoop engineers and administrators
  • Work with Vendor support teams on support tasks


What you need:


  • 5+ years of Big Data Administration Experience
  • Extensive knowledge and Hands-on Experience of Hadoop based data manipulation/storage technologies like HDFS, MapReduce, Yarn, Spark/Kafka, HBASE, HIVE, Pig, Impala, R and Sentry/Ranger/Knox
  • Experience in Capacity Planning, Cluster Designing and Deployment, Troubleshooting and Performance Tuning
  • Experience supporting Data Science teams and Analytics teams on complex code deployment, debugging and performance optimization problems
  • Great Operational expertise such as Excellent Troubleshooting skills, understanding of system's capacity, bottlenecks, Core Resource utilizations (CPU, OS, Storage, and Networks)
  • Solid understanding of and hands-on experience with Big Data on private/public cloud technologies (AWS/GCP/Azure) will be a great add-on
  • Experience in Hadoop Cluster migrations or Upgrades
  • Strong scripting skills in Perl / Python / Shell Scripting / Ruby on Rails
  • Strong Linux/SAN Administration skills and RDBMS/ETL knowledge
  • DevOps experience is a great plus (CHEF, Puppet and Ansible)
  • Good Experience in Cloudera/HortonWorks/MapR versions along with Monitoring/Alerting tools (Nagios, Ganglia, Zenoss, Cloudera Manager)
  • Strong knowledge of JAVA/J2EE and other web technologies
  • Strong problem solving and critical thinking skills
  • Excellent verbal and Written Communication Skills