OnlyDataJobs.com

OverDrive Inc.
  • Garfield Heights, OH

The Data Integration team at OverDrive provides data for other teams to analyze and build their systems upon. We are plumbers, building a company-wide pipeline of clean, usable data. We typically don't analyze the data ourselves; instead, we make it available to others. Your job, if you choose to join us, is to help us build a real-time data platform that connects our applications and makes available a data stream of potentially anything happening in our business.


Why Apply:


We are looking for someone who can help us wire up the next step and create something from the ground up (almost a greenfield project). You will help us move large volumes of data from one team to the next and come up with ideas and solutions for how we look at that data, using technologies like Kafka, Scala, Clojure, and F#.


About You:



  • You always keep up with the latest in distributed systems. You're extremely depressed each summer when the guy who runs highscalability.com hangs out the "Gone Fishin" sign.

  • You’re humble. Frankly, you’re in a supporting role. You help build infrastructure to deliver and transform data for others. (E.g., someone else gets the glory because of your effort, but you don’t care.)

  • You're patient. Because nothing works the first time when it comes to moving data around.

  • You hate batch. Real-time is your thing.

  • Scaling services is easy. You realize that the hardest part is scaling your data, and you want to help with that.

  • You think microservices should be event-driven. You prefer autonomous systems over tightly-coupled, time-bound synchronous ones with long chains of dependencies.


 Problems You Could Help Solve:



  • Help us come up with solutions for speeding up our processes

  • Help us come up with ideas for making our indexing better

  • Help us create better ways to track all of our data

  • If you like solving problems and using cutting-edge technology, keep reading


 Responsibilities:



  • Implement near real-time ETL-like processes from hundreds of applications and data sources using the Apache Kafka ecosystem of technologies.

  • Design, develop, test, and tune a large-scale ‘stream data platform’ for connecting systems across our business in a decoupled manner.

  • Deliver data in near real-time from transactional data stores into analytical data stores.

  • Research and develop ways to acquire data, and suggest new uses for that data.

  • “Stream processing.” Enable applications to react to, process, and transform streams of data between business domains (see the sketch after this list).

  • “Data Integration.” Capture application events and data store changes and pipe to other interested systems.
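
For illustration only: a minimal sketch of the kind of stream-processing job described above, written with the Kafka Streams Scala DSL (Kafka 2.4+). The topic names, filter, and transformation are hypothetical placeholders, not part of OverDrive's actual platform.

    import java.util.Properties
    import org.apache.kafka.streams.{KafkaStreams, StreamsConfig}
    import org.apache.kafka.streams.scala.ImplicitConversions._
    import org.apache.kafka.streams.scala.StreamsBuilder
    import org.apache.kafka.streams.scala.serialization.Serdes._

    object EventRelay extends App {
      val props = new Properties()
      props.put(StreamsConfig.APPLICATION_ID_CONFIG, "event-relay")        // hypothetical application id
      props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")  // assumed broker address

      val builder = new StreamsBuilder()

      // Read raw application events, keep the ones other teams care about,
      // normalize them, and publish them to a shared, cleaned-up topic.
      builder
        .stream[String, String]("app-events-raw")   // hypothetical source topic
        .filter((_, value) => value.nonEmpty)       // stand-in for real filtering logic
        .mapValues(_.trim)                          // stand-in for a real transformation
        .to("app-events-clean")                     // hypothetical sink topic

      val streams = new KafkaStreams(builder.build(), props)
      streams.start()
      sys.addShutdownHook(streams.close())
    }

The same consume-transform-publish pattern is what "making a data stream of anything happening in the business available" looks like in practice.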


 Experience/Skills: 



  • Comfortable with functional programming concepts. While we're not writing strictly functional code, experience with languages like Scala, Haskell, or Clojure will make working with streaming data easier.

  • Familiarity with the JVM.  We’re using Scala with a little bit of Java and need to occasionally tweak the performance settings of the JVM itself.

  • Familiarity with C# and the .NET Framework is helpful. While we don't use it day to day, most of the company's systems run on Windows and .NET.

  • Comfortable working in both Linux and Windows environments. Our team's systems all run on Linux, but we interact with many systems running on Windows servers.

  • Shell scripting & common Linux tool skills.

  • Experience with build tools such as Maven, sbt, or rake.

  • Knowledge of distributed systems.

  • Knowledge of, or experience with, Kafka a plus.

  • Knowledge of Event-Driven/Reactive systems.

  • Experience with DevOps practices like Continuous Integration, Continuous Deployment, Build Automation, Server Automation, and Test-Driven Development.


 Things You Dig: 



  • Stream processing tools (Kafka Streams, Storm, Spark, Flink, Google Cloud Dataflow, etc.)

  • SQL-based technologies (SQL Server, MySQL, PostgreSQL, etc.)

  • NoSQL technologies (Cassandra, MongoDB, Redis, HBase, etc.)

  • Server automation tools (Ansible, Chef, Puppet, Vagrant, etc.)

  • Distributed Source Control (Mercurial, Git)

  • The Cloud (Azure, Amazon AWS)

  • The ELK Stack (Elasticsearch, Logstash, Kibana)


What’s Next:


As you’ve probably guessed, OverDrive is a place that values individuality and variety. We don’t want you to be like everyone else, we don’t even want you to be like us—we want you to be like you! So, if you're interested in joining the OverDrive team, apply below, and tell us what inspires you about OverDrive and why you think you are perfect for our team.



OverDrive values diversity and is proud to be an equal opportunity employer.

Genoa Employment Solutions
  • Detroit, MI

Solution Architect

If you are someone who:

  • Is a creative thinker and great teammate who can come up with innovative approaches to help resolve complex issues.
  • Has good analytical and problem solving skills and is able to break down a solution into smaller units of work and produce a solution roadmap.
  • Has written high quality, well-tested shared components that can be leveraged by multiple systems.
  • Takes pride in software craftsmanship, diving deep into code and constantly innovating.
  • Has extensive experience in back-end development, service design, data modeling and web development.
  • Takes requirements (business features, technical debts and internal enhancements) and designs resilient solutions.
  • Can support and collaborate with multiple development teams and provide technical guidance.
  • Can step into specific projects to supply additional management, coding and engineering capacity as needed to make projects successful.
  • Has expert knowledge in distributed systems with a heavy focus in conversational semantics for large scale distributed systems.
  • Is passionate about webscale technologies as applied to large scale growing businesses.


Must Have Skills:

  • Excellent verbal and written communication skills, including the ability to explain a complex technical solution to business stakeholders.
  • Demonstrated ability to translate customer needs into well-documented requirements and architectural plans, and to produce near-production-ready prototypes.
  • Expert at producing sequence flow diagrams, solution diagrams, and architectural component diagrams.
  • Two years of experience mentoring team leads and engineers.
  • Demonstrated willingness to learn from peers and coworkers junior to them.
  • Ability to enforce responsible engineering practices (including automated unit and stress testing, engineering for data security, resiliency, scalability, etc.).
  • Proficiency in multiple programming languages such as Java, Python, Ruby, Scala, Groovy, Go, and Bash.
  • Expert knowledge of Java, Scala, or Erlang with 7+ years of experience.
  • In-depth experience developing high-volume transactional and distributed applications, both real-time and batch.
  • A deep understanding of performance tuning and scalability.
  • Development experience with REST web services and various data interchange and representation formats such as JSON, XML, and HTML.
  • Development experience with RDBMSs, distributed caches (Memcached, Redis), and NoSQL databases.
  • Deep end-to-end architectural understanding of distributed applications.
  • Experience with containerization technologies (such as Docker) and familiarity with microservice architecture and development patterns.
  • A deep and demonstrable understanding of design patterns.
  • Knowledge and understanding of application servers such as JBoss, Tomcat, and WebLogic.
  • Development experience with security, such as securing users and their data.
  • Development experience writing batch jobs that perform high-volume transactions.
  • Knowledge and understanding of modern CI environments: version control, build tools, and CI servers.
  • Knowledge of open source libraries, tools, and frameworks. Experience with any modern open source libraries would be an added advantage.
  • Experience with Agile development methodology.

Highly desirable Skills:

  • Experience in the HIPAA and PCI security domains.
  • Development experience with modern technologies such as Elasticsearch, Kafka, Kibana, Logstash, Hibernate/JPA, Spring, and Angular. Experience with any modern technologies would be an added advantage.
  • Experience building and deploying software onto AWS or OpenStack using Chef or similar technologies.
  • Experience with big data and data analytics applications, or similar systems programming experience.
  • Strong expertise in text parsing, analytics, and machine learning.
  • Has worked extensively on parsing and generating EDI formats.
  • Experience with the SAFe framework.
  • Experience with Java Message Service (JMS) and Message Driven Bean (MDB) development is preferred.
  • Expert knowledge of JDBC and managing transactions.
  • Understanding of Service-Oriented Architecture.
  • US citizenship is preferred.
  • Experience in the insurance industry, specifically health care.
  • Bachelor of Science in Computer Science, Information Systems, Engineering, or a related field, or comparable work experience.

WB Solutions LLC
  • Houston, TX

Role: Hadoop Engineer

Location: Irving, TX

Duration: Long Term Contract

Requirement:

    • The majority of the work is related to Hadoop, not Oracle PL/SQL, ODI, or OBIEE
    • Data science experience building statistical models and intelligence around them
    • Proven understanding of and related experience with Hadoop, HBase, Hive, Pig, Sqoop, Flume, MapReduce, and Apache Spark, as well as experience with Unix, core Java programming, Scala, and shell scripting
    • Hands-on experience with Oozie job scheduling, ZooKeeper, Solr, Elasticsearch, Storm, Logstash, or other similar technologies
    • Solid experience writing SQL, stored procedures, and query performance tuning, preferably on Oracle 12c and with ODI jobs

Responsibilities:

    • Participate in Agile development on a large Hadoop-based data platform as a member of a distributed team
    • Develop different statistical models and build data insights on possible revenue leakage
    • Code programs to load data from diverse data sources into ODI, OBIEE, and Hive structures using Sqoop and other tools.
    • Translate complex functional and technical requirements into detailed designs.
    • Analyze vast data stores.
    • Code business logic using Scala on Apache Spark (see the sketch after this list).
    • Create workflows using Oozie.
    • Code and test prototypes.
    • Code to existing frameworks where applicable.
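
For illustration only: a minimal sketch of the kind of "Scala on Apache Spark" business logic described above, reading from Hive and flagging possible revenue leakage. The table and column names are hypothetical placeholders.

    import org.apache.spark.sql.SparkSession

    object RevenueLeakageScan {
      def main(args: Array[String]): Unit = {
        // Hive-enabled Spark session; in a real job the configuration comes from the cluster.
        val spark = SparkSession.builder()
          .appName("revenue-leakage-scan")
          .enableHiveSupport()
          .getOrCreate()

        // Compare rated vs. billed amounts per account and keep accounts with an unexplained gap.
        val leaks = spark.sql(
          """SELECT account_id,
            |       SUM(rated_amount)  AS rated,
            |       SUM(billed_amount) AS billed
            |FROM usage_events
            |GROUP BY account_id
            |HAVING SUM(rated_amount) - SUM(billed_amount) > 0""".stripMargin)

        leaks.write.mode("overwrite").saveAsTable("revenue_leakage_candidates")
        spark.stop()
      }
    }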

EDUCATION/CERTIFICATIONS:

    • Bachelor's or Master's degree in Computer Engineering or Information Technology
    • Oracle (ODI, OBIEE) or Hadoop certification is an added advantage

Vista Higher Learning
  • Boston, MA

Vista Higher Learning is seeking a motivated and experienced Architect who is familiar with the design and delivery of large-scale Rails-based systems to join our dynamic engineering team!  Knowledge of cloud-based architectures and a proven ability to create high-performance consumer applications are essential.  The ideal candidate is a true collaborator who's comfortable working across the stack and leveraging AWS services.  In this role you will have the opportunity to work both independently and collaboratively with other architects and engineering gurus on a variety of exciting projects.



Our Tool belt:




  • Languages/Frameworks: Ruby, Rails, AngularJS

  • Analytics: Elasticsearch, Logstash

  • Metrics: StatsD, Graphite

  • Real-time communication: XMPP, WebSockets

  • Persistence layer: MySQL, Postgres, Redis

  • Technologies we're digging:

  • Angular

  • MySQL

  • PostgREST  

  • Redis 

  • RabbitMQ 

  • AWS 

  • Chef

  • Puppet

  • Vagrant

  • Unix



What you bring to the party:




  • Good code samples

  • Good OOP skills

  • Track record of delivering a product to paying customers

  • Delivery of production quality code to a large consumer facing application

  • Good Linux command-line skills

  • Development experience with Ruby/Python/Java/JavaScript

  •  Mobile experience (bonus points!)

  •  Experience building scalable, rich web applications

  •  Strong OO skills

  •  Test-driven development experience

  •  A working style that thrives in a highly collaborative environment

  •  Experience building REST-based APIs and services

  •  A GitHub account (or code that you can share with us)



Vista Higher Learning (VHL) is the largest educational technology provider of language learning to schools and universities in North America. Our digital solutions serve 35,000 courses and 1.5M students every year. VHL leads through innovation, being the first language learning technology company to offer fully online, cloud-based courses in online learning, built around ground-breaking services including adaptive learning and "verification" automated speech recognition (VASR). The VHL technology team lives its mission every day: Create the ideas and technology that enable everyone to learn a new language. Our engineers are imaginative, driven, smart, and focused on ensuring every teacher and student has the highest quality learning experience possible. Engineers are encouraged to champion innovative ways to solve problems, working directly with product owners to establish product design and direction.



Our benefits package includes life/health/dental/vision insurance, 401(k), educational assistance, commuter pass subsidies, PTO, and paid holidays. Plus we offer all sorts of fun perks such as monthly chair massages, company lunches, onsite cardio and strength training classes, and much more!



Vista Higher Learning is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, age, color, religion, sexual orientation, gender identity, national origin, physical or mental disability, and/or protected veteran status or other characteristics protected by applicable law.

Elastic
  • No office location

At Elastic, we have a simple goal: to pursue the world's data problems with products that delight and inspire. We help people around the world do exceptional things with their data. From stock quotes to Twitter streams, Apache logs to WordPress blogs, our products are extending what's possible with data, delivering on the promise that good things come from connecting the dots. Often, what you can do with our products is only limited by what you can dream up. We believe that diversity drives our vibe. We unite employees across 30+ countries into one team, while the broader community spans more than 100 countries.


Elastic is building out our Elastic Cloud Team focusing on Elastic as a Service. This is a great opportunity to help lead our Cloud efforts and make an immediate impact on our strategy and implementation.


Our cloud product allows users to easily create new clusters or expand existing ones. The product is built on technologies such as OpenStack, AWS, Docker, and others to enable Operations Teams to easily create and manage multiple Elastic clusters.


What You Will Do:



  • Implement features to manage multiple Elasticsearch clusters on top of our orchestration layer (see the sketch after this list)

  • Build and manage Docker images for Elastic Stack components

  • Develop software for our distributed systems and ES as a Service offerings

  • Debug meaningful technical issues inside a very deep and complex technical stack involving containers, microservices, etc., on multiple platforms

  • Collaborate with Elastic’s engineering teams (Elasticsearch, Kibana, Logstash, APM, and Beats) to enable them to run on Cloud infrastructure

  • Grow and share your interest in technical outreach (blog posts, tech papers, conference speaking, etc.)
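
For illustration only: a minimal Scala sketch (not Elastic's actual orchestration code) of the kind of per-cluster check such features build on, using the standard Elasticsearch _cluster/health API over plain HTTP. The endpoint and environment variable are assumptions.

    import java.net.URI
    import java.net.http.{HttpClient, HttpRequest, HttpResponse}

    object ClusterHealthCheck extends App {
      // In the real product the endpoint would come from the orchestration layer;
      // here it is read from an assumed environment variable with a local default.
      val clusterUrl = sys.env.getOrElse("ES_URL", "http://localhost:9200")

      val client  = HttpClient.newHttpClient()
      val request = HttpRequest.newBuilder(URI.create(s"$clusterUrl/_cluster/health"))
        .GET()
        .build()

      // _cluster/health returns JSON whose "status" field (green/yellow/red)
      // an orchestrator could use to decide whether a cluster needs attention.
      val response = client.send(request, HttpResponse.BodyHandlers.ofString())
      println(s"HTTP ${response.statusCode()}: ${response.body()}")
    }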


What You Bring Along:



  • Interest in the JVM and experience working with any of Java, Scala or Golang

  • Understanding of Docker internals and APIs

  • Experience working with container infrastructure in production

  • You care deeply about resiliency of the services and quality of the features you ship

  • Experience with public Cloud environments (AWS, GCP, Azure, etc.)

  • A self-starter with experience working across multiple technical teams and decision makers

  • You love working with a diverse, worldwide team in a distributed work environment


Additional Information:


We're looking to hire team members invested in realizing the goal of making real-time data exploration easy and available to anyone. As a distributed company, we believe that diversity drives our vibe! Whether you're looking to launch a new career or grow an existing one, Elastic is the type of company where you can balance great work with great life.




  • Competitive pay based on the work you do here and not your previous salary

  • Stock options

  • Global minimum of 16 weeks of paid in full parental leave (moms & dads)

  • Generous vacation time and one week of volunteer time off

  • Your age is only a number. It doesn't matter if you're just out of college or your children are; we need you for what you can do.



Elastic is an Equal Employment Opportunity employer committed to the principles of equal employment opportunity and affirmative action for all applicants and employees. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status or any other basis protected by federal, state or local law, ordinance or regulation. Elastic also makes reasonable accommodations for disabled employees consistent with applicable law.

Indoc Research
  • Toronto, ON, Canada

We are seeking an intermediate software developer with good experience and great passion for developing big data systems that will change the very landscape of medical science.


You, our ideal candidate, are looking for a small and dynamic multidisciplinary team of researchers and engineers who work together in an agile fashion.  You are prepared to bring your drive, your experience, and your passion to contribute at all levels to the entire team.  We, Indoc Research, are that team.  We build and manage complex health research infrastructure for collaborators and clients.  We bring together prominent research organizations across the country and internationally.  Together we have created large scale informatics platforms involving diverse and complex data modalities (e.g. imaging, genomics, clinical assessments) across multiple disease areas (e.g. neurodegeneration, depression, cancer).


You will have the opportunity to stretch and develop your data handling design and implementation skills to a whole new level.  You will imagine, help design and implement whole new ways to bring multidimensional data from an endless variety of sources into a collection of platforms. How to efficiently manage, link, integrate, federate not only gigabytes but terabytes of information? How to query and navigate highly-structured and highly-unstructured data? How can we shape international collaborative efforts towards consistent lexicons and adapt to emerging ontologies? And do it all efficiently, reliably, and accurately? These are the questions that you will help us answer.


With strong skills in data handling and representation, you will be responsible for helping create futuristic yet realistic systems for use by surgeons in the midst of procedures, researchers accessing remote data across the globe, doctors trying to understand the genetics of their patient even at the bedside, and patients and the public seeking insight into their own maladies and conditions.  Your creativity and innovation, combined with the multidisciplinary skills of the rest of our team, will help deal with security and privacy even while enabling fusion of high dimensional data across multiple medical modalities as diverse as MR imaging, molecular science, and psychological assessments.  Your work will help address critical gaps and fulfill currently unmet and urgent needs in both clinical and research communities, handling data from distributed settings such as critical care units, clinical laboratories and hospital imaging facilities. 


 As a software developer on these projects, you will have a critical role at all levels of design and implementation, architecture and testing, deployment and support.  And you will be building the future of medical science.


Qualifications:



  • 2+ years of programming experience in Python.  Knowing Java would be an asset.

  • Good working experience with Elasticsearch and Spark SQL.

  • Familiar with modern data management systems, including RDBMSs, Redis, and / or MongoDB.

  • Experience with version control systems such as Git or SVN.

  • Solid experience with application deployment in a UNIX/Linux environment and Docker ecosystems.

  • Strong personal research capabilities and the ability to learn new technologies/products quickly.


Optional Skills



  • Experience with:


    • high throughput data ETL pipelines using  Apache Kafka, Logstash, or other message queuing systems

    • Big Data ecosystems (Spark framework, Thrift, Hadoop, HBase)



  • Experience in lexical and ontological technologies (such as the semantic web)

  • Experience in RDF, XML, SPARQL and related technologies

  • Strong written and oral communication skills

  • Track record of initiative and self-organization with strong time management skills

  • Willingness and ability to work on multiple projects at the same time

  • Demonstrated ability to work within a collaborative team across multiple disciplines


 Education:



  • Bachelor’s degree in Computer Science, Software Engineering, or a related field, or equivalent experience


 How to apply:


Please submit your resume and cover letter.

Comcast
  • Denver, CO

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.

The Alternate Content Engineering Operations / Linear team is a growing, fast-moving team of world-class experts innovating in end-to-end IP Video Signal Processing, focused on Dynamic Ad Insertion, Virtual Stream Stitching, and Content Blackout delivery to Comcast and its Syndication Partners, as well as IP Linear responsibilities and relationships across these components. We are a team that thrives on big challenges, results, quality, and agility.

The Alternate Content / Linear Operations Engineer is the technical counterpart on IP Video projects that involve Ad Insertion, Virtual Stream Stitching, and Content Blackouts. The group is accountable for the overall deployment, delivery, and operational support of all Signal Processing and IP Linear services. The AltCon / Linear Engineer 5 role gets involved early in the project lifecycle and continues to support the technical solution beyond successful deployment, ensuring detailed software designs, infrastructure, and operational support meet the project objective.

Who does the delivery engineer work with?

The Engineer 5 is the lead for the team's technical solutions, capacity planning, and overall operational support. Throughout the life of the project, the Principal AltCon / Linear Ops Engineer will work collaboratively with many project stakeholders, including the project manager, service delivery, architects, software development leads, the infrastructure team, network engineers, system administrators, and Comcast leadership to ensure efficient delivery of IP Video Signal Processing and Ad Insertion projects and their relationship across the IP Linear footprint. This includes analyzing deployment requirements and physical design, defining deployment strategies, and helping decompose them into deliverable tasks. This Engineer 5 will drive capacity management analysis and audits to understand the IP Video Signal Processing and Ad Insertion applications' data processing, throughput, and storage needs, as well as growth projections, to determine deployment phasing and establish tracking metrics and robust performance models. These models will be used to validate that applications are performing as anticipated, highlight application issues, and establish plans to scale the application infrastructure over time. We would like a senior-level engineer with experience managing large-scale web sites that utilize caching, load balancing, etc. This engineer should also have automation experience and a demonstrated knowledge of how to use git for managing projects and source code. They should be comfortable working with developers and able to communicate via various methods to accomplish their tasks.

What are some interesting problems you'll be working on?

In this role, you will bridge many technical gaps during the life of a project. Examples include:

  • Drive the implementation of an IP Video Signal Processing and Ad Insertion solution, both for Comcast and for companies external to Comcast.
  • Lead troubleshooting efforts to find root causes and corrective actions throughout the life of a project.
  • Determine requirements, create, validate and audit system capacity plans.
  • Identify and create advanced application monitoring (Splunk, ELK, Sysdig, Prometheus) for improved reliability.
  • Establish automated application deployments to various environments (Kubernetes, Helm).
  • Develop scripts and utilities to automate data collection.
  • Evaluate new code releases for basic reliability and systems integration support.
  • Provide guidance to QA teams who will perform functional and load testing.

Responsibilities:

  • Work directly with technical systems solutions team (Delivery Engineering and Developers) and provide hands-on project support to implement advanced IP Video Signal Processing and Ad Insertion technologies and services.
  • Manage the work of AltCon Ops engineers, assign, prioritize and balance project tasks.
  • Create and maintain performance models for existing and new applications.
  • Perform "what if" scenario analysis to support business decisions, forecast infrastructure needs, and budgeting.
  • Assure systems are backed up and copies are readily available to the team
  • Identify process improvements and create fully documented troubleshooting procedures for the offshore team
  • Analyze and recommend improvements to the scalability and resiliency of applications.
  • Identify and create advanced application performance metrics to monitor for improved application reliability.
  • Interact with Software Architects, Service Delivery Engineers and stakeholders to analyze complex projects and break them down into detailed and functional tasks.
  • Create and present analysis and interdependencies to project, partner or senior leadership stakeholders.
  • Assist development teams by deploying and configuring components in various environments.
  • Troubleshoot and triage services and solutions.
  • Collaboratively drive deployment of scalable software solutions.
  • Development of tools and processes for managing servers.
  • Development of load, capacity, and longevity limitations of each platform.
  • Development of systems and code performance validation tools.
  • Foster cross-functional knowledge sharing and mentoring amongst the various engineering teams.
  • Responsible for implementation, troubleshooting, and management of customer-facing systems with high potential for impact
  • Leader of long-term projects that have high impact on internal and external customers.
  • Writes SMOPs for implementing software changes and reviews SMOPs written by others
  • Self-starter; projects often require curiosity and a love of learning to complete
  • Must be a problem solver and able to utilize new technologies or methods to solve complex problems

Here are some of the specific technologies we use:

  • Programming Languages: Ruby, Python, Go, JavaScript, Bash
  • DevOps Tools: Splunk, Kubernetes, Docker, Sysdig, Prometheus, Git, Helm, Concourse, Jenkins
  • Open Source Technologies: Nginx, PostgreSQL, Varnish, Apache Tomcat, HAProxy, Redis, Kafka
  • General Knowledge: Linux, MPEG, HTTP Adaptive Streaming, IP Networking, VMware, Kubernetes, OpenStack

Familiarity with the following industry specifications and standards is helpful but not required:

  • CableLabs Event Signaling and Management (ESAM)
  • CableLabs Event Signaling and Notification Interface (ESNI)
  • Society of Cable Telecommunications Engineers (SCTE-35)
  • Digital Video Ad Serving Template (VAST)
  • CableLabs Encoder Boundary Point (EBP)

Skills & Requirements

  • 9+ years of hands-on experience in software development and/or DevOps engineering.
  • 3+ years of experience as a team lead.
  • Experience with a variety of Unix/Linux automation and scripting languages such as Python, Bash, Puppet.
  • Experience writing code in core programming languages such as Go, Java, or C/C++.
  • Strong Excel skills
  • Strong ability to prioritize, assign, track and shift team resources as needed for multiple projects.
  • Strong experience gathering requirements and supporting advanced software development teams in an agile environment.
  • Ability to plan, organize and document complex system designs.
  • Understanding how to scale applications depending on load.
  • Knowledge of networking concepts (VLAN, TCP/IP, Multicast, Unicast, OSI).
  • Experience with developing advanced application performance monitoring.
  • Ability to navigate Unix operating systems.
  • Excellent presentation and communication skills to explain system designs and technologies to senior leadership.
  • Strong ability to collaborate with peers and stakeholders around system designs
  • High attention to detail and strong ability to problem solve systems issues.
  • Experience with CI/CD methodologies.
  • Willing to take ownership of problems and independently drive them to resolution.
  • Must be able to work independently, be self-motivated and handle multiple priorities.
  • Comfortable working in a fast paced agile environment. Requirements change quickly and our team needs to constantly adapt to moving targets.
  • Container experience: Docker, Kubernetes, Helm
  • Experience with Git and source code management
  • Proven experience in one of the following languages: Python, Ruby, Go
  • Proven experience using Splunk or Logstash/Kibana/Elasticsearch for reporting
  • Management of web services in a virtualized environment: Apache/Nginx, Varnish/HAProxy, IPVS or other load balancing
  • Experience troubleshooting operational issues with developers
  • Experience troubleshooting networking at a sysadmin level: packet captures, trace routes
  • Experience with Software Deployment/Configuration Management utilizing Puppet

Comcast is an EOE/Veterans/Disabled/LGBT employer

Comcast
  • Denver, CO

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.

The Alternate Content Engineering Operations / Linear team is a growing, fast-moving team of world-class experts who are innovating in end-to-end IP Video Signal Processing that is focused on Dynamic Ad Insertion, Virtual Stream Stiching, Content Content Blackout delivery to Comcast and it's Syndication Partners as well as IP Linear responsibities and relationship across these components. We are a team that thrives on big challenges, results, quality, and agility.

The Alternate Content/ Linear Operations Engineer is the technical counterpart to the on IP Video projects that involve Ad Insertion, Virtual Stream Stitching and Content Blackouts. The group is accountable for the overall deployment, delivery and operational support of all Signal Processing and IP Linear services. The AltCon/ Linear Engineer 5 role gets involved early in the project lifecycle and continues to support the technical solution beyond successful deployment, ensuring detailed software designs, infrastructure, and operational support meet the project objective.

Who does the delivery engineer work with?

The Engineer 5 is the Lead' for the teams technical solutions, capacity planning and overall operational support. Throughout the life of the project, the Principal AltCon / Linear Ops Engineer will collaboratively work with many project stakeholders, including the project manager, service delivery, architect, software development leads, infrastructure team, network engineers, system adiministrators and Comcast leadership to ensure efficient delivery of IP Video Signal Processing and Ad Insertion Projects and its relationship across the IP Linear footprint. This includes analyzing deployment requirements and physical design, defining deployment strategies and help decomposing into deliverable tasks. This Engineer 5 will drive capacity management analysis and audits to understand the IP Video Signal Processing and Ad Insertion applications data processing, throughput and storage needs, as well as growth projections to determine deployment phasing and establishing tracking metrics and robust performance models. These models will be used to validate that applications are performing as anticipated, highlight application issues, and to establish plans to scale the application infrastructure over time. We would like a senior-level engineer with experience managing large scale web sites that utilize caching, load balancing, etc. This engineer should also have automation experience and a demonstrated knowledge of how to utilize git for managing projects/source code. They should be comfortable working with developers and able to communicate via various methods to accomplish their tasks.

What are some interesting problems you'll be working on?

In this role, you will bridge many technical gaps during the life of a project. Examples include:

  • Drive the implementation of an IP Video Signal Processing and Ad Insertion solution, both for Comcast and for companies external to Comcast.
  • Lead troubleshooting efforts to find root causes and corrective actions thoughout the life of a project.
  • Determine requirements, create, validate and audit system capacity plans.
  • Identify and create advanced application monitoring (Splunk, ELK, Sysdig, Prometheus) for improved reliability.
  • Establish automated application deployments to various environments (Kubernetes, Helm).
  • Develop scripts and utilities to automate data collection.
  • Evaluate new code releases for basic reliability and systems integration support.
  • Provide guidance to QA teams who will perform functional and load testing.

Responsibilities:

  • Work directly with technical systems solutions team (Delivery Engineering and Developers) and provide hands-on project support to implement advanced IP Video Signal Processing and Ad Insertion technologies and services.
  • Manage the work of AltCon Ops engineers, assign, prioritize and balance project tasks.
  • Create and maintain performance models for existing and new applications.
  • Perform "what if" scenario analysis to support business decisions, forecast infrastructure needs, and budgeting.
  • Assure systems are backed up and copies are readily available to team
  • Identify process improvements and create fully documented troubleshotting procedures for offshore team
  • Analyze and recommend improvements to the scalability and resiliency of applications.
  • Identify and create advanced application performance metrics to monitor for improved application reliability.
  • Interact with Software Architects, Service Delivery Engineers and stakeholders to analyze complex projects and break them down into detailed and functional tasks.
  • Create and present analysis and interdependencies to project, partner or senior leadership stakeholders.
  • Assist development teams by deploying and configuring components in various environments.
  • Troubleshoot and triage services and solutions.
  • Collaboratively drive deployment of scalable software solutions.
  • Development of tools and processes for managing servers.
  • Development of load, capacity, longevity limitations of each platform.
  • Development of systems and code performance validation tools.
  • Foster cross-functional knowledge sharing and mentoring amongst the various engineering teams.
  • Responsible for implementation, troubleshooting, and management of customer facing systems with high potential for impact
  • Leader of long-term projects that have high impact to internal and external customers.
  • Writes SMOPs for implementation of changes to software, reviews for others
  • Self starter, projects often require curiosity and a love of learning to complete
  • Must be a problem solver and able to utilize new technologies or methods to solve complex problems

Here are some of the specific technologies we use:

  • Programming Languages: Ruby, Python, Go, Javascript, Bash
  • DevOps Tools: Splunk, Kubernetes, Docker, Sysdig, Prometheus, Git, Helm, Concourse, Jenkins
  • Open Source Technologies: Nginx, PostgreSQL, Varnish, Apache Tomcat, HAProxy, Redis, Kafka
  • General Knowledge: Linux, MPEG, HTTP Adaptive Streaming, IP Networking, VMWare, Kubernetes, OpenStack

Familiarity with the following industry specifications and standards is helpful but not required:

  • CableLabs Event Signaling and Management (ESAM)
  • CableLabs Event Signaling and Notification Interface (ESNI)
  • Society of Cable Telecommunications Engineers (SCTE-35)
  • Digital Video Ad Serving Template (VAST)
  • CableLabs Encoder Boundary Point (EBP)

Skills & Requirements

  • 9+ years of hands-on experience in software development and/or DevOps engineering.
  • 3+ years of experience as a team lead.
  • Experience with a variety of Unix/Linux automation and scripting languages such as Python, Bash, Puppet.
  • Experience writing core programming languages such as Go, Java or C/ C++.
  • Strong Excel skills
  • Strong ability to prioritize, assign, track and shift team resources as needed for multiple projects.
  • Strong experience gathering requirements and supporting advanced software development teams in an agile environment.
  • Ability to plan, organize and document complex system designs.
  • Understanding how to scale applications depending on load.
  • Knowledge of networking concepts (VLAN, TCP/IP, Multicast, Unicast, OSI).
  • Experience with developing advanced application performance monitoring.
  • Ability to navigate Unix operating systems.
  • Excellent presentation and communication skills to explain system designs and technologies to senior leadership.
  • Strong ability to collaborate with peers and stakeholders around system designs
  • High attention to detail and strong ability to problem solve systems issues.
  • Experience with CI/CD methodologies.
  • Willing to take ownership of problems and independently drive them to resolution.
  • Must be able to work independently, be self-motivated and handle multiple priorities.
  • Comfortable working in a fast paced agile environment. Requirements change quickly and our team needs to constantly adapt to moving targets.
  • Container Experience
  • * Docker
  • * Kubernetes
  • * Helm
  • Experience with Git and source code management
  • Proven experience in one of the following languages:
  • * Python
  • * Ruby
  • * golang
  • Proven Experience using Splunk or Logstash/Kibana/ElasticSearch for reporting
  • Management of Web Services in a Virtualized Env
  • * Apache/Nginx
  • * Varnish/HaProxy
  • * IPVS or other load balancing
  • Experience troubleshooting operational issues with developers
  • Experience troubleshooting networking from a sys admin level
  • * Packet captures
  • * Trace routes
  • Experience with Software Deployment/Configuration Management utilizing
  • * Puppet

Comcast is an EOE/Veterans/Disabled/LGBT employer

Comcast
  • Denver, CO

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.

The Alternate Content Engineering Operations / Linear team is a growing, fast-moving team of world-class experts who are innovating in end-to-end IP Video Signal Processing that is focused on Dynamic Ad Insertion, Virtual Stream Stiching, Content Content Blackout delivery to Comcast and it's Syndication Partners as well as IP Linear responsibities and relationship across these components. We are a team that thrives on big challenges, results, quality, and agility.

The Alternate Content/ Linear Operations Engineer is the technical counterpart to the on IP Video projects that involve Ad Insertion, Virtual Stream Stitching and Content Blackouts. The group is accountable for the overall deployment, delivery and operational support of all Signal Processing and IP Linear services. The AltCon/ Linear Engineer 5 role gets involved early in the project lifecycle and continues to support the technical solution beyond successful deployment, ensuring detailed software designs, infrastructure, and operational support meet the project objective.

Who does the delivery engineer work with?

The Engineer 5 is the Lead' for the teams technical solutions, capacity planning and overall operational support. Throughout the life of the project, the Principal AltCon / Linear Ops Engineer will collaboratively work with many project stakeholders, including the project manager, service delivery, architect, software development leads, infrastructure team, network engineers, system adiministrators and Comcast leadership to ensure efficient delivery of IP Video Signal Processing and Ad Insertion Projects and its relationship across the IP Linear footprint. This includes analyzing deployment requirements and physical design, defining deployment strategies and help decomposing into deliverable tasks. This Engineer 5 will drive capacity management analysis and audits to understand the IP Video Signal Processing and Ad Insertion applications data processing, throughput and storage needs, as well as growth projections to determine deployment phasing and establishing tracking metrics and robust performance models. These models will be used to validate that applications are performing as anticipated, highlight application issues, and to establish plans to scale the application infrastructure over time. We would like a senior-level engineer with experience managing large scale web sites that utilize caching, load balancing, etc. This engineer should also have automation experience and a demonstrated knowledge of how to utilize git for managing projects/source code. They should be comfortable working with developers and able to communicate via various methods to accomplish their tasks.

What are some interesting problems you'll be working on?

In this role, you will bridge many technical gaps during the life of a project. Examples include:

  • Drive the implementation of an IP Video Signal Processing and Ad Insertion solution, both for Comcast and for companies external to Comcast.
  • Lead troubleshooting efforts to find root causes and corrective actions thoughout the life of a project.
  • Determine requirements, create, validate and audit system capacity plans.
  • Identify and create advanced application monitoring (Splunk, ELK, Sysdig, Prometheus) for improved reliability.
  • Establish automated application deployments to various environments (Kubernetes, Helm).
  • Develop scripts and utilities to automate data collection.
  • Evaluate new code releases for basic reliability and systems integration support.
  • Provide guidance to QA teams who will perform functional and load testing.

Responsibilities:

  • Work directly with technical systems solutions team (Delivery Engineering and Developers) and provide hands-on project support to implement advanced IP Video Signal Processing and Ad Insertion technologies and services.
  • Manage the work of AltCon Ops engineers, assign, prioritize and balance project tasks.
  • Create and maintain performance models for existing and new applications.
  • Perform "what if" scenario analysis to support business decisions, forecast infrastructure needs, and budgeting.
  • Assure systems are backed up and copies are readily available to team
  • Identify process improvements and create fully documented troubleshotting procedures for offshore team
  • Analyze and recommend improvements to the scalability and resiliency of applications.
  • Identify and create advanced application performance metrics to monitor for improved application reliability.
  • Interact with Software Architects, Service Delivery Engineers and stakeholders to analyze complex projects and break them down into detailed and functional tasks.
  • Create and present analysis and interdependencies to project, partner or senior leadership stakeholders.
  • Assist development teams by deploying and configuring components in various environments.
  • Troubleshoot and triage services and solutions.
  • Collaboratively drive deployment of scalable software solutions.
  • Development of tools and processes for managing servers.
  • Development of load, capacity, longevity limitations of each platform.
  • Development of systems and code performance validation tools.
  • Foster cross-functional knowledge sharing and mentoring amongst the various engineering teams.
  • Responsible for implementation, troubleshooting, and management of customer facing systems with high potential for impact
  • Leader of long-term projects that have high impact to internal and external customers.
  • Writes SMOPs for implementation of changes to software, reviews for others
  • Self starter, projects often require curiosity and a love of learning to complete
  • Must be a problem solver and able to utilize new technologies or methods to solve complex problems

Here are some of the specific technologies we use:

  • Programming Languages: Ruby, Python, Go, Javascript, Bash
  • DevOps Tools: Splunk, Kubernetes, Docker, Sysdig, Prometheus, Git, Helm, Concourse, Jenkins
  • Open Source Technologies: Nginx, PostgreSQL, Varnish, Apache Tomcat, HAProxy, Redis, Kafka
  • General Knowledge: Linux, MPEG, HTTP Adaptive Streaming, IP Networking, VMWare, Kubernetes, OpenStack

Familiarity with the following industry specifications and standards is helpful but not required:

  • CableLabs Event Signaling and Management (ESAM)
  • CableLabs Event Signaling and Notification Interface (ESNI)
  • Society of Cable Telecommunications Engineers (SCTE-35)
  • Digital Video Ad Serving Template (VAST)
  • CableLabs Encoder Boundary Point (EBP)

Skills & Requirements

  • 9+ years of hands-on experience in software development and/or DevOps engineering.
  • 3+ years of experience as a team lead.
  • Experience with a variety of Unix/Linux automation and scripting languages such as Python, Bash, Puppet.
  • Experience writing core programming languages such as Go, Java or C/ C++.
  • Strong Excel skills
  • Strong ability to prioritize, assign, track and shift team resources as needed for multiple projects.
  • Strong experience gathering requirements and supporting advanced software development teams in an agile environment.
  • Ability to plan, organize and document complex system designs.
  • Understanding how to scale applications depending on load.
  • Knowledge of networking concepts (VLAN, TCP/IP, Multicast, Unicast, OSI).
  • Experience with developing advanced application performance monitoring.
  • Ability to navigate Unix operating systems.
  • Excellent presentation and communication skills to explain system designs and technologies to senior leadership.
  • Strong ability to collaborate with peers and stakeholders around system designs
  • High attention to detail and strong ability to problem solve systems issues.
  • Experience with CI/CD methodologies.
  • Willing to take ownership of problems and independently drive them to resolution.
  • Must be able to work independently, be self-motivated and handle multiple priorities.
  • Comfortable working in a fast paced agile environment. Requirements change quickly and our team needs to constantly adapt to moving targets.
  • Container Experience
  • * Docker
  • * Kubernetes
  • * Helm
  • Experience with Git and source code management
  • Proven experience in one of the following languages:
  • * Python
  • * Ruby
  • * golang
  • Proven Experience using Splunk or Logstash/Kibana/ElasticSearch for reporting
  • Management of Web Services in a Virtualized Env
  • * Apache/Nginx
  • * Varnish/HaProxy
  • * IPVS or other load balancing
  • Experience troubleshooting operational issues with developers
  • Experience troubleshooting networking from a sys admin level
  • * Packet captures
  • * Trace routes
  • Experience with Software Deployment/Configuration Management utilizing
  • * Puppet

Comcast is an EOE/Veterans/Disabled/LGBT employer

Comcast
  • Denver, CO

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.

The Alternate Content Engineering Operations / Linear team is a growing, fast-moving team of world-class experts who are innovating in end-to-end IP Video Signal Processing that is focused on Dynamic Ad Insertion, Virtual Stream Stiching, Content Content Blackout delivery to Comcast and it's Syndication Partners as well as IP Linear responsibities and relationship across these components. We are a team that thrives on big challenges, results, quality, and agility.

The Alternate Content/ Linear Operations Engineer is the technical counterpart to the on IP Video projects that involve Ad Insertion, Virtual Stream Stitching and Content Blackouts. The group is accountable for the overall deployment, delivery and operational support of all Signal Processing and IP Linear services. The AltCon/ Linear Engineer 5 role gets involved early in the project lifecycle and continues to support the technical solution beyond successful deployment, ensuring detailed software designs, infrastructure, and operational support meet the project objective.

Who does the delivery engineer work with?

The Engineer 5 is the Lead' for the teams technical solutions, capacity planning and overall operational support. Throughout the life of the project, the Principal AltCon / Linear Ops Engineer will collaboratively work with many project stakeholders, including the project manager, service delivery, architect, software development leads, infrastructure team, network engineers, system adiministrators and Comcast leadership to ensure efficient delivery of IP Video Signal Processing and Ad Insertion Projects and its relationship across the IP Linear footprint. This includes analyzing deployment requirements and physical design, defining deployment strategies and help decomposing into deliverable tasks. This Engineer 5 will drive capacity management analysis and audits to understand the IP Video Signal Processing and Ad Insertion applications data processing, throughput and storage needs, as well as growth projections to determine deployment phasing and establishing tracking metrics and robust performance models. These models will be used to validate that applications are performing as anticipated, highlight application issues, and to establish plans to scale the application infrastructure over time. We would like a senior-level engineer with experience managing large scale web sites that utilize caching, load balancing, etc. This engineer should also have automation experience and a demonstrated knowledge of how to utilize git for managing projects/source code. They should be comfortable working with developers and able to communicate via various methods to accomplish their tasks.

What are some interesting problems you'll be working on?

In this role, you will bridge many technical gaps during the life of a project. Examples include:

  • Drive the implementation of an IP Video Signal Processing and Ad Insertion solution, both for Comcast and for companies external to Comcast.
  • Lead troubleshooting efforts to find root causes and corrective actions thoughout the life of a project.
  • Determine requirements, create, validate and audit system capacity plans.
  • Identify and create advanced application monitoring (Splunk, ELK, Sysdig, Prometheus) for improved reliability.
  • Establish automated application deployments to various environments (Kubernetes, Helm).
  • Develop scripts and utilities to automate data collection.
  • Evaluate new code releases for basic reliability and systems integration support.
  • Provide guidance to QA teams who will perform functional and load testing.

Responsibilities:

  • Work directly with technical systems solutions team (Delivery Engineering and Developers) and provide hands-on project support to implement advanced IP Video Signal Processing and Ad Insertion technologies and services.
  • Manage the work of AltCon Ops engineers, assign, prioritize and balance project tasks.
  • Create and maintain performance models for existing and new applications.
  • Perform "what if" scenario analysis to support business decisions, forecast infrastructure needs, and budgeting.
  • Assure systems are backed up and copies are readily available to team
  • Identify process improvements and create fully documented troubleshotting procedures for offshore team
  • Analyze and recommend improvements to the scalability and resiliency of applications.
  • Identify and create advanced application performance metrics to monitor for improved application reliability.
  • Interact with Software Architects, Service Delivery Engineers and stakeholders to analyze complex projects and break them down into detailed and functional tasks.
  • Create and present analysis and interdependencies to project, partner or senior leadership stakeholders.
  • Assist development teams by deploying and configuring components in various environments.
  • Troubleshoot and triage services and solutions.
  • Collaboratively drive deployment of scalable software solutions.
  • Development of tools and processes for managing servers.
  • Development of load, capacity, longevity limitations of each platform.
  • Development of systems and code performance validation tools.
  • Foster cross-functional knowledge sharing and mentoring amongst the various engineering teams.
  • Responsible for implementation, troubleshooting, and management of customer-facing systems with high potential for impact.
  • Lead long-term projects that have a high impact on internal and external customers.
  • Write SMOPs for implementing software changes and review SMOPs written by others.
  • Self-starter; projects often require curiosity and a love of learning to complete.
  • Must be a problem solver, able to apply new technologies or methods to solve complex problems.

Here are some of the specific technologies we use:

  • Programming Languages: Ruby, Python, Go, Javascript, Bash
  • DevOps Tools: Splunk, Kubernetes, Docker, Sysdig, Prometheus, Git, Helm, Concourse, Jenkins
  • Open Source Technologies: Nginx, PostgreSQL, Varnish, Apache Tomcat, HAProxy, Redis, Kafka
  • General Knowledge: Linux, MPEG, HTTP Adaptive Streaming, IP Networking, VMWare, Kubernetes, OpenStack

Familiarity with the following industry specifications and standards is helpful but not required:

  • CableLabs Event Signaling and Management (ESAM)
  • CableLabs Event Signaling and Notification Interface (ESNI)
  • Society of Cable Telecommunications Engineers (SCTE-35)
  • Digital Video Ad Serving Template (VAST)
  • CableLabs Encoder Boundary Point (EBP)

Skills & Requirements

  • 9+ years of hands-on experience in software development and/or DevOps engineering.
  • 3+ years of experience as a team lead.
  • Experience with a variety of Unix/Linux automation and scripting languages such as Python, Bash, Puppet.
  • Experience writing code in core programming languages such as Go, Java or C/C++.
  • Strong Excel skills
  • Strong ability to prioritize, assign, track and shift team resources as needed for multiple projects.
  • Strong experience gathering requirements and supporting advanced software development teams in an agile environment.
  • Ability to plan, organize and document complex system designs.
  • Understanding how to scale applications depending on load.
  • Knowledge of networking concepts (VLAN, TCP/IP, Multicast, Unicast, OSI).
  • Experience with developing advanced application performance monitoring.
  • Ability to navigate Unix operating systems.
  • Excellent presentation and communication skills to explain system designs and technologies to senior leadership.
  • Strong ability to collaborate with peers and stakeholders around system designs
  • High attention to detail and strong ability to problem solve systems issues.
  • Experience with CI/CD methodologies.
  • Willing to take ownership of problems and independently drive them to resolution.
  • Must be able to work independently, be self-motivated and handle multiple priorities.
  • Comfortable working in a fast paced agile environment. Requirements change quickly and our team needs to constantly adapt to moving targets.
  • Container experience: Docker, Kubernetes, Helm
  • Experience with Git and source code management
  • Proven experience in at least one of the following languages: Python, Ruby, Go
  • Proven experience using Splunk or Logstash/Kibana/Elasticsearch for reporting
  • Management of web services in a virtualized environment: Apache/Nginx, Varnish/HAProxy, IPVS or other load balancing
  • Experience troubleshooting operational issues with developers
  • Experience troubleshooting networking at a sysadmin level: packet captures, trace routes
  • Experience with software deployment/configuration management using Puppet

Comcast is an EOE/Veterans/Disabled/LGBT employer

VIOOH
  • London, UK
  • Salary: £70k - 90k

Role Title: Big Data Engineer
Reports to: Head of Data
Location: London - Paddington


Purpose of Role


We are looking for an experienced Big Data Engineer who will work on collecting, storing, processing, and analyzing huge sets of data. The primary focus will be on developing and implementing optimal solutions for these purposes, then maintaining and monitoring them. You will also be responsible for integrating these solutions with the architecture used across the company.


DUTIES



  • Selecting and integrating any Big Data tools and frameworks required to provide requested capabilities

  • Implementing ETL processes

  • Industrialising code produced by Data Scientists

  • Monitoring performance and advising on any necessary infrastructure changes

  • Defining and implementing data retention policies


SKILLS & QUALIFICATIONS



  • Strong experience with Big Data tools: Hadoop, Spark, HDFS, Hive

  • Experience with the following languages: Scala, Java, PySpark

  • Experience implementing consumers and producers for data streaming tools, in particular Kafka and Amazon Kinesis (a minimal Kafka sketch follows this list)

  • Experience with NoSQL databases, such as Cassandra, HBase, MongoDB

  • Experience with Hortonworks HDP, Amazon EMR

  • Experience with integration of data from multiple data sources

  • Experience in building or integrating monitoring tools (Kibana, Logstash, Grafana, Prometheus)

  • Knowledge and experience with Amazon AWS platform

  • Knowledge of various ETL techniques and frameworks.
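
As a rough illustration of the consumer/producer experience mentioned above, the sketch below uses the kafka-python package; the broker address, topic name and JSON payload are invented assumptions for the example.

# Illustrative kafka-python producer and consumer; broker/topic names are made up.
import json

from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("impressions", {"frame_id": "F123", "plays": 4})
producer.flush()

consumer = KafkaConsumer(
    "impressions",
    bootstrap_servers="localhost:9092",
    group_id="etl-loader",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:            # blocks, processing events as they arrive
    print(message.topic, message.offset, message.value)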


PERSON SPECIFICATION



  • Understands new technologies and stays up to date with the latest developments

  • Data-oriented and attentive to detail

  • Excellent analysis, testing and troubleshooting skills

  • Highly motivated and project-oriented

  • Ability to facilitate discussions to resolve conflicting processes / content / opinions

  • Ability to work under own initiative and prioritise workload efficiently

  • Excellent verbal communication and presentation skills


Welcome experience



  • Knowledge of advertising industry, and possibly Adtech.

Connections IT Services
  • Dallas, TX

At Connections IT Services, we know that with the right people on board, anything is possible. The quality, integrity, and commitment of our employees are key factors in our company's growth, market presence and our ability to help our clients stay a step ahead of the competition. By hiring the best people and helping them grow both professionally and personally, we ensure a bright future for Connections IT Services and for the people who work here.

No Agency or 3rd Party
Sponsorship is Available

Position ID: cITs 1114
Location: Santa Clara, California
Duration: 12 months

 

Title: Hadoop Developer

Job Description:
The candidate must have strong experience/exposure across the entire data landscape, with the ability to extend traditional data architecture techniques to include big data components. This includes Data Strategy and Architecture development, Data Governance, Master Data Management, Metadata management, Data Integration/ingestion, Data Quality management, Data modeling, Data warehousing, Business Intelligence and advanced Analytics. The candidate must possess a thorough understanding of data management as well as big data technology, tools, processes and data architecture best practices to guide data and big data centric initiatives, and a demonstrated ability to liaise closely with business and IT leadership to establish, defend, and negotiate the approach for implementing data management solutions. The role serves as an expert consultant to senior IT leadership on data architecture and enterprise data initiatives. Strong communication, organizational, interpersonal and time management skills are required; the candidate must be able to work independently with minimal direction or supervision in a team setting and dynamic environment, and be able to identify and articulate domain-specific use cases that can take advantage of big data tools and technologies.

Technical Skills:

  • 5+ years of experience with the Hadoop ecosystem and Big Data technologies
  • Expert-level software development experience
  • Ability to adapt conventional big-data frameworks and tools to the use cases required by the project
  • Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, HBase, Hive, Impala, Spark, Kafka, Kudu, Solr)
  • Experience with building stream-processing systems using solutions such as Spark Streaming, Storm or Flink (a small Spark example follows this list)
  • Experience with other open-source projects such as Druid, Elasticsearch, Logstash is a plus
  • Knowledge of design strategies for developing scalable, resilient, always-on data lakes
  • Some knowledge of Agile (Scrum) development methodology is a plus
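
As a small, hypothetical example of the stream-processing experience listed above (not taken from the job description), the sketch below reads a Kafka topic with Spark Structured Streaming and maintains a running count per key; the broker, topic and column names are assumptions, and the job needs the spark-sql-kafka connector on its classpath.

# Illustrative PySpark Structured Streaming job; broker/topic names are made up.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("clickstream-counts").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "clickstream")
          .load())

# Kafka rows expose binary key/value columns; count events per key.
counts = (events
          .select(col("key").cast("string").alias("user"))
          .groupBy("user")
          .count())

query = (counts.writeStream
         .outputMode("complete")      # emit the full counts table on each trigger
         .format("console")
         .start())
query.awaitTermination()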


Desired Skills:

  • Hadoop/Cloud Developer Certification
  • MongoDB Developer Certification
  • Experience deploying applications in a cloud environment; ability to architect, design, deploy and manage cloud-based Hadoop clusters

Elastic
  • No office location

At Elastic, we have a simple goal: to pursue the world's data problems with products that delight and inspire. We help people around the world do exceptional things with their data. From stock quotes to Twitter streams, Apache logs to WordPress blogs, our products are extending what's possible with data, delivering on the promise that good things come from connecting the dots. Often, what you can do with our products is only limited by what you can dream up. We believe that diversity drives our vibe. We unite employees across 30+ countries into one unified team, while the broader community spans across over 100 countries.


Elastic’s Cloud product allows users to build new clusters or expand existing ones easily. The product is built on a Docker-based orchestration system to easily deploy and manage multiple Elastic clusters.


What You Will Do:



  • Work cross-functionally with product managers, analysts, and engineering teams to extract meaningful data from multiple sources

  • Develop analytical models to identify trends, calculate KPIs and identify anomalies.  Use these models to generate reports and data dumps that enrich our KPI efforts

  • Design resilient data pipelines or ETL processes to collect, process and index business and operational data

  • Integrate several data sources like Salesforce, Postgres DB, Elasticsearch to create a holistic view of the Cloud business

  • Manage data collection services in production with the SRE team

  • Use Kibana and Elasticsearch to analyze business data. Help engineering and product teams to make data-based decisions. (A small Elasticsearch sketch follows this list.)

  • Grow and share your interest in technical outreach (blog posts, tech papers, conference speaking, etc.)
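
As a loose illustration of the Elasticsearch-based analysis mentioned above, the sketch below indexes a few KPI documents and runs a simple aggregation with the official Python client (8.x API assumed); the index name, fields and cluster URL are invented for the example.

# Illustrative Elasticsearch indexing and aggregation; names and URLs are made up.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

docs = [
    {"cluster_id": "c-001", "region": "aws-us-east-1", "monthly_cost": 420.0},
    {"cluster_id": "c-002", "region": "aws-us-east-1", "monthly_cost": 180.0},
    {"cluster_id": "c-003", "region": "gcp-europe-west1", "monthly_cost": 310.0},
]
for doc in docs:
    es.index(index="cloud-kpis", id=doc["cluster_id"], document=doc)
es.indices.refresh(index="cloud-kpis")

# Average monthly cost per region: the kind of roll-up a KPI dashboard would show.
resp = es.search(
    index="cloud-kpis",
    size=0,
    aggs={"by_region": {"terms": {"field": "region.keyword"},
                        "aggs": {"avg_cost": {"avg": {"field": "monthly_cost"}}}}},
)
for bucket in resp["aggregations"]["by_region"]["buckets"]:
    print(bucket["key"], round(bucket["avg_cost"]["value"], 2))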


What You Bring Along:



  • You are passionate about developing software that delivers quality data to stakeholders

  • Hands-on experience building data pipelines using technologies such as Elasticsearch, Hadoop, Spark

  • Experience developing models for KPIs such as user churn, trial conversion, etc

  • Ability to code in JVM based languages or Python

  • Experience with data modeling

  • Experience doing ad-hoc data analysis for key stakeholders

  • A working knowledge of Elasticsearch

  • Experience building dashboards in Kibana

  • Experience working with ETL tools such as Logstash, Apache NiFi, Talend is a plus

  • Deep understanding of relational as well as NoSQL data stores is a plus

  • A self starter who has experience working across multiple technical teams and decision makers

  • You love working with a diverse, worldwide team in a distributed work environment


Additional Information



  • Competitive pay and benefits

  • Equity

  • Catered lunches, snacks, and beverages in most offices

  • An environment in which you can balance great work with a great life

  • Passionate people building great products

  • Employees with a wide variety of interests

  • Your age is only a number. It doesn't matter if you're just out of college or your children are; we need you for what you can do.


Elastic is an Equal Employment employer committed to the principles of equal employment opportunity and affirmative action for all applicants and employees. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender perception or identity, national origin, age, marital status, protected veteran status, or disability status or any other basis protected by federal, state or local law, ordinance or regulation. Elastic also makes reasonable accommodations for disabled employees consistent with applicable law.

Viavi Solutions UK Ltd
  • Newbury, UK

About Viavi Solutions


VIAVI (NASDAQ: VIAV) has a 90+ year history of technical innovations that have evolved to keep pace and address our customer’s most pressing business issues. We make equipment, software, and systems that help to plan, deploy, certify, monitor, and optimize all kinds of networks - like those for mobile phones, service providers, large businesses and data centers. And, we are also at the forefront of optical security – we bend light to develop and deliver optical solutions that provide security to the world’s currencies and safety and performance applications for consumer electronics and spectrometry. We are the people behind the products that help keep the world connected – at home, school, work, at play, and everywhere in between. VIAVI employees are fierce about supporting customer success and we welcome people who bring their best every day to the company - to question, to collaborate and to push for solutions that will delight our customers.

Summary: VIAVI Solutions is looking for a full stack software development engineer to join our R&D team developing its new NITRO Mobile product. Our NITRO Mobile solutions process and store billions of events a day and are truly ‘Big data’ systems. You will join a team working atop leading ‘Big data’ frameworks including Apache Spark and Kafka, deployed on large cluster environments. The team will develop on Linux using Java and/or Scala as the primary development languages.


Duties/Responsibilities: 



  • Work effectively and efficiently with others on the R&D development team to develop a winning product roadmap

  • Continue to expand, focus and leverage personal and team knowledge base and technical abilities in constant pursuit of developing a superior product

  • Follow the Agile Product Development model to constantly optimize feature, time to market and project budget while maintaining an uncompromising high level of product quality

  • Execute full software development life cycle (SDLC)

  • Use Behavior Driven Development or Test Driven Development to deliver well-designed, tested code

  • Develop new user-facing features, following established UI/UX design guidelines

  • Provide ongoing maintenance, support, and enhancements

  • Develop automated software unit tests and integration tests

  • Integrate software components into a fully functional software system

  • Troubleshoot and debug existing systems

  • Provide recommendations for continuous improvement


Required Qualifications & Experience Basic Qualifications: 



  • BS or MS in Computer Science, Computer Engineering, Software Engineering, or related field

  • Excellent English-language written and verbal communication skills

  • Software development experience, using an Agile methodology (e.g., SCRUM or Kanban), including design, development, and testing activities.

  • Experience in developing complex commercial software products 

  • Software development using Java

  • Experience developing applications using a Microservices (preferred) or Web Services architecture

  • Experience with designing, developing and using RESTful APIs supporting JSON or XML

  • Experience working in a Linux environment (RHEL or CentOS preferred)

  • Experience developing single page application (SPA) web applications using HTML5, CSS2/CSS3, and JavaScript (ECMAScript 6)

  • Experience using AngularJS (preferred) or Angular

  • Experience with addressing cross-browser compatibility issues

  • Experience with version control systems (Bitbucket/Git preferred)

  • Experience developing automated unit tests (Junit preferred)


Preferred Qualifications: Experience with some or all of the following: 



  • Spring framework (especially with Spring Boot)

  • Swagger API framework

  • WSO2 Identity Management

  • Elasticsearch, Logstash, Kibana (ELK stack)

  • Apache Kafka

  • Apache Spark

  • JFrog Artifactory artifact manager

  • Apache Maven

  • Apache Zookeeper

  • PostgreSQL

  • Log4j

  • JetBrains IntelliJ IDEA or other JetBrains tools

  • Atlassian tool suite, including JIRA, Confluence, Bitbucket/Git, Bamboo

  • Monitoring system performance with tools such as Graphite and Grafana

  • Deployment of applications in a cloud-hosted environment (Amazon Web Services / AWS preferred)

  • Development or use of tools for Network Performance Monitoring and Diagnostics (NPMD)

  • UI libraries including JQuery, Leaflet, and Highsoft Highcharts

  • Tools such as Node Package Manager (npm), Bower package manager, and Grunt task runner

  • Behavior-driven or test-driven development


If you have what it takes to push boundaries and seize opportunities, apply to join our team today.



VIAVI Solutions is an equal opportunity and affirmative action employer – minorities/females/veterans/persons with disabilities.

Comcast
  • Philadelphia, PA

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.

Job Description

Software engineering leadership and data science skills, combined with the demands of a highly-visible enterprise metadata repository, make this an exciting challenge for the right candidate.

Are you passionate about digital media, entertainment, and software services? Do you like big challenges and working within a highly-motivated team environment?

As a Senior Manager in the Metadata Engineering group of the Data Experience (DX) team at Comcast, you will drive the development, deployment, and support of large-scale metadata platforms using real-time distributed computing architectures. You will also employ your skills to promote positive changes in our work culture and practices that will improve our productivity, ingenuity, agility, and software development maturity. The DX data team is a fast-moving team of world-class experts who are innovating in end-to-end data delivery and analytics. We are a team that thrives on big challenges, results, quality, and agility.

Who will you be working with?

DX Metadata Engineering is a diverse collection of professionals who work with a variety of teams, ranging from other software engineering teams whose metadata repositories integrate with the Centralized Metadata Repository, Portal engineers who develop a UI to support data discovery, and software engineers on other DX platforms that ingest, transform, and retrieve data whose metadata is stored in the Centralized Metadata Store, to data stewards/data architects who collect and disseminate metadata information, and users who rely on metadata for data discovery.

What are some interesting problems you'll be working on?

You will manage the design and development of a metadata and business glossary collection and enrichment system that allows real-time updates of the enterprise and satellite metadata repositories using best-of-breed and industry-leading technologies. These repositories contain metadata and lineage for a widely diverse and ever-growing complement of datasets (e.g., Hortonworks, AWS S3, streaming data (Kafka/Kinesis), streaming data transformation, ML pipelines, Teradata, and RDBMSs). You will lead the design and development of cross-domain, cross-platform lineage tooling using advanced statistical methods and Machine Intelligence algorithms. You will manage the development of tools to discover data across disparate metadata repositories, develop tools for data governance, and implement processes to rationalize data across the repositories.
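
As a toy illustration of the cross-repository discovery problem described above (and emphatically not the team's actual approach), the sketch below fuzzily matches field names between two hypothetical metadata catalogs using Python's difflib; the catalogs and similarity threshold are invented.

# Toy cross-catalog field matching with difflib; catalogs and threshold are
# invented for illustration and do not reflect the team's real tooling.
from difflib import SequenceMatcher

warehouse_fields = ["acct_id", "billing_zip", "service_start_dt"]
streaming_fields = ["account_id", "zip_code", "service_start_date", "event_ts"]

def best_match(name, candidates):
    """Return the candidate most similar to `name` and its similarity score."""
    scored = [(c, SequenceMatcher(None, name, c).ratio()) for c in candidates]
    return max(scored, key=lambda pair: pair[1])

for field in warehouse_fields:
    candidate, score = best_match(field, streaming_fields)
    if score >= 0.6:                 # arbitrary similarity threshold
        print(f"{field} ~ {candidate} (score {score:.2f})")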

Where can you make an impact?

The dx Team is building the enterprise metadata repository needed to drive the next generation of data platforms and data processing capabilities. Building data products, identifying trouble spots, and optimizing the overall user experience is a challenge that can only be met with a robust metadata repository capable of providing insights into the data and its lineage.

Success in this role is best enabled by a broad mix of skills and interests ranging from traditional distributed systems software engineering prowess to the multidisciplinary field of data science.

Responsibilities:

-Responsible for managing all metadata assets, applications, and supporting processes.

-Closely work with Architects, Product Owners and Solution Engineers to understand product requirements, understand architectural recommendations, and work with solution engineers to develop a viable solution.

-Guide the Metadata Engineering team in identifying product and technical requirements.

-Ensure products and projects are delivered as a roadmap within the agreed budget and time.

-Serve as primary point of contact and liaison between Metadata Engineering and other teams.

-Closely monitor metadata eco-system to ensure that each metadata asset is performing as per SLA and continuously delivering business value.

-Responsible for preparing team budgets, roadmaps, and operational objectives and ensuring operational plans are aligned with business objectives.

-Responsible for selection and recruitment of resources and work to ensure a high-quality stream of candidates in our talent pipeline

-Experience in hiring and managing teams.

-Experienced in managing projects with competing priorities

-Ensure that direct reports keep current with technological developments within the industry.

-Monitor and evaluate competitive applications and products.

-Drive a culture of continuous improvement and innovation

-Pro-actively work to mitigate risks to performance and delivery of our teams

-Promote solutions for integrating metadata and data quality processes into agile methodologies

-Promote blameless post-mortems and ensure that all post-mortem activities are acted upon

Here are some of the specific technologies we use:

-Metadata repositories: Apache Atlas and Informatica MDM

-Spark (AWS EMR, Databricks)

-Kafka, AWS Kinesis

-AWS Glue, AWS Lambda

-Cassandra, RDBMS, Teradata, AWS DynamoDB

-Elasticsearch, Solr, Logstash, Kibana

-Java, Scala, Go, Python, R

-Git, Maven, Gradle, Jenkins

-Puppet, Docker, Terraform, Ansible, AWS CloudFormation

-Linux

-Kubernetes

-Manta

-Hadoop (HDFS, YARN, ZooKeeper, Hive), Presto

-Jira

Skills & Requirements:

-7+ years of people leadership experience in a software development environment

-2-4 years of experience in metadata-related projects

-Bachelors or Masters in Computer Science, Statistics or related discipline

-Demonstrated experience in data management; Certified Data Management Professional (CDMP) certification is nice to have

-Experience in software development of large-scale distributed systems including a proven track record of delivering backend systems that participate in a complex ecosystem.

-Experience in metadata-related open source frameworks preferred

-Experience in using and contributing to Open Source software preferred.

-Proficient in Unix/Linux environments preferred

-Partner with data analysis, data quality, and reporting teams in establishing the best standards and principles around metadata management

-Excellent communicator, able to analyze and articulate complex issues and technologies understandably and engagingly

-Great design and problem-solving skills

-Adaptable, proactive and takes program ownership

-Keen attention to detail and high level of commitment

-Thrives in a fast-paced agile environment. Requirements change quickly, and our team needs to constantly adapt to moving targets

-A team player with excellent networking, collaborating and influencing skills

About Comcast DX (Data Experience):

dx (Data Experience) is a results-driven, data platform research and engineering team responsible for the delivery of multi-tenant data infrastructure and platforms necessary to support our data-driven culture and organization. We have an overarching objective to gather, organize, and make sense of Comcast data with the intention to reveal business and operational insight, discover actionable intelligence, enable experimentation, empower users, and delight our stakeholders. Members of the dx team define and leverage industry best practices, work on extremely large-scale data problems, design and develop resilient and highly robust distributed data organizing and processing systems and pipelines as well as research, engineer, and apply data science and machine intelligence disciplines.

Our mission is to enable many diverse users with the tools and information to gather, organize, make sense of Comcast data, and make it universally accessible to empower, enable, and transform Comcast into an insight-driven organization.

Comcast is an EOE/Veterans/Disabled/LGBT employer

Comcast
  • Philadelphia, PA

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.

Job Description

Platform engineering and cloud excellence, combined with the demands of a high-volume, highly-visible analytics platform, make this an exciting challenge for the right candidate.

Are you passionate about digital media, entertainment, and software services? Do you like big challenges and working within a highly-motivated team environment?

As a Platform Engineer in the dx Data Experience team, you will research, develop, support and deploy solutions using real-time distributed computing architectures. Our mission is to enable many diverse users with the tools and information to gather, organize, make sense of Comcast data, and make it universally accessible to empower, enable, and transform Comcast into an insight-driven organization. The dx big data organization is a fast-moving team of world-class experts who are innovating in end-to-end data delivery. We are a team that thrives on big challenges, results, quality, and agility.

Who does the Platform Engineer work with?

Platform Engineering is a diverse collection of professionals who work with a variety of teams, ranging from other software engineering teams whose software integrates with analytics services, service delivery engineers who provide support for our product, testers, and operational stakeholders with all manner of information needs, to executives who rely on data for data-based decision making.

What are some interesting problems you'll be working on?

Develop solutions capable of processing millions of events per second and multiple billions of events per day, providing both a real-time and historical view into the operation of Comcast's wide array of systems. Design collection and enrichment system components for quality, timeliness, scale and reliability. Work on high-performance real-time data stores and massive historical data stores using best-of-breed and industry-leading technology. Build platforms that allow others to design, develop, and apply advanced statistical methods and Machine Intelligence algorithms, fostering self-service capabilities and ease of use across the entire Technology, Product, Xperience (TPX) organization landscape and beyond!
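
To make the collection-and-enrichment idea above concrete, here is a deliberately tiny, framework-free sketch of an enrichment step that joins incoming events against reference data before they are handed to the real-time and historical stores; the event shape and lookup table are invented for the example and do not describe the team's actual pipeline.

# Illustrative enrichment step: annotate raw events with reference data before
# they are written to downstream sinks. Event fields and catalog are made up.
from typing import Iterable, Iterator

DEVICE_CATALOG = {               # stand-in for a reference-data cache
    "x1-stb": {"platform": "X1", "tier": "set-top"},
    "flex": {"platform": "Flex", "tier": "streaming"},
}

def enrich(events: Iterable[dict]) -> Iterator[dict]:
    """Yield events annotated with device metadata; unknown devices pass through."""
    for event in events:
        meta = DEVICE_CATALOG.get(event.get("device_type"), {})
        yield {**event, **meta}

if __name__ == "__main__":
    raw = [
        {"account": "a1", "device_type": "x1-stb", "action": "tune"},
        {"account": "a2", "device_type": "flex", "action": "play"},
    ]
    for enriched in enrich(raw):
        print(enriched)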

Where can you make an impact?

The dx Team is building the core components needed to drive the next generation of data platforms and data processing capability. Building data products, identifying trouble spots, and optimizing the overall user experience is a challenge that can only be met with a robust data architecture capable of providing insights that would otherwise be drowned in an ocean of data.

Success in this role is best enabled by a broad mix of skills and interests ranging from traditional distributed systems software engineering prowess to the multidisciplinary field of data science.

Responsibilities:

-Lead development for new platforms

-Build capabilities that analyze massive amounts of data in both real-time and batch processing

-Prototype ideas for new tools, products and services

-Employ rigorous continuous delivery practices managed under an agile software development approach

-Ensure a quality transition to production and solid production operation of the platforms

-Raise the bar for the Engineering team by advocating leading edge practices such as CI/CD, containerization and test-driven development (TDD)

-Enhance our DevOps practices to deploy and operate our systems

-Automate and streamline our operations and processes

-Build and maintain tools for deployment, monitoring and operations

-Troubleshoot and resolve issues in our development, test and production environments

Here are some of the specific technologies we use:

-Spark (AWS EMR), AWS Lambda

-Spark Streaming and Batch

-Avro, Parquet

-Apache Kafka, Kinesis Stream

-MemSQL, Cassandra, HBase, MongoDB, RDBMS

-Caching Frameworks (ElastiCache)

-Elasticsearch, Beats, Logstash, Kibana

-Java, Scala, Go, Python, R, Node.js

-Git, Maven, Gradle, Jenkins

-Rancher, Puppet, Docker, Ansible, Kubernetes

-Linux

-Hadoop (HDFS, YARN, ZooKeeper, Hive)

-Presto

Skills & Requirements:

-7+ years of programming experience

-Bachelors or Masters in Computer Science, Statistics or related discipline

-Experience in software development of large-scale distributed systems including a proven track record of delivering backend systems that participate in a complex ecosystem.

-Experience in data-related technologies and open source frameworks preferred

-Proficient in Unix/Linux environments

-Knowledge of network engineering and security

-Test-driven development/test automation, continuous integration, and deployment automation

-Enjoy working with data analysis, data quality and reporting

-Excellent communicator, able to analyze and clearly articulate complex issues and technologies understandably and engagingly

-Great design and problem-solving skills

-Adaptable, proactive and willing to take ownership

-Keen attention to detail and high level of commitment

-Thrives in a fast-paced agile environment. Requirements change quickly and our team needs to constantly adapt to moving targets

About Comcast dx (Data Experience):

dx(Data Experience) is a results-driven, data platform research and engineering team responsible for the delivery of multi-tenant data infrastructure and platforms necessary to support our data-driven culture and organization. We have an overarching objective to gather, organize, and make sense of Comcast data with intention to reveal business and operational insight, discover actionable intelligence, enable experimentation, empower users, and delight our stakeholders. Members of the dx team define and leverage industry best practices, work on extremely large-scale data problems, design and develop resilient and highly robust distributed data organizing and processing systems and pipelines as well as research, engineer, and apply data science and machine intelligence disciplines.

Our mission is to enable many diverse users with the tools and information to gather, organize, make sense of Comcast data, and make it universally accessible to empower, enable, and transform Comcast into an insight-driven organization.

Comcast is an EOE/Veterans/Disabled/LGBT employer

everyLIFE Technologies Limited
  • Farnborough, UK
  • Salary: £55k - 65k

We are looking for an experienced full stack developer to join the existing cross-functional agile team making a real-world difference to the delivery and evolution of care management. You will have an opportunity to work with cutting-edge technologies such as Scala, Akka, Angular, TypeScript and the Elastic Stack (Elasticsearch, Logstash, Kibana). Previous experience with Scala would be highly beneficial, but if you’re the right fit, have experience of working with OO languages and have an awareness of functional paradigms, then we can get you up to speed.


We are a team that prides itself on embracing change, continuous improvement, collaboration and writing good quality code, albeit in a pragmatic manner. If you care about making a difference then you’ll fit in well here.


 Responsibilities include:



  • Design, develop, test and deliver well engineered code, collaborating with Product to ensure that we are releasing value early and often

  • Drive continuous improvement through code reviews and system design to ensure that all code is clean, consistent and secure.

  • Maintain an up to date knowledge of development languages, frameworks, tools and design patterns

  • Promptly escalate issues that affect product delivery and quality

  • Maintain and manage our Continuous Integration and Delivery pipelines and tooling


 Required Skills



  • Strong application of software craftsmanship, OO principles, SOLID techniques and clean coding

  • Experience of modern web and server-side RESTful development

  • Experience of developer testing approaches

  • Experience of build and Continuous Integration pipelines and tools


 Desirable skills



  • Experience of modern front-end JavaScript frameworks (eg Angular, React, Vue.js)

  • Experience of Play, Xitrum or other Scala Microservice frameworks

  • Experience with Netty and Akka or other non-blocking, multi-threaded IO libraries

  • Experience with Git

  • Experience using MySQL and Elasticsearch

  • Experience of TDD and BDD

  • Experience in deployment and maintenance of Linux based servers

  • Awareness and passion for working with agility and with XP techniques


  Candidates must have the right to work in the UK.  Relocation will be considered for the right applicant.

Sky
  • Isleworth, UK

We’re part of Europe’s leading entertainment company. With over 23 million customers across seven countries, we make life easier by entertaining and connecting people. It’s a genuine team effort. That’s why we want talented people, like you, to join us and help make the future happen.


This role is an exciting opportunity to join us and work within our Tech Team


The perks:


As a valued employee, you’ll benefit from a free Sky Q premium package (one off payment required for installation), an excellent pension scheme and private health care. What’s more, you’ll also have access to over 12,000 LinkedIn Learning courses to support your development. As if that’s not enough, our impressive Osterley campus boasts endless subsidised restaurants, on-site cinema, on-site gym, and much more to make your experience with us even more enjoyable.


To find out more about working with us, search #LifeatSky on LinkedIn, Twitter or Instagram.


Your key responsibilities:


As the Scala Software Developer, you will be a disciplined professional that displays technical aptitude and will possess a level of business acumen. You will work in a dedicated product stream as part of a self-organising and empowered team (Agile).


- Deliver production-ready running tested software in every iteration.


- Provide technical leadership to the development team


- Work as part of the team to support and maintain the live product (including first line support).


- Participate and lead when appropriate agile ceremonies including daily stand-up meetings, planning games, showcases, and retrospectives.


- Collaborate with the Product Owner / Technical Analyst and testers in the creation of user stories, providing information such as cost (estimates) and technical risk.


- Work with the testers to identify and ensure acceptance criteria are satisfied.


Your skills:


- A passion for Agile methodologies and concepts such as Lean and Kanban, as well as XP practices including pair programming, TDD, BDD & DDD


- Proven delivery experience of high-volume, high-availability, large-scale backend systems


- A background in Continuous Integration and Delivery practices


- Experience in Scala and Java 8 and Micro Services Architecture


- Reactive Systems


- Testing tools (ScalaTest) & mock frameworks (mockito)


- Experience with Apache Kafka, Akka, RESTful APIs and Unix 


- Experience with Cassandra or other NoSQL databases


- Experience with logging, monitoring and alerting tools, e.g. Splunk, Logstash, Kibana, App Dynamics, Elastic Search, Grafana, etc


If you’re ready to work in a dynamic environment alongside talented people who take pride in delivering great results, apply today.


Happy to talk flexible working.


It’s our people that make Sky Europe’s leading entertainment company. That’s why we work hard to be an inclusive employer, so everyone at Sky can be their best. 


If you are successful in your application for this role, your appointment will be subject to receiving a positive outcome from your Criminal Record Check.


Believe in better

Exponea
  • Bratislava, Slovakia
  • Salary: €13k - 72k

  • Would you like to manage a platform that uses hundreds of servers, using several TB of RAM?

  • Would working with a platform that handles thousands of requests per second and stores billions of records be a challenge for you?


Exponea Experience Cloud is an award-winning customer experience and data management platform that not only boosts e-commerce growth with AI-powered engagement automation, but also helps improve company culture with improved cross-department collaboration and customer centricity.


With all key features developed, we are now adding further features and stabilizing the platform, which uses hundreds of servers and terabytes of RAM to handle thousands of requests per second.


We are currently looking for new DevOps Engineers to help us maintain and improve the system so it can grow much further without our customers noticing any hiccups.


Our platform:


The overall Exponea architecture is scalable, secure and prepared for Big Data from our clients. For data ingestion we use an asynchronous API with basic input validation. All such events are put into Kafka for further data processing. We use several data stores for specific use cases, such as MongoDB (primary copy of most of our data), Redis (cache, fast access, unstructured), MapR (long-term archival, data science) and RabbitMQ (campaign workers). For several use cases we provide a real-time analysis API that triggers computation or data retrieval from our proprietary in-memory analytics engine.
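
The paragraph above mentions Redis as the fast-access cache in front of the primary data stores; as a generic, illustrative sketch (not Exponea's actual code), the snippet below shows a read-through cache with redis-py, with the primary-store lookup stubbed out by a plain function. Key names and TTLs are invented.

# Illustrative read-through cache with redis-py; the primary-store lookup is a
# stub, and key names / TTL values are made-up examples.
import json

import redis

r = redis.Redis(host="localhost", port=6379, db=0)

def load_profile_from_primary_store(customer_id: str) -> dict:
    # Stand-in for a lookup against the primary copy of the data (e.g. MongoDB).
    return {"customer_id": customer_id, "segment": "newsletter"}

def get_profile(customer_id: str, ttl_seconds: int = 300) -> dict:
    key = f"profile:{customer_id}"
    cached = r.get(key)
    if cached is not None:                          # cache hit: fast path
        return json.loads(cached)
    profile = load_profile_from_primary_store(customer_id)
    r.setex(key, ttl_seconds, json.dumps(profile))  # populate cache with a TTL
    return profile

if __name__ == "__main__":
    print(get_profile("c-42"))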


 Our cloud solution is deployed on Google Cloud.  


Your role:



  • Actively participate in developing Exponea through almost all development phases.

  • Propose system changes for increased stability and scalability.

  • Improve development processes to facilitate testing and deploying code to production.

  • Monitor Exponea so we can quickly identify issues if and when they happen.

  • Resolve urgent issues in production, and once ready to do so, be available on-call.

  • Maintain our servers and virtual machines.


Our expectations of you:



  • Familiarity with Linux server administration basics.

  • General experience of the full web app development cycle (designing, building, testing, deploying, fixing).

  • Experience of operating highly-available services.

  • Experience of at least one server configuration tool.

  • Good command of written and oral English.


If you are eager to grow, but lack the requested experience, let us know. We are also looking for Junior DevOps Engineers.


What we’ll consider an asset:



  • Familiarity with Ansible, ideally from larger-scale projects.

  • Programming in Python, C++ or Go. The more coding experience, the better.

  • Familiarity with some of our technologies: MongoDB, Redis, RabbitMQ, Kafka, Elasticsearch, Hadoop.

  • Familiarity with monitoring tools: Zabbix, Grafana, Logstash, StatsD, Telegraf, InfluxDB.

  • Ability to systematically address issues even in less-than-ideal situations.


 Our way of working:


Quick iterations, MVPs, improvements on the go. Technologies are evolving as we speak in our field. If you enjoy building new things and learning on the go, you will like it here. You will also be able to leave a mark on our product.


What you might like about the role:



  • Maintaining and scaling the fastest marketing cloud, considered world-class by many (see the  Best Marketing Automation Software benchmark).

  • Having immediate impact through collaboration with our developers.

  • Becoming known by publishing your texts and speaking at conferences, workshops and meet-ups including our (see our founder Jozef Kovac in Forbes Next).

  • Working side by side with senior colleagues from companies like Red Hat, Piano, and WebSupport.