OnlyDataJobs.com

Apple Inc.
  • Cupertino, CA
Job Summary:
Would you like to play a part in the next revolution in human-computer interaction, contribute to a product that is redefining mobile and desktop computing, and work with the people who built the intelligent assistant that helps millions of people get things done just by asking?

The vision for the Siri Data organization is to improve Siri by using data as the voice of our customers.

The Siri team at Apple is seeking a hardworking data pipeline engineer to build complex, scalable data pipelines. As part of this group, you will work in one of the most exciting high-performance computing environments, with petabytes of data and millions of queries per second, and have an opportunity to imagine and build products that delight our customers every single day.

Key Qualifications:
* 6+ years of experience working with Spark or other big data architectures (Hadoop, MapReduce) in high-volume environments.
* Experience building and managing ETL pipelines from inception to production rollout.
* Experience with object-oriented/functional scripting languages: Python and Scala.
* Experience with workflow management tools: Airflow, Oozie, Azkaban, etc.
* Experience with configuration management and monitoring tools: Splunk, Grafana, Prometheus, Nagios, Puppet.
* Experience supporting hosted services in a high-volume, customer-facing environment.
* Experience with SQL and basic database knowledge for modifying queries and tables.
* Experience with CI/CD tools: TeamCity and Jenkins.
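The workflow-management tools listed above (Airflow, Oozie, Azkaban) all solve the same core problem: running pipeline tasks in dependency order. A minimal, purely illustrative sketch of that idea using only the Python standard library; the task names are invented:

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: each task maps to the set of tasks it
# depends on. A workflow manager like Airflow resolves this graph
# and runs a task only after its upstream dependencies succeed.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "validate": {"transform"},
    "load": {"transform", "validate"},
}

def run_pipeline(dag):
    """Run tasks in a valid dependency order and return that order."""
    order = list(TopologicalSorter(dag).static_order())
    for task in order:
        pass  # in a real pipeline, execute the task here
    return order

run_pipeline(dag)  # -> ['extract', 'transform', 'validate', 'load']
```

Real schedulers add retries, backfills, and per-task state on top of this ordering, but the dependency graph is the heart of it.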

 

Description:
The Siri Metrics Platform team is in an unusual position to align our quality initiatives around a single platform. You can help us architect a highly scalable distributed data system. Part of the job is ensuring the operational SLA for data generation and availability across the Siri data organization. To achieve this, you should have experience in:
* Writing tools and dashboards for operational excellence.
* Contributing to our monitoring and alerting framework.
* Developing and contributing to open-source projects (Apache Spark, Apache Druid).
* Constantly evolving our pipelines and questioning the status quo.
* Ensuring the platform can handle all types of robust data exploration in real time.
* Partnering with different teams to build features that improve data analysis.
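As a concrete, purely illustrative example of the operational-SLA work described above, here is a small sketch that flags data partitions whose data landed later than a freshness threshold; all names and times are hypothetical:

```python
from datetime import datetime, timedelta

# Illustrative only: per-partition landing times for hourly data
# (the partition keys and dataset name are invented).
last_landed = {
    "siri_queries/2019-06-01T10": datetime(2019, 6, 1, 10, 5),
    "siri_queries/2019-06-01T11": datetime(2019, 6, 1, 12, 40),
}

def check_sla(last_landed, max_lag=timedelta(hours=1)):
    """Return partitions whose data landed more than max_lag after
    the start of their hour -- candidates for alerting."""
    late = []
    for partition, landed in last_landed.items():
        hour_start = datetime.fromisoformat(partition.split("/")[1] + ":00")
        if landed - hour_start > max_lag:
            late.append(partition)
    return late

check_sla(last_landed)  # -> ['siri_queries/2019-06-01T11']
```

A production version would read landing times from pipeline metadata and feed a system like Grafana or Nagios, but the freshness check itself is this simple.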

Education:
Bachelor's degree or foreign equivalent in Computer Science or a related field, or equivalent experience.

Apple is an Equal Opportunity Employer that is committed to inclusion and diversity. We also take affirmative action to offer employment and advancement opportunities to all applicants, including minorities, women, protected veterans, and individuals with disabilities. Apple will not discriminate or retaliate against applicants who inquire about, disclose, or discuss their compensation or that of other applicants.

OverDrive Inc.
  • Garfield Heights, OH

The Data Integration team at OverDrive provides data for other teams to analyze and build their systems upon. We are plumbers, building a company-wide pipeline of clean, usable data for others to use. We typically don’t analyze the data, but instead we make the data available to others. Your job, if you choose to join us, is to help us build a real-time data platform that connects our applications and makes available a data stream of potentially anything happening in our business.


Why Apply:


We are looking for someone who can help us wire up the next step and create something from the ground up (almost a greenfield project); someone who can help us move large data from one team to the next and bring ideas and solutions for how we look at data, using technologies like Kafka, Scala, Clojure, and F#.


About You:



  • You always keep up with the latest in distributed systems. You're extremely depressed each summer when the guy who runs highscalability.com hangs out the "Gone Fishin" sign.

  • You’re humble. Frankly, you’re in a supporting role. You help build infrastructure to deliver and transform data for others. (E.g., someone else gets the glory because of your effort, but you don’t care.)

  • You’re patient. Because nothing works the first time, when it comes to moving data around.

  • You hate batch. Real-time is your thing.

  • Scaling services is easy. You realize that the hardest part is scaling your data, and you want to help with that.

  • You think microservices should be event-driven. You prefer autonomous systems over tightly-coupled, time-bound synchronous ones with long chains of dependencies.


 Problems You Could Help Solve:



  • Help us come up with solutions around speeding up our process

  • Help us come up with ideas around making our indexing better

  • Help us create better ways to track all our data

  • If you like to solve problems and use cutting-edge technology, keep reading


 Responsibilities:



  • Implement near real-time ETL-like processes from hundreds of applications and data sources using the Apache Kafka ecosystem of technologies.

  • Design, develop, test, and tune a large-scale ‘stream data platform’ for connecting systems across our business in a decoupled manner.

  • Deliver data in near real-time from transactional data stores into analytical data stores.

  • R&D ways to acquire data and suggest new uses for that data.

  • “Stream processing.” Enable applications to react to, process and transform streams of data between business domains.

  • “Data Integration.” Capture application events and data store changes and pipe to other interested systems.
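The stream-processing and data-integration responsibilities above boil down to a consume, transform, publish loop. A self-contained sketch of that loop in plain Python; a real deployment would use Kafka consumers and producers, and the event fields here are invented:

```python
import json

def transform(event):
    """Map a raw checkout event to the shape analytical consumers
    expect (field names are hypothetical)."""
    return {
        "type": "title.checked_out",
        "title_id": event["titleId"],
        "library": event["libraryId"].lower(),
    }

def process(raw_messages):
    """Simulate the consume -> transform -> publish loop; in
    production this would be a Kafka consumer feeding a producer."""
    published = []
    for msg in raw_messages:
        published.append(json.dumps(transform(json.loads(msg))))
    return published

process(['{"titleId": 42, "libraryId": "CLEVNET"}'])
```

The decoupling comes from the broker: producers never know who consumes the transformed stream, which is what makes the platform reusable across business domains.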


 Experience/Skills: 



  • Comfortable with functional programming concepts. While we're not writing strictly functional code, experience with languages like Scala, Haskell, or Clojure will make working with streaming data easier.

  • Familiarity with the JVM.  We’re using Scala with a little bit of Java and need to occasionally tweak the performance settings of the JVM itself.

  • Familiarity with C# and the .NET Framework is helpful. While we don’t use it day to day, most of our systems run on Windows and .NET.

  • Comfortable working in both Linux and Windows environments. Our systems all run on Linux, but we interact with many systems running on Windows servers.

  • Shell scripting & common Linux tool skills.

  • Experience with build tools such as Maven, sbt, or rake.

  • Knowledge of distributed systems.

  • Knowledge of, or experience with, Kafka is a plus.

  • Knowledge of Event-Driven/Reactive systems.

  • Experience with DevOps practices like Continuous Integration, Continuous Deployment, Build Automation, Server automation and Test Driven Development.


 Things You Dig: 



  • Stream processing tools (Kafka Streams, Storm, Spark, Flink, Google Cloud DataFlow etc.)

  • SQL-based technologies (SQL Server, MySQL, PostgreSQL, etc.)

  • NoSQL technologies (Cassandra, MongoDB, Redis, HBase, etc.)

  • Server automation tools (Ansible, Chef, Puppet, Vagrant, etc.)

  • Distributed Source Control (Mercurial, Git)

  • The Cloud (Azure, Amazon AWS)

  • The ELK Stack (Elasticsearch, Logstash, Kibana)


What’s Next:


As you’ve probably guessed, OverDrive is a place that values individuality and variety. We don’t want you to be like everyone else, we don’t even want you to be like us—we want you to be like you! So, if you're interested in joining the OverDrive team, apply below, and tell us what inspires you about OverDrive and why you think you are perfect for our team.



OverDrive values diversity and is proud to be an equal opportunity employer.

Raytheon UK
  • Gloucester, UK

What do you want to learn in your next software engineering role?


Do you want to gain more experience with agile? All of Raytheon's projects use the agile methodology, and opportunities to become a certified Scrum Master are available.


Programming languages? We use Java, Python and JavaScript, to name a few


Big Data technologies? Currently, we are using Hadoop, Pig, Hive and Spark


DevOps skills? You can work with deployment tools such as Chef and Puppet


Cloud technologies? You can learn to develop within AWS 


Database technologies? How about adding Elasticsearch and MongoDB to your skill set


You can also work on high-performing distributed systems, machine learning and data analytics projects, along with learning secure development, PKI and encryption techniques.


These are some of the technologies and techniques you can learn when joining Raytheon’s cyber innovation centre in Gloucester.


We like to support individual innovation and the development of your career whilst also having fun and getting involved in STEM/cyber academy activities, team socials and tech events. We promote a collaborative work environment where everyone's ideas are listened to. For example, we run a yearly cyber innovation contest where you can submit your ideas and, if chosen, put them into action.


As you can see from above you will be continually exposed to new tech, methodologies and ideas to help design and develop Java applications for a variety of customers. Your work will be real and have a tangible effect on the UK cyber security industry.


To discuss further, please click apply.

Briq
  • Santa Barbara, CA
  • Salary: $70k - 100k

Briq is hiring a Senior Full Stack Software Engineer Big Data/ML Pipelines to scale up its AI and ML dev team. You will need to have strong programming skills, a proven knowledge of traditional Big Data technologies, experience working with heterogeneous data types at scale, experience with Big Data architectures, past experience working with a team to transform proof-of-concept tools to production-ready toolkits, and excellent communication and planning skills. You and other engineers in this team will help advance Briq's capacity to build and deploy leading solutions for AI-based applications in cyber security.


What You'll Be Doing

  • Working with data scientists and data engineers to turn proof-of-concept analytics/workflows into production-ready tools and toolkits
  • Architecting and implementing high performance data pipelines and integrating them with existing cyber security infrastructure and solutions
  • Deploying and productionalizing solutions focused around threat hunting, anomaly detection, and security analytics
  • Providing input and feedback to teams regarding decisions surrounding topics such as infrastructure, data architectures, and DevOps strategy
  • Building automation and tools that will increase the productivity of teams developing distributed systems


What We Need To See

  • You have a BS, MS, or PhD in Computer Science, Computer Engineering, or a closely related field, with 4+ years of work or research experience in software development
  • 1+ years working with data scientists and data engineers to take proof-of-concept ideas to production environments, including transitioning research-based tools to those ready for deployment
  • Strong skills in Python and scripting tasks, as well as comfort using Linux and typical development tools (e.g., Git, Jira, Kanban)
  • Solid knowledge of traditional big data technologies (e.g., Hadoop, Spark, Cassandra) and expertise developing for and targeting at least one of these platforms
  • Experience using automation tools (e.g., Ansible, Puppet, Chef) and DevOps tools (e.g., Jenkins, Travis-CI, GitLab CI)
  • Experience with or exposure to cyber network data (e.g., PCAP, flow, host logs), or a demonstrated ability to work with heterogeneous data types produced at high velocity
  • Highly motivated, with strong communication skills; you have the ability to work successfully with multi-functional teams and coordinate effectively across organizational boundaries and geographies


Ways To Stand Out From The Crowd

  • Experience working with AI/machine learning/deep learning computing. We integrate and optimize deep learning frameworks; with deep learning, we can teach AI to do almost anything, and we are using it to make sense of the complex world of construction. AI will spur a wave of social progress unmatched since the industrial revolution.


Briq is changing the world of construction. Join our development team and help us build the real-time, cost-effective computing platform driving our success in this dynamic and quickly growing field in one of the world's largest industries.


Briq is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression , sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.


Read About Us on TechCrunch : https://techcrunch.com/2019/02/22/briq-the-next-building-block-in-techs-reconstruction-of-the-construction-business-raises-3-million/ 


Marketcircle
  • No office location
  • Salary: C$65k - 75k

Are you a software developer who loves the option of working from home, collaborating with a fun team, and solving challenging problems?  If so, we're looking for an experienced software developer to join our Backend Team. This team is primarily responsible for developing, maintaining and supporting the backend services for our apps. You will be a highly motivated team player, as well as creative and passionate about developing new technology that not only improves the way our apps work, but also helps small businesses worldwide. You are a self-starter and aren’t afraid to jump in at the deep end.  Why you’ll love working at Marketcircle:



  • Work remotely! No one likes having to battle traffic during rush-hour on a daily basis, so you don’t have to! 

  • Startup style company. You won’t find any of that corporate BS here!

  • Ownership. We give you the freedom and flexibility to take ownership of your work. In fact, we believe in this so much that it’s one of our core values. 

  • Learn. We invest in our employees both vertically and horizontally. Want to attend a conference? Great! Want to learn the latest language? We have unlimited Udemy courses. 

  • Team. Our team is like our second family. And why shouldn’t they be? We work, learn, eat and in some cases even live with each other! 


You are:



  • an experienced software developer, with some experience building backend services

  • comfortable working remotely

  • comfortable working independently or collaboratively

  • willing to participate in a rotating on-call schedule


You’ll be working on:



  • an HTTP/REST API written in Ruby (you will probably spend most of your time here)

  • an Authentication/Payment backend written in Ruby

  • a PostgreSQL database with a custom C extension to track changes


You have:



  • a solid understanding of modern backend applications

  • experience with modern API design and ideally know your way around in a web framework such as Ruby on Rails, Django, or Sinatra

  • experience with either Ruby, Python, or a similar scripting language

  • an appreciation for well-written, tested and documented code

  • experience with Linux or a BSD

  • experience with Git and GitHub


Bonus Points for:



  • experience with infrastructure management tools (like Puppet, Ansible or Chef) (we use Ansible)

  • experience with cloud infrastructure providers (like AWS, Google Cloud, Microsoft Azure or DigitalOcean)

  • knowing your way around the network stack, from HTTP to TCP to IP, and having a solid understanding of security (TLS/IPSec/firewalls)


How to Apply:  Send your resume over to jobs[at]marketcircle[dot]com and be sure to include why you’d be the best fit for this role. 

Marketcircle
  • No office location
  • Salary: C$65k - 75k

Are you a software developer who believes in remote work, collaborating with a fun team, and solving challenging problems?  If so, we're looking for an experienced software developer to join our Backend Team. This team is primarily responsible for developing, maintaining and supporting the backend services for our apps. You will be a highly motivated team player, as well as creative and passionate about developing new technology that not only improves the way our apps work, but also helps small businesses worldwide. You are a self-starter and aren’t afraid to jump in at the deep end.  Why you’ll love working at Marketcircle:



  • Work remotely! No one likes having to battle traffic during rush-hour on a daily basis, so you don’t have to! 

  • Startup style company. You won’t find any of that corporate BS here!

  • Learn. We invest in our employees both vertically and horizontally. Want to attend a conference? Great! Want to learn the latest language? We have unlimited Udemy courses. 

  • Team. Our team is like our second family. And why shouldn’t they be? We work, learn, eat and in some cases even live with each other! 


You are:



  • an experienced software developer, with some experience building backend services

  • comfortable working remotely

  • comfortable working independently or collaboratively

  • willing to participate in a rotating on-call schedule


You’ll be working on:



  • an HTTP/REST API written in Ruby (you will probably spend most of your time here)

  • an Authentication/Payment backend written in Ruby

  • PostgreSQL database(s) with a custom C extension to track changes

  • Elasticsearch 


You have:



  • a solid understanding of modern backend applications

  • experience with modern API design and ideally know your way around in a web framework such as Ruby on Rails, Django, or Sinatra

  • experience with either Ruby, Python, or a similar scripting language

  • an appreciation for well-written, tested and documented code

  • experience with Linux or a BSD

  • experience with Git and GitHub


Bonus Points for:



  • experience with infrastructure management tools (like Puppet, Ansible or Chef) (we use Ansible)

  • experience with cloud infrastructure providers (like AWS, Google Cloud, Microsoft Azure or DigitalOcean)

  • knowing your way around the network stack, from HTTP to TCP to IP, and having a solid understanding of security (TLS/IPSec/firewalls)


How to Apply:  Send your resume over to jobs[at]marketcircle[dot]com and be sure to include why you’d be the best fit for this role. 

Ultra Tendency
  • Madrid, Spain

Got experience with infrastructure and release automation? Deep knowledge of clusters and a strong commitment to excellent processes? You've been there? You've done that? Then join us now!



This is your mission:



  • Support our customers and development teams in transitioning capabilities from development and testing to operations

  • Deploy and extend large-scale server clusters for our clients

  • Fine-tune and optimize our clusters to process millions of records every day 

  • Learn something new every day and enjoy solving complex problems


What we need:



  • You know Linux like the back of your hand

  • You love to automate all the things – SaltStack, Ansible, Terraform and Puppet are your daily business

  • You can write code in Python, Java, Ruby or similar languages.

  • You are driven by high quality standards and attention to detail

  • Understanding of the Hadoop ecosystem and knowledge of Docker is a plus


Things we offer you:



  • Work with our open-source Big Data gurus, such as our Apache HBase committer and Release Manager

  • Work in the open-source community and become a contributor. Learn from open-source enthusiasts whom you will find nowhere else in Germany!

  • Work in an English-speaking, international environment

  • Work with cutting edge equipment and tools







About Ultra Tendency



Ultra Tendency is a fast-growing software development and consulting company specialising in the fields of Big Data and Data Science. We design, develop, and support complex algorithms and applications that enable data-driven products and services for our customers.


Data privacy statement: http://www.ultratendency.com/data-protection.html

ECRI Institute
  • Plymouth Meeting, PA

Work with data, pipelines and analytics across our web properties, CRM platform, and member services. Enhance organizational strategy and tactics by providing actionable recommendations to business groups. Support ECRI Institute’s 50-year mission of advancing effective evidence-based healthcare worldwide. Partner closely with DevOps, Software Engineers, Data Scientists and our respected and accomplished business partners, all working onsite at ECRI’s scenic suburban world headquarters in Plymouth Meeting, PA. Benefit from a healthy work-life balance while staying on the leading edge of technology and thriving in an innovative startup-like culture minus the risk. Sleep well knowing you are helping achieve a world where safe, high-quality healthcare is accessible to everyone.


Responsibilities:



  • Assemble large, complex data sets that meet functional / non-functional business requirements.

  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

  • Create and troubleshoot simple to advanced level SQL scripts.

  • Perform analysis of existing data pipeline and data store performance and provide improvement recommendations.

  • Gather and document business requirements.

  • Support and work with cross-functional teams.

  • Support bug fixes and implement enhancements to existing systems.

  • Participate in team meetings and code reviews.

  • Adhere to ECRI platform, standards, and best practices.

  • Work independently and within a team when needed.

  • Participate in personal growth opportunities.


Qualifications:


Experience:



  • 3-5 years of recent experience in at least one modern programming language in a data engineering or back-end development capacity: Python, R, Node.js, Clojure, Scala

  • Experience with Data Frames (any language)

  • Experience with relational SQL and NoSQL databases, such as SQL Server, PostgreSQL, Cassandra and Redis.

  • Experience optimizing SQL queries

  • Experience building and deploying ETL pipelines

  • Experience with agile or Kanban methodologies.

  • Desire to learn and grow professionally.
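To make the ETL and SQL experience above concrete, here is a minimal, self-contained pipeline sketch using SQLite from the Python standard library; the table and column names are invented for illustration:

```python
import sqlite3

# Hypothetical source table of raw event rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (member_id INTEGER, amount REAL)")
conn.executemany("INSERT INTO raw_events VALUES (?, ?)",
                 [(1, 10.0), (1, 5.0), (2, 7.5)])

def run_etl(conn):
    """Transform raw event rows into a per-member summary table.
    Doing the aggregation in one set-based SQL statement keeps the
    load step fast; at real volumes an index on member_id matters."""
    conn.execute("CREATE TABLE IF NOT EXISTS member_totals "
                 "(member_id INTEGER PRIMARY KEY, total REAL)")
    conn.execute("INSERT OR REPLACE INTO member_totals "
                 "SELECT member_id, SUM(amount) FROM raw_events "
                 "GROUP BY member_id")
    conn.commit()

run_etl(conn)
# member_totals now holds (1, 15.0) and (2, 7.5)
```

The same extract-transform-load shape applies whether the stores are SQL Server, PostgreSQL, or a NoSQL target; only the connectors and the optimization concerns change.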


Critical Skills



  • Efficiently communicate complex ideas, learn from others, and adopt standards.

  • Analyze large amounts of data (facts, figures, and numbers) to draw conclusions and actionable recommendations for business groups.

  • Troubleshoot and effectively diagnose and fix problems.

  • Attention to detail.


Beneficial Additional Knowledge and Skills (not required):



  • Familiarity with DevOps technologies such as Containers, Kubernetes, Chef, Puppet, Ansible.

  • Familiarity with CI/CD pipelines.

  • Familiarity with open-source data pipeline tools such as Airflow, MinIO, NATS, Spark, Kafka

  • Healthcare business experience.

  • Experience transitioning SSIS/Biztalk data pipelines to open-source based pipelines.

  • Experience using SharePoint REST API


Education: Associate/Bachelor’s degree in Computer Science or related major degree OR Equivalent professional experience.


About ECRI Institute: ECRI Institute is a nonprofit organization that researches the best approaches to improving patient safety and care. It has its headquarters in Plymouth Meeting, Pennsylvania. We have a diverse working environment that encourages teamwork and an open exchange of ideas. Over 400 dedicated staff blend extraordinary scope and depth of clinical, management, and technical expertise with a wide range of experienced healthcare professionals. Our competitive benefit package for full-time and benefit-eligible part-time employees includes medical, dental, vision, and prescription coverage which begin on the first day of the month following their date of employment.


For 50 years, ECRI Institute has dedicated itself to bringing the discipline of applied scientific research to healthcare. Through rigorous, evidence-based patient safety research, ECRI Institute has recommended actionable solutions that have saved countless lives. ECRI Institute is designated an Evidence-Based Practice Center by the U.S. Agency for Healthcare Research and Quality. ECRI Institute PSO is listed as a federally certified Patient Safety Organization by the U.S. Department of Health and Human Services and strives to achieve the highest levels of safety and quality in healthcare by collecting and analyzing patient safety information and sharing best practices and lessons learned. Qualified applicants must be legally authorized to work in the United States.


ECRI Institute is an equal opportunity and affirmative action employer and does not discriminate against otherwise qualified applicants on the basis of race, color, creed, religion, ancestry, age, sex, sexual orientation, marital status, national origin, disability or handicap, or veteran status. If you need a reasonable accommodation for any part of the application and/or hiring process, please contact the Human Resources Department at 610-825-6000. EOE Minority/Female/Disability/Veteran


To be considered further for this opportunity interested candidates must apply directly to our website. https://www.ecri.org/about/pages/careers.aspx

Comcast
  • Philadelphia, PA

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.


Summary

Responsible for promoting the use of industry and Company technology standards. Monitors emerging technologies and technology practices for potential use within the Company. Designs and develops updated infrastructure in support of one or more business processes. Helps to ensure a balance between tactical and strategic technology solutions. Considers business problems "end-to-end": including people, process, and technology, both within and outside the enterprise, as part of any design solution. Mentors engineers, reviews code, and verifies that object-oriented design best practices and coding and architectural guidelines are adhered to. Identifies and drives issues through closure. Speaks at conferences and tech meetups about Comcast technologies and assists in filling key technical positions.

This role requires significant cloud experience in the private and public cloud space, as well as in big data and software engineering. It will be key in the re-platforming of the CX Personalization program in support of wholesale requirements. This person will engage as part of software delivery teams and contribute to several strategic efforts that drive personalized customer experiences across product usage, support interactions and customer journeys. This role leads the building of real-time big data platforms, machine learning algorithms and data services that enable proactive responses for customers at every critical touch point.

Core Responsibilities

-Enterprise-Level architect for "Big Data" Event processing, analytics, data store, and cloud platforms.

-Enterprise-Level architect for cloud applications and "Platform as a Service" capabilities

-Detailed current-state product and requirement analysis.

-Security Architecture for "Big Data" applications and infrastructure

-Ensures programs are envisioned, designed, developed, and implemented across the enterprise to meet business needs. Interfaces with the enterprise architecture team and other functional areas to ensure that the most efficient solution is designed to meet business needs.

-Ensures solutions are well engineered, operable, maintainable, and delivered on schedule. Develops, documents, and ensures compliance with best practices, including but not limited to: coding standards, object-oriented design, platform- and framework-specific design concerns, and human interface guidelines.

-Tracks and documents requirements for enterprise development projects and enhancements.

-Monitors current and future trends, technology and information that will positively affect organizational projects; applies and integrates emerging technological trends to new and existing systems architecture. Mentors team members in relevant technologies and implementation architecture.

-Contributes to the overall system implementation strategy for the enterprise and participates in appropriate forums, meetings, presentations etc. to meet goals.

-Gathers and understands client needs, finding key areas where technology leverage is possible to improve business processes, defines architectural approaches and develops technology proofs. Communicates technology direction.

-Monitors the project lifecycle from intake through delivery. Ensures the entire solution design is complete and consistent from the start and seeks to remove as much re-work as possible.

-Works with product marketing to define requirements. Develops and communicates system/subsystem architecture. Develops clear system requirements for component subsystems.

-Acts as architectural lead on projects.

-Applies new and innovative ideas to old or new problems. Fosters environments that encourage innovation. Contributes to and supports effort to further build intellectual property via patents.

-Consistent exercise of independent judgment and discretion in matters of significance.

-Regular, consistent and punctual attendance. Must be able to work nights and weekends, variable schedule(s) as necessary.

-Other duties and responsibilities as assigned.

Requirements:

-Demonstrated experience with "Platform as a Service" (PaaS) architectures including strategy, architectural patterns and standards, approaches to multi-tenancy, scalability, and security.

-Demonstrated experience with schema and data governance and message metadata stores

-Demonstrated experience with public cloud resources such as AWS.

-Demonstrated experience with cloud automation technologies including Ansible, Terraform, Chef, Puppet, etc.

-Hands-on experience with Data Flow processing engines, such as Apache NiFi and Apache Flink

-Working knowledge / experience with Big Data platforms (Kafka, Hadoop, Storm/Spark, NoSQL, In-memory data grid)

-Working knowledge / experience with Linux, Java, Python.
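
The event-processing requirement above can be illustrated with a small sketch. This is a hypothetical, stdlib-only Python example of tumbling-window counting, the kind of aggregation engines such as Flink or Spark perform at scale; the event names and the 60-second window size are assumptions for illustration, not part of the role's actual stack.

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds=60):
    """Group (timestamp, event_type) pairs into fixed windows and count.

    events: iterable of (epoch_seconds, event_type) tuples.
    Returns {window_start: {event_type: count}} sorted by window.
    """
    windows = defaultdict(lambda: defaultdict(int))
    for ts, event_type in events:
        # Align each timestamp to the start of its tumbling window.
        window_start = ts - (ts % window_seconds)
        windows[window_start][event_type] += 1
    return {w: dict(c) for w, c in sorted(windows.items())}

events = [(0, "click"), (30, "view"), (61, "click"), (75, "click")]
print(tumbling_window_counts(events))
# {0: {'click': 1, 'view': 1}, 60: {'click': 2}}
```

A production engine adds partitioning, watermarks, and fault tolerance on top of this same windowing idea.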

Education Level

- Bachelor's Degree or Equivalent

Field of Study

- Engineering, Computer Science

Years Experience

2+ years of Software Engineering experience

1+ years in Cloud Infrastructure

Compliance

Comcast is an EEO/AA/Drug Free Workplace.

Disclaimer

The above information has been designed to indicate the general nature and level of work performed by employees in this role. It is not designed to contain or be interpreted as a comprehensive inventory of all duties, responsibilities and qualifications.

Comcast is an EOE/Veterans/Disabled/LGBT employer


Comcast
  • Philadelphia, PA

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.


Summary

Responsible for promoting the use of industry and Company technology standards. Monitors emerging technologies/technology practices for potential use within the Company. Designs and develops updated infrastructure in support of one or more business processes. Helps to ensure a balance between tactical and strategic technology solutions. Considers business problems "end-to-end": including people, process, and technology, both within and outside the enterprise, as part of any design solution. Mentors team members, reviews code, and verifies that object-oriented design best practices and coding and architectural guidelines are adhered to. Identifies and drives issues through closure. Speaks at conferences and tech meetups about Comcast technologies and assists in filling key technical positions.

This role requires significant cloud experience in the private and public cloud space, as well as in big data and software engineering. It will be key in the re-platforming of the CX Personalization program in support of wholesale requirements. This person will engage as part of software delivery teams and contribute to several strategic efforts that drive personalized customer experiences across product usage, support interactions, and customer journeys. The role leads the building of real-time big data platforms, machine learning algorithms, and data services that enable proactive responses for customers at every critical touch point.

Core Responsibilities

-Enterprise-Level architect for "Big Data" Event processing, analytics, data store, and cloud platforms.

-Enterprise-Level architect for cloud applications and "Platform as a Service" capabilities

-Detailed current-state product and requirement analysis.

-Security Architecture for "Big Data" applications and infrastructure

-Ensures programs are envisioned, designed, developed, and implemented across the enterprise to meet business needs. Interfaces with the enterprise architecture team and other functional areas to ensure that the most efficient solution is designed to meet business needs.

-Ensures solutions are well engineered, operable, maintainable, and delivered on schedule. Develops, documents, and ensures compliance with best practices, including but not limited to: coding standards, object-oriented design, platform- and framework-specific design concerns, and human interface guidelines.

-Tracks and documents requirements for enterprise development projects and enhancements.

-Monitors current and future trends, technology and information that will positively affect organizational projects; applies and integrates emerging technological trends to new and existing systems architecture. Mentors team members in relevant technologies and implementation architecture.

-Contributes to the overall system implementation strategy for the enterprise and participates in appropriate forums, meetings, presentations etc. to meet goals.

-Gathers and understands client needs, finding key areas where technology leverage is possible to improve business processes, defines architectural approaches and develops technology proofs. Communicates technology direction.

-Monitors the project lifecycle from intake through delivery. Ensures the entire solution design is complete and consistent from the start and seeks to remove as much re-work as possible.

-Works with product marketing to define requirements. Develops and communicates system/subsystem architecture. Develops clear system requirements for component subsystems.

-Acts as architectural lead on projects.

-Applies new and innovative ideas to old or new problems. Fosters environments that encourage innovation. Contributes to and supports effort to further build intellectual property via patents.

-Consistent exercise of independent judgment and discretion in matters of significance.

-Regular, consistent and punctual attendance. Must be able to work nights and weekends, variable schedule(s) as necessary.

-Other duties and responsibilities as assigned.

Requirements:

-Demonstrated experience with "Platform as a Service" (PaaS) architectures including strategy, architectural patterns and standards, approaches to multi-tenancy, scalability, and security.

-Demonstrated experience with schema and data governance and message metadata stores

-Demonstrated experience with public cloud resources such as AWS.

-Demonstrated experience with cloud automation technologies including Ansible, Terraform, Chef, Puppet, etc.

-Hands-on experience with Data Flow processing engines, such as Apache NiFi and Apache Flink

-Working knowledge / experience with Big Data platforms (Kafka, Hadoop, Storm/Spark, NoSQL, In-memory data grid)

-Working knowledge / experience with Linux, Java, Python.

Education Level

- Bachelor's Degree or Equivalent

Field of Study

- Engineering, Computer Science

Years Experience

3+ years of Software Engineering experience

1+ years in Cloud Infrastructure

Compliance

Comcast is an EEO/AA/Drug Free Workplace.

Disclaimer

The above information has been designed to indicate the general nature and level of work performed by employees in this role. It is not designed to contain or be interpreted as a comprehensive inventory of all duties, responsibilities and qualifications.

Comcast is an EOE/Veterans/Disabled/LGBT employer


Comcast
  • Philadelphia, PA

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.

Summary

Responsible for promoting the use of industry and Company technology standards. Monitors emerging technologies/technology practices for potential use within the Company. Designs and develops updated infrastructure in support of one or more business processes. Helps to ensure a balance between tactical and strategic technology solutions. Considers business problems "end-to-end": including people, process, and technology, both within and outside the enterprise, as part of any design solution. Mentors team members, reviews code, and verifies that object-oriented design best practices and coding and architectural guidelines are adhered to. Identifies and drives issues through closure. Speaks at conferences and tech meetups about Comcast technologies and assists in filling key technical positions.

This role requires significant cloud experience in the private and public cloud space, as well as in big data and software engineering. It will be key in the re-platforming of the CX Personalization program in support of wholesale requirements. This person will engage as part of software delivery teams and contribute to several strategic efforts that drive personalized customer experiences across product usage, support interactions, and customer journeys. The role leads the building of real-time big data platforms, machine learning algorithms, and data services that enable proactive responses for customers at every critical touch point.

Core Responsibilities

-Enterprise-Level architect for "Big Data" Event processing, analytics, data store, and cloud platforms.

-Enterprise-Level architect for cloud applications and "Platform as a Service" capabilities

-Detailed current-state product and requirement analysis.

-Security Architecture for "Big Data" applications and infrastructure

-Ensures programs are envisioned, designed, developed, and implemented across the enterprise to meet business needs. Interfaces with the enterprise architecture team and other functional areas to ensure that the most efficient solution is designed to meet business needs.

-Ensures solutions are well engineered, operable, maintainable, and delivered on schedule. Develops, documents, and ensures compliance with best practices, including but not limited to: coding standards, object-oriented design, platform- and framework-specific design concerns, and human interface guidelines.

-Tracks and documents requirements for enterprise development projects and enhancements.

-Monitors current and future trends, technology and information that will positively affect organizational projects; applies and integrates emerging technological trends to new and existing systems architecture. Mentors team members in relevant technologies and implementation architecture.

-Contributes to the overall system implementation strategy for the enterprise and participates in appropriate forums, meetings, presentations etc. to meet goals.

-Gathers and understands client needs, finding key areas where technology leverage is possible to improve business processes, defines architectural approaches and develops technology proofs. Communicates technology direction.

-Monitors the project lifecycle from intake through delivery. Ensures the entire solution design is complete and consistent from the start and seeks to remove as much re-work as possible.

-Works with product marketing to define requirements. Develops and communicates system/subsystem architecture. Develops clear system requirements for component subsystems.

-Acts as architectural lead on projects.

-Applies new and innovative ideas to old or new problems. Fosters environments that encourage innovation. Contributes to and supports effort to further build intellectual property via patents.

-Consistent exercise of independent judgment and discretion in matters of significance.

-Regular, consistent and punctual attendance. Must be able to work nights and weekends, variable schedule(s) as necessary.

-Other duties and responsibilities as assigned.

Requirements:

-Demonstrated experience with "Platform as a Service" (PaaS) architectures including strategy, architectural patterns and standards, approaches to multi-tenancy, scalability, and security.

-Demonstrated experience with schema and data governance and message metadata stores

-Demonstrated experience with public cloud resources such as AWS.

-Demonstrated experience with cloud automation technologies including Ansible, Terraform, Chef, Puppet, etc.

-Hands-on experience with Data Flow processing engines, such as Apache NiFi

-Working knowledge / experience with Big Data platforms (Kafka, Hadoop, Storm/Spark, NoSQL, In-memory data grid)

-Working knowledge / experience with Linux, Java, Python.

Education Level

- Bachelor's Degree or Equivalent

Field of Study

- Engineering, Computer Science

Years Experience

11+ years of Software Engineering experience

4+ years in Technical Leadership roles

1+ years in Cloud Infrastructure

Compliance

Comcast is an EEO/AA/Drug Free Workplace.

Disclaimer

The above information has been designed to indicate the general nature and level of work performed by employees in this role. It is not designed to contain or be interpreted as a comprehensive inventory of all duties, responsibilities and qualifications.

Comcast is an EOE/Veterans/Disabled/LGBT employer

Agoda
  • Khwaeng Pathum Wan, Thailand

Agoda is a Booking Holdings (BKNG) company, the world’s leading provider of brands that help people book great experiences through technology. And as a Booking Holdings company, we are part of the largest online travel company in the world. Technology is not just what we do – it’s at the heart of who we are. We have the dynamism and short chain of command of a start-up and the capital to make things happen. We love innovation and putting new technologies to work to extend our lead on the competition.


For our rapidly growing IT organization, we are looking for software engineers with many different experiences and qualifications to strengthen our back-end engineering teams. Being a software engineer at Agoda isn’t just about developing software. It’s about being a centerpiece of the innovation and technical excellence that the rapidly changing field of online travel requires.


Do you find yourself pondering:



  • How to apply optimal technology to solve a specific requirement?

  • How a customer would use your product?

  • How to simplify and optimize existing processes?

  • How to contribute to a better work environment in your team?


If so, then Agoda might be the right place for you.


Responsibilities:


As a member of a scrum team, an engineer is responsible not only for developing solutions, but also for other steps in the software development process, including:



  • Taking ownership of delivering epics as part of a scrum team throughout sprints

  • Participating in finalization of user stories as well as estimating them

  • Collaborating with team members as well as system owners towards achieving sprint goals

  • Developing, implementing tests, passing code reviews and delivering features

  • Doing code reviews for other team members

  • Actively participating in optimizing scrum process through retrospectives

  • Actively participating in making the team a better place to work


Qualifications:


Some of the technologies that we work with:



  • Scala is the main programming language we use, although knowledge of Java is welcome as well

  • C# is optional, as we maintain some libraries for the .NET platform

  • Vue.js is our client-side technology of choice for backend UIs

  • Data platforms like MSSQL, Cassandra, Elasticsearch, Hadoop

  • Other systems such as Kafka, RabbitMQ, Zookeeper

  • Core engineering infrastructure tools like Git for source control, TeamCity for continuous integration and Puppet for deployments


About You:


You are enthusiastic about engineering highly scalable systems that have a big impact on the business, and you have already been exposed to scaling challenges.
You care about the uptime of the products your team owns and work toward the best KPIs to maintain it.
You have always wanted to work as part of a team, sharing successes as well as learnings while contributing your experience and knowledge.
You are flexible, open-minded, and want to have an impact on a great product by helping make it even greater.


We'd love to hear from you if you are experienced in any of the technologies we work with (note – we are not looking for you to have them all).


Please apply now and we will tell you all about the numerous projects we currently have on the go.


We are happy to receive CVs from both international & local applicants, as we offer relocation assistance and visa sponsorship for eligible persons.


We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status or disability status.

Acxiom
  • Austin, TX
As a Senior Hadoop Administrator, you will assist leadership on projects related to Big Data technologies and software development support for client research projects. You will analyze the latest Big Data Analytic technologies and their innovative applications in both business intelligence analysis and new service offerings. You will bring these insights and best practices to Acxiom's Big Data projects. You are able to benchmark systems, analyze system bottlenecks and propose solutions to eliminate them. You will develop a highly scalable and extensible Big Data platform which enables collection, storage, modeling, and analysis of massive data sets from numerous channels. You are also a self-starter able to continuously evaluate new technologies, innovate and deliver solutions for business critical applications.


What you will do:


  • Responsible for implementation and ongoing administration of Hadoop infrastructure
  • Provide technical leadership and collaboration with the engineering organization; develop key deliverables for the Data Platform Strategy: scalability, optimization, operations, availability, roadmap
  • Lead the platform architecture and drive it to the next level of effectiveness to support current and future requirements
  • Cluster maintenance as well as creation and removal of nodes using tools like Cloudera Manager Enterprise, etc.
  • Performance tuning of Hadoop clusters and Hadoop MapReduce routines
  • Screen Hadoop cluster job performances and capacity planning
  • Help optimize and integrate new infrastructure via continuous integration methodologies (DevOps CHEF)
  • Lead and review Hadoop log files with the help of log management technologies (ELK)
  • Provide top-level technical help desk support for the application developers
  • Diligently team with the infrastructure, network, database, application and business intelligence teams to guarantee high data quality, availability and security
  • Collaborate with application teams to perform Hadoop updates, patches, and version upgrades when required
  • Mentor Hadoop engineers and administrators
  • Work with vendor support teams on support tasks
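
The log-review responsibility above can be sketched in a few lines. This is an illustrative, stdlib-only Python example of the first-pass triage an administrator does before drilling into a log management stack like ELK; the log-line format shown is an assumption for illustration, not a fixed Hadoop specification.

```python
import re
from collections import Counter

# Match "date time LEVEL ..." lines; the format is a simplifying assumption.
LOG_LINE = re.compile(r"^\S+ \S+ (?P<level>INFO|WARN|ERROR) ")

def summarize_levels(lines):
    """Count INFO/WARN/ERROR occurrences across log lines."""
    counts = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m:
            counts[m.group("level")] += 1
    return dict(counts)

sample = [
    "2019-01-01 12:00:01 INFO DataNode: heartbeat sent",
    "2019-01-01 12:00:02 WARN DataNode: slow block receiver",
    "2019-01-01 12:00:03 ERROR NameNode: replication below threshold",
]
print(summarize_levels(sample))
# {'INFO': 1, 'WARN': 1, 'ERROR': 1}
```

A spike in WARN/ERROR counts per node is often the cue to pull the full logs into ELK for deeper analysis.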


Do you have?


  • Bachelor's degree in related field of study, or equivalent experience
  • 6+ years of Big Data Administration Experience
  • Extensive knowledge and hands-on experience of Hadoop-based data manipulation/storage technologies like HDFS, MapReduce, YARN, Spark/Kafka, HBase, Hive, Pig, Impala, R and Sentry/Ranger/Knox
  • Experience in capacity planning, cluster designing and deployment, troubleshooting and performance tuning
  • Experience supporting Data Science teams and Analytics teams on complex code deployment, debugging and performance optimization problems
  • Great operational expertise such as excellent troubleshooting skills, understanding of system's capacity, bottlenecks, core resource utilizations (CPU, OS, Storage, and Networks)
  • Experience in Hadoop cluster migrations or upgrades
  • Strong scripting skills in Perl, Python, shell scripting, and/or Ruby on Rails
  • Linux/SAN administration skills and RDBMS/ETL knowledge
  • Good experience with Cloudera, Hortonworks, and/or MapR versions, along with monitoring/alerting tools (Nagios, Ganglia, Zenoss, Cloudera Manager)
  • Strong problem solving and critical thinking skills
  • Excellent verbal and written communication skills


What will set you apart:


  • Solid understanding and hands-on experience of Big Data on private/public cloud technologies(AWS/GCP/Azure)
  • DevOps experience (CHEF, Puppet and Ansible)
  • Strong knowledge of Java/J2EE and other web technologies

 
Limelight Networks
  • Phoenix, AZ

Job Purpose:

The Sr. Data Services Engineer assists in maintaining the operational aspects of Limelight Networks platforms, provides guidance to the Operations group and acts as an escalation point for advanced troubleshooting of systems issues. The Sr. Data Services Engineer assists in the execution of tactical and strategic operational infrastructure initiatives by building and managing complex computing systems and processes that facilitate the introduction of new products and services while allowing existing services to scale.


Qualifications: Experience and Education (minimums)

  • Bachelor's degree or equivalent experience.
  • 2+ years experience working with MySQL (or other databases such as MongoDB, Cassandra, Hadoop, etc.) in a large-scale enterprise environment.
  • 2+ years Linux Systems Administration experience.
  • 2+ years with version control, shell scripting, and one or more scripting languages including Python, Perl, Ruby and PHP.
  • 2+ years with configuration management systems such as Puppet, Chef or Salt.
  • Experienced with MySQL HA/clustering solutions; Corosync, Pacemaker and DRBD preferred.
  • Experience supporting open-source messaging solutions such as RabbitMQ or ActiveMQ preferred.

Knowledge, Skills & Abilities

  • Collaborates in a fast-paced environment while providing exceptional visibility to management and end-to-end ownership of incidents, projects and tasks.
  • Ability to implement and maintain complex datastores.
  • Knowledge of configuration management and release engineering processes and methodologies.
  • Excellent coordination, planning and written and verbal communication skills.
  • Knowledge of the Agile project management methodologies preferred.
  • Knowledge of a NoSQL/Big Data platform; Hadoop, MongoDB or Cassandra preferred.
  • Ability to participate in a 24/7 on call rotation.
  • Ability to travel when necessary.

Essential Functions:

  • Develop and maintain core competencies of the team in accordance with applicable architectures and standards.
  • Participate in capacity management of services and systems.
  • Maintain plans, processes and procedures necessary for the proper deployment and operation of systems and services.
  • Identify gaps in the operation of products and services and drive enhancements.
  • Evaluate release processes and tools to find areas for improvement.
  • Contribute to the release and change management process by collaborating with the developers and other Engineering groups.
  • Participate in development meetings and implement required changes to the operational architecture, standards, processes or procedures and ensure they are in place prior to release (e.g., monitoring, documentation and metrics).
  • Maintain a positive demeanor and a high level of professionalism at all times.
  • Implement proactive monitoring capabilities that ensure minimal disruption to the user community including: early failure detection mechanisms, log monitoring, session tracing and data capture to aid in the troubleshooting process.
  • Implement HA and DR capabilities to support business requirements.
  • Troubleshoot and investigate database related issues.
  • Maintain migration plans and data refresh mechanisms to keep environments current and in sync with production.
  • Implement backup and recovery procedures utilizing various methods to provide flexible data recovery capabilities.
  • Work with management and security team to assist in implementing and enforcing security policies.
  • Create and manage user and security profiles ensuring application security policies and procedures are followed.
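
The backup-and-recovery function above can be made concrete with a minimal sketch. This is a hypothetical, stdlib-only Python example of a checksum-verified file backup: copy the file, then compare SHA-256 digests of source and copy before trusting the backup. The file names and directory layout are illustrative assumptions.

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def sha256_of(path):
    """Stream a file through SHA-256 in chunks to bound memory use."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_file(src, dest_dir):
    """Copy src into dest_dir and verify the copy by checksum."""
    dest = Path(dest_dir) / Path(src).name
    shutil.copy2(src, dest)  # copy2 preserves timestamps/permissions
    if sha256_of(src) != sha256_of(dest):
        raise IOError("backup verification failed for %s" % src)
    return dest

# Demonstrate with a throwaway temp directory (illustrative paths).
with tempfile.TemporaryDirectory() as workdir:
    src = Path(workdir) / "data.txt"
    src.write_text("customer records\n")
    dest_dir = Path(workdir) / "backups"
    dest_dir.mkdir()
    copied = backup_file(src, dest_dir)
    print(copied.name, sha256_of(copied) == sha256_of(src))
# data.txt True
```

Real backup tooling layers scheduling, retention, and off-site replication on top of this verify-after-copy pattern.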

FlixBus
  • Berlin, Germany

Your Tasks – Paint the world green



  • Holistic cloud-based infrastructure automation

  • Distributed data processing clusters as well as data streaming platforms based on Kafka, Flink and Spark

  • Microservice platforms based on Docker

  • Development infrastructure and QA automation

  • Continuous Integration/Delivery/Deployment


Your Profile – Ready to hop on board



  • Experience in building and operating complex infrastructure

  • Expert-level: Linux, System Administration

  • Experience with Cloud Services, Expert-Level with either AWS or GCP  

  • Experience with server and operating-system-level virtualization is a strong plus, in particular practical experience with Docker and cluster technologies like Kubernetes, AWS ECS, OpenShift

  • Mindset: "Automate Everything", "Infrastructure as Code", "Pipelines as Code", "Everything as Code"

  • Hands-on experience with "Infrastructure as Code" tools: Terraform, CloudFormation, Packer

  • Experience with provisioning / configuration management tools (Ansible, Chef, Puppet, Salt)

  • Experience designing, building and integrating systems for instrumentation, metrics/log collection, and monitoring: CloudWatch, Prometheus, Grafana, DataDog, ELK

  • At least basic knowledge in designing and implementing Service Level Agreements

  • Solid knowledge of Network and general Security Engineering

  • At least basic experience with systems and approaches for Test, Build and Deployment automation (CI/CD): Jenkins, TravisCI, Bamboo

  • At least basic hands-on DBA experience, experience with data backup and recovery

  • Experience with JVM-based build automation is a plus: Maven, Gradle, Nexus, JFrog Artifactory
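
The Service Level Agreement bullet above can be grounded with a back-of-the-envelope calculation: translating an availability target into a monthly downtime budget. This is an illustrative Python sketch; the 99.x% targets and the 30-day month are assumptions for the example.

```python
def downtime_budget_minutes(availability_pct, days=30):
    """Minutes of allowed downtime per period at a given availability target."""
    total_minutes = days * 24 * 60  # 43,200 minutes in a 30-day month
    return total_minutes * (1 - availability_pct / 100.0)

for target in (99.0, 99.9, 99.99):
    print("%.2f%% -> %.1f min/month" % (target, downtime_budget_minutes(target)))
# 99.00% -> 432.0 min/month
# 99.90% -> 43.2 min/month
# 99.99% -> 4.3 min/month
```

Each extra "nine" cuts the error budget by roughly 10x, which is why SLA targets drive decisions about redundancy, alerting latency, and deployment risk.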

GrubHub Seamless
  • New York, NY

Got a taste for something new?

We’re Grubhub, the nation’s leading online and mobile food ordering company. Since 2004 we’ve been connecting hungry diners to the local restaurants they love. We’re moving eating forward with no signs of slowing down.

With more than 90,000 restaurants and over 15.6 million diners across 1,700 U.S. cities and London, we’re delivering like never before. Incredible tech is our bread and butter, but amazing people are our secret ingredient. Rigorously analytical and customer-obsessed, our employees develop the fresh ideas and brilliant programs that keep our brands going and growing.

Long story short, keeping our people happy, challenged and well-fed is priority one. Interested? Let’s talk. We’re eager to show you what we bring to the table.

About the Opportunity: 

Senior Site Reliability Engineers are embedded in Big Data specific Dev teams to focus on the operational aspects of our services, and our SREs run their respective products and services from conception to continuous operation.  We're looking for engineers who want to be a part of developing infrastructure software, maintaining and scaling it. If you enjoy focusing on reliability, performance, capacity planning, and automating everything, you'd probably like this position.





Some Challenges You’ll Tackle





TOOLS OUR SRE TEAM WORKS WITH:



  • Python – our primary infrastructure language

  • Cassandra

  • Docker (in production!)

  • Splunk, Spark, Hadoop, and PrestoDB

  • AWS

  • Python and Fabric for automation and our CD pipeline

  • Jenkins for builds and task execution

  • Linux (CentOS and Ubuntu)

  • DataDog for metrics and alerting

  • Puppet





You Should Have






  • Experience in AWS services like Kinesis, IAM, EMR, Redshift, and S3

  • Experience managing Linux systems

  • Configuration management tool experiences like Puppet, Chef, or Ansible

  • Continuous integration, testing, and deployment using Git, Jenkins, Jenkins DSL

  • Exceptional communication and troubleshooting skills.


NICE TO HAVE:



  • Python or Java / Scala development experience

  • Bonus points for deploying/operating large-ish Hadoop clusters in AWS/GCP and use of EMR, DC/OS, Dataproc.

  • Experience in Streaming data platforms, (Spark streaming, Kafka)

  • Experience developing solutions leveraging Docker
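
The CI/CD experience listed above boils down to one recurring pattern: run ordered stages and stop at the first failure. This is a hypothetical, stdlib-only Python sketch of that staged-deploy logic, the kind a Jenkins pipeline encodes; the stage names are made up for illustration.

```python
def run_pipeline(stages):
    """Run (name, func) stages in order; stop on the first failure.

    Returns (completed_stage_names, failed_stage_name_or_None).
    """
    completed = []
    for name, step in stages:
        if not step():
            return completed, name  # halt the pipeline at the failed stage
        completed.append(name)
    return completed, None

stages = [
    ("build", lambda: True),
    ("test", lambda: True),
    ("deploy-canary", lambda: False),  # simulate a failed canary deploy
    ("deploy-prod", lambda: True),
]
done, failed = run_pipeline(stages)
print(done, failed)
# ['build', 'test'] deploy-canary
```

Halting before "deploy-prod" when the canary fails is the whole point: a bad release never reaches the full fleet.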

Accenture
  • San Diego, CA
Join Accenture and help transform leading organizations and communities around the world. The sheer scale of our capabilities and client engagements and the way we collaborate, operate and deliver value provides an unparalleled opportunity to grow and advance. Choose Accenture, and make delivering innovative work part of your extraordinary career.
As part of our Data Business Group, you will lead technology innovation for our clients through robust delivery of world-class solutions. You will build better software better! There will never be a typical day and that's why people love it here. The opportunities to make a difference within exciting client initiatives are unlimited in the ever-changing technology landscape. You will be part of a highly collaborative and growing network of technology and data experts, who are taking on today's biggest, most complex business challenges using the latest data and analytics technologies. We will nurture your talent in an inclusive culture that values diversity. You will have an opportunity to work in roles such as Data Scientist, Data Engineer, or Chief Data Officer covering all aspects of Data including Data Management, Data Governance, Data Intelligence, Knowledge Graphs, and IoT. Come grow your career in Technology at Accenture!
People in our Client Delivery & Operations career track drive delivery and capability excellence through the design, development and/or delivery of a solution, service, capability or offering. They grow into delivery-focused roles, and can progress within their current role, laterally or upward.
Business & Technology Integration professionals advise upon, design, develop and/or deliver technology solutions that support best practice business changes
The Business & Industry Integration Associate Manager aligns technology with business strategy and goals, working directly with the client to gather requirements and to analyze, design, and/or implement technology best-practice business changes. They are sought out as experts internally and externally for their deep functional or industry expertise, domain knowledge, or offering expertise. They enhance Accenture's marketplace reputation.
Job Description
Data and Analytics professionals define strategies, develop and deliver solutions that enable the collection, processing and management of information from one or more sources, and the subsequent delivery of information to audiences in support of key business processes.
Data Management professionals define strategies and develop/deliver solutions and processes for managing enterprise-wide data throughout the data lifecycle from capture to processing to usage across all layers of the application architecture.
A professional at this position level within Accenture has the following responsibilities:
Identifies, assesses and solves complex business problems for area of responsibility, where analysis of situations or data requires an in-depth evaluation of variable factors.
Closely follows the strategic direction set by senior management when establishing near term goals.
Interacts with senior management at a client and/or within Accenture on matters where they may need to gain acceptance of an alternate approach.
Has some latitude in decision-making. Acts independently to determine methods and procedures on new assignments.
Decisions have a major day to day impact on area of responsibility.
Manages large to medium-sized teams and/or work efforts (if in an individual contributor role) at a client or within Accenture.
Basic Qualifications
    • Minimum of 3 years of hands-on technical experience implementing Big Data solutions utilizing Hadoop or other Data Science and Analytics platforms
    • Minimum of 3 years of experience with full life cycle development, from functional design to deployment
    • Minimum of 2 years of hands-on technical experience delivering Big Data solutions in the cloud with AWS or Azure
    • Minimum of 3 years of hands-on technical experience developing solutions utilizing at least two of the following:
    • Kafka-based streaming services
    • R Studio
    • Cassandra, MongoDB
    • MapReduce, Pig, Hive
    • Scala, Spark
    • Knowledge of Jenkins, Chef, Puppet
  • Bachelor's degree or equivalent years of work experience
  • Ability to travel 100%, Monday through Thursday
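The MapReduce, Pig, Hive item in the qualifications above refers to the map/shuffle/reduce programming model at the heart of Hadoop. As a rough illustration only, here is the classic word count expressed in plain Python; the function names are hypothetical and this stands in for, rather than reproduces, an actual distributed Hadoop job.

```python
from collections import defaultdict

def map_phase(lines):
    """Map: emit (word, 1) pairs, as a Hadoop mapper would per input split."""
    for line in lines:
        for word in line.split():
            yield word.lower(), 1

def shuffle_and_reduce(pairs):
    """Shuffle: group pairs by key; Reduce: sum the counts for each word."""
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return {key: sum(values) for key, values in grouped.items()}

if __name__ == "__main__":
    docs = ["big data big pipelines", "data at scale"]
    print(shuffle_and_reduce(map_phase(docs)))
    # {'big': 2, 'data': 2, 'pipelines': 1, 'at': 1, 'scale': 1}
```

In a real cluster the map and reduce phases run in parallel across machines and the shuffle moves data over the network; Pig and Hive compile higher-level scripts and SQL down to the same model.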
Professional Skill Requirements
    • Proven ability to build, manage and foster a team-oriented environment
    • Proven ability to work creatively and analytically in a problem-solving environment
    • Desire to work in an information systems environment
    • Excellent communication (written and oral) and interpersonal skills
    • Excellent leadership and management skills
All of our consulting professionals receive comprehensive training covering business acumen, technical and professional skills development. You'll also have opportunities to hone your functional skills and expertise in an area of specialization. We offer a variety of formal and informal training programs at every level to help you acquire and build specialized skills faster. Learning takes place both on the job and through formal training conducted online, in the classroom, or in collaboration with teammates. The sheer variety of work we do, and the experience it offers, provide an unbeatable platform from which to build a career.
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture.
Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration.
Accenture is a federal contractor and an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law.
Job candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture
  • Atlanta, GA
Are you ready to step up to the New and take your technology expertise to the next level?
As part of our Data Business Group, you will lead technology innovation for our clients through robust delivery of world-class solutions. You will build better software better! There will never be a typical day and that's why people love it here. The opportunities to make a difference within exciting client initiatives are unlimited in the ever-changing technology landscape. You will be part of a highly collaborative and growing network of technology and data experts, who are taking on today's biggest, most complex business challenges using the latest data and analytics technologies. We will nurture your talent in an inclusive culture that values diversity. You will have an opportunity to work in roles such as Data Scientist, Data Engineer, or Chief Data Officer covering all aspects of Data including Data Management, Data Governance, Data Intelligence, Knowledge Graphs, and IoT. Come grow your career in Technology at Accenture!
Join Accenture and help transform leading organizations and communities around the world. The sheer scale of our capabilities and client engagements and the way we collaborate, operate and deliver value provides an unparalleled opportunity to grow and advance. Choose Accenture, and make delivering innovative work part of your extraordinary career.
People in our Client Delivery & Operations career track drive delivery and capability excellence through the design, development and/or delivery of a solution, service, capability or offering. They grow into delivery-focused roles, and can progress within their current role, laterally or upward.
As part of our Advanced Technology & Architecture (AT&A) practice, you will lead technology innovation for our clients through robust delivery of world-class solutions. You will build better software better! There will never be a typical day and that's why people love it here. The opportunities to make a difference within exciting client initiatives are unlimited in the ever-changing technology landscape. You will be part of a growing network of technology experts who are highly collaborative, taking on today's biggest, most complex business challenges. We will nurture your talent in an inclusive culture that values diversity. Come grow your career in technology at Accenture!
Job Description
Data and Analytics professionals define strategies, develop and deliver solutions that enable the collection, processing and management of information from one or more sources, and the subsequent delivery of information to audiences in support of key business processes.
    • Produce clean, standards-based, modern code with an emphasis on advocacy toward end-users to produce high quality software designs that are well-documented.
    • Demonstrate an understanding of technology and digital frameworks in the context of data integration.
    • Ensure code and design quality through the execution of test plans and assist in development of standards, methodology and repeatable processes, working closely with internal and external design, business, and technical counterparts.
Big Data professionals develop deep next generation Analytics skills to support Accenture's data and analytics agendas, including skills such as Data Modeling, Business Intelligence, and Data Management.
A professional at this position level within Accenture has the following responsibilities:
    • Utilizes existing methods and procedures to create designs within the proposed solution to solve business problems.
    • Understands the strategic direction set by senior management as it relates to team goals.
    • Contributes to design of solution, executes development of design, and seeks guidance on complex technical challenges where necessary.
    • Primary upward interaction is with direct supervisor.
    • May interact with peers, client counterparts and/or management levels within Accenture.
    • Understands methods and procedures on new assignments and executes deliverables with guidance as needed.
    • May interact with peers and/or management levels at a client and/or within Accenture.
    • Determines methods and procedures on new assignments with guidance.
    • Decisions often impact the team in which they reside.
    • Manages small teams and/or work efforts (if in an individual contributor role) at a client or within Accenture.
Basic Qualifications
    • Minimum of 2 years of hands-on technical experience implementing or supporting Big Data solutions utilizing Hadoop.
    • Experience developing solutions utilizing at least two of the following:
      • Kafka-based streaming services
      • R Studio
      • Cassandra, MongoDB
      • MapReduce, Pig, Hive
      • Scala, Spark
      • Knowledge of Jenkins, Chef, Puppet
Preferred Qualifications
    • Hands-on technical experience utilizing Python
    • Full life cycle development experience
    • Experience with delivering Big Data Solutions in the cloud with AWS or Azure
    • Ability to configure and support API and OpenSource integrations
    • Experience administering Hadoop or other Data Science and Analytics platforms using the technologies above
    • Experience with DevOps support
    • Experience designing ingestion, low-latency, and visualization clusters to sustain data loads and meet business and IT expectations
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H1-B visa, F-1 visa (OPT), TN visa or any other non-immigrant status).
Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration.
Accenture is a Federal Contractor and an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law.
Job candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.
Accenture
  • Raleigh, NC
Are you ready to step up to the New and take your technology expertise to the next level?
As part of our Data Business Group, you will lead technology innovation for our clients through robust delivery of world-class solutions. You will build better software better! There will never be a typical day and that's why people love it here. The opportunities to make a difference within exciting client initiatives are unlimited in the ever-changing technology landscape. You will be part of a highly collaborative and growing network of technology and data experts, who are taking on today's biggest, most complex business challenges using the latest data and analytics technologies. We will nurture your talent in an inclusive culture that values diversity. You will have an opportunity to work in roles such as Data Scientist, Data Engineer, or Chief Data Officer covering all aspects of Data including Data Management, Data Governance, Data Intelligence, Knowledge Graphs, and IoT. Come grow your career in Technology at Accenture!
Join Accenture and help transform leading organizations and communities around the world. The sheer scale of our capabilities and client engagements and the way we collaborate, operate and deliver value provides an unparalleled opportunity to grow and advance. Choose Accenture, and make delivering innovative work part of your extraordinary career.
People in our Client Delivery & Operations career track drive delivery and capability excellence through the design, development and/or delivery of a solution, service, capability or offering. They grow into delivery-focused roles, and can progress within their current role, laterally or upward.
As part of our Advanced Technology & Architecture (AT&A) practice, you will lead technology innovation for our clients through robust delivery of world-class solutions. You will build better software better! There will never be a typical day and that's why people love it here. The opportunities to make a difference within exciting client initiatives are unlimited in the ever-changing technology landscape. You will be part of a growing network of technology experts who are highly collaborative, taking on today's biggest, most complex business challenges. We will nurture your talent in an inclusive culture that values diversity. Come grow your career in technology at Accenture!
Job Description
Data and Analytics professionals define strategies, develop and deliver solutions that enable the collection, processing and management of information from one or more sources, and the subsequent delivery of information to audiences in support of key business processes.
    • Produce clean, standards-based, modern code with an emphasis on advocacy toward end-users to produce high quality software designs that are well-documented.
    • Demonstrate an understanding of technology and digital frameworks in the context of data integration.
    • Ensure code and design quality through the execution of test plans and assist in development of standards, methodology and repeatable processes, working closely with internal and external design, business, and technical counterparts.
Big Data professionals develop deep next generation Analytics skills to support Accenture's data and analytics agendas, including skills such as Data Modeling, Business Intelligence, and Data Management.
A professional at this position level within Accenture has the following responsibilities:
    • Utilizes existing methods and procedures to create designs within the proposed solution to solve business problems.
    • Understands the strategic direction set by senior management as it relates to team goals.
    • Contributes to design of solution, executes development of design, and seeks guidance on complex technical challenges where necessary.
    • Primary upward interaction is with direct supervisor.
    • May interact with peers, client counterparts and/or management levels within Accenture.
    • Understands methods and procedures on new assignments and executes deliverables with guidance as needed.
    • May interact with peers and/or management levels at a client and/or within Accenture.
    • Determines methods and procedures on new assignments with guidance.
    • Decisions often impact the team in which they reside.
    • Manages small teams and/or work efforts (if in an individual contributor role) at a client or within Accenture.
Basic Qualifications
    • Minimum of 2 years of hands-on technical experience implementing or supporting Big Data solutions utilizing Hadoop.
    • Experience developing solutions utilizing at least two of the following:
      • Kafka-based streaming services
      • R Studio
      • Cassandra, MongoDB
      • MapReduce, Pig, Hive
      • Scala, Spark
      • Knowledge of Jenkins, Chef, Puppet
Preferred Qualifications
    • Hands-on technical experience utilizing Python
    • Full life cycle development experience
    • Experience with delivering Big Data Solutions in the cloud with AWS or Azure
    • Ability to configure and support API and OpenSource integrations
    • Experience administering Hadoop or other Data Science and Analytics platforms using the technologies above
    • Experience with DevOps support
    • Experience designing ingestion, low-latency, and visualization clusters to sustain data loads and meet business and IT expectations
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H1-B visa, F-1 visa (OPT), TN visa or any other non-immigrant status).
Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration.
Accenture is a Federal Contractor and an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law.
Job candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.