OnlyDataJobs.com

OverDrive Inc.
  • Garfield Heights, OH

The Data Integration team at OverDrive provides data for other teams to analyze and build their systems upon. We are plumbers, building a company-wide pipeline of clean, usable data. We typically don’t analyze the data ourselves; instead, we make it available to others. Your job, if you choose to join us, is to help us build a real-time data platform that connects our applications and makes available a data stream of potentially anything happening in our business.


Why Apply:


We are looking for someone who can help us wire up the next step and create something from the ground up (almost a green field): someone who can help us move large volumes of data from one team to the next and come up with ideas and solutions for how we look at data, using technologies like Kafka, Scala, Clojure, and F#.


About You:



  • You always keep up with the latest in distributed systems. You're extremely depressed each summer when the guy who runs highscalability.com hangs out the "Gone Fishin" sign.

  • You’re humble. Frankly, you’re in a supporting role. You help build infrastructure to deliver and transform data for others. (E.g., someone else gets the glory because of your effort, but you don’t care.)

  • You’re patient, because nothing works the first time when it comes to moving data around.

  • You hate batch. Real-time is your thing.

  • Scaling services is easy. You realize that the hardest part is scaling your data, and you want to help with that.

  • You think microservices should be event-driven. You prefer autonomous systems over tightly-coupled, time-bound synchronous ones with long chains of dependencies.


Problems You Could Help Solve:



  • Help us come up with solutions for speeding up our processing

  • Help us come up with ideas for making our indexing better

  • Help us create better ways to track all of our data

  • If you like to solve problems and use cutting-edge technology – keep reading


Responsibilities:



  • Implement near real-time ETL-like processes from hundreds of applications and data sources using the Apache Kafka ecosystem of technologies.

  • Design, develop, test, and tune a large-scale ‘stream data platform’ for connecting systems across our business in a decoupled manner.

  • Deliver data in near real-time from transactional data stores into analytical data stores.

  • R&D ways to acquire data and suggest new uses for that data.

  • “Stream processing.” Enable applications to react to, process and transform streams of data between business domains.

  • “Data Integration.” Capture application events and data store changes and pipe them to other interested systems. (A minimal sketch of this consume-transform-produce pattern follows.)
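
To make the consume-transform-produce pattern behind these bullets concrete, here is a minimal sketch using the kafka-python library. The topic names and the enrichment step are illustrative assumptions, not OverDrive’s actual pipeline:

    import json

    from kafka import KafkaConsumer, KafkaProducer

    # Read raw events from a hypothetical source topic.
    consumer = KafkaConsumer(
        "app.orders.raw",
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    for message in consumer:
        event = message.value
        event["source"] = "orders-service"  # transform: tag events for downstream consumers
        producer.send("app.orders.clean", value=event)  # hypothetical sink topic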


Experience/Skills:



  • Comfortable with functional programming concepts. While we're not writing strictly functional code, experience with languages like Scala, Haskell, or Clojure will make working with streaming data easier.

  • Familiarity with the JVM.  We’re using Scala with a little bit of Java and need to occasionally tweak the performance settings of the JVM itself.

  • Familiarity with C# and the .Net framework is helpful. While we don’t use it day to day, most of our systems run on Windows and .Net.

  • Comfortable working in both Linux and Windows environments. Our systems all run on Linux, but we interact with many systems running on Windows servers.

  • Shell scripting & common Linux tool skills.

  • Experience with build tools such as Maven, sbt, or rake.

  • Knowledge of distributed systems.

  • Knowledge of, or experience with, Kafka a plus.

  • Knowledge of Event-Driven/Reactive systems.

  • Experience with DevOps practices like Continuous Integration, Continuous Deployment, Build Automation, Server automation and Test Driven Development.


Things You Dig:



  • Stream processing tools (Kafka Streams, Storm, Spark, Flink, Google Cloud DataFlow etc.)

  • SQL-based technologies (SQL Server, MySQL, PostgreSQL, etc.)

  • NoSQL technologies (Cassandra, MongoDB, Redis, HBase, etc.)

  • Server automation tools (Ansible, Chef, Puppet, Vagrant, etc.)

  • Distributed Source Control (Mercurial, Git)

  • The Cloud (Azure, Amazon AWS)

  • The ELK Stack (Elasticsearch, Logstash, Kibana)


What’s Next:


As you’ve probably guessed, OverDrive is a place that values individuality and variety. We don’t want you to be like everyone else; we don’t even want you to be like us—we want you to be like you! So, if you're interested in joining the OverDrive team, apply below, and tell us what inspires you about OverDrive and why you think you are perfect for our team.



OverDrive values diversity and is proud to be an equal opportunity employer.

ConocoPhillips
  • Houston, TX
Our Company
ConocoPhillips is the world’s largest independent E&P company based on production and proved reserves. Headquartered in Houston, Texas, ConocoPhillips had operations and activities in 17 countries, $71 billion of total assets, and approximately 11,100 employees as of Sept. 30, 2018. Production excluding Libya averaged 1,221 MBOED for the nine months ended Sept. 30, 2018, and proved reserves were 5.0 billion BOE as of Dec. 31, 2017.
Employees across the globe focus on fulfilling our core SPIRIT Values of safety, people, integrity, responsibility, innovation and teamwork. And we apply the characteristics that define leadership excellence in how we engage each other, collaborate with our teams, and drive the business.
Description
The Analytics Platform Administrator is accountable for managing big data environments, on bare-metal, container infrastructure, or on a cloud platform. This role is responsible for system design, capacity planning, performance tuning, and ongoing monitoring of the data lake environment. As a lead administrator, this role will also manage day to day work of any onshore and offshore contractors on the platforms team. The position reports to the Director of Analytic Platforms and it is in Houston, TX.
Responsibilities May Include
  • Work with IT Operations and Information Security Operations for monitoring, troubleshooting, and support of incidents to maintain service levels
  • 24/7 coverage for analytics platforms
  • Monitor the performance of the systems and ensure high uptime
  • Deploy new and maintain existing data lake environments on Hadoop or AWS/Azure stack
  • Work closely with the various teams to make sure that all the big data applications are highly available and performing as expected. The teams include data science, database, network, BI, application, etc.
  • Work with AICOE and business analysts on designing and running technology proof of concepts on Analytics platforms
  • Capacity planning of the data lake environment
  • Manage and review log files, backup and recovery, upgrades, etc.
  • Responsible for security management of the platforms
  • Support of our on-premise Hortonworks Hadoop environment
Basic/Required
  • Legally authorized to work in the United States
  • 5+ years of related IT experience
  • 3+ years of Structured Query Language (SQL) experience
  • 1+ years of experience with Hadoop technology stack (HDFS, HBase, Spark, Sqoop, Hive, Ranger, NiFi, etc.)
  • Intermediate proficiency analyzing and understanding business/technology system architectures, databases, and client applications
Preferred
  • Bachelor's Degree in Computer Science, MIS, Information Technology or other related technical discipline
  • 1+ years of experience with AWS or Azure analytics stack
  • 1+ years of experience architecting data warehouses and/or data lakes
  • 1+ years of Oil and Gas Industry Experience
  • Delivery experience with enterprise databases and/or data warehouse platforms such as Oracle, SQL Server or Teradata
  • Automation experience with Python, PowerShell or a similar technology
  • Experience with source control and automated deployment. Useful technologies include Git, Jenkins, and Ansible
  • Experience with complex networking infrastructure including firewalls, VLANs, and load balancers
  • Experience as a DBA or Linux Admin
  • Ability to work independently with the customer in a fast-paced environment
  • Ability to learn new technologies quickly and leverage them in data analytics solutions
  • Ability to work with business and technology users to define and gather reporting and analytics requirements
  • Ability to work as a team player
  • Strong analytical, troubleshooting, and problem-solving skills
  • Takes ownership of actions and follows through on commitments by courageously dealing with important problems, holding others accountable, and standing up for what is right
  • Delivers results through realistic planning to accomplish goals
  • Generates effective solutions based on available information and makes timely decisions that are safe and ethical
To be considered for this position you must complete the entire application process, which includes answering all prescreening questions and providing your eSignature on or before the requisition closing date of March 11, 2019.
Candidates for this U.S. position must be a U.S. citizen or national, or an alien admitted as permanent resident, refugee, asylee or temporary resident under 8 U.S.C. 1160(a) or 1255(a) (1). Individuals with temporary visas such as A, B, C, D, E, F, G, H, I, J, L, M, NATO, O, P, Q, R or TN or who need sponsorship for work authorization in the United States now or in the future, are not eligible for hire.
ConocoPhillips is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, national origin, age, disability, veteran status, gender identity or expression, genetic information or any other legally protected status.
Job Function
Information Management-Information Technology
Job Level
Individual Contributor/Staff Level
Primary Location
NORTH AMERICA-USA-TEXAS-HOUSTON
Organization
ANALYTICS INNOVATION
Line of Business
Corporate Staffs
Job Posting
Mar 4, 2019, 1:39:58 PM
Briq
  • Santa Barbara, CA
  • Salary: $70k - 100k

Briq is hiring a Senior Full Stack Software Engineer, Big Data/ML Pipelines, to scale up its AI and ML dev team. You will need strong programming skills, proven knowledge of traditional Big Data technologies, experience working with heterogeneous data types at scale, experience with Big Data architectures, past experience working with a team to transform proof-of-concept tools into production-ready toolkits, and excellent communication and planning skills. You and the other engineers on this team will help advance Briq's capacity to build and deploy leading solutions for AI-based applications in cyber security.


What You'll Be Doing

  • Working with data scientists and data engineers to turn proof-of-concept analytics/workflows into production-ready tools and toolkits

  • Architecting and implementing high performance data pipelines and integrating them with existing cyber security infrastructure and solutions

  • Deploying and productionalizing solutions focused around threat hunting, anomaly detection, and security analytics (a minimal sketch follows this list)

  • Providing input and feedback to teams regarding decisions surrounding topics such as infrastructure, data architectures, and DevOps strategy

  • Building automation and tools that will increase the productivity of teams developing distributed systems
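
As a toy illustration of streaming anomaly detection, here is a rolling z-score sketch in plain Python. The window size, threshold, and sample counts are assumptions for illustration, not Briq's production analytics:

    from collections import deque
    from statistics import mean, stdev

    class RollingAnomalyDetector:
        def __init__(self, window=60, threshold=3.0):
            self.history = deque(maxlen=window)  # recent per-interval event counts
            self.threshold = threshold

        def observe(self, count):
            """Return True if `count` is anomalous vs. the recent window."""
            flagged = False
            if len(self.history) >= 10:  # need enough history to be meaningful
                mu, sigma = mean(self.history), stdev(self.history)
                if sigma > 0 and abs(count - mu) / sigma > self.threshold:
                    flagged = True
            self.history.append(count)
            return flagged

    detector = RollingAnomalyDetector()
    for minute_count in [52, 48, 50, 49, 51, 47, 50, 53, 49, 50, 480]:
        if detector.observe(minute_count):
            print(f"anomalous event volume: {minute_count}")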


What We Need To See

  • You have a BS, MS, or PhD in Computer Science, Computer Engineering, or a closely related field with 4+ years of work or research experience in software development

  • 1+ years working with data scientists and data engineers to take proof-of-concept ideas to production environments, including transitioning research-based tools to deployment readiness

  • Strong skills in Python and scripting tasks as well as comfort using Linux and typical development tools (e.g., Git, Jira, Kanban)

  • Solid knowledge of traditional big data technologies (e.g., Hadoop, Spark, Cassandra) and expertise developing for and targeting at least one of these platforms

  • Experience using automation tools (e.g., Ansible, Puppet, Chef) and DevOps tools (e.g., Jenkins, Travis-CI, GitLab CI)

  • Experience with or exposure to cyber network data (e.g., PCAP, flow, host logs) or a demonstrated ability to work with heterogeneous data types produced at high velocity

  • Highly motivated with strong communication skills; able to work successfully with multi-functional teams and coordinate effectively across organizational boundaries and geographies


Ways To Stand Out From The Crowd

  • Experience working with AI, machine learning, or deep learning frameworks

We integrate and optimize every deep learning framework. With deep learning, we can teach AI to do almost anything, and we are making sense of the complex world of construction. AI will spur a wave of social progress unmatched since the industrial revolution.


Briq is changing the world of construction. Join our development team and help us build the real-time, cost-effective computing platform driving our success in this dynamic and quickly growing field in one of the world's largest industries.


Briq is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.


Read About Us on TechCrunch : https://techcrunch.com/2019/02/22/briq-the-next-building-block-in-techs-reconstruction-of-the-construction-business-raises-3-million/ 


FreshBooks
  • Toronto, ON, Canada

FreshBooks has a big vision. We launched in 2003 but we’re just getting started and there’s a lot left to do. We're a high performing team working towards a common goal: building an extraordinary online accounting application to help small businesses better handle their finances. Known for extraordinary customer service and based in Toronto, Canada, FreshBooks serves paying customers in over 120 countries.


The Opportunity:


FreshBooks is seeking a data engineer to join our team. You will help build new features and update existing ones in our current data pipeline infrastructure. If you’re committed to great work and are constantly looking for ways to improve the systems you’re responsible for, we’d love to chat with you!


What you'll do:



  • Collaborate with data engineers and full-stack developers on cross-functional agile teams working on features for our stakeholders.

  • Work closely with our analytics, data science, and product teams to ensure their data needs are met.

  • Participate and share your ideas in technical design and architecture discussions.

  • Ship your code with our continuous integration process.

  • Provide coaching to junior data engineers and share and learn from your peers.

  • Develop your craft and build your expertise in data engineering.

What you bring:

  • Enthusiasm for data engineering!

  • Experience creating and/or maintaining data pipelines, from batch job to real time implementations.

  • Experience with at least one main development language. You’ll almost certainly work closely with Python code. If you have a stronger track record in a different language -- or, better yet, multiple different languages -- that’s great, and we’ll expect you to demonstrate to us that you can learn to produce well-factored idiomatic Python code.

  • Experience with MySQL, Postgres, or similar storage technologies.

  • A strong understanding of test-driven (and behavioural test-driven) development, and of building substantially complete test coverage around the code that you write, not just for the happy path (see the sketch after this list).

  • Experience working with large codebases and writing robust and testable code.

  • Familiarity with continuous integration and automated build pipelines.

  • Experience using GitHub, reviewing code and PRs, and merging branches.

  • Experience working in an Agile environment.

  • The ability to balance the desire to ship code quickly to our internal customers with the responsibility of making good technical decisions.
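
As one sketch of what testing beyond the happy path can look like, here is a minimal pytest example. The helper function is hypothetical, invented for illustration, not FreshBooks code:

    # test_normalize.py -- testing beyond the happy path with pytest.
    import pytest

    def normalize_amount(raw):
        """Parse a currency string like '$1,200.50' into cents (int)."""
        if raw is None or not raw.strip():
            raise ValueError("empty amount")
        cleaned = raw.strip().lstrip("$").replace(",", "")
        return round(float(cleaned) * 100)

    def test_happy_path():
        assert normalize_amount("$1,200.50") == 120050

    def test_whitespace_and_no_symbol():
        assert normalize_amount("  99.99 ") == 9999

    @pytest.mark.parametrize("bad", [None, "", "   "])
    def test_rejects_empty_input(bad):
        with pytest.raises(ValueError):
            normalize_amount(bad)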


What you might bring:



  • A track record of staying on the forefront of data engineering technology.

  • Experience with AWS, or another major cloud provider such as Google Cloud Platform.

  • Experience developing and/or managing real time data pipelines and fast data architectures.

  • Experience with BI tools such as Periscope or Looker.

  • Experience with Redshift, BigQuery, or similar cloud data warehouse technologies.

  • Experience with other modern storage technologies such as Cassandra, MongoDB, and others.

  • Experience with Spark, Kafka, Flink, Gearpump, Dataflow, or other streaming technologies.

  • Experience with Docker, Kubernetes, Ansible, Terraform, and other DevOps and infrastructure-as-code technologies.

  • Experience with machine learning technologies, techniques, and algorithms, especially operationalizing machine learning models.

  • A limitless imagination for where data could go and what we can do with it to make our customers and our people awesome!


Why Join Us?


Have we got your attention? Submit your application today and a member of our recruitment team will be in touch with you shortly!


FreshBooks is an equal opportunity employer that embraces the differences in all of our employees. We celebrate diversity and are committed to creating an inclusive environment for all FreshBookers. All applicants are evaluated based on their experience and qualifications in relation to this position.


FreshBooks provides employment accommodation during the recruitment process. Should you require any accommodation, please indicate this on your application and we will work with you to meet your accessibility needs. For any questions, suggestions or required documents regarding accessibility in a different format, please contact us by phone at 416-780-2700 and/or accessibility@freshbooks.com

Marketcircle
  • No office location
  • Salary: C$65k - 75k

Are you a software developer who loves working from home, collaborating with a fun team, and solving challenging problems?  If so, we're looking for an experienced software developer to join our Backend Team. This team is primarily responsible for developing, maintaining and supporting the backend services for our apps. You will be a highly motivated team player, as well as creative and passionate about developing new technology that not only improves the way our apps work, but also helps small businesses worldwide. You are a self-starter and aren’t afraid to jump in at the deep end.  Why you’ll love working at Marketcircle:



  • Work remotely! No one likes having to battle traffic during rush-hour on a daily basis, so you don’t have to! 

  • Startup style company. You won’t find any of that corporate BS here!

  • Ownership. We give you the freedom and flexibility to take ownership of your work. In fact, we believe in this so much that it’s one of our core values. 

  • Learn. We invest in our employees both vertically and horizontally. Want to attend a conference? Great! Want to learn the latest language? We have unlimited Udemy courses. 

  • Team. Our team is like our second family. And why shouldn’t they be? We work, learn, eat and in some cases even live with each other! 


You are:



  • an experienced software developer, with some experience building backend services

  • comfortable working remotely

  • comfortable working independently or collaboratively

  • willing to participate in a rotating on-call schedule


You’ll be working on:



  • an HTTP/REST API written in Ruby (you will probably spend most of your time here)

  • an Authentication/Payment backend written in Ruby

  • a PostgreSQL database with a custom C extension to track changes


You have:



  • a solid understanding of modern backend applications

  • experience with modern API design; ideally you know your way around a web framework such as Ruby on Rails, Django, or Sinatra

  • experience with either Ruby, Python, or a similar scripting language

  • an appreciation for well written, tested and documented code

  • experience with Linux or a BSD

  • experience with Git and GitHub


Bonus Points for:



  • experience with infrastructure management tools (like Puppet, Ansible or Chef) (we use Ansible)

  • experience with cloud infrastructure providers (like AWS, Google Cloud, Microsoft Azure or DigitalOcean)

  • knowing your way around the network stack, from HTTP to TCP to IP, and having a solid understanding of security (TLS/IPsec/firewalls)


How to Apply:  Send your resume over to jobs[at]marketcircle[dot]com and be sure to include why you’d be the best fit for this role. 

Marketcircle
  • No office location
  • Salary: C$65k - 75k

Are you a software developer who believes in remote work, loves collaborating with a fun team, and enjoys solving challenging problems?  If so, we're looking for an experienced software developer to join our Backend Team. This team is primarily responsible for developing, maintaining and supporting the backend services for our apps. You will be a highly motivated team player, as well as creative and passionate about developing new technology that not only improves the way our apps work, but also helps small businesses worldwide. You are a self-starter and aren’t afraid to jump in at the deep end.  Why you’ll love working at Marketcircle:



  • Work remotely! No one likes having to battle traffic during rush-hour on a daily basis, so you don’t have to! 

  • Startup style company. You won’t find any of that corporate BS here!

  • Learn. We invest in our employees both vertically and horizontally. Want to attend a conference? Great! Want to learn the latest language? We have unlimited Udemy courses. 

  • Team. Our team is like our second family. And why shouldn’t they be? We work, learn, eat and in some cases even live with each other! 


You are:



  • an experienced software developer, with some experience building backend services

  • comfortable working remotely

  • comfortable working independently or collaboratively

  • willing to participate in a rotating on-call schedule


You’ll be working on:



  • an HTTP/REST API written in Ruby (you will probably spend most of your time here)

  • an Authentication/Payment backend written in Ruby

  • PostgreSQL database(s) with a custom C extension to track changes

  • Elasticsearch 


You have:



  • a solid understanding of modern backend applications

  • experience with modern API design; ideally you know your way around a web framework such as Ruby on Rails, Django, or Sinatra

  • experience with either Ruby, Python, or a similar scripting language

  • an appreciation for well written, tested and documented code

  • experience with Linux or a BSD

  • experience with Git and GitHub


Bonus Points for:



  • experience with infrastructure management tools (like Puppet, Ansible or Chef) (we use Ansible)

  • experience with cloud infrastructure providers (like AWS, Google Cloud, Microsoft Azure or DigitalOcean)

  • knowing your way around the network stack, from HTTP to TCP to IP, and having a solid understanding of security (TLS/IPsec/firewalls)


How to Apply:  Send your resume over to jobs[at]marketcircle[dot]com and be sure to include why you’d be the best fit for this role. 

Ultra Tendency
  • Madrid, Spain

Got experience with infrastructure and release automation? Deep knowledge of clusters and a strong commitment to excellent processes? You've been there? You've done that? Then join us now!



This is your mission:



  • Support our customers and development teams in transitioning capabilities from development and testing to operations

  • Deploy and extend large-scale server clusters for our clients

  • Fine-tune and optimize our clusters to process millions of records every day 

  • Learn something new every day and enjoy solving complex problems


What we need:



  • You know Linux like the back of your hand

  • You love to automate all the things – SaltStack, Ansible, Terraform and Puppet are your daily business

  • You can write code in Python, Java, Ruby or similar languages.

  • You are driven by high quality standards and attention to detail

  • Understanding of the Hadoop ecosystem and knowledge of Docker are a plus


Things we offer you:



  • Work with our open-source Big Data gurus, such as our Apache HBase committer and Release Manager

  • Work in the open-source community and become a contributor. Learn from open-source enthusiasts whom you will find nowhere else in Germany!

  • Work in an English-speaking, international environment

  • Work with cutting edge equipment and tools







About Ultra Tendency



Ultra Tendency is a fast-growing software development and consulting company specialising in the fields of Big Data and Data Science. We design, develop, and support complex algorithms and applications that enable data-driven products and services for our customers.


Data privacy statement: http://www.ultratendency.com/data-protection.html

Comcast
  • Philadelphia, PA

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.

Job Summary:

Billions of requests. Millions of customers. Be part of Comcast's TPX X1 Stream Apps' Platform and APIs team! Our team designs, builds and operates the APIs that power Comcast's X1 web, mobile, Roku and SmartTV properties. Reliability and performance at this scale require complex information systems to be made simple. We are looking for an engineer who is able to listen to stakeholders and clients, understand technical requirements, collaborate on solutions, and deliver technology services in a high-velocity, dynamic, "always on" environment. As a member of our team you will work with other engineers and DevOps practitioners to produce mission-critical applications & infrastructure, tools, and processes that enable our systems to scale at a rapid pace. One day might involve creating an API that returns a customer's channel lineup or performance tuning of a Java web application; the next may be building tools to enable continuous delivery or infrastructure as code.

Technology snapshot: AWS, Apache, EC2, Git/Gerrit, Graphite, Grafana, Java, Lambda, Linux, Memcached, Scala, Akka, Splunk, Spring, Tomcat, Vmware, OpenStack, TerraForm, Ansible, Concourse

Where are we headed?

Our goal is to build, scale and guard the systems that delight our customers. To do so, you will need strong skills in the following areas:

Responsibilities

As a member of Advanced Application Engineering's Platform and APIs Team, you will provide technical expertise and guidance within our cross-functional project team, and you'll work closely with other software and QA engineers to build quality, scalable products that delight our customers. Responsibilities range from high-level logical architecture through low-level detailed design and implementation, including:

  • Design, build, deliver and scale sophisticated high-volume web properties and agreed upon solutions from the catalog of TVX application services.
  • Collaborate with project stakeholders to identify product and technical requirements. Conduct analysis to determine integration needs.
  • Write code that meets functional requirements and is testable and maintainable. Have a passion for test driven development.
  • Work with Quality Assurance team to determine if applications fit specification and technical requirements.
  • Produce technical designs and documentation at varying levels of granularity.

Desired Qualifications

  • 2+ years of software development experience in Java with a solid understanding of the Spring and Hibernate frameworks and REST-based architecture.
  • Experience with Java application servers and J2EE containers (Tomcat).
  • Knowledge of object-oriented design methodology and standard software design patterns.
  • Firm grasp of testing methodologies and frameworks.
  • Experience with caching, especially HTTP-compliant caches (see the sketch after this list).
  • Fundamental understanding of the HTTP protocol.
  • Experience developing with Major MVC frameworks (Spring MVC).
  • Familiarity with consolidating and normalizing data across many data sources, specifically Internet data aggregation and metadata processing.
  • Familiar with agile development methodologies such as Scrum.
  • Strong technical written and verbal communication skills.
  • A sense of ownership, initiative, and drive and a love of learning!
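
As a hedged illustration of HTTP-compliant caching, here is a minimal sketch of ETag validation using Python's standard library as a neutral stand-in for this team's Java/Tomcat stack. The endpoint and payload are invented for illustration:

    import hashlib
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    LINEUP = {"customer": "123", "channels": ["NBC", "ESPN", "HBO"]}  # hypothetical payload

    class LineupHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = json.dumps(LINEUP).encode("utf-8")
            etag = '"%s"' % hashlib.sha256(body).hexdigest()[:16]
            if self.headers.get("If-None-Match") == etag:
                self.send_response(304)  # client's cached copy is still current
                self.end_headers()
                return
            self.send_response(200)
            self.send_header("ETag", etag)
            self.send_header("Cache-Control", "max-age=60")  # allow caching for 60s
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), LineupHandler).serve_forever()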

Additional Qualifications

  • UNIX background (Solaris/Linux)
  • CDN Experience is a plus
  • Familiarity with cloud computing (OpenStack, S3, SQS, Hadoop...).
  • Experience with Scala, Ruby on Rails, Akka

Education

Bachelor's degree in Engineering or Computer Science or a related field, or relevant work experience.

Comcast is an EOE/Veterans/Disabled/LGBT employer

ECRI Institute
  • Plymouth Meeting, PA

Work with data, pipelines and analytics across our web properties, CRM platform, and member services. Enhance organizational strategy and tactics by providing actionable recommendations to business groups. Support ECRI Institute’s 50-year mission of advancing effective evidence-based healthcare worldwide. Partner closely with DevOps, Software Engineers, Data Scientists and our respected and accomplished business partners, all working onsite at ECRI’s scenic suburban world headquarters in Plymouth Meeting, PA. Benefit from a healthy work-life balance while staying on the leading edge of technology and thriving in an innovative startup-like culture minus the risk. Sleep well knowing you are helping achieve a world where safe, high-quality healthcare is accessible to everyone.


Responsibilities:



  • Assemble large, complex data sets that meet functional / non-functional business requirements.

  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.

  • Create and troubleshoot simple to advanced level SQL scripts.

  • Perform analysis of existing data pipeline and data store performance and provide improvement recommendations

  • Gather and document business requirements.

  • Support and work with cross-functional teams.

  • Support bug fixes and implement enhancements to existing systems.

  • Participate in team meetings and code reviews.

  • Adhere to ECRI platform, standards, and best practices.

  • Work independently and within a team when needed.

  • Participate in personal growth opportunities.


Qualifications:


Experience:



  • 3-5 years of recent experience in at least one modern programming language in a data engineering or back-end development capacity: Python, R, Node.js, Clojure, or Scala

  • Experience with data frames in any language (see the sketch after this list)

  • Experience with relational SQL and NoSQL databases, such as SQL Server, PostgreSQL, Cassandra and Redis.

  • Experience optimizing SQL queries

  • Experience building and deploying ETL pipelines

  • Experience with agile or Kanban methodologies.

  • Desire to learn and grow professionally.
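
As a minimal illustration of data-frame work (shown in pandas, though any language qualifies), assuming a hypothetical CSV of member activity; the file name and columns are invented, not ECRI data:

    import pandas as pd

    df = pd.read_csv("member_activity.csv", parse_dates=["visit_date"])
    # Aggregate visits per member per month.
    monthly = (
        df.groupby(["member_id", df["visit_date"].dt.to_period("M")])
          .size()
          .rename("visits")
          .reset_index()
    )
    print(monthly.head())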


Critical Skills



  • Efficiently communicate complex ideas, learn from others, and adopt standards.

  • Analyze large amounts of data: facts, figures, and numbers to find conclusions and actionable recommendations for business groups.

  • Troubleshoot and effectively diagnose and fix problems.

  • Attention to detail.


Beneficial Additional Knowledge and Skills (not required):



  • Familiarity with DevOps technologies such as Containers, Kubernetes, Chef, Puppet, Ansible.

  • Familiarity with CI/CD pipelines.

  • Familiarity with open-source data pipeline tools such as Airflow, MinIO, NATS, Spark, or Kafka

  • Healthcare business experience.

  • Experience transitioning SSIS/BizTalk data pipelines to open-source-based pipelines.

  • Experience using SharePoint REST API


Education: Associate/Bachelor’s degree in Computer Science or related major degree OR Equivalent professional experience.


About ECRI Institute: ECRI Institute is a nonprofit organization that researches the best approaches to improving patient safety and care. It has its headquarters in Plymouth Meeting, Pennsylvania. We have a diverse working environment that encourages teamwork and an open exchange of ideas. Over 400 dedicated staff blend extraordinary scope and depth of clinical, management, and technical expertise with a wide range of experienced healthcare professionals. Our competitive benefit package for full-time and benefit-eligible part-time employees includes medical, dental, vision, and prescription coverage which begin on the first day of the month following their date of employment.


For 50 years, ECRI Institute has dedicated itself to bringing the discipline of applied scientific research to healthcare. Through rigorous, evidence-based patient safety research, ECRI Institute has recommended actionable solutions that have saved countless lives. ECRI Institute is designated an Evidence-Based Practice Center by the U.S. Agency for Healthcare Research and Quality. ECRI Institute PSO is listed as a federally certified Patient Safety Organization by the U.S. Department of Health and Human Services and strives to achieve the highest levels of safety and quality in healthcare by collecting and analyzing patient safety information and sharing best practices and lessons learned. Qualified applicants must be legally authorized to work in the United States.


ECRI Institute is an equal opportunity and affirmative action employer and does not discriminate against otherwise qualified applicants on the basis of race, color, creed, religion, ancestry, age, sex, sexual orientation, marital status, national origin, disability or handicap, or veteran status. If you need a reasonable accommodation for any part of the application and/or hiring process, please contact the Human Resources Department at 610-825-6000. EOE Minority/Female/Disability/Veteran


To be considered further for this opportunity, interested candidates must apply directly on our website. https://www.ecri.org/about/pages/careers.aspx

Terazo
  • Raleigh, NC
  • Location: Richmond, VA
  • Type: Full-time
  • Up to 25% travel possible


Job Description:

Terazo is looking to hire a full-time Data Engineer. Terazo is a platform-focused development and managed services company built to enable our clients' success in leveraging API integrations, data engineering, and DevOps to deliver new business value. Organizations engage Terazo to ensure that their critical APIs and important data integrations are secure and available to their customers and employees.


While we are headquartered in Richmond, Virginia, we work with clients across the United States.


You:
  • View your clients’ success as your own
  • Are passionate about what you do
  • Love to teach yourself new skills
  • Seek opportunities to learn
  • Thrive in ambiguity
  • Enjoy working on a team
  • Easily adapt to new project requirements and client expectations


What You’ll Do:
  • Partner with clients to develop and maintain first-class data platforms that include JupyterHub, Databricks, and other data science tools
  • Develop streaming data processors that crunch numbers in real time to help our clients make smart decisions
  • Collaborate with API developers to build data-driven microservices for our clients
  • Write concise, fun cookbooks to empower our clients


Preferred Skills (but not limited to):
  • Designing, developing, scaling, and maintaining data pipelines or reports using Spark, Kafka, Hive, Python, or Scala (a minimal sketch follows this list)
  • Hadoop development
  • Putting modern data platforms into use, including platform-as-a-service variants
  • Communicating complex ideas with clients and technical staff
  • Using Git or GitHub in a CI/CD development workflow
  • Developing microservices using languages like Java, Python, or JavaScript and using REST APIs
  • Writing effective technical documentation
  • Automating deployments using DevOps tools like Docker, Ansible, Terraform, or Kubernetes
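
As one hedged illustration of the pipeline and reporting work above, here is a minimal PySpark batch-report sketch. The paths, columns, and event types are hypothetical, not a Terazo client's schema:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("daily-report").getOrCreate()

    events = spark.read.json("s3a://example-bucket/events/")  # hypothetical input path
    report = (
        events.where(F.col("event_type") == "purchase")
              .groupBy(F.to_date("timestamp").alias("day"))
              .agg(F.count("*").alias("purchases"),
                   F.sum("amount").alias("revenue"))
              .orderBy("day")
    )
    report.write.mode("overwrite").parquet("s3a://example-bucket/reports/daily/")
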
Comcast
  • Philadelphia, PA

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.

Summary

Responsible for promoting the use of industry and Company technology standards. Monitors emerging technologies/technology practices for potential use within the Company. Designs and develops updated infrastructure in support of one or more business processes. Helps to ensure a balance between tactical and strategic technology solutions. Considers business problems "end-to-end": including people, process, and technology, both within and outside the enterprise, as part of any design solution. Mentors, reviews code, and verifies that object-oriented design best practices and coding and architectural guidelines are adhered to. Identifies and drives issues through closure. Speaks at conferences and tech meetups about Comcast technologies and assists in finding key technical positions.

This role brings to bear significant cloud experience in the private and public cloud space as well as big data and software engineering. This role will be key in the re-platforming of the CX Personalization program in support of wholesale requirements. This person will engage as part of software delivery teams and contribute to several strategic efforts that drive personalized customer experiences across product usage, support interactions and customer journeys. This role leads the building of real-time big data platforms, machine learning algorithms and data services that enable proactive responses for customers at every critical touch point.

Core Responsibilities

-Enterprise-Level architect for "Big Data" Event processing, analytics, data store, and cloud platforms.

-Enterprise-Level architect for cloud applications and "Platform as a Service" capabilities

-Detailed current-state product and requirement analysis.

-Security Architecture for "Big Data" applications and infrastructure

-Ensures programs are envisioned, designed, developed, and implemented across the enterprise to meet business needs. Interfaces with the enterprise architecture team and other functional areas to ensure that the most efficient solution is designed to meet business needs.

-Ensures solutions are well engineered, operable, maintainable, and delivered on schedule. Develops, documents, and ensures compliance with best practices, including but not limited to: coding standards, object-oriented design, platform- and framework-specific design concerns, and human interface guidelines.

-Tracks and documents requirements for enterprise development projects and enhancements.

-Monitors current and future trends, technology and information that will positively affect organizational projects; applies and integrates emerging technological trends to new and existing systems architecture. Mentors team members in relevant technologies and implementation architecture.

-Contributes to the overall system implementation strategy for the enterprise and participates in appropriate forums, meetings, presentations etc. to meet goals.

-Gathers and understands client needs, finding key areas where technology leverage is possible to improve business processes, defines architectural approaches and develops technology proofs. Communicates technology direction.

-Monitors the project lifecycle from intake through delivery. Ensures the entire solution design is complete and consistent from the start and seeks to remove as much re-work as possible.

-Works with product marketing to define requirements. Develops and communicates system/subsystem architecture. Develops clear system requirements for component subsystems.

-Acts as architectural lead on project.

-Applies new and innovative ideas to old or new problems. Fosters environments that encourage innovation. Contributes to and supports effort to further build intellectual property via patents.

-Consistent exercise of independent judgment and discretion in matters of significance.

-Regular, consistent and punctual attendance. Must be able to work nights and weekends, variable schedule(s) as necessary.

-Other duties and responsibilities as assigned.

Requirements:

-Demonstrated experience with "Platform as a Service" (PaaS) architectures including strategy, architectural patterns and standards, approaches to multi-tenancy, scalability, and security.

-Demonstrated experience with schema and data governance and message metadata stores

-Demonstrated experience with public cloud resources such as AWS.

-Demonstrated experience with cloud automation technologies including Ansible, Terraform, Chef, Puppet, etc.

-Hands-on experience with Data Flow processing engines, such as Apache NiFi and Apache Flink

-Working knowledge / experience with Big Data platforms (Kafka, Hadoop, Storm/Spark, NoSQL, In-memory data grid)

-Working knowledge / experience with Linux, Java, Python.

Education Level

- Bachelor's Degree or Equivalent

Field of Study

- Engineering, Computer Science

Years Experience

2+ years in Software Engineering Experience

1+ years in Cloud Infrastructure

Compliance

Comcast is an EEO/AA/Drug Free Workplace.

Disclaimer

The above information has been designed to indicate the general nature and level of work performed by employees in this role. It is not designed to contain or be interpreted as a comprehensive inventory of all duties, responsibilities and qualifications

Comcast is an EOE/Veterans/Disabled/LGBT employer


Comcast
  • Philadelphia, PA

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.

Summary

Responsible for promoting the use of industry and Company technology standards. Monitors emerging technologies/technology practices for potential use within the Company. Designs and develops updated infrastructure in support of one or more business processes. Helps to ensure a balance between tactical and strategic technology solutions. Considers business problems "end-to-end": including people, process, and technology, both within and outside the enterprise, as part of any design solution. Mentors, reviews code, and verifies that object-oriented design best practices and coding and architectural guidelines are adhered to. Identifies and drives issues through closure. Speaks at conferences and tech meetups about Comcast technologies and assists in finding key technical positions.

This role brings to bear significant cloud experience in the private and public cloud space as well as big data and software engineering. This role will be key in the re-platforming of the CX Personalization program in support of wholesale requirements. This person will engage as part of software delivery teams and contribute to several strategic efforts that drive personalized customer experiences across product usage, support interactions and customer journeys. This role leads the building of real-time big data platforms, machine learning algorithms and data services that enable proactive responses for customers at every critical touch point.

Core Responsibilities

-Enterprise-Level architect for "Big Data" Event processing, analytics, data store, and cloud platforms.

-Enterprise-Level architect for cloud applications and "Platform as a Service" capabilities

-Detailed current-state product and requirement analysis.

-Security Architecture for "Big Data" applications and infrastructure

-Ensures programs are envisioned, designed, developed, and implemented across the enterprise to meet business needs. Interfaces with the enterprise architecture team and other functional areas to ensure that the most efficient solution is designed to meet business needs.

-Ensures solutions are well engineered, operable, maintainable, and delivered on schedule. Develops, documents, and ensures compliance with best practices, including but not limited to: coding standards, object-oriented design, platform- and framework-specific design concerns, and human interface guidelines.

-Tracks and documents requirements for enterprise development projects and enhancements.

-Monitors current and future trends, technology and information that will positively affect organizational projects; applies and integrates emerging technological trends to new and existing systems architecture. Mentors team members in relevant technologies and implementation architecture.

-Contributes to the overall system implementation strategy for the enterprise and participates in appropriate forums, meetings, presentations etc. to meet goals.

-Gathers and understands client needs, finding key areas where technology leverage is possible to improve business processes, defines architectural approaches and develops technology proofs. Communicates technology direction.

-Monitors the project lifecycle from intake through delivery. Ensures the entire solution design is complete and consistent from the start and seeks to remove as much re-work as possible.

-Works with product marketing to define requirements. Develops and communicates system/subsystem architecture. Develops clear system requirements for component subsystems.

-Acts as architectural lead on project.

-Applies new and innovative ideas to old or new problems. Fosters environments that encourage innovation. Contributes to and supports effort to further build intellectual property via patents.

-Consistent exercise of independent judgment and discretion in matters of significance.

-Regular, consistent and punctual attendance. Must be able to work nights and weekends, variable schedule(s) as necessary.

-Other duties and responsibilities as assigned.

Requirements:

-Demonstrated experience with "Platform as a Service" (PaaS) architectures including strategy, architectural patterns and standards, approaches to multi-tenancy, scalability, and security.

-Demonstrated experience with schema and data governance and message metadata stores

-Demonstrated experience with public cloud resources such as AWS.

-Demonstrated experience with cloud automation technologies including Ansible, Terraform, Chef, Puppet, etc.

-Hands-on experience with Data Flow processing engines, such as Apache NiFi and Apache Flink

-Working knowledge / experience with Big Data platforms (Kafka, Hadoop, Storm/Spark, NoSQL, In-memory data grid)

-Working knowledge / experience with Linux, Java, Python.

Education Level

- Bachelor's Degree or Equivalent

Field of Study

- Engineering, Computer Science

Years Experience

3+ years in Software Engineering Experience

1+ years in Cloud Infrastructure

Compliance

Comcast is an EEO/AA/Drug Free Workplace.

Disclaimer

The above information has been designed to indicate the general nature and level of work performed by employees in this role. It is not designed to contain or be interpreted as a comprehensive inventory of all duties, responsibilities and qualifications

Comcast is an EOE/Veterans/Disabled/LGBT employer


ConocoPhillips
  • Houston, TX
Our Company
ConocoPhillips is the world’s largest independent E&P company based on production and proved reserves. Headquartered in Houston, Texas, ConocoPhillips had operations and activities in 17 countries, $71 billion of total assets, and approximately 11,100 employees as of Sept. 30, 2018. Production excluding Libya averaged 1,221 MBOED for the nine months ended Sept. 30, 2018, and proved reserves were 5.0 billion BOE as of Dec. 31, 2017.
Employees across the globe focus on fulfilling our core SPIRIT Values of safety, people, integrity, responsibility, innovation and teamwork. And we apply the characteristics that define leadership excellence in how we engage each other, collaborate with our teams, and drive the business.
Description
The Analytics Platform Administrator is accountable for managing big data environments, on bare-metal, container infrastructure, or on a cloud platform. This role is responsible for system design, capacity planning, performance tuning, and ongoing monitoring of the data lake environment. As a lead administrator, this role will also manage day to day work of any onshore and offshore contractors on the platforms team. The position reports to the Director of Analytic Platforms and it is in Houston, TX.
Responsibilities May Include
  • Work with IT Operations and Information Security Operations for monitoring, troubleshooting, and support of incidents to maintain service levels
  • 24/7 coverage for analytics platforms
  • Monitor the performance of the systems and ensure high uptime
  • Deploy new and maintain existing data lake environments on Hadoop or AWS/Azure stack
  • Work closely with the various teams to make sure that all the big data applications are highly available and performing as expected. The teams include data science, database, network, BI, application, etc.
  • Work with AICOE and business analysts on designing and running technology proof of concepts on Analytics platforms
  • Capacity planning of the data lake environment
  • Manage and review log files, backup and recovery, upgrades, etc.
  • Responsible for security management of the platforms
  • Support of our on-premise Hortonworks Hadoop environment
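
As one concrete flavor of the monitoring and capacity-planning work above, a small Scala check against the Hadoop FileSystem API might look like the sketch below. The 70% alert threshold is an arbitrary example value, not something the posting specifies.

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.FileSystem

    // Reads cluster-wide HDFS capacity (core-site.xml/hdfs-site.xml must be
    // on the classpath) and warns past an example threshold.
    object HdfsCapacityCheck {
      def main(args: Array[String]): Unit = {
        val fs = FileSystem.get(new Configuration())
        val status = fs.getStatus
        val usedPct = 100.0 * status.getUsed / status.getCapacity
        println(f"capacity=${status.getCapacity} bytes, used=$usedPct%.1f%%")
        if (usedPct > 70.0) // example threshold; tune for your environment
          System.err.println("WARN: data lake above 70% capacity")
      }
    }
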
Basic/Required
  • Legally authorized to work in the United States
  • 5+ years of related IT experience
  • 3+ years of Structured Query Language (SQL) experience
  • 1+ years of experience with Hadoop technology stack (HDFS, HBase, Spark, Sqoop, Hive, Ranger, NiFi, etc.)
  • Intermediate proficiency analyzing and understanding business/technology system architectures, databases, and client applications
Preferred
  • Bachelor's Degree in Computer Science, MIS, Information Technology or other related technical discipline
  • 1+ years of experience with AWS or Azure analytics stack
  • 1+ years of experience architecting data warehouses and/or data lakes
  • 1+ years of Oil and Gas Industry Experience
  • Delivery experience with enterprise databases and/or data warehouse platforms such as Oracle, SQL Server or Teradata
  • Automation experience with Python, PowerShell or a similar technology
  • Experience with source control and automated deployment. Useful technologies include Git, Jenkins, and Ansible
  • Experience with complex networking infrastructure including firewalls, VLANs, and load balancers
  • Experience as a DBA or Linux Admin
  • Ability to work in a fast-paced environment independently with the customer
  • Ability to learn new technologies quickly and leverage them in data analytics solutions
  • Ability to work with business and technology users to define and gather reporting and analytics requirements
  • Ability to work as a team player
  • Strong analytical, troubleshooting, and problem-solving skills
  • Takes ownership of actions and follows through on commitments by courageously dealing with important problems, holding others accountable, and standing up for what is right
  • Delivers results through realistic planning to accomplish goals
  • Generates effective solutions based on available information and makes timely decisions that are safe and ethical
To be considered for this position you must complete the entire application process, which includes answering all prescreening questions and providing your eSignature on or before the requisition closing date of February 28, 2019.
Candidates for this U.S. position must be a U.S. citizen or national, or an alien admitted as permanent resident, refugee, asylee or temporary resident under 8 U.S.C. 1160(a) or 1255(a) (1). Individuals with temporary visas such as A, B, C, D, E, F, G, H, I, J, L, M, NATO, O, P, Q, R or TN or who need sponsorship for work authorization in the United States now or in the future, are not eligible for hire.
ConocoPhillips is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, national origin, age, disability, veteran status, gender identity or expression, genetic information or any other legally protected status.
Job Function
Information Management-Information Technology
Job Level
Individual Contributor/Staff Level
Primary Location
NORTH AMERICA-USA-TEXAS-HOUSTON
Organization
ANALYTICS INNOVATION
Line of Business
Corporate Staffs
Job Posting
Feb 20, 2019, 9:10:39 AM
Comcast
  • Philadelphia, PA

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.

Summary

Responsible for promoting the use of industry and Company technology standards. Monitors emerging technologies/technology practices for potential use within the Company. Designs and develops updated infrastructure in support of one or more business processes. Helps to ensure a balance between tactical and strategic technology solutions. Considers business problems "end-to-end": including people, process, and technology, both within and outside the enterprise, as part of any design solution. Mentors engineers, reviews code, and verifies that object-oriented design best practices and coding and architectural guidelines are adhered to. Identifies and drives issues through closure. Speaks at conferences and tech meetups about Comcast technologies and assists in finding key technical positions.

This role calls for significant experience in the private and public cloud space, as well as in big data and software engineering. It will be key in the re-platforming of the CX Personalization program in support of wholesale requirements. This person will engage as part of software delivery teams and contribute to several strategic efforts that drive personalized customer experiences across product usage, support interactions, and customer journeys. The role leads the building of real-time big data platforms, machine learning algorithms, and data services that enable proactive responses for customers at every critical touch point.

Core Responsibilities

-Enterprise-Level architect for "Big Data" Event processing, analytics, data store, and cloud platforms.

-Enterprise-Level architect for cloud applications and "Platform as a Service" capabilities.

-Detailed current-state product and requirement analysis.

-Security Architecture for "Big Data" applications and infrastructure.

-Ensures programs are envisioned, designed, developed, and implemented across the enterprise to meet business needs. Interfaces with the enterprise architecture team and other functional areas to ensure that the most efficient solution is designed to meet business needs.

-Ensures solutions are well engineered, operable, maintainable, and delivered on schedule. Develops, documents, and ensures compliance with best practices, including but not limited to: coding standards, object-oriented design, platform- and framework-specific design concerns, and human interface guidelines.

-Tracks and documents requirements for enterprise development projects and enhancements.

-Monitors current and future trends, technology and information that will positively affect organizational projects; applies and integrates emerging technological trends to new and existing systems architecture. Mentors team members in relevant technologies and implementation architecture.

-Contributes to the overall system implementation strategy for the enterprise and participates in appropriate forums, meetings, presentations etc. to meet goals.

-Gathers and understands client needs, finding key areas where technology leverage is possible to improve business processes, defines architectural approaches and develops technology proofs. Communicates technology direction.

-Monitors the project lifecycle from intake through delivery. Ensures the entire solution design is complete and consistent from the start and seeks to remove as much re-work as possible.

-Works with product marketing to define requirements. Develops and communicates system/subsystem architecture. Develops clear system requirements for component subsystems.

-Acts as architectural lead on project.

-Applies new and innovative ideas to old or new problems. Fosters environments that encourage innovation. Contributes to and supports effort to further build intellectual property via patents.

-Consistent exercise of independent judgment and discretion in matters of significance.

-Regular, consistent and punctual attendance. Must be able to work nights and weekends, variable schedule(s) as necessary.

-Other duties and responsibilities as assigned.

Requirements:

-Demonstrated experience with "Platform as a Service" (PaaS) architectures including strategy, architectural patterns and standards, approaches to multi-tenancy, scalability, and security.

-Demonstrated experience with schema and data governance and message metadata stores.

-Demonstrated experience with public cloud resources such as AWS.

-Demonstrated experience with cloud automation technologies such as Ansible, Terraform, Chef, and Puppet.

-Hands-on experience with data flow processing engines, such as Apache NiFi.

-Working knowledge of / experience with Big Data platforms (Kafka, Hadoop, Storm/Spark, NoSQL, in-memory data grids); a minimal Kafka consumer sketch follows this list.

-Working knowledge of / experience with Linux, Java, and Python.
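
As a hedged illustration of the Kafka item above, a minimal Scala consumer might look like the following; the broker address, group id, and topic name (customer-events) are placeholder assumptions.

    import java.time.Duration
    import java.util.{Collections, Properties}
    import org.apache.kafka.clients.consumer.KafkaConsumer
    import scala.jdk.CollectionConverters._

    // Minimal event tap: subscribe to one topic and print each record.
    object EventTapSketch {
      def main(args: Array[String]): Unit = {
        val props = new Properties()
        props.put("bootstrap.servers", "localhost:9092") // placeholder broker
        props.put("group.id", "event-tap-sketch")
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
        val consumer = new KafkaConsumer[String, String](props)
        consumer.subscribe(Collections.singletonList("customer-events")) // placeholder topic
        while (true) {
          for (rec <- consumer.poll(Duration.ofMillis(500)).asScala)
            println(s"${rec.key}:${rec.value} @ offset ${rec.offset}")
        }
      }
    }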

Education Level

- Bachelor's Degree or Equivalent

Field of Study

- Engineering, Computer Science

Years of Experience

- 11+ years in software engineering

- 4+ years in technical leadership roles

- 1+ years in cloud infrastructure

Compliance

Comcast is an EEO/AA/Drug Free Workplace.

Disclaimer

The above information has been designed to indicate the general nature and level of work performed by employees in this role. It is not designed to contain or be interpreted as a comprehensive inventory of all duties, responsibilities and qualifications.

Comcast is an EOE/Veterans/Disabled/LGBT employer

Hays
  • Toronto, ON, Canada


A major bank is looking for a Big Data Engineer to work out of its downtown Toronto office for 6 months plus likely extension.

Big Data Engineer

Client: HSBC
Role: Data Engineer
Duration: 6 months, plus likely extension
Rate: Open, depending on experience
Location: Toronto, ON

Our client, a globally recognized bank, is looking to hire a Data Engineer based in Toronto for a minimum of 6 months to join their team.

Your new company
A leading bank with multiple offices across Canada and throughout the world is looking for a Big Data Engineer for a 6-month contract in its Toronto office. The bank has an excellent reputation within its sector and is known as a market leader.

Your new role
You will be working as a Big Data Engineer as part of the core big data technology and design team. You will be entrusted to develop solutions and design ideas that enable the software to meet the acceptance and success criteria, and you will work with architects and business analysts to build data components in the Big Data environment.

What you'll need to succeed
* 8+ years of professional software development experience, including at least 4 years in a Big Data environment
* 4+ years of programming experience in Java, Scala, and Spark.
* Proficient in SQL and relational database design.
* Agile and DevOps experience - at least 2+ years
* Project planning.
* Must have excellent communication skills and strong team-working skills
* Experienced in Java, Scala and/or Python in Unix/Linux environments, on-premises and in the cloud
* Experienced in building robust batch and real-time data processing solutions on Hadoop
* Java development and design using Java 1.7/1.8.
* Experience with most of the following technologies (Apache Hadoop, Scala, Apache Spark, Spark streaming, YARN, Kafka, Hive, HBase, Presto, Python, ETL frameworks, MapReduce, SQL, RESTful services).
* Sound working knowledge of the Unix/Linux platform
* Hands-on experience building data pipelines using Hadoop components such as Sqoop, Hive, Pig, Spark, and Spark SQL (see the UDF sketch after this list).
* Must have experience developing HiveQL and UDFs for analysing semi-structured/structured datasets
* Experience with time-series/analytics databases such as Elasticsearch, or NoSQL databases.
* Experience with industry standard version control tools (Git, GitHub), automated deployment tools (Ansible & Jenkins) and requirement management in JIRA
* Exposure to Agile project methodology, as well as other methodologies (such as Kanban)
* Understanding of data modelling using relational and non-relational techniques
* Coordination between global teams
* Experience debugging code issues and publishing the highlighted differences to the development team/architects
* Nice to have: ELK experience. Knowledge of cloud computing technology such as Google Cloud Platform (GCP)
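
To ground the Hive/Spark SQL items above, here is a minimal Spark UDF sketch in Scala; the raw.accounts table and normalize_code function are hypothetical, and a reachable Hive metastore is assumed.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.udf

    // Registers a small UDF and calls it from Spark SQL against Hive.
    object HiveUdfSketch {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("udf-sketch")
          .enableHiveSupport() // assumes a Hive metastore is configured
          .getOrCreate()
        // Hypothetical cleanup of semi-structured account codes.
        val normalize = udf((s: String) => Option(s).map(_.trim.toUpperCase).orNull)
        spark.udf.register("normalize_code", normalize)
        spark.sql(
          """SELECT normalize_code(code) AS code, COUNT(*) AS n
            |FROM raw.accounts
            |GROUP BY normalize_code(code)""".stripMargin).show()
        spark.stop()
      }
    }
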
What you'll get in return


The client is offering a 6 month engagement, with a high likelihood of extension and a very competitive rate for the contract.

What you need to do now
If you're interested in this role, click 'apply now' to forward an up-to-date copy of your CV, or call us now.
Viome
  • Santa Clara, CA

Viome is a wellness as a service company that applies artificial intelligence and machine learning to biological data – e.g., microbiome, transcriptome and metabolome data – to provide personalized recommendations for healthy living. We are an interdisciplinary and passionate team of experts in biochemistry, microbiology, medicine, artificial intelligence and machine learning.

We are seeking an experienced and energetic individual for a Senior Backend Software Engineer position who can work onsite at our Santa Clara, CA, office.

Responsibilities:



  • Collaborate with AI/ML and Bioinformatics engineers to gather specific requirements to design innovative solutions

  • Support the entire application lifecycle (concept, design, test, release, and support)

  • Produce fully functional web applications

  • Develop REST APIs to support queries from web clients (a minimal endpoint sketch follows this list)

  • Design database schemas to support applications

  • Develop Web Applications for internal scientists and operation teams

  • Architect distributed solutions to help applications scale

  • Write unit and UI tests to identify malfunctions

  • Troubleshoot and debug to optimize performance
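
Since the qualifications below call out Akka/Play, a minimal Akka HTTP endpoint in Scala might look like the sketch here; the /health route, host, and port are illustrative assumptions rather than Viome's actual API.

    import akka.actor.ActorSystem
    import akka.http.scaladsl.Http
    import akka.http.scaladsl.server.Directives._

    // Minimal REST endpoint: GET /health returns "ok".
    object ApiSketch {
      def main(args: Array[String]): Unit = {
        implicit val system: ActorSystem = ActorSystem("api-sketch")
        val route = path("health") { get { complete("ok") } }
        Http().newServerAt("localhost", 8080).bind(route) // illustrative port
        println("listening on http://localhost:8080/health")
      }
    }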



Qualifications:



  • BS/MS degree in Computer Science or related field with 7+ years of relevant experience

  • Strong skills in Scala, Core Java, Spring Boot, microservices, Akka/Play Framework, HTML, CSS, JavaScript, React, Redux, and PostgreSQL

  • Advanced knowledge of build scripts and tools such as Ansible, Jenkins, CircleCI, Apache Maven, Gradle, AWS CodeDeploy and similar

  • Proven experience with relational database design, object-oriented programming principles, event-driven design principles, and distributed processing design principles

  • Solid understanding of information security standards and methodologies

  • Familiarity with cloud technologies such as AWS and distributed processing technologies such as MapReduce, Hadoop and Spark is a big plus

  • Previous experience scaling large backends (1M+ users) is a big plus



Viome is a 130+ person start-up offering a successful commercial product that has generated high demand. With offices in California, New Mexico, New York, and Washington, we are looking to hire team members capable of working in dynamic environments, who have a positive attitude and enjoy collaboration. If you have the skills and are excited about Viome’s mission, we’d love to hear from you.

Acxiom
  • Austin, TX
As a Senior Hadoop Administrator, you will assist leadership on projects related to Big Data technologies and provide software development support for client research projects. You will analyze the latest Big Data analytic technologies and their innovative applications in both business intelligence analysis and new service offerings, and you will bring these insights and best practices to Acxiom's Big Data projects. You are able to benchmark systems, analyze system bottlenecks, and propose solutions to eliminate them. You will develop a highly scalable and extensible Big Data platform that enables collection, storage, modeling, and analysis of massive data sets from numerous channels. You are also a self-starter able to continuously evaluate new technologies, innovate, and deliver solutions for business-critical applications.


What you will do:


  • Responsible for implementation and ongoing administration of Hadoop infrastructure
  • Provide technical leadership and collaboration with the engineering organization, and develop key deliverables for the Data Platform Strategy - scalability, optimization, operations, availability, roadmap.
  • Lead the platform architecture and drive it to the next level of effectiveness to support current and future requirements
  • Cluster maintenance, including the creation and removal of nodes, using tools like Cloudera Manager Enterprise
  • Performance tuning of Hadoop clusters and Hadoop MapReduce routines
  • Screen Hadoop cluster job performance and plan capacity (see the metrics sketch after this list)
  • Help optimize and integrate new infrastructure via continuous integration methodologies (DevOps, Chef)
  • Manage and review Hadoop log files with the help of log management technologies (ELK)
  • Provide top-level technical help desk support for the application developers
  • Diligently team with the infrastructure, network, database, application and business intelligence teams to guarantee high data quality, availability and security
  • Collaborate with application teams to perform Hadoop updates, patches, and version upgrades when required
  • Mentor Hadoop engineers and administrators
  • Work with vendor support teams on support tasks
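
One hedged way to screen cluster load, as mentioned in the list above, is to poll the YARN ResourceManager REST API. The host name below is a placeholder; /ws/v1/cluster/metrics is the standard ResourceManager endpoint.

    import scala.io.Source

    // Fetches cluster-wide metrics (apps, memory, containers) from the
    // YARN ResourceManager REST API and prints the raw JSON payload.
    object YarnMetricsSketch {
      def main(args: Array[String]): Unit = {
        val url = "http://resourcemanager.example.com:8088/ws/v1/cluster/metrics"
        val json = Source.fromURL(url).mkString
        println(json) // e.g., feed into a parser or alerting pipeline
      }
    }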


Do you have?


  • Bachelor's degree in related field of study, or equivalent experience
  • 6+ years of Big Data Administration Experience
  • Extensive knowledge of and hands-on experience with Hadoop-based data manipulation/storage technologies such as HDFS, MapReduce, YARN, Spark/Kafka, HBase, Hive, Pig, Impala, R and Sentry/Ranger/Knox
  • Experience in capacity planning, cluster designing and deployment, troubleshooting and performance tuning
  • Experience supporting Data Science teams and Analytics teams on complex code deployment, debugging and performance optimization problems
  • Great operational expertise such as excellent troubleshooting skills, understanding of system's capacity, bottlenecks, core resource utilizations (CPU, OS, Storage, and Networks)
  • Experience in Hadoop cluster migrations or upgrades
  • Strong scripting skills in Perl, Python, shell scripting, and/or Ruby
  • Linux/SAN administration skills and RDBMS/ETL knowledge
  • Good experience with Cloudera, Hortonworks, and/or MapR distributions, along with monitoring/alerting tools (Nagios, Ganglia, Zenoss, Cloudera Manager)
  • Strong problem solving and critical thinking skills
  • Excellent verbal and written communication skills


What will set you apart:


  • Solid understanding and hands-on experience of Big Data on private/public cloud technologies(AWS/GCP/Azure)
  • DevOps experience (CHEF, Puppet and Ansible)
  • Strong knowledge of Java/J2EE and other web technologies

 
USU Gruppe
  • Karlsruhe, Germany

We have exciting tasks for you:



  • Design, implementation, deployment, and operation of cloud-first products for our data science platform

  • You work in a team-oriented way in our self-organized, cross-functional team

  • You use modern technologies and methods, e.g. cloud computing with Kubernetes, Spark, and NoSQL databases; DevOps, continuous integration and deployment; and infrastructure as code




What you bring:



  • A completed degree (e.g. in computer science, business informatics, or physics) or comparable training

  • Confident handling of Linux systems, container solutions (e.g. Docker, Kubernetes), and (No)SQL databases

  • Experience automating and orchestrating the operation of distributed systems, for example with Ansible, Saltstack, and Helm

  • Ideally, you have software engineering experience with Go, Scala, or Python

  • Team spirit and enthusiasm for developing innovative Big Data solutions for Industry 4.0 round out your profile

Webtrekk GmbH
  • Berlin, Germany
Your responsibilities:

In this role, you will set up your own full-fledged research and development team of developers and data science engineers. You will evaluate and choose appropriate technologies and develop products that are powered by Artificial Intelligence and Machine Learning.



  • Fast-paced development of experimental prototypes, POCs and products for our >400 customers

  • Manage fast feedback cycles, adopt learnings and feedback, and ultimately deliver AI-powered products

  • You will develop new components and optimise existing ones, always with an eye on scalability, performance and maintainability

  • Organize and lead team planning meetings and provide advice, clarification and guidance during the execution of sprints

  • Lead your team's technical vision and drive the design and development of new innovative products and services from the technical side

  • Lead discussions with the team and management to define best practices and approaches

  • Set goals, objectives and priorities. Mentor team members and provide guidance through regular performance reviews.




The assets you bring to the team:


  • Hands-on experience in agile software development at all levels, based on a profound technical understanding

  • Relevant experience in managing a team of software developers in an agile environment

  • At least 3 years of hands-on experience developing in frontend technologies like Angular or React

  • Knowledge of backend technologies such as Java, Python or Scala is a big plus

  • Experience with distributed systems based on RESTful services

  • DevOps mentality and practical experience with tools for build and deployment automation (like Maven, Jenkins, Ansible, Docker)

  • Team and project-oriented leader with excellent problem solving and interpersonal skills

  • Excellent communication, coaching and conflict management skills, as well as strong assertiveness

  • Strong analytical capability, discipline, commitment and enthusiasm

  • Fluent in English, German language skills are a big plus




What we offer:


  • Prospects: We are a continuously growing team of experts in the most future-oriented fields of customer intelligence. We deal with real big data scenarios and data from various business models and industries. Apart from interesting tasks, we offer you considerable freedom for your ideas and perspectives for developing your professional and management skills.

  • Team-oriented atmosphere: Our culture embraces integrity, teamwork and innovation. Our employees value the friendly atmosphere, which is the most powerful driver within our company.

  • Goodies: Individual trainings, company tickets, team events, table soccer, fresh fruits and a sunny roof terrace.

  • TechCulture: Work with experienced developers who share the ambition for well-written and clean code. Choose your hardware, OS and IDE. Bring in your own ideas, work with open source and have fun at product demos, hackathons and meetups.