OnlyDataJobs.com

Tinamica, S.L.
  • Madrid, Spain

Do you want to work at an innovative, dynamic company specializing in Big Data, Analytics, Artificial Intelligence, and BI?

Don't miss what WE OFFER:

- Innovative projects where you can contribute and develop your skills.

- Professional stability.

- Free training at one of the best institutions in Madrid.

- Working with great professionals who share a passion for technology.

- Working with first-class clients across different sectors.

- An unbeatable work environment.

- An attractive salary, reviewed as your career develops.

- A mentoring program to accelerate the acquisition of skills and knowledge.

- A permanent contract.

- Staying up to date with the latest in Smart Data (event attendance, meetups, etc.).

- Free health insurance.

- 26 vacation days per year.

- Flexible compensation tailored to your needs (childcare vouchers, restaurant tickets, etc.).

- Modern, fun, and comfortable facilities (foosball/darts, breakfasts...).

WE ARE HIRING DEVELOPERS specializing in BIG DATA (Scala, Spark, Flume, HBase, Kafka...) with at least 2 years of experience on real projects and command of the following technology environment and tools:

- Hadoop ecosystem platforms (Hortonworks/Cloudera).

- Development/programming in Scala.

- Apache Spark/Storm compute systems.

- In-memory databases: NoSQL databases, stream programming.

- Real-time distributed data processing.

- Google components.

- Nice to have: experience with Agile methodologies (Scrum).

If you like what you read, you are passionate about this field, and you want to keep growing and developing in the world of Big Data, Analytics, and BI, this is your opportunity. Don't hesitate any longer and join our team!

OverDrive Inc.
  • Garfield Heights, OH

The Data Integration team at OverDrive provides data for other teams to analyze and build their systems upon. We are plumbers, building a company-wide pipeline of clean, usable data for others to use. We typically don’t analyze the data, but instead we make the data available to others. Your job, if you choose to join us, is to help us build a real-time data platform that connects our applications and makes available a data stream of potentially anything happening in our business.


Why Apply:


We are looking for someone who can help us wire up the next step: creating something almost from the ground up (nearly a greenfield project), moving large data from one team to the next, and coming up with ideas and solutions for how we go about looking at data, using technologies like Kafka, Scala, Clojure, and F#.


About You:



  • You always keep up with the latest in distributed systems. You're extremely depressed each summer when the guy who runs highscalability.com hangs out the "Gone Fishin" sign.

  • You’re humble. Frankly, you’re in a supporting role. You help build infrastructure to deliver and transform data for others. (E.g., someone else gets the glory because of your effort, but you don’t care.)

  • You’re patient. Because nothing works the first time when it comes to moving data around.

  • You hate batch. Real-time is your thing.

  • Scaling services is easy. You realize that the hardest part is scaling your data, and you want to help with that.

  • You think microservices should be event-driven. You prefer autonomous systems over tightly-coupled, time-bound synchronous ones with long chains of dependencies.
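The event-driven preference above can be sketched concretely: instead of a chain of synchronous calls, a producer appends events to a log and autonomous consumers read and react independently. Below is a minimal, hypothetical illustration in plain Python, with an in-memory list standing in for a durable log such as a Kafka topic (all names are invented for the sketch, not OverDrive's actual system):

```python
# A toy event log standing in for a durable, partitioned log (e.g., a Kafka
# topic). Producers append; consumers each track their own offset, so adding
# a new consumer never requires changing the producer -- the decoupling the
# posting describes.

class EventLog:
    def __init__(self):
        self.events = []

    def append(self, event):
        self.events.append(event)

    def read_from(self, offset):
        return self.events[offset:]

log = EventLog()

# Producer: emits business events without knowing who will consume them.
log.append({"type": "checkout", "title": "Dune", "user": 1})
log.append({"type": "checkout", "title": "Emma", "user": 2})

# Consumer A: builds a per-title count (an analytical view).
counts = {}
for e in log.read_from(0):
    counts[e["title"]] = counts.get(e["title"], 0) + 1

# Consumer B, added later, replays the same log from offset 0
# independently -- no coupling to the producer or to consumer A.
audit = [e["user"] for e in log.read_from(0)]

print(counts)  # {'Dune': 1, 'Emma': 1}
print(audit)   # [1, 2]
```

The point of the pattern is the last consumer: it was added after the fact and replayed history without anyone else changing, which is what "autonomous over tightly-coupled" means in practice.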


 Problems You Could Help Solve:



  • Help us come up with solutions for speeding up our processes

  • Help us come up with ideas for making our indexing better

  • Help us create better ways to track all our data

  • If you like to solve problems and use cutting-edge technology, keep reading


 Responsibilities:



  • Implement near real-time ETL-like processes from hundreds of applications and data sources using the Apache Kafka ecosystem of technologies.

  • Design, develop, test, and tune a large-scale ‘stream data platform’ for connecting systems across our business in a decoupled manner.

  • Deliver data in near real-time from transactional data stores into analytical data stores.

  • R&D ways to acquire data and suggest new uses for that data.

  • “Stream processing.” Enable applications to react to, process and transform streams of data between business domains.

  • “Data Integration.” Capture application events and data store changes and pipe to other interested systems.
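The "stream processing" and "data integration" bullets describe one recurring pattern: consume a stream of change events from a transactional store, transform each record, and emit it to downstream systems. A minimal sketch using Python generators follows; the event shapes and names are invented for illustration (in a real deployment this loop would be a Kafka consumer/producer or a Kafka Streams topology):

```python
# Sketch of a stream transform: read change events, filter and reshape each
# one, and yield records for a downstream (analytical) store. Generators keep
# the processing incremental, like a consumer loop over a Kafka topic.

def changes():
    # Stand-in for a change-data-capture feed from a transactional store.
    yield {"op": "insert", "table": "orders", "row": {"id": 1, "total": 20.0}}
    yield {"op": "update", "table": "users",  "row": {"id": 7, "name": "Ada"}}
    yield {"op": "insert", "table": "orders", "row": {"id": 2, "total": 35.5}}

def order_events(stream):
    # Transform step: keep only order inserts and reshape them for analytics.
    for event in stream:
        if event["table"] == "orders" and event["op"] == "insert":
            yield {"order_id": event["row"]["id"], "total": event["row"]["total"]}

# Downstream sink: here just a list; in practice, an analytical data store.
sink = list(order_events(changes()))
print(sink)  # [{'order_id': 1, 'total': 20.0}, {'order_id': 2, 'total': 35.5}]
```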


 Experience/Skills: 



  • Comfortable with functional programming concepts. While we're not writing strictly functional code, experience with languages like Scala, Haskell, or Clojure will make working with streaming data easier.

  • Familiarity with the JVM.  We’re using Scala with a little bit of Java and need to occasionally tweak the performance settings of the JVM itself.

  • Familiarity with C# and the .Net framework is helpful. While we don’t use it day to day, most of our systems run on Windows and .Net.

  • Comfortable working in both Linux and Windows environments. Our team's systems run on Linux, but we interact with many systems running on Windows servers.

  • Shell scripting & common Linux tool skills.

  • Experience with build tools such as Maven, sbt, or rake.

  • Knowledge of distributed systems.

  • Knowledge of, or experience with, Kafka a plus.

  • Knowledge of Event-Driven/Reactive systems.

  • Experience with DevOps practices like Continuous Integration, Continuous Deployment, Build Automation, Server automation and Test Driven Development.


 Things You Dig: 



  • Stream processing tools (Kafka Streams, Storm, Spark, Flink, Google Cloud DataFlow etc.)

  • SQL-based technologies (SQL Server, MySQL, PostgreSQL, etc.)

  • NoSQL technologies (Cassandra, MongoDB, Redis, HBase, etc.)

  • Server automation tools (Ansible, Chef, Puppet, Vagrant, etc.)

  • Distributed Source Control (Mercurial, Git)

  • The Cloud (Azure, Amazon AWS)

  • The ELK Stack (Elasticsearch, Logstash, Kibana)


What’s Next:


As you’ve probably guessed, OverDrive is a place that values individuality and variety. We don’t want you to be like everyone else, we don’t even want you to be like us—we want you to be like you! So, if you're interested in joining the OverDrive team, apply below, and tell us what inspires you about OverDrive and why you think you are perfect for our team.



OverDrive values diversity and is proud to be an equal opportunity employer.

WB Solutions LLC
  • Houston, TX

Role: Hadoop Engineer

Location: Irving, TX

Duration: Long Term Contract

Requirement:

    • The majority of the work is related to Hadoop, not Oracle PL/SQL, ODI, or OBIEE
    • Data science experience building statistical models and the intelligence around them
    • Proven understanding of and related experience with Hadoop, HBase, Hive, Pig, Sqoop, Flume, Map/Reduce, and Apache Spark, as well as Unix, core Java programming, Scala, and shell scripting
    • Hands-on experience with Oozie job scheduling, ZooKeeper, Solr, Elasticsearch, Storm, Logstash, or other similar technologies
    • Solid experience writing SQL and stored procedures and tuning query performance, preferably with Oracle 12c and ODI jobs

Responsibilities:

    • Participate in Agile development on a large Hadoop-based data platform as a member of a distributed team
    • Develop statistical models and build data insights into possible revenue leakage
    • Code programs to load data from diverse data sources into ODI, OBIEE, and Hive structures using Sqoop and other tools.
    • Translate complex functional and technical requirements into detailed design.
    • Analyze vast data stores.
    • Code business logic using Scala on Apache Spark.
    • Create workflows using Oozie.
    • Code and test prototypes.
    • Code to existing frameworks where applicable.
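Several of the items above (Map/Reduce, Scala on Apache Spark) share one programming model: a map phase emits key/value pairs, a shuffle groups them by key, and a reduce phase aggregates each group. A plain-Python word count, the canonical example, sketches the model; Spark's RDD API follows the same shape (flatMap, then reduceByKey), though this snippet uses no Spark at all:

```python
# The classic map/reduce word count in plain Python, illustrating the model
# behind the Hadoop/Spark items above.

from collections import defaultdict

lines = ["big data big plans", "data pipelines"]

# Map: each line -> (word, 1) pairs.
mapped = [(word, 1) for line in lines for word in line.split()]

# Shuffle: group the emitted values by key.
groups = defaultdict(list)
for word, one in mapped:
    groups[word].append(one)

# Reduce: aggregate each group (here, a sum).
counts = {word: sum(ones) for word, ones in groups.items()}

print(counts)  # {'big': 2, 'data': 2, 'plans': 1, 'pipelines': 1}
```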

EDUCATION/CERTIFICATIONS:

    • Bachelor's or Master's in Computer Engineering or Information Technology
    • Oracle (ODI, OBIEE) or Hadoop certification is an added advantage

Appen
  • Sunnyvale, CA

Overview



Artificial Intelligence is transforming almost every industry, and only good training data can produce the best machine learning solutions. The AI world is starving for great training data; however, creating high-quality training data in a scalable way is very challenging, and very few companies do it well. Appen is a global leader in the development of high-quality training datasets for machine learning. We bring over 20 years of experience capturing and enriching a wide variety of data types, including speech, audio, text, image, and video.



The Appen Tech team solves AI data problems by combining the power of humans and technology. This team is responsible for building Appen’s data lake and data products. It is an exciting opportunity to make your own mark in the AI data industry!



Our team is located in the Bay Area, but you will have the opportunity to work with other Appen team members located in our Shanghai and Sydney offices. If you are looking to make a huge impact on the AI world and rise with a leading data company that maintains a start-up culture as we continue to grow, Appen is the place for you.



Responsibilities




  • Build Appen’s data lake, which models and stores all Appen’s data.

  • Build data insights products.

  • Learn and understand AI data needs and build all the training data-related knowledge into our platform.

  • Serve as technical lead for new and existing product initiatives and assist with defining product direction.

  • Define, design, troubleshoot, and debug complex, multi-tier distributed software applications.

  • Work with managers, engineers, product management and Operations team to design and implement application features.

  • Estimate engineering effort, plan execution cycles, and rollout system changes.



Requirements



  • BS, MS, or PhD in Computer Science or related technical discipline (or equivalent).

  • 5+ years of work experience in software development, with at least 2 years in big data development.

  • Strong technical acumen with big data platforms (relational, non-relational, batch, real-time).

  • Experience with Hadoop, Map/Reduce, SQL, Kafka/Storm, and Elasticsearch.

  • Full-stack engineer who can also work on front-end development as well as web services.

  • Excellent understanding of computer science fundamentals, data structures, and algorithms.

  • Excellent problem-solving skills.

  • Proven results-oriented person with a delivery focus in a high velocity, high quality environment.

  • Strong oral and written English communication skills.

Cheetah Digital
  • Cyberjaya, Malaysia

Cheetah Digital is hiring Full Stack C#/.NET Engineers to join its fast-growing innovation engineering team in Kuala Lumpur. The C#/.NET Engineer is responsible for designing, developing, deploying, and supporting Cheetah Digital’s cloud-based platform and solutions used by leading brands in North America, Europe, and Asia. Cheetah’s Marketing Suite Platform processes and analyzes billions of transactions per day on an Apache Hadoop and .NET platform hosted on AWS and Azure. The .NET Engineer will work closely with Cheetah’s product management, quality assurance, operations, and customer success teams on a daily basis.


The ideal candidate will possess a strong technical foundation in C#/.NET, Microsoft SQL Server, APIs, and performance tuning in cloud environments. The ideal candidate should also have an aptitude for quality and a collaborative mindset to learn and contribute while working closely with team members.

RESPONSIBILITIES:



  • Translate business requirements into specifications and detailed designs

  • Develop and support Cheetah Digital’s .NET applications and RESTful web services by writing efficient, maintainable code to meet requirements and adhere to security standards

  • Work through all phases of the software development life cycle, including analysis, design, implementation, testing, deployment, and maintenance

  • Conduct large-scale performance benchmarks and tune the system for high throughput

  • Investigate, analyze and address reported defects in a timely manner


QUALIFICATIONS:



  • Bachelor’s Degree in Computer Science or related field, with a minimum A- GPA, from a top technical university

  • 7+ years of programming experience in C#/.NET or another enterprise, high-scale framework, and a fundamental understanding of core server-side development concepts

  • Proficient in writing and performance tuning complex T-SQL

  • Advanced relational DB experience with Microsoft SQL Server, Oracle, or Postgres

  • Experience building and integrating with web services (REST, SOAP), APIs, JSON, XML

  • Strong knowledge of multi-tier web application design

  • Experience with Hadoop components such as HBase, Spark, Kafka, Hive, and Storm is a huge plus

  • Pass a strict criminal background check and provide strong references


COMMUNICATION SKILLS:



  • Excellent communication skills, both verbal and written

  • Demonstrated ability to collaborate with local and remote teams in different time zones

  • Demonstrated ability to compose clear and concise technical documentation


TECHNICAL QUALIFICATIONS:



  • Languages: C#, C++, Javascript, PowerShell

  • Frameworks/tools: .NET, MVC, WebApi, Git

  • UI: HTML, JS, CSS, JQuery, Angular, React


  • Databases: SQL Server, Oracle, Postgres

Appen
  • Sunnyvale, CA

Overview



Artificial Intelligence is transforming almost every industry, and only good training data can produce the best machine learning solutions. The AI world is starving for great training data; however, creating high-quality training data in a scalable way is very challenging, and very few companies do it well. Appen is the global leader in the development of high-quality training datasets for machine learning. We bring over 20 years of experience capturing and enriching a wide variety of data types, including speech, audio, text, image, and video.



The Appen Tech team solves AI data problems by combining the power of humans and technology. This team is responsible for building Appen’s data lake and data products. It is an exciting opportunity to make your own mark in the AI data industry!



Our team is located in the Bay Area, but you will have the opportunity to work with other Appen team members located in our Shanghai and Sydney offices. If you are looking to make a huge impact on the AI world and rise with a leading data company that maintains a start-up culture as we continue to grow, Appen is the place for you.



Responsibilities




  • Build Appen’s data lake, which models and stores all Appen’s data.

  • Build data insights products.

  • Learn and understand AI data needs and build all the training-data-related knowledge into our platform.

  • Serve as technical lead for new and existing product initiatives and assist with defining product direction.

  • Define, design, troubleshoot, and debug complex, multi-tier distributed software applications.

  • Work with managers, engineers, product management and our Operations team to design and implement application features.

  • Estimate engineering effort, plan execution cycles, and rollout system changes.



Requirements



  • BS, MS, or PhD in Computer Science or related technical discipline (or equivalent).

  • 8+ years of work experience in software development, with at least 5 years in big data development.

  • Strong technical acumen with big data platforms (relational, non-relational, batch, real-time).

  • Experience with Hadoop, Map/Reduce, SQL, Kafka/Storm, and Elasticsearch.

  • Full-stack engineer who can also work on front-end development as well as web services.

  • Excellent understanding of computer science fundamentals, data structures, and algorithms.

  • Excellent problem-solving skills.

  • Proven results-oriented person with a delivery focus in a high velocity, high quality environment.

  • Strong oral and written English communication skills.

Royal Group LLC
  • Abu Dhabi, United Arab Emirates

We are looking for a Lead Enterprise Architect to be a part of our core Aviation Project Team to work on both Airline and Airport Projects.  You would be responsible for defining the vision for the Aviation Enterprise Architecture Framework and aligning this with corporate Business and IT strategies across both projects. 


In this role, you are responsible for creating and maintaining the architecture guiding principles and determining the direction of significant transformation. You assess the client's current architectural frameworks and identify where our work can complement and align with theirs.


For the Airport engagement, leading a team of Technical, Solutions, and Business Architects, you direct and ensure the design and development of models that align with the architectural frameworks in place.


For the Airline engagement, which is a more hands-on role, you work closely with the Project Manager to build a first-draft Enterprise Data Model. You are in charge of defining an approach for a Data Maturity Assessment of core enabling Data capabilities.

Requirements


·         Experience in independently handling large projects with specific technology ownership


·         Hands-on experience architecting solutions for distributed, mission-critical systems with an emphasis on real-time integration, high availability, scalability, and performance


·         Excellent working knowledge of enterprise architecture standards and tools


·         Resourceful team player who excels at building trusting relationships with customers and colleagues


·         Hands-on experience in a programming language: Java, Python, or C++


·         Experience and knowledge of Big Data ecosystems such as Hadoop, Spark, Flink, Hive, Pig, Presto, Storm and Heron


·         Proven experience with SQL databases such as MySQL


·         Experience and knowledge of NoSQL databases such as MongoDB, GDB, Solr, HBase, Cassandra, and Neo4j


·         Excellence in abstraction and logical thinking


·         Strong verbal and written English communication skills


·         Excellent communication, coordination, organizational and collaborative skills

Preferred Skills


Experience in Airline & Airport Industry

TransTech IT Staffing
  • Detroit, MI

The Lead Software Engineer will contribute to architecting, designing, developing, deploying, and supporting end-to-end solutions within our in-house and/or client Big Data infrastructure. This is a full stack engineering role. The applications we create are used across our organization to help internal teams optimize media planning and buying.
 

With responsibilities across multiple clients, this candidate will gain exposure to a variety of strategic business problems and the opportunity to contribute to our team's set of innovative methods for solving them.

Experience:

  • BA/BS in Computer Science or a related degree, and 7+ years of practical experience in the field.
  • 3+ years' experience in Hadoop-related software design, development, and deployment

 Core Skills

  • Design and build tools, applications, and frameworks within Hadoop
  • Programming experience in Python and Java (including any JVM based language, e.g., Scala); comfortable working in *nix environments.
  • Broad experience across the full Hadoop stack, including Hive, Oozie, Spark, Flume, Storm, Kafka, etc.
  • Hadoop platform experience managing user permissions, security, monitoring performance, and troubleshooting issues.
  • Integrate with non-Hadoop platforms such as MS SQL Server, application server, and third-party APIs.
  • Experience developing and deploying data pipelines to support analytics applications.
  • Document and publish Big Data coding standards, architecture specifications, and technology framework
  • Provide technical leadership in establishing best practices in initiating, planning, and executing Big Data application development.
  • Work with external analytical stakeholders (both internal media teams and external clients) to plan, analyze, and execute against client analytics roadmaps.

Functional Role:

    • Excellent at working with cross-functional teams to develop use cases and propose engineering solutions.
    • Ability to rapidly brainstorm and develop testable ideas alone or with teams.
    • Develop analytically oriented solutions
    • Highly effective communicator, both spoken and written
    • Facilitate requirements gathering
    • Contribute in multiple projects as architect and creative consultant
    • Key role in testing and deployment
    • Self-reliant, demonstrated desire to take ownership of projects
    • Ability to prioritize and manage concurrent projects
    • Demonstrate initiative and work independently with minimal supervision

Valued Skills

    • Integrating with other data management platforms like Informatica, Talend, or Alteryx a plus
    • Working knowledge of SQL Server Integration Services a plus
    • Experience developing in R or SAS programming is a plus
    • Applied experience with marketing data and digital media is highly desirable
    • Experience with programmatic media and platforms in particular a plus.
    • Broad knowledge of large-scale distributed systems and client-server architectures a plus.
    • Knowledge of AWS and/or Azure cloud platforms is a plus

Comcast
  • Philadelphia, PA

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.


Summary

Responsible for promoting the use of industry and Company technology standards. Monitors emerging technologies/technology practices for potential use within the Company. Designs and develops updated infrastructure in support of one or more business processes. Helps to ensure a balance between tactical and strategic technology solutions. Considers business problems "end-to-end": including people, process, and technology, both within and outside the enterprise, as part of any design solution. Mentors, reviews code, and verifies that object-oriented design best practices and coding and architectural guidelines are adhered to. Identifies and drives issues through closure. Speaks at conferences and tech meetups about Comcast technologies and assists in finding key technical positions.

This role brings to bear significant cloud experience in the private and public cloud space as well as big data and software engineering. This role will be key in the re-platforming of the CX Personalization program in support of wholesale requirements. This person will engage as part of software delivery teams and contribute to several strategic efforts that drive personalized customer experiences across product usage, support interactions and customer journeys. This role leads the building of real-time big data platforms, machine learning algorithms and data services that enable proactive responses for customers at every critical touch point.

Core Responsibilities

-Enterprise-Level architect for "Big Data" Event processing, analytics, data store, and cloud platforms.

-Enterprise-Level architect for cloud applications and "Platform as a Service" capabilities

-Detailed current-state product and requirement analysis.

-Security Architecture for "Big Data" applications and infrastructure

-Ensures programs are envisioned, designed, developed, and implemented across the enterprise to meet business needs. Interfaces with the enterprise architecture team and other functional areas to ensure that the most efficient solution is designed to meet business needs.

-Ensures solutions are well engineered, operable, maintainable, and delivered on schedule. Develops, documents, and ensures compliance with best practices, including but not limited to: coding standards, object-oriented design, platform- and framework-specific design concerns, and human interface guidelines.

-Tracks and documents requirements for enterprise development projects and enhancements.

-Monitors current and future trends, technology and information that will positively affect organizational projects; applies and integrates emerging technological trends to new and existing systems architecture. Mentors team members in relevant technologies and implementation architecture.

-Contributes to the overall system implementation strategy for the enterprise and participates in appropriate forums, meetings, presentations etc. to meet goals.

-Gathers and understands client needs, finding key areas where technology leverage is possible to improve business processes, defines architectural approaches and develops technology proofs. Communicates technology direction.

-Monitors the project lifecycle from intake through delivery. Ensures the entire solution design is complete and consistent from the start and seeks to remove as much re-work as possible.

-Works with product marketing to define requirements. Develops and communicates system/subsystem architecture. Develops clear system requirements for component subsystems.

-Acts as architectural lead on project.

-Applies new and innovative ideas to old or new problems. Fosters environments that encourage innovation. Contributes to and supports effort to further build intellectual property via patents.

-Consistent exercise of independent judgment and discretion in matters of significance.

-Regular, consistent and punctual attendance. Must be able to work nights and weekends, variable schedule(s) as necessary.

-Other duties and responsibilities as assigned.

Requirements:

-Demonstrated experience with "Platform as a Service" (PaaS) architectures including strategy, architectural patterns and standards, approaches to multi-tenancy, scalability, and security.

-Demonstrated experience with schema and data governance and message metadata stores

-Demonstrated experience with public cloud resources such as AWS.

-Demonstrated experience with cloud automation technologies including Ansible, Terraform, Chef, Puppet, etc.

-Hands-on experience with Data Flow processing engines, such as Apache NiFi and Apache Flink

-Working knowledge / experience with Big Data platforms (Kafka, Hadoop, Storm/Spark, NoSQL, In-memory data grid)

-Working knowledge / experience with Linux, Java, Python.

Education Level

- Bachelor's Degree or Equivalent

Field of Study

- Engineering, Computer Science

Years Experience

2+ years of Software Engineering experience

1+ years in Cloud Infrastructure

Compliance

Comcast is an EEO/AA/Drug Free Workplace.

Disclaimer

The above information has been designed to indicate the general nature and level of work performed by employees in this role. It is not designed to contain or be interpreted as a comprehensive inventory of all duties, responsibilities, and qualifications.

Comcast is an EOE/Veterans/Disabled/LGBT employer


Comcast
  • Philadelphia, PA

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.

Summary

Responsible for promoting the use of industry and Company technology standards. Monitors emerging technologies/technology practices for potential use within the Company. Designs and develops updated infrastructure in support of one or more business processes. Helps to ensure a balance between tactical and strategic technology solutions. Considers business problems "end-to-end": including people, process, and technology, both within and outside the enterprise, as part of any design solution. Mentors, reviews code and verifies that the object-oriented design best practices and that coding and architectural guidelines are adhered to. Identifies and drives issues through closure. Speaks at conferences and tech meetups about Comcast technologies and assists in finding key technical positions.

This role brings to bear significant cloud experience in the private and public cloud space as well as big data and software engineering. This role will be key in the re-platforming of the CX Personalization program in support of wholesale requirements. This person will engage as part of software delivery teams and contribute to several strategic efforts that drive personalized customer experiences across product usage, support interactions and customer journeys. This role leads the building of real-time big data platforms, machine learning algorithms and data services that enable proactive responses for customers at every critical touch point.

Core Responsibilities

-Enterprise-Level architect for "Big Data" Event processing, analytics, data store, and cloud platforms.

-Enterprise-Level architect for cloud applications and "Platform as a Service" capabilities

-Detailed current-state product and requirement analysis.

-Security Architecture for "Big Data" applications and infrastructure

-Ensures programs are envisioned, designed, developed, and implemented across the enterprise to meet business needs. Interfaces with the enterprise architecture team and other functional areas to ensure that most efficient solution is designed to meet business needs.

-Ensures solutions are well engineered, operable, maintainable, and delivered on schedule. Develops, documents, and ensures compliance with best practices including but not limited to the following coding standards, object oriented design, platform and framework specific design concerns and human interface guidelines.

-Tracks and documents requirements for enterprise development projects and enhancements.

-Monitors current and future trends, technology and information that will positively affect organizational projects; applies and integrates emerging technological trends to new and existing systems architecture. Mentors team members in relevant technologies and implementation architecture.

-Contributes to the overall system implementation strategy for the enterprise and participates in appropriate forums, meetings, presentations etc. to meet goals.

-Gathers and understands client needs, finding key areas where technology leverage is possible to improve business processes, defines architectural approaches and develops technology proofs. Communicates technology direction.

-Monitors the project lifecycle from intake through delivery. Ensures the entire solution design is complete and consistent from the start and seeks to remove as much re-work as possible.

-Works with product marketing to define requirements. Develops and communicates system/subsystem architecture. Develops clear system requirements for component subsystems.

-Acts as architectural lead on project.

-Applies new and innovative ideas to old or new problems. Fosters environments that encourage innovation. Contributes to and supports effort to further build intellectual property via patents.

-Consistent exercise of independent judgment and discretion in matters of significance.

-Regular, consistent and punctual attendance. Must be able to work nights and weekends, variable schedule(s) as necessary.

-Other duties and responsibilities as assigned.

Requirements:

-Demonstrated experience with "Platform as a Service" (PaaS) architectures including strategy, architectural patterns and standards, approaches to multi-tenancy, scalability, and security.

-Demonstrated experience with schema and data governance and message metadata stores

-Demonstrated experience with public cloud resources such as AWS.

-Demonstrated experience with cloud automation technologies including Ansible, Terraform, Chef, Puppet, etc.

-Hands-on experience with Data Flow processing engines, such as Apache NiFi and Apache Flink

-Working knowledge / experience with Big Data platforms (Kafka, Hadoop, Storm/Spark, NoSQL, In-memory data grid)

-Working knowledge / experience with Linux, Java, Python.
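The real-time event processing these requirements describe (Kafka/Storm/Spark-style) usually comes down to windowed aggregation over a stream. As a rough illustration only (function and event names are invented, and no streaming framework is used), a tumbling-window counter in plain Python:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs):
    """Group (timestamp, key) events into fixed-size tumbling windows
    and count occurrences of each key per window."""
    windows = defaultdict(lambda: defaultdict(int))
    for ts, key in events:
        window_start = ts - (ts % window_secs)   # align to window boundary
        windows[window_start][key] += 1
    return {w: dict(counts) for w, counts in sorted(windows.items())}

# Example: hypothetical page-view events as (unix_ts, page) pairs
events = [(100, "home"), (103, "home"), (104, "cart"), (161, "home")]
print(tumbling_window_counts(events, 60))
# {60: {'home': 2, 'cart': 1}, 120: {'home': 1}}
```

A real engine adds out-of-order handling, watermarks, and state checkpointing on top of this core idea.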

Education Level

- Bachelor's Degree or Equivalent

Field of Study

- Engineering, Computer Science

Years Experience

3+ years in Software Engineering Experience

1+ years in Cloud Infrastructure

Compliance

Comcast is an EEO/AA/Drug Free Workplace.

Disclaimer

The above information has been designed to indicate the general nature and level of work performed by employees in this role. It is not designed to contain or be interpreted as a comprehensive inventory of all duties, responsibilities and qualifications.

Comcast is an EOE/Veterans/Disabled/LGBT employer

Comcast
  • Philadelphia, PA

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.

Summary

Responsible for promoting the use of industry and Company technology standards. Monitors emerging technologies/technology practices for potential use within the Company. Designs and develops updated infrastructure in support of one or more business processes. Helps to ensure a balance between tactical and strategic technology solutions. Considers business problems "end-to-end": including people, process, and technology, both within and outside the enterprise, as part of any design solution. Mentors, reviews code, and verifies that object-oriented design best practices and coding and architectural guidelines are adhered to. Identifies and drives issues through closure. Speaks at conferences and tech meetups about Comcast technologies and assists in filling key technical positions.

This role brings to bear significant cloud experience in the private and public cloud space as well as big data and software engineering. This role will be key in the re-platforming of the CX Personalization program in support of wholesale requirements. This person will engage as part of software delivery teams and contribute to several strategic efforts that drive personalized customer experiences across product usage, support interactions and customer journeys. This role leads the building of real-time big data platforms, machine learning algorithms and data services that enable proactive responses for customers at every critical touch point.

Core Responsibilities

-Enterprise-Level architect for "Big Data" Event processing, analytics, data store, and cloud platforms.

-Enterprise-Level architect for cloud applications and "Platform as a Service" capabilities

-Detailed current-state product and requirement analysis.

-Security Architecture for "Big Data" applications and infrastructure

-Ensures programs are envisioned, designed, developed, and implemented across the enterprise to meet business needs. Interfaces with the enterprise architecture team and other functional areas to ensure that most efficient solution is designed to meet business needs.

-Ensures solutions are well engineered, operable, maintainable, and delivered on schedule. Develops, documents, and ensures compliance with best practices, including but not limited to: coding standards, object-oriented design, platform- and framework-specific design concerns, and human interface guidelines.

-Tracks and documents requirements for enterprise development projects and enhancements.

-Monitors current and future trends, technology and information that will positively affect organizational projects; applies and integrates emerging technological trends to new and existing systems architecture. Mentors team members in relevant technologies and implementation architecture.

-Contributes to the overall system implementation strategy for the enterprise and participates in appropriate forums, meetings, presentations etc. to meet goals.

-Gathers and understands client needs, finding key areas where technology leverage is possible to improve business processes, defines architectural approaches and develops technology proofs. Communicates technology direction.

-Monitors the project lifecycle from intake through delivery. Ensures the entire solution design is complete and consistent from the start and seeks to remove as much re-work as possible.

-Works with product marketing to define requirements. Develops and communicates system/subsystem architecture. Develops clear system requirements for component subsystems.

-Acts as architectural lead on project.

-Applies new and innovative ideas to old or new problems. Fosters environments that encourage innovation. Contributes to and supports effort to further build intellectual property via patents.

-Consistent exercise of independent judgment and discretion in matters of significance.

-Regular, consistent and punctual attendance. Must be able to work nights and weekends, variable schedule(s) as necessary.

-Other duties and responsibilities as assigned.

Requirements:

-Demonstrated experience with "Platform as a Service" (PaaS) architectures including strategy, architectural patterns and standards, approaches to multi-tenancy, scalability, and security.

-Demonstrated experience with schema and data governance and message metadata stores

-Demonstrated experience with public cloud resources such as AWS.

-Demonstrated experience with cloud automation technologies including Ansible, Terraform, Chef, Puppet, etc.

-Hands-on experience with Data Flow processing engines, such as Apache NiFi

-Working knowledge / experience with Big Data platforms (Kafka, Hadoop, Storm/Spark, NoSQL, In-memory data grid)

-Working knowledge / experience with Linux, Java, Python.
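The schema and data-governance requirement above is, at its core, about validating messages against versioned schemas before they enter the pipeline. A minimal in-memory sketch under that assumption (the subject names and fields are hypothetical, not any real registry API):

```python
# Minimal in-memory schema registry: maps (subject, version) -> required fields.
# Subjects and field names are illustrative only.
REGISTRY = {
    ("page_view", 1): {"user_id", "url", "ts"},
    ("page_view", 2): {"user_id", "url", "ts", "device"},
}

def validate(subject, version, message):
    """Return True if the message carries every field the schema requires."""
    required = REGISTRY.get((subject, version))
    if required is None:
        raise KeyError(f"unknown schema {subject} v{version}")
    return required.issubset(message)   # dict iteration yields its keys

msg = {"user_id": 7, "url": "/cart", "ts": 1700000000}
print(validate("page_view", 1, msg))  # True
print(validate("page_view", 2, msg))  # False: missing 'device'
```

Production systems (e.g. a Confluent-style schema registry) also enforce compatibility rules between versions; this sketch only checks required fields.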

Education Level

- Bachelor's Degree or Equivalent

Field of Study

- Engineering, Computer Science

Years Experience

11+ years in Software Engineering Experience

4+ years in Technical Leadership roles

1+ years in Cloud Infrastructure

Compliance

Comcast is an EEO/AA/Drug Free Workplace.

Disclaimer

The above information has been designed to indicate the general nature and level of work performed by employees in this role. It is not designed to contain or be interpreted as a comprehensive inventory of all duties, responsibilities and qualifications.

Comcast is an EOE/Veterans/Disabled/LGBT employer

Wipro Limited
  • Dallas, TX
  • 7+ years of design/architecture/implementation/consulting experience with Enterprise Data Warehouses (EDW), on-premise and cloud platforms

    • Experience designing the best architecture for EDW/Data Lake on public/private cloud, and selecting the most appropriate tools and techniques to implement cost-effective, optimized solutions

    • Formulate conceptual architectures and communicate architectural vision, goals and design objectives to multiple audiences.

    • Strong expertise in non-functional aspects (performance, HA, scalability, volume, security)

    • Work on architectural review discussions and prepare design solutions that can be submitted for stakeholder approval, then taken up with scrum teams for implementation

    • Hands-on experience with cloud service providers' database migration tools

    • Hands-on experience with data orchestration using cloud service provider tools

    • Solid hands-on experience implementing EDW/Data Lake applications on cloud platforms (AWS S3/EC2/Redshift/EMR/Glue, Snowflake, Azure, Google Cloud)

    • Create architectural principles/blueprints to support business goals, and develop IT frameworks that support EDW applications.

    • Solid hands-on experience with the Big Data technology stack (Hadoop, Spark, Kafka)

    • Relational and NoSQL databases (MongoDB, HBase, Cassandra)

    • Stream-processing systems such as Storm and Spark Streaming

    • One or more programming languages such as Python, Java, Perl

    • Good knowledge and understanding of JSON and RESTful web API services

    • Deep understanding of EDW data modeling using star schema/snowflake schema, data architecture, capacity planning and sizing

    • Effectively evaluate the various tools available in the marketplace (open source and commercial) and suggest the right tools to accomplish the project objectives in terms of documenting the project's requirements

    • Provide periodic feedback to the Competency / Center of Excellence groups on patterns of requirements, use cases, or other insights collected through various forums and pre-sales activities.

    • Work with the tech team to code and implement solutions in production and QA environments for permanent resolutions

    • Excellent presentation skills, with a high degree of comfort speaking with internal and external executives, IT management, and technical teams, including software development groups
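The star-schema modeling mentioned above can be illustrated with a toy fact table and dimension table. This sketch uses Python's stdlib sqlite3, with invented table and column names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One dimension table and one fact table: the classic star-schema shape,
# with the fact table holding a foreign key plus additive measures.
cur.execute("CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("""CREATE TABLE fact_sales (
    sale_id INTEGER PRIMARY KEY,
    product_id INTEGER REFERENCES dim_product(product_id),
    qty INTEGER, amount REAL)""")

cur.executemany("INSERT INTO dim_product VALUES (?, ?)",
                [(1, "widget"), (2, "gadget")])
cur.executemany("INSERT INTO fact_sales VALUES (?, ?, ?, ?)",
                [(10, 1, 2, 19.98), (11, 1, 1, 9.99), (12, 2, 5, 74.95)])

# A typical star-schema query: join fact to dimension, aggregate a measure.
cur.execute("""SELECT p.name, SUM(f.qty)
               FROM fact_sales f JOIN dim_product p USING (product_id)
               GROUP BY p.name ORDER BY p.name""")
print(cur.fetchall())  # [('gadget', 5), ('widget', 3)]
```

In a snowflake variant, the dimension itself would be further normalized into sub-dimension tables.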

SafetyCulture
  • Surry Hills, Australia
  • Salary: A$120k - 140k

The Role



  • Be an integral member of the team responsible for designing, implementing and maintaining a distributed, big-data-capable system with high-quality components (Kafka, EMR + Spark, Akka, etc.).

  • Embrace the challenge of dealing with big data on a daily basis (Kafka, RDS, Redshift, S3, Athena, Hadoop/HBase), perform data ETL, and build tools for proper data ingestion from multiple data sources.

  • Collaborate closely with data infrastructure engineers and data analysts across different teams to find bottlenecks and solve problems

  • Design, implement and maintain the heterogeneous data processing platform to automate the execution and management of data-related jobs and pipelines

  • Implement automated data workflows in collaboration with data analysts; continue to maintain and improve the system in line with growth

  • Collaborate with software engineers on application events, ensuring the right data can be extracted

  • Contribute to resource management for computation and capacity planning

  • Dive deep into code and constantly innovate
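Ingesting from multiple data sources, as in the responsibilities above, usually starts with normalizing differently-shaped records onto one common schema. A hedged sketch of that step (both source formats and field names are made up):

```python
def normalize(record, source):
    """Map differently-shaped records from two hypothetical sources
    onto one common schema: {'user', 'event', 'ts'}."""
    if source == "app_events":          # e.g. a message-queue payload
        return {"user": record["uid"], "event": record["type"],
                "ts": record["ts"]}
    if source == "web_logs":            # e.g. rows landed in object storage
        return {"user": record["user_id"], "event": record["action"],
                "ts": int(record["epoch"])}   # logs carry timestamps as strings
    raise ValueError(f"unknown source: {source}")

rows = [normalize({"uid": 1, "type": "open", "ts": 50}, "app_events"),
        normalize({"user_id": 1, "action": "click", "epoch": "51"}, "web_logs")]
print(rows)
```

Once records share a schema, downstream jobs (joins, aggregations, loads) no longer need to know where each record came from.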


Requirements



  • Experience with AWS data technologies (EC2, EMR, S3, Redshift, ECS, Data Pipeline, etc) and infrastructure.

  • Working knowledge in big data frameworks such as Apache Spark, Kafka, Zookeeper, Hadoop, Flink, Storm, etc

  • Rich experience with Linux and database systems

  • Experience with relational and NoSQL databases, query optimization, and data modelling

  • Familiar with one or more of the following: Scala/Java, SQL, Python, Shell, Golang, R, etc

  • Experience with container technologies (Docker, k8s), Agile development, DevOps and CI tools.

  • Excellent problem-solving skills

  • Excellent verbal and written communication skills 

Signify Health
  • Dallas, TX

Position Overview:

Signify Health is looking for a savvy Data Engineer to join our growing team of deep learning specialists. This position is responsible for evolving and optimizing data and data pipeline architectures, as well as optimizing data flow and collection for cross-functional teams. The Data Engineer will support software developers, database architects, data analysts, and data scientists. The ideal candidate is self-directed, passionate about optimizing data, and comfortable supporting the data wrangling needs of multiple teams, systems and products.

If you enjoy providing expert level IT technical services, including the direction, evaluation, selection, configuration, implementation, and integration of new and existing technologies and tools while working closely with IT team members, data scientists, and data engineers to build our next generation of AI-driven solutions, we will give you the opportunity to grow personally and professionally in a dynamic environment. Our projects are built on cooperation and teamwork, and you will find yourself working together with other talented, passionate and dedicated team members, all working towards a shared goal.

Essential Job Responsibilities:

  • Assemble large, complex data sets that meet functional / non-functional business requirements
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing data models for greater scalability, etc.
  • Leverage Azure for extraction, transformation, and loading of data from a wide variety of data sources in support of AI/ML Initiatives
  • Design and implement high performance data pipelines for distributed systems and data analytics for deep learning teams
  • Create tool-chains for analytics and data scientist team members that assist them in building and optimizing AI workflows
  • Work with data and machine learning experts to strive for greater functionality in our data and model life cycle management capabilities
  • Communicate results and ideas to key decision makers in a concise manner
  • Comply with applicable legal requirements, standards, policies and procedures including, but not limited to, Compliance requirements and HIPAA.


Qualifications:Education/Licensing Requirements:
  • High school diploma or equivalent.
  • Bachelor's degree in Computer Science, Electrical Engineering, Statistics, Informatics, Information Systems, or another quantitative or related field, or equivalent work experience.


Experience Requirements:
  • 5+ years of experience in a Data Engineer role.
  • Experience using the following software/tools preferred:
    • Experience with big data tools: Hadoop, Spark, Kafka, etc.
    • Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
    • Experience with AWS or Azure cloud services.
    • Experience with stream-processing systems: Storm, Spark-Streaming, etc.
    • Experience with object-oriented/functional scripting languages: Python, Java, C#, etc.
  • Strong work ethic; able to work both collaboratively and independently without a lot of direct supervision, with solid problem-solving skills
  • Must have strong communication skills (written and verbal) and good one-on-one interpersonal skills.
  • Advanced working SQL knowledge and experience with relational databases, including query authoring, as well as working familiarity with a variety of databases
  • A successful history of manipulating, processing and extracting value from large disconnected datasets.
  • Working knowledge of message queuing, stream processing, and highly scalable big data stores.
  • 2+ years of experience in data modeling, ETL development, and data warehousing
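The ETL development listed above can be sketched as composable extract → transform → load stages; here as plain Python generators, with stage logic and field names invented purely for illustration:

```python
def extract(raw_lines):
    """Extract: parse raw 'name,score' lines into dicts."""
    for line in raw_lines:
        name, score = line.strip().split(",")
        yield {"name": name, "score": int(score)}

def transform(records, min_score):
    """Transform: filter out low scores and add a derived field."""
    for r in records:
        if r["score"] >= min_score:
            r["passed"] = True
            yield r

def load(records, sink):
    """Load: append to an in-memory sink (stand-in for a warehouse table)."""
    for r in records:
        sink.append(r)
    return sink

sink = load(transform(extract(["ada,90", "bob,40", "eve,75"]), 60), [])
print([r["name"] for r in sink])  # ['ada', 'eve']
```

Because each stage is a generator, records stream through one at a time; nothing is materialized until the load stage, which is the same shape real pipeline frameworks scale out.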
 

Essential Skills:

  • Fluently speak, read, and write English
  • Fantastic motivator and leader of teams with a demonstrated track record of mentoring and developing staff members
  • Strong point of view on who to hire and why
  • Passion for solving complex system and data challenges and desire to thrive in a constantly innovating and changing environment
  • Excellent interpersonal skills, including teamwork and negotiation
  • Excellent leadership skills
  • Superior analytical abilities, problem solving skills, technical judgment, risk assessment abilities and negotiation skills
  • Proven ability to prioritize and multi-task
  • Advanced skills in MS Office

Essential Values:

  • In Leadership: Do what's right, even if it's tough
  • In Collaboration: Leverage our collective genius, be a team
  • In Transparency: Be real
  • In Accountability: Recognize that if it is to be, it's up to me
  • In Passion: Show commitment in heart and mind
  • In Advocacy: Earn trust and business
  • In Quality: Ensure what we do, we do well
Working Conditions:
  • Fast-paced environment
  • Requires working at a desk and use of a telephone and computer
  • Normal sight and hearing ability
  • Use office equipment and machinery effectively
  • Ability to ambulate to various parts of the building
  • Ability to bend, stoop
  • Work effectively with frequent interruptions
  • May require occasional overtime to meet project deadlines
  • Lifting requirements of
Mix.com
  • Phoenix, AZ

Are you interested in scalability & distributed systems? Do you want to help shape a discovery engine powered by cutting-edge technologies and machine learning at scale? If you answered yes to the above questions, Mix's Research and Development is the team for you!


In this role, you'll be part of a small and innovative team comprised of engineers and data scientists working together to understand content by leveraging machine learning and NLP technologies. You will have the opportunity to work on core problems like detection of low quality content or spam, text semantic analysis, video and image processing, content quality assessment and monitoring. Our code operates at massive scale, ingesting, processing and indexing millions of URLs.



Responsibilities

  • Write code to build infrastructure that is capable of scaling based on load
  • Collaborate with researchers and data scientists to integrate innovative Machine Learning and NLP techniques with our serving, cloud and data infrastructure
  • Automate build and deployment process, and setup monitoring and alerting systems
  • Participate in the engineering life-cycle, including writing documentation and conducting code reviews


Required Qualifications

  • Strong knowledge of algorithms, data structures, object oriented programming and distributed systems
  • Fluency in an OO programming language such as Scala (preferred), Java, C, or C++
  • 3+ years demonstrated expertise in stream processing platforms like Apache Flink, Apache Storm and Apache Kafka
  • 2+ years experience with a cloud platform like Amazon Web Services (AWS) or Microsoft Azure
  • 2+ years experience with monitoring frameworks, and analyzing production platforms, UNIX servers and mission critical systems with alerting and self-healing systems
  • Creative thinker and self-starter
  • Strong communication skills


Desired Qualifications

  • Experience with Hadoop, Hive, Spark or other MapReduce solutions
  • Knowledge of statistics or machine learning
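The MapReduce solutions named above (Hadoop, Hive, Spark) share one core idea: map, shuffle by key, then reduce. A single-process sketch of that model, not any framework's API:

```python
from collections import defaultdict
from functools import reduce

def map_phase(docs):
    # Map: emit a (word, 1) pair for every word in every document.
    return [(w, 1) for doc in docs for w in doc.split()]

def shuffle(pairs):
    # Shuffle: group values by key, as the framework does between phases.
    groups = defaultdict(list)
    for k, v in pairs:
        groups[k].append(v)
    return groups

def reduce_phase(groups):
    # Reduce: sum each key's values into a final count.
    return {k: reduce(lambda a, b: a + b, vs) for k, vs in groups.items()}

counts = reduce_phase(shuffle(map_phase(["spam ham spam", "ham eggs"])))
print(counts)  # {'spam': 2, 'ham': 2, 'eggs': 1}
```

The distributed versions differ mainly in where each phase runs and how the shuffle moves data across machines, not in this logical shape.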
Ripple
  • San Francisco, CA
  • Salary: $135k - 185k

Ripple is the world’s only enterprise blockchain solution for global payments. Today the world sends more than $155 trillion* across borders. Yet, the underlying infrastructure is dated and flawed. Ripple connects banks, payment providers, corporates and digital asset exchanges via RippleNet to provide one frictionless experience to send money globally.


Ripple is growing rapidly and we are looking for a results-oriented and passionate Senior Software Engineer, Data to help build and maintain infrastructure and empower the data-driven culture of the company. Ripple’s distributed financial technology outperforms today’s banking infrastructure by driving down costs, increasing processing speeds and delivering end-to-end visibility into payment fees, timing, and delivery.


WHAT YOU’LL DO:



  • Support our externally-facing data APIs and applications built on top of them

  • Build systems and services that abstract the engines and will allow the users to focus on business and application logic via higher-level programming models

  • Build data pipelines and tools to keep pace with the growth of our data and its consumers

  • Identify and analyze requirements and use cases from multiple internal teams (including finance, compliance, analytics, data science, and engineering); work with other technical leads to design solutions for the requirements


WHAT WE’RE LOOKING FOR:



  • Deep experience with distributed systems, distributed data stores, data pipelines, and other tools in cloud services environments (e.g. AWS, GCP)

  • Experience with distributed processing compute engines like Hadoop, Spark, and/or GCP data ecosystems (BigTable, BigQuery, Pub/Sub)

  • Experience with stream processing frameworks such as Kafka, Beam, Storm, Flink, Spark streaming

  • Experience building scalable backend services and data pipelines

  • Proficient in Python, Java, or Go

  • Able to support Node.js in production

  • Familiarity with Unix-like operating systems

  • Experience with database internals, database design, SQL and database programming

  • Familiarity with distributed ledger technology concepts and financial transaction/trading data

  • You have a passion for working with great peers and motivating teams to reach their potential
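Stream-processing systems like Kafka, listed above, center on an append-only log that each consumer reads at its own committed offset. A toy model of that idea only (this is not the Kafka client API; names are invented):

```python
class TopicLog:
    """Append-only log with per-consumer offsets, Kafka-style in miniature."""
    def __init__(self):
        self.records = []
        self.offsets = {}          # consumer name -> next index to read

    def produce(self, record):
        self.records.append(record)

    def consume(self, consumer, max_records=10):
        start = self.offsets.get(consumer, 0)
        batch = self.records[start:start + max_records]
        self.offsets[consumer] = start + len(batch)   # commit the new offset
        return batch

log = TopicLog()
for r in ["pay:10", "pay:25", "refund:5"]:
    log.produce(r)
print(log.consume("billing", 2))   # ['pay:10', 'pay:25']
print(log.consume("billing", 2))   # ['refund:5']
print(log.consume("audit"))        # all three: independent offset
```

Because offsets are tracked per consumer, many downstream teams can read the same stream independently, which is what makes the log a good integration point between services.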

WeWork
  • New York, NY

About the Role:


If you're passionate about building large-scale data processing systems, and you are motivated to make an impact in creating a robust and scalable data platform used by every team, come join us. You will jump into an early-stage team that builds the data transport, collection and orchestration layers. You will help shape the vision and architecture of WeWork's next generation data platform, making it easy for developers to build data-driven products and features. You are responsible for developing a reliable platform that scales with the company's incredible growth. Your efforts will allow accessibility to business and user behavior insights, using huge amounts of WeWork data to fuel several teams such as Analytics, Data Science, Sales, Revenue, Product, Growth and many others, as well as empowering them to depend on each other reliably. You will be a part of an experienced engineering team and work with passionate leaders on challenging distributed systems problems.


About the Team:


Data is at the core of our business, providing insights into the effectiveness of our products and enabling the technology that powers them. We build and operate the platform used by the rest of the company for streaming and batch computation and to train ML models. We're building an ecosystem where consumers and producers of data can depend on each other safely. We strive to build high-quality systems we can be proud to open source and an amazing experience for our users and ourselves. We regard culture and trust highly and are looking forward to welcoming your contribution to the team.


Responsibilities



  • You will build and maintain a high-performance, fault-tolerant, secure, and scalable data platform

  • You will lead development of high leverage projects and capabilities of the platform

  • You will partner with architects and business leaders to design and build robust services using storage layer, streaming and batch data

  • Thinking through long-term impacts of key design decisions and handling failure scenarios

  • Form a holistic understanding of tools, key business concepts (data tables), and the data dependencies and team dependencies

  • You will help drive the storage layer and API feature roadmap, and be responsible for the overall engineering (design, implementation and testing)

  • Building self-service platforms to power WeWork's technology


Requirements



  • 7 - 9+ years of experience

  • Experience and interest in writing production-quality code in Java/Python/Scala

  • Experience shipping several high-quality releases of complex software.

  • When talking about dynamic data infrastructures - RDBMS, Columnar Databases, NoSQL and File-based storage solutions - you know these "under the hood".

  • Experience with query execution optimization (columnar storage, push-downs): Hive, Presto, Parquet

  • You have a strong foundation in algorithms, data structures, and their real-world use cases.

  • You have a strong knowledge of distributed systems concepts and principles (consistency and availability, liveness and safety, durability, reliability, fault-tolerance, consensus algorithms)

  • When thinking about Big Data processing you have a deep understanding of Spark, Hadoop, Storm, Flink or Apache Beam.

  • Strong Experience with one or more of the following technologies:


    • Distributed logging systems (Kafka, Pulsar, Kinesis, etc)

    • IDL: Avro, Protobuf or Thrift

    • MPP databases (Redshift, Vertica, ...)

    • Workflow management (Airflow, Oozie, Azkaban, ...)

    • Cloud storage: S3, GCS, ...
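Workflow managers such as Airflow, Oozie and Azkaban, listed above, run tasks in dependency order, which is a topological sort of a DAG. A stdlib-only sketch with a hypothetical pipeline (Kahn's algorithm, with alphabetical tie-breaking so the order is deterministic):

```python
def topo_order(deps):
    """Order tasks from a {task: set(upstream_tasks)} mapping."""
    deps = {t: set(us) for t, us in deps.items()}
    order = []
    while deps:
        # Tasks whose upstream dependencies are all satisfied are runnable.
        ready = sorted(t for t, us in deps.items() if not us)
        if not ready:
            raise ValueError("cycle detected")
        for t in ready:
            order.append(t)
            del deps[t]
        for us in deps.values():
            us.difference_update(ready)   # mark the ready tasks as done
    return order

# Hypothetical pipeline: extract -> transform -> {load_dw, load_search}
dag = {"extract": set(), "transform": {"extract"},
       "load_dw": {"transform"}, "load_search": {"transform"}}
print(topo_order(dag))  # ['extract', 'transform', 'load_dw', 'load_search']
```

Real schedulers layer retries, scheduling intervals, and parallel execution of each "ready" set on top of exactly this ordering.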



Bonus points:



  • Experience contributing to open source software. Open source is close to our heart.

  • Experience with the following: Cassandra, DynamoDB, RocksDB/LevelDB, Graphite, StatsD, CollectD


About WeWork



  • WeWork Technology is bridging the gap between physical and digital platforms, providing a delightful, flawless & powerful experience for members and employees. We build software and hardware that enables our members to connect with each other and the space around them like never before.

  • We augment our community and culture teams through the tools we build. We believe there's a macro shift toward a new way of working, one focused on a movement towards meaning and purpose. WeWork Technology is proud to be shaping this movement.

  • We are a team of passionate, fearless and collaborative problem-solvers distributed globally with one goal in mind - to humanize technology across the world.

  • We are an equal opportunity employer and value diversity in our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.




Perficient, Inc.
  • Dallas, TX
At Perficient you'll deliver mission-critical technology and business solutions to Fortune 500 companies and some of the most recognized brands on the planet. And you'll do it with cutting-edge technologies, thanks to our close partnerships with the world's biggest vendors. Our network of offices across North America, as well as locations in India and China, will give you the opportunity to spread your wings, too.
We're proud to be publicly recognized as a Top Workplace year after year. This is due, in no small part, to our entrepreneurial attitude and collaborative spirit that sets us apart and keeps our colleagues impassioned, driven, and fulfilled.
Perficient currently has a career opportunity for a Senior MapR Solutions Architect.
Job Overview
One of our large clients has made a strategic decision to move all order management and sales data from their existing EDW into the MapR platform. The focus is fast ingestion and streaming analytics. This is a multiyear roadmap with many components that will piece into a larger Data Management Platform. A Perficient subject matter expert will work with the client team to move this data into the new environment in a fashion that will meet requirements for applications and analytics.
A Senior Solutions Architect is expected to be knowledgeable in two or more technologies within (a given Solutions/Practice area). The Solutions Architect may or may not have a programming background, but will have expert infrastructure architecture, client presales / presentation, team management and thought leadership skills.
You will provide best-fit architectural solutions for one or more projects; you will assist in defining scope and sizing of work; and anchor Proof of Concept developments. You will provide solution architecture for the business problem, platform integration with third party services, designing and developing complex features for clients' business needs. You will collaborate with some of the best talent in the industry to create and implement innovative high quality solutions, participate in Sales and various pursuits focused on our clients' business needs.
You will also contribute in a variety of roles in thought leadership, mentorship, systems analysis, architecture, design, configuration, testing, debugging, and documentation. You will challenge your leading edge solutions, consultative and business skills through the diversity of work in multiple industry domains. This role is considered part of the Business Unit Senior Leadership team and may mentor junior architects and other delivery team members.
Responsibilities
  • Provide vision and leadership to define the core technologies necessary to meet client needs including: development tools and methodologies, package solutions, systems architecture, security techniques, and emerging technologies
  • Hands-on architect with very strong MapR, HBase, and Hive skills
  • Ability to architect and design the data architecture end to end (ingestion to semantic layer), and identify the best ways to export the data to the reporting/analytics layer
  • Recommend best practices and approaches for distributed architecture (doesn't have to be MapR-specific)
  • Most recent project/job should be as the architect of an end-to-end Big Data implementation that is deployed
  • Need to articulate best practices for building frameworks for the data layer (ingestion, curation), aggregation layer, and reporting layer
  • Understand and articulate DW principles on the Hadoop landscape (not just data lake)
  • Performed data model design based on HBase and Hive
  • Background in database design for DW on RDBMS is preferred
  • Ability to look at the end-to-end picture and suggest physical design remediation on Hadoop
  • Ability to design solutions for different use cases
  • Worked with different data formats (Parquet, Avro, JSON, XML, etc.)
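HBase data-model design, called out above, hinges on row-key layout: rows are stored sorted by key, so common queries should become prefix scans. A sketch of that access pattern over a plain sorted list (the composite key scheme is invented for illustration; this is not the HBase API):

```python
from bisect import bisect_left, bisect_right

# HBase stores rows sorted by row key; a composite key like
# "<customer>#<reversed_ts>" makes "orders for a customer" a prefix scan.
rows = sorted([
    ("cust42#9998", {"order": "A-17"}),
    ("cust42#9999", {"order": "A-16"}),
    ("cust43#9999", {"order": "B-01"}),
])

def prefix_scan(rows, prefix):
    """Return the contiguous slice of rows whose keys start with prefix."""
    keys = [k for k, _ in rows]
    lo = bisect_left(keys, prefix)
    hi = bisect_right(keys, prefix + "\xff")   # just past the prefix range
    return rows[lo:hi]

print([k for k, _ in prefix_scan(rows, "cust42#")])
# ['cust42#9998', 'cust42#9999']
```

This is why row-key design comes first in HBase modeling: a key that doesn't front-load the query's selective attribute forces full-table scans instead of the cheap range read shown here.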
Qualifications
  • Apache framework (Kafka, Spark, Hive, HBase)
  • MapR or similar distribution (optional)
  • Java
  • Data formats (Parquet, Avro, JSON, XML, etc.)
  • Microservices
Required Experience
  • At least 10 years of experience designing, architecting, and implementing large-scale data processing/storage/distribution systems
  • At least 3 years of experience working on large projects, including the most recent project on the MapR platform
  • At least 5 years of hands-on administration, configuration management, monitoring, and performance tuning of Hadoop/distributed platforms
  • Experience designing service management, orchestration, monitoring, and management requirements for cloud platforms
  • Hands-on experience with Hadoop, Teradata (or another MPP RDBMS), MapReduce, Hive, Sqoop, Splunk, Storm, Spark, Kafka, and HBase (at least 2 years)
  • Experience with end-to-end solution architecture for data capabilities
  • Experience with ELT/ETL development, patterns, and tooling (Informatica, Talend)
  • Ability to produce high-quality work products under pressure and within deadlines
  • Very strong communication, solutioning, and client-facing skills, especially with non-technical business users
  • At least 5 years working in large multi-vendor environments with multiple teams on a project
  • At least 5 years working with complex Big Data environments
  • 5+ years of experience with Team Foundation Server, JIRA, GitHub, and other code management toolsets
Preferred Skills And Education
Master's degree in Computer Science or a related field
Certification in the Azure platform
Perficient full-time employees receive complete and competitive benefits. We offer a collaborative work environment, competitive compensation, generous work/life opportunities and an outstanding benefits package that includes paid time off plus holidays. In addition, all colleagues are eligible for a number of rewards and recognition programs including billable bonus opportunities. Encouraging a healthy work/life balance and providing our colleagues great benefits are just part of what makes Perficient a great place to work.
More About Perficient
Perficient is the leading digital transformation consulting firm serving Global 2000 and enterprise customers throughout North America. With unparalleled information technology, management consulting and creative capabilities, Perficient and its Perficient Digital agency deliver vision, execution and value with outstanding digital experience, business optimization and industry solutions.
Our work enables clients to improve productivity and competitiveness; grow and strengthen relationships with customers, suppliers and partners; and reduce costs. Perficient's professionals serve clients from a network of offices across North America and offshore locations in India and China. Traded on the Nasdaq Global Select Market, Perficient is a member of the Russell 2000 index and the S&P SmallCap 600 index.
Perficient is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law.
Disclaimer: The above statements are not intended to be a complete statement of job content, rather to act as a guide to the essential functions performed by the employee assigned to this classification. Management retains the discretion to add or change the duties of the position at any time.
Perficient, Inc.
  • Houston, TX
At Perficient you'll deliver mission-critical technology and business solutions to Fortune 500 companies and some of the most recognized brands on the planet. And you'll do it with cutting-edge technologies, thanks to our close partnerships with the world's biggest vendors. Our network of offices across North America, as well as locations in India and China, will give you the opportunity to spread your wings, too.
We're proud to be publicly recognized as a Top Workplace year after year. This is due, in no small part, to our entrepreneurial attitude and collaborative spirit that sets us apart and keeps our colleagues impassioned, driven, and fulfilled.
Perficient currently has a career opportunity for a Senior MapR Solutions Architect.
Job Overview
One of our large clients has made a strategic decision to move all order management and sales data from their existing EDW to the MapR platform. The focus is fast ingestion and streaming analytics. This is a multiyear roadmap with many components that will piece into a larger Data Management Platform. A Perficient subject matter expert will work with the client team to move this data into the new environment in a way that meets the requirements of applications and analytics.
A Senior Solutions Architect is expected to be knowledgeable in two or more technologies within a given solutions/practice area. The Solutions Architect may or may not have a programming background, but will have expert infrastructure architecture, client presales/presentation, team management, and thought leadership skills.
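The "fast ingestion and streaming analytics" goal in the overview typically means windowed aggregation over an event stream. As a hedged sketch only (pure Python standing in for the Spark/Kafka stack the posting names; the window size and event shape are assumptions), a tumbling-window count might look like:

```python
from collections import defaultdict

WINDOW_SECONDS = 60  # assumed tumbling-window size

def tumbling_counts(events):
    """Count events per (window_start, key) for (epoch_seconds, key) pairs.

    Each event falls into exactly one non-overlapping window, which is
    the defining property of a tumbling window.
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % WINDOW_SECONDS)
        counts[(window_start, key)] += 1
    return dict(counts)
```

A real streaming engine would additionally handle late-arriving events and watermarks, which this batch-style sketch ignores.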
You will provide best-fit architectural solutions for one or more projects, assist in defining the scope and sizing of work, and anchor proof-of-concept development. You will provide solution architecture for the business problem, integrate the platform with third-party services, and design and develop complex features for clients' business needs. You will collaborate with some of the best talent in the industry to create and implement innovative, high-quality solutions, and participate in sales pursuits focused on our clients' business needs.
You will also contribute in a variety of roles spanning thought leadership, mentorship, systems analysis, architecture, design, configuration, testing, debugging, and documentation. You will sharpen your leading-edge solution, consultative, and business skills through the diversity of work across multiple industry domains. This role is considered part of the Business Unit Senior Leadership team and may mentor junior architects and other delivery team members.
Responsibilities
  • Provide vision and leadership to define the core technologies necessary to meet client needs including: development tools and methodologies, package solutions, systems architecture, security techniques, and emerging technologies
  • Hands-on architect with very strong MapR, HBase, and Hive skills
  • Ability to architect and design end-to-end data architecture (ingestion through the semantic layer), and identify the best ways to expose the data to the reporting/analytics layer
  • Recommend best practices and approaches for distributed architecture (need not be MapR-specific)
  • Most recent project/role should be as the architect of an end-to-end Big Data implementation that is deployed
  • Able to articulate best practices for building frameworks for the data layer (ingestion, curation), aggregation layer, and reporting layer
  • Understand and articulate DW principles on the Hadoop landscape (not just a data lake)
  • Has performed data model design based on HBase and Hive
  • A background in database design for DW on RDBMS is preferred
  • Ability to assess the end-to-end design and suggest physical design remediation on Hadoop
  • Ability to design solutions for different use cases
  • Worked with different data formats (Parquet, Avro, JSON, XML, etc.)
Qualifications
  • Apache framework (Kafka, Spark, Hive, HBase)
  • MapR or a similar distribution (optional)
  • Java
  • Data formats (Parquet, Avro, JSON, XML, etc.)
  • Microservices
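The data-format qualification above (Parquet, Avro, JSON, XML) in practice means translating the same records between serializations behind one interface. A minimal sketch of that dispatch idea, using only stdlib JSON and CSV as stand-ins (a real pipeline would use libraries such as fastavro or pyarrow, which the posting does not name):

```python
import csv
import io
import json

def to_json(records):
    """Serialize a list of dict records as a JSON array."""
    return json.dumps(records)

def to_csv(records):
    """Serialize the same records as CSV with a header row."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=sorted(records[0]))
    writer.writeheader()
    writer.writerows(records)
    return buf.getvalue()

# A dispatch table per target format keeps callers format-agnostic.
CODECS = {"json": to_json, "csv": to_csv}

def translate(records, fmt):
    """Serialize records to the requested format, or fail loudly."""
    if fmt not in CODECS:
        raise ValueError(f"unsupported format: {fmt}")
    return CODECS[fmt](records)
```

Adding a new format is then a one-line registration in `CODECS` rather than a change to every caller.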
Required Experience
  • At least 10 years of experience designing, architecting, and implementing large-scale data processing/storage/distribution systems
  • At least 3 years of experience working on large projects, including the most recent project on the MapR platform
  • At least 5 years of hands-on administration, configuration management, monitoring, and performance tuning of Hadoop/distributed platforms
  • Experience designing service management, orchestration, monitoring, and management requirements for cloud platforms
  • Hands-on experience with Hadoop, Teradata (or another MPP RDBMS), MapReduce, Hive, Sqoop, Splunk, Storm, Spark, Kafka, and HBase (at least 2 years)
  • Experience with end-to-end solution architecture for data capabilities
  • Experience with ELT/ETL development, patterns, and tooling (Informatica, Talend)
  • Ability to produce high-quality work products under pressure and within deadlines
  • Very strong communication, solutioning, and client-facing skills, especially with non-technical business users
  • At least 5 years working in large multi-vendor environments with multiple teams on a project
  • At least 5 years working with complex Big Data environments
  • 5+ years of experience with Team Foundation Server, JIRA, GitHub, and other code management toolsets
Preferred Skills And Education
Master's degree in Computer Science or a related field
Certification in the Azure platform
Perficient, Inc.
  • Dallas, TX
At Perficient you'll deliver mission-critical technology and business solutions to Fortune 500 companies and some of the most recognized brands on the planet. And you'll do it with cutting-edge technologies, thanks to our close partnerships with the world's biggest vendors. Our network of offices across North America, as well as locations in India and China, will give you the opportunity to spread your wings, too.
We're proud to be publicly recognized as a Top Workplace year after year. This is due, in no small part, to our entrepreneurial attitude and collaborative spirit that sets us apart and keeps our colleagues impassioned, driven, and fulfilled.
Perficient currently has a career opportunity for a Big Data Engineer (Microservices Developer).
Job Overview
One of our large clients has made a strategic decision to move all order management and sales data from their existing EDW to the MapR platform. The focus is fast ingestion and streaming analytics. This is a multiyear roadmap with many components that will piece into a larger Data Management Platform. A Perficient subject matter expert will work with the client team to move this data into the new environment in a way that meets the requirements of applications and analytics. As the lead developer, you will be responsible for microservices development.
Responsibilities
  • Ability to focus on frameworks for DevOps, ingestion, and reading/writing into HDFS
  • Worked with different data formats (Parquet, Avro, JSON, XML, etc.)
  • Worked on containerized solutions (Spring Boot, Docker, Kubernetes)
  • Provide end-to-end vision and hands-on experience with the MapR platform, especially best practices around Hive and HBase
  • Deep expertise in HBase and Hive best practices
  • Translate, load, and present disparate datasets in multiple formats and from multiple sources, including JSON, Avro, text files, Kafka queues, and log data
  • Lead workshops with multiple teams to define data ingestion, validation, transformation, data engineering, and data modeling
  • Performance-tune Hive and HBase jobs with a focus on ingestion
  • Design and develop open-source platform components using Spark, Sqoop, Java, Oozie, Kafka, Python, and other components
  • Lead the technical planning and requirements-gathering phases, including estimating, developing, testing, and managing projects to architect and deliver complex projects
  • Participate in and lead design sessions, demos, prototype sessions, testing, and training workshops with business users and other IT associates
  • Contribute to thought capital by creating executive presentations and architecture documents, and articulate them to executives
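The layered responsibilities above (ingest, curate, aggregate, report) can be sketched as three small stages over plain dict records. This is a hedged pure-Python illustration under assumed field names (`order_id`, `amount`), not the client's actual framework:

```python
import json

def ingest(raw_lines):
    """Data layer: parse incoming JSON lines; quarantine bad lines for audit."""
    good, bad = [], []
    for line in raw_lines:
        try:
            good.append(json.loads(line))
        except ValueError:
            bad.append(line)
    return good, bad

def curate(records, required=("order_id", "amount")):
    """Curated layer: keep records with the required fields, normalize types."""
    return [
        {"order_id": str(r["order_id"]), "amount": float(r["amount"])}
        for r in records
        if all(k in r for k in required)
    ]

def aggregate(records):
    """Aggregation layer: total amount per order, ready for reporting."""
    totals = {}
    for r in records:
        totals[r["order_id"]] = totals.get(r["order_id"], 0.0) + r["amount"]
    return totals
```

Keeping the stages separate is what makes the framework testable: each layer takes and returns plain data, so any stage can be validated in isolation.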
Qualifications
    • Spring, Spring Boot, Hibernate/JPA, Pivotal, Kafka, NoSQL, Hadoop, and containers (Docker)
    • At least 3 years of experience working on large projects, including the most recent project on the MapR platform
    • At least 5 years of hands-on administration, configuration management, monitoring, and performance tuning of Hadoop/distributed platforms
    • Experience designing service management, orchestration, monitoring, and management requirements for cloud platforms
    • Hands-on experience with Hadoop, Teradata (or another MPP RDBMS), MapReduce, Hive, Sqoop, Splunk, Storm, Spark, Kafka, and HBase (at least 2 years)
    • Experience with end-to-end solution architecture for data capabilities
    • Experience with ELT/ETL development, patterns, and tooling (Informatica, Talend)
    • Ability to produce high-quality work products under pressure and within deadlines
    • Very strong communication, solutioning, and client-facing skills, especially with non-technical business users
    • At least 5 years working in large multi-vendor environments with multiple teams on a project
    • At least 5 years working with complex Big Data environments
    • 5+ years of experience with Team Foundation Server, JIRA, GitHub, and other code management toolsets
Preferred Skills And Education
Master's degree in Computer Science or a related field
Certification in the Azure platform