OnlyDataJobs.com

WorldLink US
  • Dallas, TX

Business Analyst

Dallas, TX

Full-time, direct-hire position

Seeking a bright, motivated individual with a wide range of skills and the ability to process large data sets while communicating findings clearly and concisely.

Responsibilities

  • Analyze data from a myriad of sources and generate valuable insights
  • Interface with our sales team and clients to discuss issues related to data availability and customer targeting
  • Execute marketing list processing for mail, email and integrated multi-channel campaigns
  • Assist in development of tools to optimize and automate internal systems and processes
  • Assist in conceptualization and maintenance of business intelligence tools

Requirements

  • Bachelor's degree in math, economics, statistics, or a related quantitative field
  • Ability to thrive when dealing with imperfect, mixed, varied, and inconsistent data from multiple sources
  • A rigorous, disciplined analytical approach, as well as dynamic, abstract problem-solving skills (get to the answer via both inspiration and perspiration)
  • Proven ability to work in a fast-paced environment and to meet changing deadlines/priorities on multiple simultaneous projects
  • Extensive experience writing queries for large, complex data sets in SQL (MySQL, PostgreSQL, Oracle, other SQL/RDBMS); a brief sketch of this kind of query work follows this list
  • High proficiency with Excel (or an alternate spreadsheet application like OpenOffice Calc), including macros, pivot tables, vlookups, charts, and graphs
  • Solid knowledge of statistics and the ability to perform analysis proficiently in R, SAS, or SPSS
  • Strong interpersonal skills as a team leader and team player
  • Self-learning attitude, constantly pushing towards new opportunities, approaches, ideas, and perspectives
  • Bonus points for experience with high-level, dynamic programming languages: Python, Ruby, Perl, Lisp, or PHP
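
To make the SQL requirement above concrete, here is a minimal, illustrative sketch of the kind of aggregation query the role describes. The schema, the sample values, and the use of Python's built-in sqlite3 driver are assumptions chosen for self-containment, not details taken from the posting:

    # Illustrative only: hypothetical campaign-response table and columns.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE campaign_responses (
            campaign TEXT, channel TEXT, responded INTEGER
        );
        INSERT INTO campaign_responses VALUES
            ('spring_mailer', 'mail',  1),
            ('spring_mailer', 'email', 0),
            ('spring_mailer', 'email', 1),
            ('summer_promo',  'mail',  0),
            ('summer_promo',  'email', 1);
    """)

    # Pivot-table-style summary: contact volume and response rate
    # by campaign and channel.
    query = """
        SELECT campaign,
               channel,
               COUNT(*)       AS contacts,
               AVG(responded) AS response_rate
        FROM campaign_responses
        GROUP BY campaign, channel
        ORDER BY campaign, channel;
    """
    for row in conn.execute(query):
        print(row)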

  **No visa sponsorship available

PrimeRevenue
  • Atlanta, GA

ARE YOU READY TO WORK AT PRIMEREVENUE?

Do you want to work for a high-growth FinTech company helping other companies innovate, grow and create jobs? Do you enjoy working within an entrepreneurial environment that is mission-driven, results-driven and community oriented? We're looking for a Director of Data Architecture to continue the impressive development of our data enterprise, exposing our customers to a wealth of insights and predictive analytics. The Director will be responsible for the design, development, and execution of data product initiatives. This individual will be part of a multi-disciplinary team including data architects, BI developers, technical architects, data scientists, engineering, and operational teams for data products.

WHAT YOU GET TO DO:

    • Design, create, deploy and manage our organization's data architecture
    • Develop and own our Data Product Roadmap
    • Map the systems and interfaces used to manage data, set standards for data management, analyze current state and conceive desired future state, and conceive projects needed to close the gap between current state and future goals
    • Provide a standard common business vocabulary, express strategic data requirements, outline high level integrated designs to meet these requirements, and align with enterprise strategy and related business architecture
    • Set data architecture principles, create models of data that enable the implementation of the intended business architecture
    • Create diagrams showing key data entities, and create an inventory of the data needed to implement the architecture vision
    • Drive all phases of data modelling, from conceptualization to database optimization, including SQL development and any database administration
    • Design ETL architecture
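
As a rough illustration of the last point, a toy extract-transform-load flow might look like the sketch below. Every name here (the source layout, the invoices table, the banding rule) is hypothetical; SQLite merely stands in for whatever warehouse target the real architecture would use:

    import csv
    import io
    import sqlite3

    # Extract: read from a source system (an in-memory CSV, for self-containment).
    source = io.StringIO("invoice_id,amount\nA-100,1250.00\nA-101,980.50\n")
    rows = list(csv.DictReader(source))

    # Transform: normalize types and derive a reporting field.
    for r in rows:
        r["amount"] = float(r["amount"])
        r["amount_band"] = "high" if r["amount"] > 1000 else "low"

    # Load: write into the target table (SQLite standing in for the warehouse).
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE invoices (invoice_id TEXT, amount REAL, amount_band TEXT)")
    db.executemany(
        "INSERT INTO invoices VALUES (:invoice_id, :amount, :amount_band)", rows
    )
    print(db.execute("SELECT * FROM invoices").fetchall())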



WHAT ARE WE LOOKING FOR?

  • Bachelor's degree in Computer Science or related discipline
  • Minimum 10 years working in a data products organization
  • Knowledge of relational and dimensional data modelling
  • Knowledge of RDBMS solutions (DB2, Oracle)
  • Excellent SQL skills
  • Experienced in Agile methodologies
  • Deep understanding of Data Management principles
  • Strong oral and written communication skills
  • Strong leadership skills
  • BI tools (Tableau, MicroStrategy)
  • Experience architecting enterprise data lakes in AWS
  • Hands-on experience with AWS and Hadoop frameworks/tools such as Kinesis, Glue, Redshift, Spark, Hive, Pig, etc.
  • Experience with Hadoop distributions on Amazon EMR
  • Previous work with NoSQL databases such as MongoDB, as well as relational stores such as PostgreSQL


WHO ARE YOU?

SMART, HUNGRY, & HUMBLE PERSONALITY IS A MUST!


WORKING AT PRIMEREVENUE BENEFITS:

    • Professional growth within our company
    • Monthly fun TEAM events
    • Generous benefits package
    • Community Service-Oriented Culture
Accenture
  • Detroit, MI
Position: Full time
Travel: 100% travel
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation, where you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions using machine learning, AI, big data and analytics, cloud, mobility, robotics and the industrial internet of things. Your work will redefine the way entire industries work in every corner of the globe.
You'll be part of a team with incredible end-to-end digital transformation capabilities that shares your passion for digital technology and takes pride in making a tangible difference. If you want to contribute on an incredible array of the biggest and most complex projects in the digital space, consider a career with Accenture Digital.
Basic Qualifications
    • Bachelor's degree in Computer Science, Engineering, Technical Science, or 3 years of IT/programming experience.
    • Minimum 2 years of experience designing and implementing large-scale data pipelines for data curation and analysis, operating in production environments, using Spark, PySpark, and Spark SQL with Java, Scala, or Python, on premise or in the cloud (AWS, Google, or Azure); a short PySpark sketch follows the Preferred Skills list below.
    • Minimum 1 year of experience designing and building performant data models at scale using Hadoop, NoSQL, graph, or cloud-native data stores and services.
    • Minimum 1 year of experience designing and building secured Big Data ETL pipelines, using Talend or Informatica Big Data Edition, for data curation and analysis in large-scale, production-deployed solutions.
    • Minimum 6 months of experience designing and building data models to support large-scale BI, analytics, and AI solutions for Big Data.
Preferred Skills
    • Minimum 6 months of experience implementing solutions with Databricks.
    • Experience in machine learning using Python (scikit-learn), Spark ML, H2O, and/or SageMaker.
    • Knowledge of deep learning (CNN, RNN, ANN) using TensorFlow.
    • Knowledge of automated machine learning tools (H2O, DataRobot, Google AutoML).
    • Minimum 2 years designing and implementing large-scale data warehousing and analytics solutions working with RDBMS (e.g., Oracle, Teradata, DB2, Netezza, SAS) and an understanding of the challenges and limitations of these traditional solutions.
    • Minimum 1 year of experience implementing SQL-on-Hadoop solutions using tools like Presto, AtScale, Jethro, and others.
    • Minimum 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for big data platforms, on premise or on the AWS, Google, or Azure clouds.
    • Minimum 1 year of experience re-architecting and rationalizing traditional data warehouses with Hadoop, Spark, or NoSQL technologies, on premise or in transition to the AWS or Google clouds.
    • Experience implementing data preparation technologies such as Paxata, Trifacta, or Tamr for enabling self-service solutions.
    • Minimum 1 year of experience building business data catalogs or data marketplaces on top of a hybrid data platform containing Big Data technologies (e.g., Alation, Informatica, or custom portals).
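
As referenced above, a minimal PySpark curation sketch, under stated assumptions (pyspark is installed locally; the column names and values are invented for illustration, not taken from the posting):

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("curation-sketch").getOrCreate()

    raw = spark.createDataFrame(
        [("2019-01-01", "web", 42.0),
         ("2019-01-01", "store", 17.5),
         ("2019-01-02", "web", 55.0)],
        ["event_date", "channel", "revenue"],
    )

    # Curate: drop invalid rows, then aggregate for downstream analysis.
    curated = (
        raw.filter(F.col("revenue") > 0)
           .groupBy("event_date", "channel")
           .agg(F.sum("revenue").alias("daily_revenue"))
    )
    curated.show()
    spark.stop()
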
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H-1B visa, F-1 visa (OPT), TN visa, or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.

Acxiom
  • Austin, TX
As a Senior Hadoop Administrator, you will assist leadership on projects related to Big Data technologies and provide software development support for client research projects. You will analyze the latest Big Data analytic technologies and their innovative applications in both business intelligence analysis and new service offerings, and bring these insights and best practices to Acxiom's Big Data projects. You are able to benchmark systems, analyze system bottlenecks and propose solutions to eliminate them. You will develop a highly scalable and extensible Big Data platform which enables collection, storage, modeling, and analysis of massive data sets from numerous channels. You are also a self-starter able to continuously evaluate new technologies, innovate and deliver solutions for business-critical applications.


What you will do:


  • Responsible for implementation and ongoing administration of Hadoop infrastructure
  • Provide technical leadership and collaboration with engineering organization, develop key deliverables for Data Platform Strategy - Scalability, optimization, operations, availability, roadmap.
  • Lead the platform architecture and drive it to the next level of effectiveness to support current and future requirements
  • Cluster maintenance as well as creation and removal of nodes using tools like Cloudera Manager Enterprise, etc.
  • Performance tuning of Hadoop clusters and Hadoop MapReduce routines
  • Screen Hadoop cluster job performance and assist with capacity planning (a small monitoring sketch follows this list)
  • Help optimize and integrate new infrastructure via continuous integration methodologies (DevOps, Chef)
  • Review Hadoop log files with the help of log management technologies (ELK)
  • Provide top-level technical help desk support for the application developers
  • Diligently team with the infrastructure, network, database, application and business intelligence teams to guarantee high data quality, availability and security
  • Collaborate with application teams to perform Hadoop updates, patches, and version upgrades when required
  • Mentor Hadoop engineers and administrators
  • Work with vendor support teams on support tasks
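
As a small sketch of the screening work referenced above, a routine capacity check could wrap the standard hdfs CLI. This assumes the hdfs binary is on PATH and that the report's "DFS Used%" line keeps its usual format; the threshold is an invented example:

    import subprocess

    def hdfs_capacity_report():
        """Return the raw output of `hdfs dfsadmin -report`."""
        out = subprocess.run(
            ["hdfs", "dfsadmin", "-report"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout

    def flag_low_capacity(report, threshold_pct=80.0):
        """Warn when the cluster-wide DFS usage exceeds a threshold."""
        for line in report.splitlines():
            if line.startswith("DFS Used%"):
                used = float(line.split(":")[1].strip().rstrip("%"))
                if used > threshold_pct:
                    print(f"WARNING: DFS usage at {used}%")
                return used

    if __name__ == "__main__":
        flag_low_capacity(hdfs_capacity_report())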


Do you have?


  • Bachelor's degree in related field of study, or equivalent experience
  • 6+ years of Big Data Administration Experience
  • Extensive knowledge and hands-on experience of Hadoop-based data manipulation/storage technologies like HDFS, MapReduce, YARN, Spark/Kafka, HBase, Hive, Pig, Impala, R and Sentry/Ranger/Knox
  • Experience in capacity planning, cluster designing and deployment, troubleshooting and performance tuning
  • Experience supporting Data Science teams and Analytics teams on complex code deployment, debugging and performance optimization problems
  • Great operational expertise such as excellent troubleshooting skills, understanding of system's capacity, bottlenecks, core resource utilizations (CPU, OS, Storage, and Networks)
  • Experience in Hadoop cluster migrations or upgrades
  • Strong scripting skills in Perl, Python, shell scripting, and/or Ruby
  • Linux/SAN administration skills and RDBMS/ETL knowledge
  • Good experience with Cloudera, Hortonworks, and/or MapR distributions, along with monitoring/alerting tools (Nagios, Ganglia, Zenoss, Cloudera Manager)
  • Strong problem solving and critical thinking skills
  • Excellent verbal and written communication skills


What will set you apart:


  • Solid understanding and hands-on experience of Big Data on private/public cloud technologies (AWS/GCP/Azure)
  • DevOps experience (Chef, Puppet and Ansible)
  • Strong knowledge of Java/J2EE and other web technologies

 
Accenture
  • Minneapolis, MN
Position: Full time
Travel: 100% travel
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation, where you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions using machine learning, AI, big data and analytics, cloud, mobility, robotics and the industrial internet of things. Your work will redefine the way entire industries work in every corner of the globe.
You'll be part of a team with incredible end-to-end digital transformation capabilities that shares your passion for digital technology and takes pride in making a tangible difference. If you want to contribute on an incredible array of the biggest and most complex projects in the digital space, consider a career with Accenture Digital.
Basic Qualifications
    • Bachelor's degree in Computer Science, Engineering, Technical Science, or 3 years of IT/programming experience.
    • Minimum 2 years of experience designing and implementing large-scale data pipelines for data curation and analysis, operating in production environments, using Spark, PySpark, and Spark SQL with Java, Scala, or Python, on premise or in the cloud (AWS, Google, or Azure).
    • Minimum 1 year of experience designing and building performant data models at scale using Hadoop, NoSQL, graph, or cloud-native data stores and services.
    • Minimum 1 year of experience designing and building secured Big Data ETL pipelines, using Talend or Informatica Big Data Edition, for data curation and analysis in large-scale, production-deployed solutions.
    • Minimum 6 months of experience designing and building data models to support large-scale BI, analytics, and AI solutions for Big Data.
Preferred Skills
    • Minimum 6 months of experience implementing solutions with Databricks.
    • Experience in machine learning using Python (scikit-learn), Spark ML, H2O, and/or SageMaker.
    • Knowledge of deep learning (CNN, RNN, ANN) using TensorFlow.
    • Knowledge of automated machine learning tools (H2O, DataRobot, Google AutoML).
    • Minimum 2 years designing and implementing large-scale data warehousing and analytics solutions working with RDBMS (e.g., Oracle, Teradata, DB2, Netezza, SAS) and an understanding of the challenges and limitations of these traditional solutions.
    • Minimum 1 year of experience implementing SQL-on-Hadoop solutions using tools like Presto, AtScale, Jethro, and others.
    • Minimum 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for big data platforms, on premise or on the AWS, Google, or Azure clouds.
    • Minimum 1 year of experience re-architecting and rationalizing traditional data warehouses with Hadoop, Spark, or NoSQL technologies, on premise or in transition to the AWS or Google clouds.
    • Experience implementing data preparation technologies such as Paxata, Trifacta, or Tamr for enabling self-service solutions.
    • Minimum 1 year of experience building business data catalogs or data marketplaces on top of a hybrid data platform containing Big Data technologies (e.g., Alation, Informatica, or custom portals).
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H-1B visa, F-1 visa (OPT), TN visa, or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.

Accenture
  • Atlanta, GA
Position: Full time
Travel: 100% travel
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation, where you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions using machine learning, AI, big data and analytics, cloud, mobility, robotics and the industrial internet of things. Your work will redefine the way entire industries work in every corner of the globe.
You'll be part of a team with incredible end-to-end digital transformation capabilities that shares your passion for digital technology and takes pride in making a tangible difference. If you want to contribute on an incredible array of the biggest and most complex projects in the digital space, consider a career with Accenture Digital.
Basic Qualifications
    • Bachelor's degree in Computer Science, Engineering, Technical Science, or 3 years of IT/programming experience.
    • Minimum 2 years of experience designing and implementing large-scale data pipelines for data curation and analysis, operating in production environments, using Spark, PySpark, and Spark SQL with Java, Scala, or Python, on premise or in the cloud (AWS, Google, or Azure).
    • Minimum 1 year of experience designing and building performant data models at scale using Hadoop, NoSQL, graph, or cloud-native data stores and services.
    • Minimum 1 year of experience designing and building secured Big Data ETL pipelines, using Talend or Informatica Big Data Edition, for data curation and analysis in large-scale, production-deployed solutions.
    • Minimum 6 months of experience designing and building data models to support large-scale BI, analytics, and AI solutions for Big Data.
Preferred Skills
    • Minimum 6 months of experience implementing solutions with Databricks.
    • Experience in machine learning using Python (scikit-learn), Spark ML, H2O, and/or SageMaker.
    • Knowledge of deep learning (CNN, RNN, ANN) using TensorFlow.
    • Knowledge of automated machine learning tools (H2O, DataRobot, Google AutoML).
    • Minimum 2 years designing and implementing large-scale data warehousing and analytics solutions working with RDBMS (e.g., Oracle, Teradata, DB2, Netezza, SAS) and an understanding of the challenges and limitations of these traditional solutions.
    • Minimum 1 year of experience implementing SQL-on-Hadoop solutions using tools like Presto, AtScale, Jethro, and others.
    • Minimum 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for big data platforms, on premise or on the AWS, Google, or Azure clouds.
    • Minimum 1 year of experience re-architecting and rationalizing traditional data warehouses with Hadoop, Spark, or NoSQL technologies, on premise or in transition to the AWS or Google clouds.
    • Experience implementing data preparation technologies such as Paxata, Trifacta, or Tamr for enabling self-service solutions.
    • Minimum 1 year of experience building business data catalogs or data marketplaces on top of a hybrid data platform containing Big Data technologies (e.g., Alation, Informatica, or custom portals).
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H-1B visa, F-1 visa (OPT), TN visa, or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.

Accenture
  • Philadelphia, PA
Position: Full time
Travel: 100% travel
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation, where you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions using machine learning, AI, big data and analytics, cloud, mobility, robotics and the industrial internet of things. Your work will redefine the way entire industries work in every corner of the globe.
You'll be part of a team with incredible end-to-end digital transformation capabilities that shares your passion for digital technology and takes pride in making a tangible difference. If you want to contribute on an incredible array of the biggest and most complex projects in the digital space, consider a career with Accenture Digital.
Basic Qualifications
    • Bachelor's degree in Computer Science, Engineering, Technical Science, or 3 years of IT/programming experience.
    • Minimum 2 years of experience designing and implementing large-scale data pipelines for data curation and analysis, operating in production environments, using Spark, PySpark, and Spark SQL with Java, Scala, or Python, on premise or in the cloud (AWS, Google, or Azure).
    • Minimum 1 year of experience designing and building performant data models at scale using Hadoop, NoSQL, graph, or cloud-native data stores and services.
    • Minimum 1 year of experience designing and building secured Big Data ETL pipelines, using Talend or Informatica Big Data Edition, for data curation and analysis in large-scale, production-deployed solutions.
    • Minimum 6 months of experience designing and building data models to support large-scale BI, analytics, and AI solutions for Big Data.
Preferred Skills
    • Minimum 6 months of experience implementing solutions with Databricks.
    • Experience in machine learning using Python (scikit-learn), Spark ML, H2O, and/or SageMaker.
    • Knowledge of deep learning (CNN, RNN, ANN) using TensorFlow.
    • Knowledge of automated machine learning tools (H2O, DataRobot, Google AutoML).
    • Minimum 2 years designing and implementing large-scale data warehousing and analytics solutions working with RDBMS (e.g., Oracle, Teradata, DB2, Netezza, SAS) and an understanding of the challenges and limitations of these traditional solutions.
    • Minimum 1 year of experience implementing SQL-on-Hadoop solutions using tools like Presto, AtScale, Jethro, and others.
    • Minimum 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for big data platforms, on premise or on the AWS, Google, or Azure clouds.
    • Minimum 1 year of experience re-architecting and rationalizing traditional data warehouses with Hadoop, Spark, or NoSQL technologies, on premise or in transition to the AWS or Google clouds.
    • Experience implementing data preparation technologies such as Paxata, Trifacta, or Tamr for enabling self-service solutions.
    • Minimum 1 year of experience building business data catalogs or data marketplaces on top of a hybrid data platform containing Big Data technologies (e.g., Alation, Informatica, or custom portals).
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H-1B visa, F-1 visa (OPT), TN visa, or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.

Accenture
  • San Diego, CA
Position: Full time
Travel: 100% travel
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation, where you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions using machine learning, AI, big data and analytics, cloud, mobility, robotics and the industrial internet of things. Your work will redefine the way entire industries work in every corner of the globe.
You'll be part of a team with incredible end-to-end digital transformation capabilities that shares your passion for digital technology and takes pride in making a tangible difference. If you want to contribute on an incredible array of the biggest and most complex projects in the digital space, consider a career with Accenture Digital.
Basic Qualifications
    • Bachelor's degree in Computer Science, Engineering, Technical Science, or 3 years of IT/programming experience.
    • Minimum 2 years of experience designing and implementing large-scale data pipelines for data curation and analysis, operating in production environments, using Spark, PySpark, and Spark SQL with Java, Scala, or Python, on premise or in the cloud (AWS, Google, or Azure).
    • Minimum 1 year of experience designing and building performant data models at scale using Hadoop, NoSQL, graph, or cloud-native data stores and services.
    • Minimum 1 year of experience designing and building secured Big Data ETL pipelines, using Talend or Informatica Big Data Edition, for data curation and analysis in large-scale, production-deployed solutions.
    • Minimum 6 months of experience designing and building data models to support large-scale BI, analytics, and AI solutions for Big Data.
Preferred Skills
    • Minimum 6 months of experience implementing solutions with Databricks.
    • Experience in machine learning using Python (scikit-learn), Spark ML, H2O, and/or SageMaker.
    • Knowledge of deep learning (CNN, RNN, ANN) using TensorFlow.
    • Knowledge of automated machine learning tools (H2O, DataRobot, Google AutoML).
    • Minimum 2 years designing and implementing large-scale data warehousing and analytics solutions working with RDBMS (e.g., Oracle, Teradata, DB2, Netezza, SAS) and an understanding of the challenges and limitations of these traditional solutions.
    • Minimum 1 year of experience implementing SQL-on-Hadoop solutions using tools like Presto, AtScale, Jethro, and others.
    • Minimum 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for big data platforms, on premise or on the AWS, Google, or Azure clouds.
    • Minimum 1 year of experience re-architecting and rationalizing traditional data warehouses with Hadoop, Spark, or NoSQL technologies, on premise or in transition to the AWS or Google clouds.
    • Experience implementing data preparation technologies such as Paxata, Trifacta, or Tamr for enabling self-service solutions.
    • Minimum 1 year of experience building business data catalogs or data marketplaces on top of a hybrid data platform containing Big Data technologies (e.g., Alation, Informatica, or custom portals).
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H-1B visa, F-1 visa (OPT), TN visa, or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.

Vector Consulting, Inc
  • Atlanta, GA
 

Our Government client is looking for an experienced ETL Developer on a renewable contract in Atlanta, GA

Position ETL Developer

The desired candidate will be responsible for design, development, testing, maintenance and support of complex data extract, transformation and load (ETL) programs for an Enterprise Data Warehouse. An understanding of how complex data should be transformed from the source and loaded into the data warehouse is a critical part of this job.

  • Deep hands-on experience with OBIEE RPD and BIP reporting data models and development, for seamless cross-functional and cross-system data reporting
  • Expertise and solid experience in BI tools: OBIEE, Oracle Data Visualization, and Power BI
  • Strong Informatica technical knowledge in design, development and management of complex Informatica mappings, sessions and workflows on Informatica Designer Components -Source Analyzer, Warehouse Designer, Mapping Designer & Mapplet Designer, Transformation Developer, Workflow Manager and Workflow Monitor.
  • Strong programming skills, relational database skills with expertise in Advanced SQL and PL/SQL, indexing and query tuning
  • Experience implementing advanced analytical models in Python or R
  • Experienced in Business Intelligence and Data warehousing concepts and methodologies.
  • Extensive experience in data analysis and root cause analysis and proven problem solving and analytical thinking capabilities.
  • Analytical capabilities to slice and dice data and display data in reports for best user experience.
  • Demonstrated ability to review business processes and translate into BI reporting and analysis solutions.
  • Ability to follow Software Development Lifecycle (SDLC) process and should be able to work under any project management methodologies used.
  • Ability to follow best practices and standards.
  • Ability to identify BI application performance bottlenecks and tune.
  • Ability to work quickly and accurately under pressure and project time constraints
  • Ability to prioritize workload and work with minimal supervision
  • Basic understanding of software engineering principles and skills working on Unix/Linux/Windows Operating systems, Version Control and Office software
  • Exposure to data modeling using Star/Snowflake schema design, data marts, relational and dimensional data modeling, slowly changing dimensions, fact and dimension tables, physical and logical data modeling, and big data technologies (a short slowly-changing-dimension sketch follows this list)
  • Experience with Big Data Lake / Hadoop implementations
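
For the slowly changing dimensions mentioned above, here is a compact Type 2 illustration in plain SQL (SQLite is used only for portability; the dim_customer table, keys, and dates are hypothetical):

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE dim_customer (
            customer_id TEXT, city TEXT,
            valid_from TEXT, valid_to TEXT, is_current INTEGER
        );
        INSERT INTO dim_customer VALUES
            ('C1', 'Atlanta', '2018-01-01', '9999-12-31', 1);
    """)

    def scd2_update(customer_id, new_city, effective_date):
        # Close out the current version of the row...
        db.execute(
            "UPDATE dim_customer SET valid_to = ?, is_current = 0 "
            "WHERE customer_id = ? AND is_current = 1",
            (effective_date, customer_id),
        )
        # ...then insert the new version as current.
        db.execute(
            "INSERT INTO dim_customer VALUES (?, ?, ?, '9999-12-31', 1)",
            (customer_id, new_city, effective_date),
        )

    scd2_update('C1', 'Dallas', '2019-06-01')
    print(db.execute("SELECT * FROM dim_customer").fetchall())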

 Required Qualifications:

  • A bachelor's degree in Computer Science or a related field
  • 6 to 10 years of experience working with OBIEE / Data Visualization / Informatica / Python
  • Ability to design and develop complex Informatica mappings, sessions, workflows and identify areas of optimizations
  • Experience with Oracle RDBMS 12c
  • Effective communication skills (both oral and written) and the ability to work effectively in a team environment are required
  • Proven ability and desire to mentor/coach others in a team environment
  • Strong analytical, problem solving and presentation skills.

Preferred Qualifications:

  • Working knowledge of Informatica Change Data Capture installed on DB2 z/OS
  • Working knowledge of Informatica Power Exchange
  • Experience with relational, multidimensional and OLAP techniques and technology
  • Experience with OBIEE tools version 10.X
  • Experience with Visualization tools like MS Power BI, Tableau, Oracle DVD
  • Experience with Python building predictive models

Soft Skills:

  • Strong written and oral communication skills in English Language
  • Ability to work with Business and communicate technical solution to solve business problems

About Vector:

Vector Consulting, Inc. (headquartered in Atlanta) is an IT Talent Acquisition Solutions firm committed to delivering results. Since our founding in 1990, we have been partnering with our customers, understanding their business, and developing solutions with a commitment to quality, reliability and value. Our continuing growth has been and continues to be built around successful relationships that are based on our organization's operating philosophy and commitment to People, Partnerships, Purpose and Performance - THE VECTOR WAY.

Avaloq Evolution AG
  • Zürich, Switzerland

The position


Are you passionate about data architecture? Are you interested in shaping the next generation of data science driven products for the financial industry? Do you enjoy working in an agile environment involving multiple stakeholders?

You will be responsible for selecting appropriate technologies from open-source, commercial on-premises, and cloud-based offerings, and for integrating a new generation of tools within the existing environment to ensure access to accurate and current data. You will consider not only the functional requirements, but also the non-functional attributes of platform quality such as security, usability, and stability.

We want you to help us to strengthen and further develop the transformation of Avaloq to a data driven product company. Make analytics scalable and accelerate the process of data science innovation.


Your profile


  • PhD, Master or Bachelor degree in Computer Science, Math, Physics, Engineering, Statistics or other technical field

  • Knowledge of Big Data technologies and architectures (e.g. Hadoop, Spark, data lakes, stream processing)

  • Practical experience with container platforms (OpenShift) and/or containerization software (Kubernetes, Docker)

  • Hands-on experience developing data extraction and transformation pipelines (ETL process)

  • Expert knowledge in RDBMS, NoSQL and Data Warehousing

  • Familiarity with information retrieval software such as Elasticsearch/Lucene/Solr (a brief indexing-and-search sketch follows this list)

  • Firm understanding of major programming/scripting languages like Java/Scala, PHP, Python and/or R, and of Linux

  • High integrity, responsibility and confidentiality a requirement for dealing with sensitive data

  • Strong presentation and communication skills

  • Good planning and organisational skills

  • Collaborative mindset to sharing ideas and finding solutions

  • Fluent in English; German, Italian and French a plus
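
As referenced in the information-retrieval item above, here is a minimal index-and-search sketch against Elasticsearch. It assumes a locally reachable cluster and the v8-style Python client (the request syntax varies across client versions); the index name and documents are invented:

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")

    # Index a document, then refresh so it is searchable immediately.
    es.index(index="documents", id=1,
             document={"title": "Data lake design", "body": "Notes on ingestion."})
    es.indices.refresh(index="documents")

    # Full-text match query on the title field.
    resp = es.search(index="documents", query={"match": {"title": "data"}})
    for hit in resp["hits"]["hits"]:
        print(hit["_score"], hit["_source"]["title"])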





 Professional requirements


  • Be a thought leader for best practices on how to develop and deploy data science products and services

  • Provide an infrastructure to make data driven insights scalable and agile

  • Liaise and coordinate with stakeholders regarding setting up and running a BigData and analytics platform

  • Lead the evaluation of business and technical requirements

  • Support data-driven activities and a data-driven mindset where needed



Main place of work
Zurich

Contact
Avaloq Evolution AG
Anna Drozdowska, Talent Acquisition Professional
Allmendstrasse 140 - 8027 Zürich - Switzerland

www.avaloq.com/en/open-positions

Please only apply online.

Note to Agencies: All unsolicited résumés will be considered direct applicants and no referral fee will be acknowledged.
State Farm
  • Atlanta, GA

WHAT ARE THE DUTIES AND RESPONSIBILITIES OF THIS POSITION?

    • Performs improved visual representation of data to allow clearer communication, viewer engagement and faster/better decision-making
    • Investigates, recommends, and initiates acquisition of new data resources from internal and external sources
    • Works with IT teams to support data collection, integration, and retention requirements based on business need
    • Identifies critical and emerging technologies that will support and extend quantitative analytic capabilities
    • Manages work efforts which require the use of sophisticated project planning techniques
    • Applies a wide application of complex principles, theories and concepts in a specific field to provide solutions to a wide range of difficult problems
    • Develops and maintains an effective network of both scientific and business contacts/knowledge, obtaining relevant information and intelligence around the market and emergent opportunities
    • Contributes data to State Farm's internal and external publications, writes articles for leading journals and participates in academic and industry conferences
    • Collaborates with business subject matter experts to select relevant sources of information
    • Develop breadth of knowledge in programming (R, Python); descriptive, inferential, and experimental design statistics; advanced mathematics; and database functionality (SQL, Hadoop)
    • Develop expertise with multiple machine learning algorithms and data science techniques, such as exploratory data analysis, generative and discriminative predictive modeling, graph theory, recommender systems, text analytics, computer vision, deep learning, optimization and validation
    • Develop expertise with State Farm datasets, data repositories, and data movement processes
    • Assists on projects/requests and may lead specific tasks within the project scope
    • Prepares and manipulates data for use in development of statistical models
    • Develops fundamental understanding of insurance and financial services operations and uses this knowledge in decision making


Additional Details:

For over 95 years, data has been key to State Farm.  As a member of our data science team with the Enterprise Data & Analytics department under our Chief Data & Analytics Officer, you will work across the organization to solve business problems and help achieve business strategies.  You will employ sophisticated, statistical approaches and state of the art technology.  You will build and refine our tools/techniques and engage w/internal stakeholders across the organization to improve our products & services.


Implementing solutions is critical for success. You will do problem identification, solution proposal & presentation to a wide variety of management & technical audiences. This challenging career requires you to work on multiple concurrent projects in a community setting, developing yourself and others, and advancing data science both at State Farm and externally.


Skills & Professional Experience

·        Develop hypotheses, design experiments, and test feasibility of proposed actions to determine probable outcomes using a variety of tools & technologies

·        Master's or other advanced degree, or five years' experience in an analytical field such as data science, quantitative marketing, statistics, operations research, management science, industrial engineering, economics, etc., or equivalent practical experience preferred.

·        Experience with SQL, Python, R, Java, SAS or MapReduce, Spark

·        Experience with unstructured data sets: text analytics, image recognition etc.

·        Experience working w/numerous large data sets/data warehouses & ability to pull from such data sets using relevant programs & coding including files, RDBMS & Hadoop based storage systems

·        Knowledge of machine learning methods including at least one of the following: time series analysis, hierarchical Bayes, or learning techniques such as decision trees, boosting, and random forests (a brief random-forest sketch follows this list).

·        Excellent communication skills and the ability to manage multiple diverse stakeholders across businesses & leadership levels.

·        Exercise sound judgment to diagnose & resolve problems within area of expertise

·        Familiarity with CI/CD development methods, Git and Docker a plus
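
A minimal sketch of one of the learning techniques named above (random forests), using scikit-learn's bundled iris data; this assumes scikit-learn is installed and is illustrative only, not State Farm's methodology:

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0
    )

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")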


Multiple location opportunity. Locations offered are: Atlanta, GA, Bloomington, IL, Dallas, TX and Phoenix, AZ


Remote work option is not available.


There is no sponsorship for an employment visa for the position at this time.


Competencies desired:
Critical Thinking
Leadership
Initiative
Resourcefulness
Relationship Building
State Farm
  • Dallas, TX

WHAT ARE THE DUTIES AND RESPONSIBILITIES OF THIS POSITION?

    • Performs improved visual representation of data to allow clearer communication, viewer engagement and faster/better decision-making
    • Investigates, recommends, and initiates acquisition of new data resources from internal and external sources
    • Works with IT teams to support data collection, integration, and retention requirements based on business need
    • Identifies critical and emerging technologies that will support and extend quantitative analytic capabilities
    • Manages work efforts which require the use of sophisticated project planning techniques
    • Applies a wide application of complex principles, theories and concepts in a specific field to provide solutions to a wide range of difficult problems
    • Develops and maintains an effective network of both scientific and business contacts/knowledge, obtaining relevant information and intelligence around the market and emergent opportunities
    • Contributes data to State Farm's internal and external publications, writes articles for leading journals and participates in academic and industry conferences
    • Collaborates with business subject matter experts to select relevant sources of information
    • Develop breadth of knowledge in programming (R, Python); descriptive, inferential, and experimental design statistics; advanced mathematics; and database functionality (SQL, Hadoop)
    • Develop expertise with multiple machine learning algorithms and data science techniques, such as exploratory data analysis, generative and discriminative predictive modeling, graph theory, recommender systems, text analytics, computer vision, deep learning, optimization and validation
    • Develop expertise with State Farm datasets, data repositories, and data movement processes
    • Assists on projects/requests and may lead specific tasks within the project scope
    • Prepares and manipulates data for use in development of statistical models
    • Develops fundamental understanding of insurance and financial services operations and uses this knowledge in decision making


Additional Details:


For over 95 years, data has been key to State Farm.  As a member of our data science team with the Enterprise Data & Analytics department under our Chief Data & Analytics Officer, you will work across the organization to solve business problems and help achieve business strategies.  You will employ sophisticated, statistical approaches and state of the art technology.  You will build and refine our tools/techniques and engage w/internal stakeholders across the organization to improve our products & services.


Implementing solutions is critical for success. You will do problem identification, solution proposal & presentation to a wide variety of management & technical audiences. This challenging career requires you to work on multiple concurrent projects in a community setting, developing yourself and others, and advancing data science both at State Farm and externally.


Skills & Professional Experience

·        Develop hypotheses, design experiments, and test feasibility of proposed actions to determine probable outcomes using a variety of tools & technologies

·        Master's or other advanced degree, or five years' experience in an analytical field such as data science, quantitative marketing, statistics, operations research, management science, industrial engineering, economics, etc., or equivalent practical experience preferred.

·        Experience with SQL, Python, R, Java, SAS or MapReduce, Spark

·        Experience with unstructured data sets: text analytics, image recognition etc.

·        Experience working w/numerous large data sets/data warehouses & ability to pull from such data sets using relevant programs & coding including files, RDBMS & Hadoop based storage systems

·        Knowledge of machine learning methods including at least one of the following: time series analysis, hierarchical Bayes, or learning techniques such as decision trees, boosting, and random forests.

·        Excellent communication skills and the ability to manage multiple diverse stakeholders across businesses & leadership levels.

·        Exercise sound judgment to diagnose & resolve problems within area of expertise

·        Familiarity with CI/CD development methods, Git and Docker a plus


Multiple location opportunity. Locations offered are: Atlanta, GA, Bloomington, IL, Dallas, TX and Phoenix, AZ


Remote work option is not available.


There is no sponsorship for an employment visa for the position at this time.


Competencies desired:
Critical Thinking
Leadership
Initiative
Resourcefulness
Relationship Building
Ultra Tendency
  • Berlin, Deutschland

Do you love writing high-quality code? Do you enjoy designing algorithms for large-scale Hadoop clusters? Is Spark your daily business? We have new challenges for you!


Your Responsibilities:



  • Solve Big Data problems for our customers in all phases of the project life cycle

  • Build program code, test and deploy to various environments (Cloudera, Hortonworks, etc.)

  • Enjoy being challenged and solve complex data problems on a daily basis

  • Be part of our newly formed team in Berlin and help drive its culture and work attitude


Job Requirements



  • Strong experience developing software using Java or a comparable language

  • At least 2 years of experience with data ingestion, analysis, integration, and design of Big Data applications using Apache open-source technologies

  • Strong background in developing on Linux

  • Solid computer science fundamentals (algorithms, data structures and programming skills in distributed systems)

  • Sound knowledge of SQL, relational concepts and RDBMS systems is a plus

  • Degree in Computer Science (or an equivalent degree) preferred, or comparable years of experience

  • Being able to work in an English-speaking, international environment 


We offer:



  • Fascinating tasks and unique Big Data challenges in various industries

  • Benefit from 10 years of delivering excellence to our customers

  • Work with our open-source Big Data gurus, such as our Apache HBase committer and Release Manager

  • Work in the open-source community and become a contributor

  • Fair pay and bonuses

  • Work with cutting edge equipment and tools

  • Enjoy our additional benefits, such as a free BVG ticket and fresh fruit in the office

Pythian
  • Dallas, TX

Google Cloud Solutions Architect (Pre Sales)

United States | Canada | Remote | Work from Home

Why You?

Are you a US or Canada based Cloud Solutions Architect who likes to operate with a high degree of autonomy and have diverse responsibilities that require strong leadership, deep technology skills and a dedication to customer service? Do you have Big Data and data-centric skills? Do you want to take part in the strategic planning of organizations' data estates with a focus on fulfilling business requirements around cost, scalability and flexibility of the platform? Can you draft technology roadmaps and document best-practice gaps with precise steps of how to get there? Can you implement the details of the backlogs you have helped build? Do you demonstrate consistent best practices and deliver strong customer satisfaction? Do you enjoy pre-sales? Can you demonstrate adoption of new technologies and frameworks through the development of proofs of concept?

If you have a passion for solving complex problems and for pre-sales, then this could be the job for you!

What Will You Be Doing?  

  • Collaborating with and supporting Pythian sales teams in the pre-sales & account management process from the technical perspective, remotely and on-site (approx 75%).
  • Defining solutions for current and future customers that efficiently address their needs. Leading through example and influence, as a master of applying technology solutions to solve business problems.
  • Developing proofs of concept (PoCs) in order to demonstrate feasibility and value to Pythian's customers (approx 25%).
  • Identifying then executing solutions with a commitment to excellent customer service
  • Collaborating with others in refining solutions presented to customers
  • Conducting technical audits of existing architectures (infrastructure, performance, security, scalability and more) and documenting best practices and recommendations
  • Providing component or site-wide performance optimizations and capacity planning
  • Recommending best practices & improvements to current operational processes
  • Communicating status and planning activities to customers and team members
  • Participating in periodic overtime (occasionally on short notice) and travelling up to approx 50%.

What Do We Need From You?

While we realise you might not have everything on the list to be the successful candidate for the Solutions Architect job, you will likely have at least 10 years' experience in a variety of positions in IT. The position requires specialized knowledge and experience in performing the following:

  • Undergraduate degree in computer science, computer engineering, information technology or related field or relevant experience.
  • Systems design experience
  • Understanding and experience with Cloud architectures specifically: Google Cloud Platform (GCP) or Microsoft Azure
  • In-depth knowledge of popular database and data warehouse technologies from Microsoft, Amazon and/or Google (Big Data & conventional RDBMS): Microsoft Azure SQL Data Warehouse, Teradata, Redshift, BigQuery, Snowflake, etc.
  • Fluency in a few languages, preferably Java and Python; familiarity with Scala and Go would be a plus.
  • Proficiency in SQL (experience with Hive and Impala would be great); a brief BigQuery sketch follows this list.
  • Proven ability to work with software engineering teams and understand complex development systems, environments and patterns.
  • Experience presenting to high level executives (VPs, C Suite)
  • This is a North American based opportunity and it is preferred that the candidate live on the West Coast, ideally in San Francisco or the Silicon Valley area but strong candidates may be considered from anywhere in the US or Canada.
  • Ability to travel and work across North America frequently (occasionally on short notice) up to 50% with some international travel also expected.
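
As referenced in the SQL item above, a minimal BigQuery sketch. It assumes the google-cloud-bigquery package and application-default GCP credentials; the query runs against Google's public sample dataset rather than any customer data:

    from google.cloud import bigquery

    client = bigquery.Client()
    sql = """
        SELECT name, SUM(number) AS total
        FROM `bigquery-public-data.usa_names.usa_1910_2013`
        GROUP BY name
        ORDER BY total DESC
        LIMIT 5
    """
    # Submit the query job and iterate over the result rows.
    for row in client.query(sql).result():
        print(row.name, row.total)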

Nice-to-Haves:

  • Experience Architecting Big Data platforms using Apache Hadoop, Cloudera, Hortonworks and MapR distributions.
  • Knowledge of real-time Hadoop query engines like Dremel, Cloudera Impala, Facebook Presto or Berkeley Spark/Shark.
  • Experience with BI platforms, reporting tools, data visualization products, ETL engines.
  • Experience with any MPP (Oracle Exadata/DW, Teradata, Netezza, etc)
  • Understanding of continuous delivery and deployment patterns and tools (Jenkins, Artifactory, Maven, etc)
  • Prior experience working as/with Machine Learning Engineers, Data Engineers, or Data Scientists.
  • A certification such as Google Cloud Professional Cloud Architect, Google Professional Data Engineer or related AWS Certified Solutions Architect / Big Data or Microsoft Azure Architect
  • Experience or strong interest in people management, in a player-coach style of leadership longer term would be great.

What Do You Get in Return?

  • Competitive total rewards package
  • Flexible work environment: Why commute? Work remotely from your home; there's no daily travel requirement to the office!
  • Outstanding people: Collaborate with the industry's top minds.
  • Substantial training allowance: Hone your skills or learn new ones; participate in professional development days, attend conferences, become certified, whatever you like!
  • Amazing time off: Start with a minimum 3 weeks' vacation, 7 sick days, and 2 professional development days!
  • Office allowance: Choose a device and personalise your work environment!
  • Fun, fun, fun: Blog during work hours; take a day off and volunteer for your favorite charity.
IT People Corporation
  • Raleigh, NC

Senior Big Data Platform Architect w/Data Migration- Direct Hire- Raleigh, NC

Want to take your career to the next level and work for a company that truly cares about their employees and the community around them?

We have a great direct-hire career opportunity for a Senior Big Data Platform Architect with Data Migration expertise.

Our client is one of the most revolutionary and trusted resources for IT and information services. They play a vital role in supporting business processes and provide business intelligence that their clients can truly rely upon to increase productivity and achieve better operational efficiency.

With a generous benefits package- our client is one of the best places to work in the area.  They offer:
Competitive Compensation, Annual Review and Bonus, Employee Assistance Program, On-Site Workout Facility, Recreational Activities, Flexible Work Arrangements, Ergonomic Work Stations, Medical Coverage, Dental Coverage, Vision Coverage, 401(k) Retirement Program with matching, 12 paid holidays, Generous allowance for Vacation and Sick Days, Flexible Spending Accounts, Dependent Care, Life Insurance, Short-Term and Long-Term Disability Insurance, and Supplemental Long-Term Disability Insurance.

Position Summary:

The Senior Big Data Platform Architect will provide thought leadership and technical direction for the data engineering team and work with the lead of the advanced analytics capability to develop technical strategies and mature the technical stack toward improving operational outcomes and usability, as well as keeping current with emerging technologies. This person will lead project teams through POC efforts related to new technologies or new uses of existing technologies.

Minimum Requirements

  • Extensive experience troubleshooting issues in complex, distributed systems
  • 5+ years of experience architecting, developing, releasing, and maintaining large-scale enterprise data platforms, both on-premises and in the cloud. 5+ years of experience analyzing data with SQL and implementing large-scale RDBMS. 5+ years of experience designing software for performance, reliability, and scalability.
  • 5+ years of programming proficiency in a subset of Python, R, Java, and Scala.
  • 2+ years of experience building solutions leveraging NoSQL and highly distributed databases such as HBase and Cassandra.
  • 2+ years of experience implementing cloud-based systems (AWS/Azure/GCP)
  • 3+ years of proficiency in configuring and deploying applications on Linux-based systems
  • 5+ years of experience implementing data pipelines in large-scale data analysis systems such as Hadoop or MPP databases. 3+ years of experience with Spark or similar engines. 5+ years of experience in data flow and systems integration. 3+ years of experience operationalizing and integrating analytics models and solutions within products and applications
  • Hands-on experience with platform architecture and solutions design and implementation (5+ years)
  • Deep understanding of algorithms, data structures, performance optimization techniques, and design patterns for building highly scalable Big Data solutions and distributed applications
  • Machine learning experience is a big plus
  • Experience collaborating with business and IT counterparts, as well as summarizing and presenting complex technical architectures and solutions to a wide variety of stakeholders
  • Ability to manage multiple activities in a deadline-oriented environment
  • Superior problem-solving skills
  • Ability to work independently in unstructured environments in a self-directed way, with accuracy and attention to detail. Ability to take a leadership role on engagements and with customers.
  • Strong teamwork skills and the ability to work effectively with multiple internal customers
  • Ability to provide technical expertise to others and explain concepts to technical staff and the leadership team
  • Ability to quickly learn and master new technologies and various business applications
  • Ability to build business acumen and understand the business domain. Experience mentoring other technical resources and leading technical implementations.

Education

Bachelor's degree in Computer Science or an equivalent field and 10+ years of technical experience, or Master's degree in Computer Science or an equivalent field and 7+ years of technical experience

Responsibilities

Provide thought leadership and technical direction for the data engineering team and work with the lead of the advanced analytics capability to develop technical strategies and mature the technical stack toward improving operational outcomes and usability, as well as keeping current with emerging technologies. Lead project teams through POC efforts related to new technologies or new uses of existing technologies.

Responsible for assisting product managers and the analytics teams in translating business requirements into solutions that meet business value objectives and are aligned with best practices and industry standards. Document architectural decisions through the depiction of concepts, relationships, constraints, and operations.

 

Salary range is negotiable and is contingent upon level of expertise and years of experience.


For immediate consideration for this direct hire opportunity, please submit your resume as an attachment to Dianne Lancaster, Technical Recruiter at IT People: dianne.lancaster@itpeoplecorp.com.

NO 3rd parties please!


NO Sponsorship available at this time

Acxiom
  • Austin, TX
As a Hadoop Administrator, you will assist leadership on projects related to Big Data technologies and provide software development support for client research projects. You will analyze the latest Big Data analytic technologies and their innovative applications in both business intelligence analysis and new service offerings, and bring these insights and best practices to Acxiom's Big Data projects. You must be able to benchmark systems, analyze system bottlenecks, and propose solutions to eliminate them. You will develop a highly scalable and extensible Big Data platform that enables collection, storage, modeling, and analysis of massive data sets from numerous channels. You must be a self-starter who continuously evaluates new technologies, innovates, and delivers solutions for business-critical applications.


 

What you will do:


  • Responsible for implementation and ongoing administration of Hadoop infrastructure
  • Provide technical leadership and collaboration with the engineering organization; develop key deliverables for the Data Platform Strategy: scalability, optimization, operations, availability, and roadmap
  • Own the platform architecture and drive it to the next level of effectiveness to support current and future requirements
  • Cluster maintenance, as well as creation and removal of nodes, using tools like Cloudera Manager Enterprise
  • Performance tuning of Hadoop clusters and Hadoop MapReduce routines
  • Screen Hadoop cluster job performance and do capacity planning (see the sketch after this list)
  • Help optimize and integrate new infrastructure via continuous integration methodologies (DevOps, Chef)
  • Manage and review Hadoop log files with the help of log management technologies (ELK)
  • Provide top-level technical help desk support for application developers
  • Diligently team with the infrastructure, network, database, application, and business intelligence teams to guarantee high data quality, availability, and security
  • Collaborate with application teams to perform Hadoop updates, patches, and version upgrades when required
  • Mentor Hadoop engineers and administrators
  • Work with vendor support teams on support tasks
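
As a deliberately minimal illustration of the capacity-planning duties above, the sketch below shells out to the standard `hdfs dfsadmin -report` command and surfaces the capacity lines an administrator watches. It assumes a configured Hadoop client on the PATH and HDFS admin privileges; the filtering keywords are simply the ones that appear in stock report output.

    # Hypothetical helper: summarize cluster capacity from `hdfs dfsadmin`.
    import subprocess

    def hdfs_capacity_summary():
        """Print the capacity-related lines of `hdfs dfsadmin -report`."""
        out = subprocess.run(
            ["hdfs", "dfsadmin", "-report"],
            capture_output=True, text=True, check=True,
        ).stdout
        for line in out.splitlines():
            if any(key in line for key in ("Configured Capacity",
                                           "DFS Used%",
                                           "Live datanodes")):
                print(line.strip())

    if __name__ == "__main__":
        hdfs_capacity_summary()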


Do you have?


  • Bachelor's degree in a related field of study, or equivalent experience
  • 3+ years of Big Data administration experience
  • Extensive knowledge of Hadoop-based data manipulation/storage technologies such as HDFS, MapReduce, YARN, HBase, Hive, Pig, Impala, and Sentry
  • Experience in capacity planning, cluster design and deployment, troubleshooting, and performance tuning
  • Great operational expertise, such as good troubleshooting skills and an understanding of a system's capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networks
  • Experience in Hadoop cluster migrations or upgrades
  • Strong Linux/SAN administration skills and RDBMS/ETL knowledge
  • Good experience with Cloudera/Hortonworks/MapR versions along with monitoring/alerting tools (Nagios, Ganglia, Zenoss, Cloudera Manager)
  • Scripting skills in Perl, Python, shell, and/or Ruby
  • Knowledge of Java/J2EE and other web technologies
  • Understanding of on-premises and cloud network architectures
  • DevOps experience is a great plus (Chef, Puppet, and Ansible)
  • Excellent verbal and written communication skills


 

Perficient, Inc.
  • Dallas, TX
At Perficient you'll deliver mission-critical technology and business solutions to Fortune 500 companies and some of the most recognized brands on the planet. And you'll do it with cutting-edge technologies, thanks to our close partnerships with the world's biggest vendors. Our network of offices across North America, as well as locations in India and China, will give you the opportunity to spread your wings, too.
We're proud to be publicly recognized as a Top Workplace year after year. This is due, in no small part, to our entrepreneurial attitude and collaborative spirit that sets us apart and keeps our colleagues impassioned, driven, and fulfilled.
Perficient currently has a career opportunity for a Senior MapR Solutions Architect.
Job Overview
One of our large clients has made a strategic decision to move all order management and sales data from their existing EDW into the MapR platform. The focus is fast ingestion and streaming analytics. This is a multiyear roadmap with many components that will piece into a larger Data Management Platform. A Perficient subject matter expert will work with the client team to move this data into the new environment in a fashion that meets requirements for applications and analytics.
A Senior Solutions Architect is expected to be knowledgeable in two or more technologies within a given Solutions/Practice area. The Solutions Architect may or may not have a programming background, but will have expert infrastructure architecture, client presales/presentation, team management, and thought leadership skills.
You will provide best-fit architectural solutions for one or more projects; you will assist in defining the scope and sizing of work; and you will anchor Proof of Concept developments. You will provide solution architecture for the business problem, platform integration with third-party services, and design and development of complex features for clients' business needs. You will collaborate with some of the best talent in the industry to create and implement innovative, high-quality solutions, and participate in Sales and various pursuits focused on our clients' business needs.
You will also contribute in a variety of roles in thought leadership, mentorship, systems analysis, architecture, design, configuration, testing, debugging, and documentation. You will challenge your leading-edge solutions, consultative, and business skills through the diversity of work in multiple industry domains. This role is considered part of the Business Unit Senior Leadership team and may mentor junior architects and other delivery team members.
Responsibilities
  • Provide vision and leadership to define the core technologies necessary to meet client needs, including development tools and methodologies, package solutions, systems architecture, security techniques, and emerging technologies
  • Hands-on architect with very strong MapR, HBase, and Hive skills
  • Ability to architect and design end-to-end data architecture (ingestion to semantic layer); identify the best ways to export the data to the reporting/analytic layer
  • Recommend best practices and approaches for distributed architecture (doesn't have to be MapR-specific)
  • Most recent project/job should be as the architect of an end-to-end Big Data implementation that is deployed
  • Able to articulate best practices for building frameworks for the data layer (ingestion, curation), aggregation layer, and reporting layer
  • Understand and articulate DW principles on the Hadoop landscape (not just the data lake)
  • Performed data model design based on HBase and Hive
  • Background in database design for DW on RDBMS is preferred
  • Ability to look at the end-to-end design and suggest physical design remediation on Hadoop
  • Ability to design solutions for different use cases
  • Worked with different data formats (Parquet, Avro, JSON, XML, etc.); a brief multi-format sketch follows this list
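
To ground the multi-format bullet above, here is a minimal PySpark sketch of reading Parquet, JSON, and Avro sources and landing a curated, partitioned layer. The paths, column names, and shared schema are hypothetical, and the Avro read assumes the spark-avro package is on the classpath.

    # Hypothetical example: unify three source formats into a curated layer.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("format-sketch").getOrCreate()

    parquet_df = spark.read.parquet("/data/raw/orders_parquet")
    json_df = spark.read.json("/data/raw/orders_json")
    avro_df = spark.read.format("avro").load("/data/raw/orders_avro")  # needs spark-avro

    # Assumes all three sources share one schema; union by column name and
    # write the curated layer partitioned by date for downstream reporting.
    combined = parquet_df.unionByName(json_df).unionByName(avro_df)
    (combined.write.mode("overwrite")
             .partitionBy("order_date")
             .parquet("/data/curated/orders"))
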
Qualifications
  • Apache framework (Kafka, Spark, Hive, HBase)
  • MapR or a similar distribution (optional)
  • Java
  • Data formats (Parquet, Avro, JSON, XML, etc.)
  • Microservices
Responsibilities
  • 10+ years of experience designing, architecting, and implementing large-scale data processing/data storage/data distribution systems
  • 3+ years of experience working on large projects, including the most recent project on the MapR platform
  • 5+ years of hands-on administration, configuration management, monitoring, and performance tuning of Hadoop/distributed platforms
  • Experience designing service management, orchestration, monitoring, and management requirements of a cloud platform
  • Hands-on experience with Hadoop, Teradata (or other MPP RDBMS), MapReduce, Hive, Sqoop, Splunk, Storm, Spark, Kafka, and HBase (at least 2 years)
  • Experience with end-to-end solution architecture for data capabilities
  • Experience with ELT/ETL development, patterns, and tooling (Informatica, Talend)
  • Ability to produce high-quality work products under pressure and within deadlines, with specific references
  • Very strong communication, solutioning, and client-facing skills, especially with non-technical business users
  • 5+ years of working in large multi-vendor environments with multiple teams and people as part of the project
  • 5+ years of working in a complex Big Data environment
  • 5+ years of experience with Team Foundation Server/JIRA/GitHub and other code management toolsets
Preferred Skills And Education
Master's degree in Computer Science or a related field
Certification in the Azure platform
Perficient full-time employees receive complete and competitive benefits. We offer a collaborative work environment, competitive compensation, generous work/life opportunities and an outstanding benefits package that includes paid time off plus holidays. In addition, all colleagues are eligible for a number of rewards and recognition programs including billable bonus opportunities. Encouraging a healthy work/life balance and providing our colleagues great benefits are just part of what makes Perficient a great place to work.
More About Perficient
Perficient is the leading digital transformation consulting firm serving Global 2000 and enterprise customers throughout North America. With unparalleled information technology, management consulting and creative capabilities, Perficient and its Perficient Digital agency deliver vision, execution and value with outstanding digital experience, business optimization and industry solutions.
Our work enables clients to improve productivity and competitiveness; grow and strengthen relationships with customers, suppliers and partners; and reduce costs. Perficient's professionals serve clients from a network of offices across North America and offshore locations in India and China. Traded on the Nasdaq Global Select Market, Perficient is a member of the Russell 2000 index and the S&P SmallCap 600 index.
Perficient is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law.
Disclaimer: The above statements are not intended to be a complete statement of job content, rather to act as a guide to the essential functions performed by the employee assigned to this classification. Management retains the discretion to add or change the duties of the position at any time.
Select work authorization questions to ask when applicants apply
  • Are you legally authorized to work in the United States?
  • Will you now, or in the future, require sponsorship for employment visa status (e.g. H-1B visa status)?
Perficient, Inc.
  • Houston, TX
At Perficient you'll deliver mission-critical technology and business solutions to Fortune 500 companies and some of the most recognized brands on the planet. And you'll do it with cutting-edge technologies, thanks to our close partnerships with the world's biggest vendors. Our network of offices across North America, as well as locations in India and China, will give you the opportunity to spread your wings, too.
We're proud to be publicly recognized as a Top Workplace year after year. This is due, in no small part, to our entrepreneurial attitude and collaborative spirit that sets us apart and keeps our colleagues impassioned, driven, and fulfilled.
Perficient currently has a career opportunity for a Senior MapR Solutions Architect.
Job Overview
One of our large clients has made a strategic decision to move all order management and sales data from their existing EDW into the MapR platform. The focus is fast ingestion and streaming analytics. This is a multiyear roadmap with many components that will piece into a larger Data Management Platform. A Perficient subject matter expert will work with the client team to move this data into the new environment in a fashion that meets requirements for applications and analytics.
A Senior Solutions Architect is expected to be knowledgeable in two or more technologies within a given Solutions/Practice area. The Solutions Architect may or may not have a programming background, but will have expert infrastructure architecture, client presales/presentation, team management, and thought leadership skills.
You will provide best-fit architectural solutions for one or more projects; you will assist in defining the scope and sizing of work; and you will anchor Proof of Concept developments. You will provide solution architecture for the business problem, platform integration with third-party services, and design and development of complex features for clients' business needs. You will collaborate with some of the best talent in the industry to create and implement innovative, high-quality solutions, and participate in Sales and various pursuits focused on our clients' business needs.
You will also contribute in a variety of roles in thought leadership, mentorship, systems analysis, architecture, design, configuration, testing, debugging, and documentation. You will challenge your leading-edge solutions, consultative, and business skills through the diversity of work in multiple industry domains. This role is considered part of the Business Unit Senior Leadership team and may mentor junior architects and other delivery team members.
Responsibilities
  • Provide vision and leadership to define the core technologies necessary to meet client needs, including development tools and methodologies, package solutions, systems architecture, security techniques, and emerging technologies
  • Hands-on architect with very strong MapR, HBase, and Hive skills
  • Ability to architect and design end-to-end data architecture (ingestion to semantic layer); identify the best ways to export the data to the reporting/analytic layer
  • Recommend best practices and approaches for distributed architecture (doesn't have to be MapR-specific)
  • Most recent project/job should be as the architect of an end-to-end Big Data implementation that is deployed
  • Able to articulate best practices for building frameworks for the data layer (ingestion, curation), aggregation layer, and reporting layer
  • Understand and articulate DW principles on the Hadoop landscape (not just the data lake)
  • Performed data model design based on HBase and Hive (a short row-key sketch follows this list)
  • Background in database design for DW on RDBMS is preferred
  • Ability to look at the end-to-end design and suggest physical design remediation on Hadoop
  • Ability to design solutions for different use cases
  • Worked with different data formats (Parquet, Avro, JSON, XML, etc.)
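
As a small illustration of the HBase/Hive data-model bullet above, the sketch below shows one common HBase design choice: a composite, salted row key with a frequently read and a rarely read column family. It assumes the happybase Python client, a reachable HBase Thrift server, and a pre-created table; every name and value is a hypothetical placeholder.

    # Hypothetical example: write and read one row under a salted row key.
    import happybase

    connection = happybase.Connection("hbase-thrift-host")  # Thrift server
    table = connection.table("orders")  # table and families must already exist

    # salt | customer | time-reversed epoch: spreads hot writes across regions
    row_key = b"03|cust_12345|9998587600"
    table.put(row_key, {
        b"d:status": b"SHIPPED",   # 'd' family: frequently read detail columns
        b"d:amount": b"129.95",
        b"m:source": b"web",       # 'm' family: rarely read metadata
    })

    print(table.row(row_key))
    connection.close()
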
Qualifications
  • Apache framework (Kafka, Spark, Hive, HBase)
  • MapR or a similar distribution (optional)
  • Java
  • Data formats (Parquet, Avro, JSON, XML, etc.)
  • Microservices
Responsibilities
  • 10+ years of experience designing, architecting, and implementing large-scale data processing/data storage/data distribution systems
  • 3+ years of experience working on large projects, including the most recent project on the MapR platform
  • 5+ years of hands-on administration, configuration management, monitoring, and performance tuning of Hadoop/distributed platforms
  • Experience designing service management, orchestration, monitoring, and management requirements of a cloud platform
  • Hands-on experience with Hadoop, Teradata (or other MPP RDBMS), MapReduce, Hive, Sqoop, Splunk, Storm, Spark, Kafka, and HBase (at least 2 years)
  • Experience with end-to-end solution architecture for data capabilities
  • Experience with ELT/ETL development, patterns, and tooling (Informatica, Talend)
  • Ability to produce high-quality work products under pressure and within deadlines, with specific references
  • Very strong communication, solutioning, and client-facing skills, especially with non-technical business users
  • 5+ years of working in large multi-vendor environments with multiple teams and people as part of the project
  • 5+ years of working in a complex Big Data environment
  • 5+ years of experience with Team Foundation Server/JIRA/GitHub and other code management toolsets
Preferred Skills And Education
Master's degree in Computer Science or a related field
Certification in the Azure platform
Perficient full-time employees receive complete and competitive benefits. We offer a collaborative work environment, competitive compensation, generous work/life opportunities and an outstanding benefits package that includes paid time off plus holidays. In addition, all colleagues are eligible for a number of rewards and recognition programs including billable bonus opportunities. Encouraging a healthy work/life balance and providing our colleagues great benefits are just part of what makes Perficient a great place to work.
More About Perficient
Perficient is the leading digital transformation consulting firm serving Global 2000 and enterprise customers throughout North America. With unparalleled information technology, management consulting and creative capabilities, Perficient and its Perficient Digital agency deliver vision, execution and value with outstanding digital experience, business optimization and industry solutions.
Our work enables clients to improve productivity and competitiveness; grow and strengthen relationships with customers, suppliers and partners; and reduce costs. Perficient's professionals serve clients from a network of offices across North America and offshore locations in India and China. Traded on the Nasdaq Global Select Market, Perficient is a member of the Russell 2000 index and the S&P SmallCap 600 index.
Perficient is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law.
Disclaimer: The above statements are not intended to be a complete statement of job content, rather to act as a guide to the essential functions performed by the employee assigned to this classification. Management retains the discretion to add or change the duties of the position at any time.
Select work authorization questions to ask when applicants apply
  • Are you legally authorized to work in the United States?
  • Will you now, or in the future, require sponsorship for employment visa status (e.g. H-1B visa status)?
Perficient, Inc.
  • Dallas, TX
At Perficient you'll deliver mission-critical technology and business solutions to Fortune 500 companies and some of the most recognized brands on the planet. And you'll do it with cutting-edge technologies, thanks to our close partnerships with the world's biggest vendors. Our network of offices across North America, as well as locations in India and China, will give you the opportunity to spread your wings, too.
We're proud to be publicly recognized as a Top Workplace year after year. This is due, in no small part, to our entrepreneurial attitude and collaborative spirit that sets us apart and keeps our colleagues impassioned, driven, and fulfilled.
Perficient currently has a career opportunity for a Big Data Engineer (Microservices Developer).
Job Overview
One of our large clients has made a strategic decision to move all order management and sales data from their existing EDW into the MapR platform. The focus is fast ingestion and streaming analytics. This is a multiyear roadmap with many components that will piece into a larger Data Management Platform. A Perficient subject matter expert will work with the client team to move this data into the new environment in a fashion that meets requirements for applications and analytics. As the lead developer, you will be responsible for microservices development.
Responsibilities
  • Provide end-to-end vision and hands-on experience with the MapR platform, especially best practices around Hive and HBase
  • Should be a rock star in HBase and Hive best practices
  • Ability to focus on frameworks for DevOps, ingestion, and reading/writing into HDFS
  • Worked with different data formats (Parquet, Avro, JSON, XML, etc.)
  • Worked on containerized solutions (Kubernetes, Spring Boot, and Docker)
  • Translate, load, and present disparate data sets in multiple formats and from multiple sources, including JSON, Avro, text files, Kafka queues, and log data (a streaming-ingestion sketch follows this list)
  • Lead workshops with many teams to define data ingestion, validation, transformation, data engineering, and data modeling
  • Performance-tune Hive and HBase jobs with a focus on ingestion
  • Design and develop open-source platform components using Spark, Sqoop, Java, Oozie, Kafka, Python, and other components
  • Lead the technical planning and requirements gathering phases, including estimating, developing, testing, managing, architecting, and delivering complex projects
  • Participate in and lead design sessions, demos and prototype sessions, testing, and training workshops with business users and other IT associates
  • Contribute to thought capital through the creation of executive presentations and architecture documents, and articulate them to executives through presentations
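
To make the Kafka-ingestion duties above concrete, here is a minimal PySpark Structured Streaming sketch that reads a topic and lands raw records for downstream validation and transformation. The broker address, topic, and paths are hypothetical, and the kafka source assumes the spark-sql-kafka package is on the classpath.

    # Hypothetical example: stream a Kafka topic into a landing zone.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

    stream = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")
              .option("subscribe", "orders")
              .load())

    # Kafka delivers binary key/value columns; cast them before landing.
    query = (stream.select(col("key").cast("string"),
                           col("value").cast("string"))
             .writeStream
             .format("parquet")
             .option("path", "/data/landing/orders")
             .option("checkpointLocation", "/data/checkpoints/orders")
             .start())

    query.awaitTermination()
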
Qualifications
  • Spring, Spring Boot, Docker, Hibernate, JPA, Pivotal, Kafka, NoSQL, Hadoop, and containers (Docker)
  • 3+ years of experience working on large projects, including the most recent project on the MapR platform
  • 5+ years of hands-on administration, configuration management, monitoring, and performance tuning of Hadoop/distributed platforms
  • Experience designing service management, orchestration, monitoring, and management requirements of a cloud platform
  • Hands-on experience with Hadoop, Teradata (or other MPP RDBMS), MapReduce, Hive, Sqoop, Splunk, Storm, Spark, Kafka, and HBase (at least 2 years)
  • Experience with end-to-end solution architecture for data capabilities
  • Experience with ELT/ETL development, patterns, and tooling (Informatica, Talend)
  • Ability to produce high-quality work products under pressure and within deadlines, with specific references
  • Very strong communication, solutioning, and client-facing skills, especially with non-technical business users
  • 5+ years of working in large multi-vendor environments with multiple teams and people as part of the project
  • 5+ years of working in a complex Big Data environment
  • 5+ years of experience with Team Foundation Server/JIRA/GitHub and other code management toolsets
Preferred Skills And Education
Master's degree in Computer Science or a related field
Certification in the Azure platform
Perficient full-time employees receive complete and competitive benefits. We offer a collaborative work environment, competitive compensation, generous work/life opportunities and an outstanding benefits package that includes paid time off plus holidays. In addition, all colleagues are eligible for a number of rewards and recognition programs including billable bonus opportunities. Encouraging a healthy work/life balance and providing our colleagues great benefits are just part of what makes Perficient a great place to work.
More About Perficient
Perficient is the leading digital transformation consulting firm serving Global 2000 and enterprise customers throughout North America. With unparalleled information technology, management consulting and creative capabilities, Perficient and its Perficient Digital agency deliver vision, execution and value with outstanding digital experience, business optimization and industry solutions.
Our work enables clients to improve productivity and competitiveness; grow and strengthen relationships with customers, suppliers and partners; and reduce costs. Perficient's professionals serve clients from a network of offices across North America and offshore locations in India and China. Traded on the Nasdaq Global Select Market, Perficient is a member of the Russell 2000 index and the S&P SmallCap 600 index.
Perficient is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law.
Disclaimer: The above statements are not intended to be a complete statement of job content, rather to act as a guide to the essential functions performed by the employee assigned to this classification. Management retains the discretion to add or change the duties of the position at any time.
Select work authorization questions to ask when applicants apply
  • Are you legally authorized to work in the United States?
  • Will you now, or in the future, require sponsorship for employment visa status (e.g. H-1B visa status)?
Perficient, Inc.
  • San Diego, CA
At Perficient you'll deliver mission-critical technology and business solutions to Fortune 500 companies and some of the most recognized brands on the planet. And you'll do it with cutting-edge technologies, thanks to our close partnerships with the world's biggest vendors. Our network of offices across North America, as well as locations in India and China, will give you the opportunity to spread your wings, too.
We're proud to be publicly recognized as a Top Workplace year after year. This is due, in no small part, to our entrepreneurial attitude and collaborative spirit that sets us apart and keeps our colleagues impassioned, driven, and fulfilled.
Perficient currently has a career opportunity for a Senior MapR Solutions Architect.
Job Overview
One of our large clients has made a strategic decision to move all order management and sales data from their existing EDW into the MapR platform. The focus is fast ingestion and streaming analytics. This is a multiyear roadmap with many components that will piece into a larger Data Management Platform. A Perficient subject matter expert will work with the client team to move this data into the new environment in a fashion that meets requirements for applications and analytics.
A Senior Solutions Architect is expected to be knowledgeable in two or more technologies within a given Solutions/Practice area. The Solutions Architect may or may not have a programming background, but will have expert infrastructure architecture, client presales/presentation, team management, and thought leadership skills.
You will provide best-fit architectural solutions for one or more projects; you will assist in defining the scope and sizing of work; and you will anchor Proof of Concept developments. You will provide solution architecture for the business problem, platform integration with third-party services, and design and development of complex features for clients' business needs. You will collaborate with some of the best talent in the industry to create and implement innovative, high-quality solutions, and participate in Sales and various pursuits focused on our clients' business needs.
You will also contribute in a variety of roles in thought leadership, mentorship, systems analysis, architecture, design, configuration, testing, debugging, and documentation. You will challenge your leading-edge solutions, consultative, and business skills through the diversity of work in multiple industry domains. This role is considered part of the Business Unit Senior Leadership team and may mentor junior architects and other delivery team members.
Responsibilities
  • Provide vision and leadership to define the core technologies necessary to meet client needs, including development tools and methodologies, package solutions, systems architecture, security techniques, and emerging technologies
  • Hands-on architect with very strong MapR, HBase, and Hive skills
  • Ability to architect and design end-to-end data architecture (ingestion to semantic layer); identify the best ways to export the data to the reporting/analytic layer
  • Recommend best practices and approaches for distributed architecture (doesn't have to be MapR-specific)
  • Most recent project/job should be as the architect of an end-to-end Big Data implementation that is deployed
  • Able to articulate best practices for building frameworks for the data layer (ingestion, curation), aggregation layer, and reporting layer
  • Understand and articulate DW principles on the Hadoop landscape (not just the data lake); a short DDL sketch follows this list
  • Performed data model design based on HBase and Hive
  • Background in database design for DW on RDBMS is preferred
  • Ability to look at the end-to-end design and suggest physical design remediation on Hadoop
  • Ability to design solutions for different use cases
  • Worked with different data formats (Parquet, Avro, JSON, XML, etc.)
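
As a short, hedged sketch of the DW-on-Hadoop principle named above, the DDL below creates a date-partitioned, columnar fact table through Spark's Hive support. The dw database, table, and columns are hypothetical placeholders.

    # Hypothetical example: a partitioned Parquet fact table for reporting.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .appName("dw-ddl-sketch")
             .enableHiveSupport()   # assumes a configured Hive metastore
             .getOrCreate())

    # Date partitioning keeps reporting scans pruned; Parquet gives the
    # columnar layout the aggregation and semantic layers read from.
    spark.sql("""
        CREATE TABLE IF NOT EXISTS dw.fact_orders (
            order_id    BIGINT,
            customer_id BIGINT,
            amount      DECIMAL(12,2)
        )
        PARTITIONED BY (order_date DATE)
        STORED AS PARQUET
    """)
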
Qualifications
  • Apache framework (Kafka, Spark, Hive, HBase)
  • MapR or a similar distribution (optional)
  • Java
  • Data formats (Parquet, Avro, JSON, XML, etc.)
  • Microservices
Responsibilities
  • 10+ years of experience designing, architecting, and implementing large-scale data processing/data storage/data distribution systems
  • 3+ years of experience working on large projects, including the most recent project on the MapR platform
  • 5+ years of hands-on administration, configuration management, monitoring, and performance tuning of Hadoop/distributed platforms
  • Experience designing service management, orchestration, monitoring, and management requirements of a cloud platform
  • Hands-on experience with Hadoop, Teradata (or other MPP RDBMS), MapReduce, Hive, Sqoop, Splunk, Storm, Spark, Kafka, and HBase (at least 2 years)
  • Experience with end-to-end solution architecture for data capabilities
  • Experience with ELT/ETL development, patterns, and tooling (Informatica, Talend)
  • Ability to produce high-quality work products under pressure and within deadlines, with specific references
  • Very strong communication, solutioning, and client-facing skills, especially with non-technical business users
  • 5+ years of working in large multi-vendor environments with multiple teams and people as part of the project
  • 5+ years of working in a complex Big Data environment
  • 5+ years of experience with Team Foundation Server/JIRA/GitHub and other code management toolsets
Preferred Skills And Education
Master's degree in Computer Science or a related field
Certification in the Azure platform
Perficient full-time employees receive complete and competitive benefits. We offer a collaborative work environment, competitive compensation, generous work/life opportunities and an outstanding benefits package that includes paid time off plus holidays. In addition, all colleagues are eligible for a number of rewards and recognition programs including billable bonus opportunities. Encouraging a healthy work/life balance and providing our colleagues great benefits are just part of what makes Perficient a great place to work.
More About Perficient
Perficient is the leading digital transformation consulting firm serving Global 2000 and enterprise customers throughout North America. With unparalleled information technology, management consulting and creative capabilities, Perficient and its Perficient Digital agency deliver vision, execution and value with outstanding digital experience, business optimization and industry solutions.
Our work enables clients to improve productivity and competitiveness; grow and strengthen relationships with customers, suppliers and partners; and reduce costs. Perficient's professionals serve clients from a network of offices across North America and offshore locations in India and China. Traded on the Nasdaq Global Select Market, Perficient is a member of the Russell 2000 index and the S&P SmallCap 600 index.
Perficient is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law.
Disclaimer: The above statements are not intended to be a complete statement of job content, rather to act as a guide to the essential functions performed by the employee assigned to this classification. Management retains the discretion to add or change the duties of the position at any time.
Select work authorization questions to ask when applicants apply
  • Are you legally authorized to work in the United States?
  • Will you now, or in the future, require sponsorship for employment visa status (e.g. H-1B visa status)?