• Atlanta, GA

Arcadis is seeking a Director of Advanced Analytics to join our North America Digital Team.

The Director of Advanced Analytics (North America) is responsible for developing and implementing a strategy centered on elevating our current analytics capabilities, identifying use cases within our current business, and identifying new business opportunities anchored in analytics. Together with the core and extended Arcadis Digital teams, they will drive digital change within the organization. This includes frequently interacting with Arcadians at all levels, from executive leadership to project managers and entry-level employees, as well as with clients, by providing technical expertise, guidance, and hands-on support on data processing, analytical models, and visualization techniques.

Key Responsibilities:

Develop capabilities in analytics phases

  • Evaluate our existing analytics capabilities and develop a strategy for Analytics in North America that will create value for our employees, clients, and partners.
  • Develop Arcadis's capabilities in descriptive, predictive, and prescriptive analytics
  • Lead the development of required analytics skills in North America through a combination of recruiting and training
  • Train colleagues in valuable technical skills on the job, which can include data processing, coding models in R, Python, BayesiaLab, etc., creating powerful visualizations and interfaces (e.g. using RShiny or Tableau), and developing the data infrastructure needed
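The descriptive-to-predictive progression mentioned in these bullets can be sketched with nothing but the Python standard library; the monthly project-hours figures below are invented purely for illustration:

```python
from statistics import mean, median, stdev

# Hypothetical monthly project hours (illustrative data only)
hours = [120, 135, 128, 150, 162, 158, 171, 180]

# Descriptive analytics: summarize what happened
print(f"mean={mean(hours):.1f}, median={median(hours)}, stdev={stdev(hours):.1f}")

# Predictive analytics (very simplified): fit a least-squares trend line
# y = a + b*x over month index x, then extrapolate one month ahead
n = len(hours)
xs = range(n)
x_bar, y_bar = mean(xs), mean(hours)
b = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, hours)) / sum(
    (x - x_bar) ** 2 for x in xs
)
a = y_bar - b * x_bar
forecast = a + b * n
print(f"trend slope={b:.2f} hours/month, next-month forecast={forecast:.1f}")
```

In practice a tool like BayesiaLab or an R model would replace the hand-rolled trend line, but the descriptive-versus-predictive distinction is the same.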

Analytics as a value add to projects

  • Advise on the implementation of analytics and the data life cycle on projects/programs
  • Capture the resulting insights from the data life cycle
  • Communicate the value of using analytics instead of the traditional way of working
  • Identify where analytics can be used as a differentiator
  • Scope out analytics services and develop monetization strategies
  • Develop powerful use cases, in close collaboration with market leaders and external partners

Advise clients on their analytics journey

  • Advise clients on data governance/management
  • Identify new business opportunities with existing and potential clients
  • Participate in design thinking/client visioning workshops

Evangelist/Leader for analytics

  • Empower our colleagues to implement analytics on their projects
  • Communicate complex ideas in a simple way using storytelling
  • Inspire our colleagues to innovate and adopt analytics

Develop strategy for analytics across the overall business / bring in best practices in analytics

  • Integrate yourself within the existing business to evaluate where analytics can be strategically implemented.
  • Collaborate closely with / support the Chief Digital Officer, North America, by taking the lead on and providing technical expertise in analytics
  • Collaborate closely with / support the Global CDO / Analytics team by contributing to the development of a global Analytics roadmap and data strategy
  • Participate in regular exchanges with digital / analytics leaders in other regions

Leadership Requirements

  • Proven leadership capabilities
  • Ability to work cross-functionally with all levels of management, both internally and externally.
  • Ability to inspire clients and potential partners with new ideas and discipline combined with pragmatism to implement them rapidly;
  • Ability to develop a plan and approach to strategic problems - communicating and mobilizing teams to deliver against the plan;
  • The ideal candidate: a talented, passionate, and high-energy leader with a blend of technology/digital savviness and built & natural asset industry knowledge.

Key Requirements

  • Deep knowledge and expertise in analytics, including expert knowledge of common software applications such as R and Python, and database management tools such as SQL or MongoDB.
  • Complete understanding of cloud technologies
  • Complete understanding of data lake implementation and other data governance practices
  • Knowledge of design thinking and client visioning workshops
  • Experience in agile project management and product development

Key Attributes

  • Knowledge of best practices in analytics across the industry
  • Broad and strategic perspective on the current and future global technology landscape
  • Credible counterpart for existing or potential partners with strong business acumen combined with solid technical skills;
  • Self-starter, high energy, can work independently and remotely and comfortable with ambiguity;
  • Communication and relationship skills, ability to engage with a broad range of international stakeholders;
  • Ability to work in a global matrix organization and to deal with different cultures worldwide;
  • Proactive, practical, entrepreneurial, open-minded, flexible and creative;
  • Excellent command of English, both spoken and written.
  • Given the international spread of the business a certain level of flexibility in working hours is important.
  • 7+ years of professional experience with an impressive track record in Analytics and bringing data-driven initiatives to success, ideally at the intersection of natural & built assets and analytics/digital

Required Qualifications

  • Master's degree preferred (ideally in Business Analytics, Data Science, or Computer Science).
  • 10+ years of professional experience with an impressive track record in Analytics

Preferred Qualifications

  • MA/MS/MBA degree from a top-tier university / research institution
American Express
  • Phoenix, AZ

Our Software Engineers not only understand how technology works, but how that technology intersects with the people who count on it every single day. Today, creative ideas, insight and new points of view are at the core of how we craft a more powerful, personal and fulfilling experience for all our customers. So if you're passionate about a career building breakthrough software and making an impact on an audience of millions, look no further.

There are hundreds of chances for you to make your mark on Technology and life at American Express. Here's just some of what you'll be doing:

    • Take your place as a core member of an Agile team driving the latest application development practices.
    • Find your opportunity to adopt new technologies, write code and perform unit tests, as well as working with data science, algorithms and automation processing
    • Engage your collaborative spirit by collaborating with fellow engineers to craft and deliver recommendations to Finance, Business, and Technical users on Finance Data Management.



Are you up for the challenge?

    • 4+ years of Software Development experience.
    • BS or MS Degree in Computer Science, Computer Engineering, or other Technical discipline including practical experience effectively interpreting Technical and Business objectives and challenges and designing solutions.
    • Ability to effectively collaborate with Finance SMEs and partners of all levels to understand their business processes and take overall ownership of Analysis, Design, Estimation and Delivery of technical solutions for Finance business requirements and roadmaps, including a deep understanding of Finance and other LOB products and processes. Experience with regulatory reporting frameworks is preferred.
    • Hands-on expertise with application design and software development across multiple platforms, languages, and tools: Java, Hadoop, Python, Streaming, Flink, Spark, HIVE, MapReduce, Unix, NoSQL and SQL Databases is preferred.
    • Working knowledge of SQL and experience with relational databases and query authoring, including working familiarity with a variety of databases (DB2, Oracle, SQL Server, Teradata, MySQL, HBase, Couchbase, MemSQL).
    • Experience in architecting, designing, and building customer dashboards with data visualization tools such as Tableau, using the Jethro accelerator database.
    • Extensive experience in application, integration, system and regression testing, including demonstration of automation and other CI/CD efforts.
    • Experience with version control software such as Git and SVN, and CI/CD testing/automation experience.
    • Proficient with Scaled Agile application development methods.
    • Deals well with ambiguous/under-defined problems; Ability to think abstractly.
    • Willingness to learn new technologies and exploit them to their optimal potential, including substantiated ability to innovate and take pride in quickly deploying working software.
    • Ability to enable business capabilities through innovation is a plus.
    • Ability to get results with an emphasis on reducing time to insights and increasing efficiency in delivering new Finance product capabilities into the hands of Finance constituents.
    • Focuses on the Customer and Client with effective consultative skills across a multi-functional environment.
    • Ability to communicate effectively verbally and in writing, including effective presentation skills. Strong analytical skills, problem identification and resolution.
    • Delivering business value using creative and effective approaches
    • Possesses strong business knowledge about the Finance organization, including industry standard methodologies.
    • Demonstrates a strategic/enterprise viewpoint and business insights with the ability to identify and resolve key business impediments.

Employment eligibility to work with American Express in the U.S. is required as the company will not pursue visa sponsorship for these positions.

Burnett Specialists / Choice Specialists
  • Houston, TX

Medical Economics & Informatics Analyst


This position is responsible for supporting business analysis, utilization analysis, performance results monitoring, reports development, and analytical support within the Medical Economics and Informatics Department.


Extract, manage, and analyze operational, claims and performance data to identify trends, patterns, insights, and outliers within data.

Translates transactional data into client-ready deliverables using available visualization tools, such as Tableau.

Drives a repeatable analytic process and consistently delivers best-in-class reporting to multiple business stakeholders.

Studies client specific data at multiple geographical and clinical levels to identify utilization and cost trends and provide recommendations and insights.

Contributes to analyzing and developing reports on client-specific utilization trends, program savings, and performance to be shared with the internal Client Services team.

Provides analytical and technical support for development and QA of Client Level Dashboards and other recurrent reporting.

Collaborates on the design/development/automation of the Standard Client Reporting Package, including dashboards.

Contributes data deliverables and takeaways for Quarterly Business Review meetings.

Responsible for ad-hoc Client and Markets requests for mature programs.

Collaborates on processes to integrate new client data (claims/membership) and perform quality control tests.


·         Bachelor's degree or higher in Healthcare Informatics, Health Care Statistics, Public Health Economics, Epidemiology, Mathematics, Computer Science/IT, a related field, or equivalent experience.

·         3+ years of experience in the Healthcare Industry and/or Managed Care Organizations

·         Experience in analytics/informatics and report development is required

·         Experience using Medical Claims data (medical cost, utilization, cost benefit analysis, etc.) is required

·         Experience with Pharmacy claims data and healthcare records is preferred

·         1+ years of experience with analytics in a data warehouse environment

·         Experience using SQL for report writing and data management

      Direct experience in business intelligence applications, advanced data visualization tools and/or statistical analysis software (such as SQL/MySQL/R, SAS, Tableau, Minitab, Matlab, Crystal Reports, Business Objects Desktop/Web Intelligence, etc.)

      Intermediate to advanced skills with Microsoft Office tools (MS Word, Excel, PowerPoint, Visio, Project) necessary to document, track and present information related to company program/products/clients

      Knowledge of healthcare financial business cycle, healthcare quality reporting and analysis, benchmarking is required

      Knowledge of health system functions, terminology and standard ICD-10 and CPT coding systems is highly desirable

      Excellent critical and analytical thinking skills are highly desirable

      Ability to compile information and prepare reports that are easily translatable for client delivery
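The SQL report-writing and claims-analysis requirements above can be sketched with a toy monthly utilization summary; SQLite stands in for the warehouse, and all table names, column names, and amounts are invented for the sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE claims (member_id TEXT, service_month TEXT, paid_amount REAL)"
)
con.executemany(
    "INSERT INTO claims VALUES (?, ?, ?)",
    [
        ("M1", "2024-01", 250.0),
        ("M2", "2024-01", 400.0),
        ("M1", "2024-02", 125.0),
        ("M3", "2024-02", 300.0),
        ("M2", "2024-02", 175.0),
    ],
)

# Report: monthly claim count, total paid, and average cost per claim,
# the kind of utilization/cost-trend rollup described above
query = """
SELECT service_month,
       COUNT(*)                   AS claim_count,
       SUM(paid_amount)           AS total_paid,
       ROUND(AVG(paid_amount), 2) AS avg_paid_per_claim
FROM claims
GROUP BY service_month
ORDER BY service_month
"""
for row in con.execute(query):
    print(row)
```

A real report would slice the same aggregates by geography and clinical category before handing the result to Tableau.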

Pyramid Consulting, Inc
  • Atlanta, GA

Job Title: Tableau Engineer

Duration: 6-12 Months+ (potential to go perm)

Location: Atlanta, GA (30328) - Onsite

Notes from Manager:

We need a data analyst who knows Tableau, scripting (JSON, Python), the Alteryx API, AWS, and analytics.


The Tableau Software Engineer will be a key resource working across our Software Engineering BI/Analytics stack to ensure stability, scalability, and the delivery of valuable BI & Analytics solutions for our leadership teams and business partners. Key to this position is the ability to excel at identifying problems or analytic gaps and at mapping out and implementing pragmatic solutions. An excellent blend of analytical, technical, and communication skills in a team-based environment is essential for this role.

Tools we use: Tableau, Business Objects, AngularJS, OBIEE, Cognos, AWS, Opinion Lab, JavaScript, Python, Jaspersoft, Alteryx and R packages, Spark, Kafka, Scala, Oracle

Your Role:

·         Able to design, build, maintain & deploy complex reports in Tableau

·         Experience integrating Tableau into another application or native platforms is a plus

·         Expertise in Data Visualization including effective communication, appropriate chart types, and best practices.

·         Knowledge of best practices and experience optimizing Tableau for performance.

·         Experience reverse engineering and revising Tableau Workbooks created by other developers.

·         Understand basic statistical routines (mean, percentiles, significance, correlations) with ability to apply in data analysis

·         Able to turn ideas into creative & statistically sound decision support solutions

Education and Experience:

·         Bachelor's degree in Computer Science or equivalent work experience

·         3-5 Years of hands on experience in data warehousing & BI technologies (Tableau/OBIEE/Business Objects/Cognos)

·         Three or more years of experience in developing reports in Tableau

·         A good understanding of Tableau architecture, design, development, and the end-user experience.

What We Look For:

·         Proficiency working with large databases in Oracle; Big Data technologies are a plus.

·         Deep understanding of and working experience with data warehouse and data mart concepts.

·         Understanding of Alteryx and R packages is a plus

·         Experience designing and implementing high volume data processing pipelines, using tools such as Spark and Kafka.

·         Experience with Scala, Java or Python and a working knowledge of AWS technologies such as GLUE, EMR, Kinesis and Redshift preferred.

·         Excellent knowledge of Amazon AWS technologies, with a focus on highly scalable cloud-native architectural patterns, especially EMR, Kinesis, and Redshift

·         Experience with software development tools and build systems such as Jenkins

  • Irvine, CA
  • Salary: $96k - 135k

The Senior Data Engineer focuses on designing, implementing and supporting new and existing data solutions: data processing and data sets to support various advanced analytical needs. You will be designing, building and supporting data pipelines that consume data from multiple different source systems and transform it into valuable and insightful information. You will have the opportunity to contribute to end-to-end platform design for our cloud architecture and work multi-functionally with operations, data science and the business segments to build batch and real-time data solutions. The role will be part of a team supporting our Corporate, Sales, Marketing, and Consumer business lines.


  • 7+ years of relevant experience in one of the following areas: Data engineering, business intelligence or business analytics

  • 5-7 years of experience supporting a large data platform and data pipelining

  • 5+ years of experience in scripting languages like Python etc.

  • 5+ years of experience with AWS services including S3, Redshift, EMR and RDS

  • 5+ years of experience with Big Data Technologies (Hadoop, Hive, HBase, Pig, Spark, etc.)

  • Expertise in database design and architectural principles and methodologies

  • Experienced in Physical data modeling

  • Experienced in Logical data modeling

  • Technical expertise should include data models, database design and data mining


  • Design, implement, and support a platform providing access to large datasets

  • Create unified enterprise data models for analytics and reporting

  • Design and build robust and scalable data integration (ETL) pipelines using SQL, Python, and Spark.

  • As part of an Agile development team, contribute to architecture, tools, and development process improvements

  • Work in close collaboration with product management, peer system and software engineering teams to clarify requirements and translate them into robust, scalable, operable solutions that work well within the overall data architecture

  • Coordinate data models, data dictionaries, and other database documentation across multiple applications

  • Lead design reviews of data deliverables such as models, data flows, and data quality assessments

  • Promote data modeling standardization; define and drive adoption of the standards

  • Work with Data Management to establish governance processes around metadata to ensure an integrated definition of data for enterprise information, and to ensure the accuracy, validity, and reusability of metadata
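The ETL responsibilities above can be illustrated with a toy extract-transform-load batch job. SQLite stands in for the warehouse and all records are invented; a production pipeline would use Spark and an orchestrator against real source systems:

```python
import sqlite3

# Extract: raw events as they might arrive from a source system
raw_events = [
    ("2024-01-05", "sales", "149.99"),
    ("2024-01-05", "sales", "  89.50 "),
    ("2024-01-06", "marketing", "20.00"),
    ("2024-01-06", "sales", "n/a"),  # bad record, to be filtered out
]

# Transform: clean, type, and filter the rows
def transform(rows):
    for day, segment, amount in rows:
        try:
            yield day, segment, float(amount.strip())
        except ValueError:
            # Drop unparseable amounts; a real pipeline would quarantine them
            continue

# Load: write into a reporting table, then aggregate for consumers
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE revenue (day TEXT, segment TEXT, amount REAL)")
con.executemany("INSERT INTO revenue VALUES (?, ?, ?)", transform(raw_events))

for row in con.execute(
    "SELECT segment, ROUND(SUM(amount), 2) FROM revenue "
    "GROUP BY segment ORDER BY segment"
):
    print(row)  # ('marketing', 20.0) then ('sales', 239.49)
```

The unified-model and data-dictionary bullets above are about making the `revenue` schema, and its definitions, consistent across every pipeline that loads it.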

Avaloq Evolution AG
  • Zürich, Switzerland

The position

Are you passionate about data architecture? Are you interested in shaping the next generation of data science driven products for the financial industry? Do you enjoy working in an agile environment involving multiple stakeholders?

You will be responsible for selecting appropriate technologies from open source, commercial on-premises, and cloud-based offerings; integrating a new generation of tools within the existing environment to ensure access to accurate and current data; and considering not only the functional requirements, but also the non-functional attributes of platform quality such as security, usability, and stability.

We want you to help us to strengthen and further develop the transformation of Avaloq to a data driven product company. Make analytics scalable and accelerate the process of data science innovation.

Your profile

  • PhD, Master's, or Bachelor's degree in Computer Science, Math, Physics, Engineering, Statistics, or another technical field

  • Knowledgeable in Big Data technologies and architectures (e.g. Hadoop, Spark, data lakes, stream processing)

  • Practical experience with container platforms (OpenShift) and/or containerization software (Kubernetes, Docker)

  • Hands-on experience developing data extraction and transformation pipelines (ETL process)

  • Expert knowledge in RDBMS, NoSQL and Data Warehousing

  • Familiar with information retrieval software such as Elasticsearch/Lucene/Solr

  • Firm understanding of major programming/scripting languages such as Java/Scala, PHP, Python and/or R, plus Linux

  • High integrity, responsibility, and confidentiality, as required for dealing with sensitive data

  • Strong presentation and communication skills

  • Good planning and organisational skills

  • Collaborative mindset to sharing ideas and finding solutions

  • Fluent in English; German, Italian and French a plus

Professional requirements

  • Be a thought leader for best practice how to develop and deploy data science products & services

  • Provide an infrastructure to make data driven insights scalable and agile

  • Liaise and coordinate with stakeholders regarding setting up and running a BigData and analytics platform

  • Lead the evaluation of business and technical requirements

  • Support data-driven activities and a data-driven mindset where needed

Main place of work

Avaloq Evolution AG
Anna Drozdowska, Talent Acquisition Professional
Allmendstrasse 140 - 8027 Zürich - Switzerland

Please only apply online.

Note to Agencies: All unsolicited résumés will be considered direct applicants and no referral fee will be acknowledged.
ITCO Solutions, Inc.
  • Austin, TX

The Sr. Engineer will be building pipelines using Spark Scala.

Must Haves:
  • Expertise in Big Data processing and ETL pipelines
  • Experience designing large-scale ETL pipelines, both batch and real-time
  • Expertise in Spark Scala coding and the DataFrame API (rather than the SQL-based APIs)
  • Expertise in the core DataFrame APIs
  • Expertise in unit testing Spark DataFrame API based code
  • Strong scripting knowledge in Python and shell scripting
  • Experience and expertise in performance tuning of large-scale data pipelines

  • Surry Hills, Australia
  • Salary: A$120k - 140k

The Role

  • Be an integral member of the team responsible for designing, implementing and maintaining a distributed big-data-capable system with high-quality components (Kafka, EMR + Spark, Akka, etc).

  • Embrace the challenge of dealing with big data on a daily basis (Kafka, RDS, Redshift, S3, Athena, Hadoop/HBase), perform data ETL, and build tools for proper data ingestion from multiple data sources.

  • Collaborate closely with data infrastructure engineers and data analysts across different teams to find bottlenecks and solve problems

  • Design, implement and maintain the heterogeneous data processing platform to automate the execution and management of data-related jobs and pipelines

  • Implement automated data workflows in collaboration with data analysts, and continue to maintain and improve the system in line with growth

  • Collaborate with Software Engineers on application events, ensuring the right data can be extracted

  • Contribute to resources management for computation and capacity planning

  • Diving deep into code and constantly innovating


  • Experience with AWS data technologies (EC2, EMR, S3, Redshift, ECS, Data Pipeline, etc) and infrastructure.

  • Working knowledge in big data frameworks such as Apache Spark, Kafka, Zookeeper, Hadoop, Flink, Storm, etc

  • Rich experience with Linux and database systems

  • Experience with relational and NoSQL databases, query optimization, and data modelling

  • Familiar with one or more of the following: Scala/Java, SQL, Python, Shell, Golang, R, etc

  • Experience with container technologies (Docker, k8s), Agile development, DevOps and CI tools.

  • Excellent problem-solving skills

  • Excellent verbal and written communication skills 

  • Houston, TX
Our Company
ConocoPhillips is the world's largest independent E&P company based on production and proved reserves. Headquartered in Houston, Texas, ConocoPhillips had operations and activities in 17 countries, $71 billion of total assets, and approximately 11,100 employees as of Sept. 30, 2018. Production excluding Libya averaged 1,221 MBOED for the nine months ended Sept. 30, 2018, and proved reserves were 5.0 billion BOE as of Dec. 31, 2017.
Employees across the globe focus on fulfilling our core SPIRIT Values of safety, people, integrity, responsibility, innovation and teamwork. And we apply the characteristics that define leadership excellence in how we engage each other, collaborate with our teams, and drive the business.
The Sr. Analytics Analyst will be part of the Production, Drilling, and Projects Analytics Services Team within the Analytics Innovation Center of Excellence that enables data analytics across the ConocoPhillips global enterprise. This role works with business units and global functions to help strategically design, implement, and support data analytics solutions. This is a full-time position that provides tremendous career growth potential within ConocoPhillips.
Responsibilities May Include
  • Complete end-to-end delivery of data analytics solutions to the end user
  • Interacting closely with both business and developers while gathering requirements, designing, testing, implementing and supporting solutions
  • Gather business and technical specifications to support analytic, report and database development
  • Collect, analyze and translate user requirements into effective solutions
  • Build report and analytic prototypes based on initial business requirements
  • Provide status on the issues and progress of key business projects
  • Providing regular reporting on the performance of data analytics solutions
  • Delivering regular updates and maintenance on data analytics solutions
  • Championing the data analytics solutions and technologies at ConocoPhillips
  • Integrate data for data models used by the customers
  • Deliver Data Visualizations used for data driven decision making
  • Provide strategic technology direction while supporting the needs of the business
  • Legally authorized to work in the United States
  • 5+ years of related IT experience
  • 5+ years of Structured Query Language (SQL) experience (ANSI SQL, T-SQL, PL/SQL)
  • 3+ years of hands-on experience delivering solutions with analytics tools (e.g. Spotfire, SSRS, Power BI, Tableau, Business Objects)
  • Bachelor's Degree in Information Technology or Computer Science
  • 5+ years of Oil and Gas Industry experience
  • 5+ years hands-on experience delivering solutions with Informatica PowerCenter
  • 5+ years architecting data warehouses and/or data lakes
  • 5+ years with Extract Transform and Load (ETL) tools and best practices
  • 3+ years hands-on experience delivering solutions with Teradata
  • 1+ years developing analytics models with R or Python
  • 1+ years developing visualizations using R or Python
  • Experience with Oracle (11g, 12c) and SQL Server (2008 R2, 2010, 2016) and Teradata 15.x
  • Experience with Hadoop technologies (Hortonworks, Cloudera, SQOOP, Flume, etc.)
  • Experience with AWS technologies (S3, SageMaker, Athena, EMR, Redshift, Glue, etc.)
  • Thorough understanding of BI/DW concepts, proficient in SQL, and data modeling
  • Familiarity with ETL tools (Informatica, etc.) and ETL processes
  • Solutions oriented individual; learn quickly, understand complex problems, and apply useful solutions
  • Ability to work in a fast-paced environment independently with the customer
  • Ability to work as a team player
  • Ability to work with business and technology users to define and gather reporting and analytics requirements
  • Strong analytical, troubleshooting, and problem-solving skills; experience in analyzing and understanding business/technology system architectures, databases, and client applications to recognize, isolate, and resolve problems
  • Demonstrates the desire and ability to learn and utilize new technologies in data analytics solutions
  • Strong communication and presentation skills
  • Takes ownership of actions and follows through on commitments by courageously dealing with important problems, holding others accountable, and standing up for what is right
  • Delivers results through realistic planning to accomplish goals
  • Generates effective solutions based on available information and makes timely decisions that are safe and ethical
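One minimal illustration of the BI/DW and data-modeling concepts in the list above is a star-schema query: a production fact table joined to a well dimension. SQLite stands in for the warehouse, and every table, column, and figure below is invented for the sketch:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_well (well_id INTEGER PRIMARY KEY, well_name TEXT, basin TEXT);
CREATE TABLE fact_production (well_id INTEGER, prod_date TEXT, boe REAL,
                              FOREIGN KEY (well_id) REFERENCES dim_well(well_id));
INSERT INTO dim_well VALUES (1, 'Alpha-1', 'Permian'), (2, 'Beta-7', 'Eagle Ford');
INSERT INTO fact_production VALUES
  (1, '2019-01-01', 520.0), (1, '2019-01-02', 505.0),
  (2, '2019-01-01', 310.0), (2, '2019-01-02', 295.0);
""")

# Typical BI query: aggregate the fact table by a dimension attribute
query = """
SELECT d.basin, ROUND(SUM(f.boe), 1) AS total_boe
FROM fact_production f
JOIN dim_well d ON d.well_id = f.well_id
GROUP BY d.basin
ORDER BY d.basin
"""
for row in con.execute(query):
    print(row)
```

Tools like Spotfire or Power BI issue essentially this shape of query against the warehouse; the data-modeling skill in the list is deciding what belongs in the fact table versus the dimensions.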
To be considered for this position you must complete the entire application process, which includes answering all prescreening questions and providing your eSignature on or before the requisition closing date of February 20, 2019.
Candidates for this U.S. position must be a U.S. citizen or national, or an alien admitted as permanent resident, refugee, asylee or temporary resident under 8 U.S.C. 1160(a) or 1255(a) (1). Individuals with temporary visas such as A, B, C, D, E, F, G, H, I, J, L, M, NATO, O, P, Q, R or TN or who need sponsorship for work authorization in the United States now or in the future, are not eligible for hire.
ConocoPhillips is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, national origin, age, disability, veteran status, gender identity or expression, genetic information or any other legally protected status.
Job Function
Information Management-Information Technology
Job Level
Individual Contributor/Staff Level
Primary Location
Line of Business
Corporate Staffs
Job Posting
Feb 13, 2019, 4:56:49 PM
Riccione Resources
  • Dallas, TX

Sr. Data Engineer - Hadoop, Spark, Data Pipelines, Growing Company

One of our clients is looking for a Sr. Data Engineer in the Fort Worth, TX area! Build your data expertise with projects centering on large Data Warehouses and new data models! Think outside the box to solve challenging problems! Thrive in the variety of technologies you will use in this role!

Why should I apply here?

    • Culture built on creativity and respect for engineering expertise
    • Nominated as one of the Best Places to Work in DFW
    • Entrepreneurial environment, growing portfolio and revenue stream
    • One of the fastest growing mid-size tech companies in DFW
    • Executive management with past successes in building firms
    • Leader of its technology niche, setting the standards
    • A robust, fast-paced work environment
    • Great technical challenges for top-notch engineers
    • Potential for career growth, emphasis on work/life balance
    • A remodeled office with a bistro, lounge, and foosball

What will I be doing?

    • Building data expertise and owning data quality for the transfer pipelines that you create to transform and move data to the company's large Data Warehouse
    • Architecting, constructing, and launching new data models that provide intuitive analytics to customers
    • Designing and developing new systems and tools to enable clients to optimize and track advertising campaigns
    • Using your expert skills across a number of platforms and tools such as Ruby, SQL, Linux shell scripting, Git, and Chef
    • Working across multiple teams in high visibility roles and owning the solution end-to-end
    • Providing support for existing production systems
    • Broadly influencing the company's clients and internal analysts

What skills/experiences do I need?

    • B.S. or M.S. degree in Computer Science or a related technical field
    • 5+ years of experience working with Hadoop and Spark
    • 5+ years of experience with Python or Ruby development
    • 5+ years of experience with efficient SQL (Postgres, Vertica, Oracle, etc.)
    • 5+ years of experience building and supporting applications on Linux-based systems
    • Background in engineering Spark data pipelines
    • Understanding of distributed systems

What will make my résumé stand out?

    • Ability to customize an ETL or ELT
    • Experience building an actual data warehouse schema

Location: Fort Worth, TX

Citizenship: U.S. citizens and those authorized to work in the U.S. are encouraged to apply. This company is currently unable to provide sponsorship (e.g., H1B).

Salary: $115k - $130k + 401k Match



Gravity IT Resources
  • Miami, FL

Overview of Position:

We are undertaking an ambitious digital transformation across Sales, Service, Marketing, and eCommerce. We are looking for a web data analytics wizard with prior experience in digital data preparation, discovery, and predictive analytics.

The data scientist/web analyst will work with external partners, digital business partners, enterprise analytics, and the technology team to strategically plan and develop datasets, measure web analytics, and execute on predictive and prescriptive use cases. The role demands the ability to: (1) learn quickly; (2) work in a fast-paced, team-driven environment; (3) manage multiple efforts simultaneously; (4) use large datasets and models to test the effectiveness of different courses of action; (5) promote data-driven decision making throughout the organization; and (6) define and measure the success of the capabilities we provide the organization.

Primary Duties and Responsibilities

    • Analyze data captured through Google Analytics and develop meaningful, actionable insights on digital behavior.
    • Put together a customer 360 data frame by connecting CRM Sales, Service, and Marketing cloud data with Commerce web behavior data, and wrangle the data into a usable form.
    • Use predictive modelling to increase and optimize customer experiences across online & offline channels.
    • Evaluate customer experience and conversions to provide insights & tactical recommendations for web optimization.
    • Execute on digital predictive use cases and collaborate with enterprise analytics team to ensure use of best tools and methodologies.
    • Lead support for enterprise voice of customer feedback analytics.
    • Enhance and maintain digital data library and definitions.

Minimum Qualifications

  • Bachelor's degree in Statistics, Computer Science, Marketing, Engineering, or equivalent
  • 3 years or more of working experience in building predictive models.
  • Experience in Google Analytics or similar web behavior tracking tools is required.
  • Experience in R is a must, with working knowledge of connecting to multiple data sources such as Amazon Redshift, Salesforce, Google Analytics, etc.
  • Working knowledge of machine learning algorithms such as Random Forest, K-means, Apriori, Support Vector Machines, etc.
  • Experience in A/B testing or multivariate testing.
  • Experience in media tracking tags and pixels, UTM, and custom tracking methods.
  • Microsoft Office Excel & PPT (advanced).
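
As an illustrative sketch of the algorithm familiarity listed above, here is a minimal K-means clustering loop in pure Python. The data points and initial centroids are invented for the example, and the fixed starting centroids are an assumption made only so the run is reproducible; real work would use a library such as scikit-learn.

```python
import math

def kmeans(points, centroids, iterations=10):
    """Minimal K-means: assign each point to its nearest centroid,
    then recompute each centroid as the mean of its cluster."""
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        for p in points:
            distances = [math.dist(p, c) for c in centroids]
            clusters[distances.index(min(distances))].append(p)
        # Recompute centroids; keep the old one if a cluster is empty
        centroids = [
            tuple(sum(coord) / len(cluster) for coord in zip(*cluster)) if cluster else c
            for cluster, c in zip(clusters, centroids)
        ]
    return centroids, clusters

# Two well-separated groups of 2-D points (made-up example data)
points = [(1, 1), (1.5, 2), (2, 1.5), (8, 8), (8.5, 9), (9, 8.5)]
centroids, clusters = kmeans(points, centroids=[(0, 0), (10, 10)])
```

With these starting centroids the loop converges after one pass, landing on the means of the two obvious groups.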

Preferred Qualifications

  • Master's degree in Statistics or equivalent.
  • Google Analytics 360 experience/certification.
  • SQL workbench, Postgres.
  • Alteryx experience is a plus.
  • Tableau experience is a plus.
  • Experience in HTML, JavaScript.
  • Experience in SAP Analytics Cloud or SAP desktop predictive tool is a plus.
Apex Clearing Corporation
  • Dallas, TX

Business Analyst  

Dallas, TX

Apex Clearing was brought to life by the idea that the development of integrative technology can improve how business is done. The passion for that idea powers our firm still. As a Business Analyst at Apex you are free to explore unique solutions and try fresh ideas that may benefit our financial services business. You'll collaborate with Apex's sharpest minds to use data to drive business decision making. Working at Apex Clearing means that you'll always be presented with a variety of new possibilities as you continue to enhance your skills.

We're looking for someone who:

  • Is passionate. You have a genuine passion for data, modeling, and analysis to drive decision making across all levels of the business. You love using technology differently to maximize insight and business impact, and you have a way of bringing out that same fire in the people you work with
  • Is motivated. You're driven to be the best, whether that's decreasing onboarding time or making an innovative change to how it's always been done. You challenge yourself by setting goals and exceeding them
  • Is collaborative. You're excited to work with fellow data junkies and big thinkers. You know how to collaborate across the organization and communicate with less technical team members to find a solution to pain points
  • Wants to make an impact. You're looking to do amazing work. You're all about using technology to improve efficiency and affect the company's bottom line.

What you'll do all day:

  • Defining and analyzing data that will generate insights and help support our core business
  • Building state-of-the-art models
  • Acting as a thought leader for the firm with your ideas and projects enabling data driven decision making
  • Helping to define best practices within our analytics team to increase efficiency and thus create even more business impact.
  • Collaborating with Analytics team members and people across the business to get the best out of the collective team

A few reasons why you might love us:

  • We're a leader in the space. Apex is recognized for disrupting the financial services industry, enabling fintech standouts like Stash, Robinhood, and Betterment. We've got an amazing track record of success and we foster ongoing innovation. So you get all the benefits of a proven, growing company, while enjoying a very entrepreneurial culture
  • We see data analytics differently. You'll work with people who apply innovative approaches to analysis and modeling. We are passionate engineers dedicated to finding new and different ways to use technology and data to drive the business forward.
  • Your work will have immediate impact. You'll be able to see your direct impact on our business. You won't be just another talented analyst chained to a desk or ticket queue.

And a few reasons why you may not like working for us:

  • You don't like change. This is not a job for someone who likes predictable. We embrace and drive the business toward needed change for our future success.
  • You're not the collaborative type. We work together to ensure the best possible solutions for the business. We think two brains are better than one, so we do most of our work together and often with other departments. Teamwork makes the dream work on this team.
  • You're not a people person. We work with people across the organization to understand the data from a business perspective and ensure our results will be actionable by Apex.

The skills you'll need to succeed:

  • 2-4 years working as a Business Analyst, Quantitative Analyst or comparable experience
  • Experience with MS Excel
  • Experience with SQL and Python or R
  • Experience with relational databases
  • Experience with statistical analysis techniques
  • Familiarity with Sisense, Tableau or similar Data visualization tools
Signify Health
  • Dallas, TX

Position Overview:

Signify Health is looking for a savvy Data Engineer to join our growing team of deep learning specialists. This position would be responsible for evolving and optimizing data and data pipeline architectures, as well as optimizing data flow and collection for cross-functional teams. The Data Engineer will support software developers, database architects, data analysts, and data scientists. The ideal candidate would be self-directed, passionate about optimizing data, and comfortable supporting the data wrangling needs of multiple teams, systems, and products.

If you enjoy providing expert level IT technical services, including the direction, evaluation, selection, configuration, implementation, and integration of new and existing technologies and tools while working closely with IT team members, data scientists, and data engineers to build our next generation of AI-driven solutions, we will give you the opportunity to grow personally and professionally in a dynamic environment. Our projects are built on cooperation and teamwork, and you will find yourself working together with other talented, passionate and dedicated team members, all working towards a shared goal.

Essential Job Responsibilities:

  • Assemble large, complex data sets that meet functional / non-functional business requirements
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing data models for greater scalability, etc.
  • Leverage Azure for extraction, transformation, and loading of data from a wide variety of data sources in support of AI/ML Initiatives
  • Design and implement high performance data pipelines for distributed systems and data analytics for deep learning teams
  • Create tool-chains for analytics and data scientist team members that assist them in building and optimizing AI workflows
  • Work with data and machine learning experts to strive for greater functionality in our data and model life cycle management capabilities
  • Communicate results and ideas to key decision makers in a concise manner
  • Comply with applicable legal requirements, standards, policies and procedures including, but not limited to the Compliance requirements and HIPAA.

Qualifications:Education/Licensing Requirements:
  • High school diploma or equivalent.
  • Bachelor's degree in Computer Science, Electrical Engineering, Statistics, Informatics, Information Systems, or another quantitative field, or equivalent work experience.

Experience Requirements:
  • 5+ years of experience in a Data Engineer role.
  • Experience using the following software/tools preferred:
    • Experience with big data tools: Hadoop, Spark, Kafka, etc.
    • Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
    • Experience with AWS or Azure cloud services.
    • Experience with stream-processing systems: Storm, Spark-Streaming, etc.
    • Experience with object-oriented/object function scripting languages: Python, Java, C#, etc.
  • Strong work ethic, able to work both collaboratively, and independently without a lot of direct supervision, and solid problem-solving skills
  • Must have strong communication skills (written and verbal), and possess good one-on-one interpersonal skills.
  • Advanced working SQL knowledge and experience with relational databases, including query authoring and working familiarity with a variety of databases
  • A successful history of manipulating, processing and extracting value from large disconnected datasets.
  • Working knowledge of message queuing, stream processing, and highly scalable big data stores.
  • 2 years of experience in data modeling, ETL development, and Data warehousing
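
The ETL development experience called for above can be sketched in miniature. The example below (file contents, column names, and the `member_visits` table are all invented for illustration) extracts rows from a CSV source, applies a simple transform, and loads the result into SQLite, standing in for a real warehouse target:

```python
import csv
import io
import sqlite3

# Extract: parse CSV rows (an in-memory file stands in for a real source)
raw = io.StringIO("member_id,visits\n1,3\n2,0\n3,5\n")
rows = list(csv.DictReader(raw))

# Transform: keep only members with at least one visit
active = [(int(r["member_id"]), int(r["visits"])) for r in rows if int(r["visits"]) > 0]

# Load: insert into a hypothetical warehouse table and verify
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE member_visits (member_id INTEGER, visits INTEGER)")
conn.executemany("INSERT INTO member_visits VALUES (?, ?)", active)
total = conn.execute("SELECT SUM(visits) FROM member_visits").fetchone()[0]
```

Production pipelines would add incremental loads, schema management, and error handling, but the extract/transform/load shape is the same.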

Essential Skills:

  • Fluently speak, read, and write English
  • Fantastic motivator and leader of teams with a demonstrated track record of mentoring and developing staff members
  • Strong point of view on who to hire and why
  • Passion for solving complex system and data challenges and desire to thrive in a constantly innovating and changing environment
  • Excellent interpersonal skills, including teamwork and negotiation
  • Excellent leadership skills
  • Superior analytical abilities, problem solving skills, technical judgment, risk assessment abilities and negotiation skills
  • Proven ability to prioritize and multi-task
  • Advanced skills in MS Office

Essential Values:

  • In Leadership: Do what's right, even if it's tough
  • In Collaboration: Leverage our collective genius, be a team
  • In Transparency: Be real
  • In Accountability: Recognize that if it is to be, it's up to me
  • In Passion: Show commitment in heart and mind
  • In Advocacy: Earn trust and business
  • In Quality: Ensure what we do, we do well

Working Conditions:
  • Fast-paced environment
  • Requires working at a desk and use of a telephone and computer
  • Normal sight and hearing ability
  • Use office equipment and machinery effectively
  • Ability to ambulate to various parts of the building
  • Ability to bend, stoop
  • Work effectively with frequent interruptions
  • May require occasional overtime to meet project deadlines
  • Lifting requirements of
Biswas Information Technology Solutions
  • Herndon, VA

We are seeking a junior-mid level Data Science Engineer to analyze large amounts of raw data from different sources to extract valuable business insights that aid in better business decision-making. An analytical mind, problem-solving skills, and a passion for machine learning and research are critical for this role. You will be part of a highly passionate development team that is in the process of refining our Data Science toolkit, which includes a wide set of predictive, recommendation, and inference modeling for our AI product — ranging from time-series forecasting, sentiment analysis, and custom object-detection to named-entity recognition, text summarization, and geometric deep learning.


  • Identify valuable data sources and automate collection processes

  • Preprocessing of structured and unstructured data

  • Discover trends and patterns in large amounts of data

  • Build predictive models and machine-learning algorithms

  • Present information using data visualization techniques

  • Propose solutions and strategies to business challenges

  • Collaborate with engineering and product development teams


  • Strong fundamentals in training, evaluating, and benchmarking machine learning models

  • Strong in Python: NumPy, Pandas, Keras (TensorFlow or PyTorch is a plus)

  • Familiar with feature selection and feature extraction (especially for deep learning, a strong plus)

  • Familiarity with common hyperparameter-optimization techniques for different AI models.

  • Experience handling large data sets

  • Familiarity with BI tools (e.g. Tableau) and data frameworks (e.g. Hadoop)

  • Strong math skills (e.g. statistics, algebra)

  • Problem-solving aptitude

  • Excellent communication and presentation skills

  • 3 to 5 years of experience in the above is preferred

Comcast
  • Englewood, CO

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.

Job Summary:

Software engineering skills combined with the demands of a high volume, highly-visible analytics platform make this an exciting challenge for the right candidate.

Are you passionate about digital media, entertainment, and software services? Do you like big challenges and working within a highly motivated team environment?

As a software engineer in the Data Experience (DX) team, you will research, develop, support, and deploy solutions in real-time distributed computing architectures. The DX big data team is a fast-moving team of world-class experts who are innovating in providing user-driven, self-service tools for making sense of and making decisions with high volumes of data. We are a team that thrives on big challenges, results, quality, and agility.

Who does the data engineer work with?

Big Data software engineering is a diverse collection of professionals who work with a variety of teams, ranging from other software engineering teams whose software integrates with analytics services, to service delivery engineers who provide support for our product, testers, operational stakeholders with all manner of information needs, and executives who rely on big data for data-backed decisioning.

What are some interesting problems you'll be working on?

Develop systems capable of processing millions of events per second and multi-billions of events per day, providing both a real-time and historical view into the operation of our wide array of systems. Design collection and enrichment system components for quality, timeliness, scale and reliability. Work on high-performance real-time data stores and a massive historical data store using best-of-breed and industry-leading technology.

Where can you make an impact?

Comcast DX is building the core components needed to drive the next generation of data platforms and data processing capability. Running this infrastructure, identifying trouble spots, and optimizing the overall user experience is a challenge that can only be met with a robust big data architecture capable of providing insights that would otherwise be drowned in an ocean of data.

Success in this role is best enabled by a broad mix of skills and interests ranging from traditional distributed systems software engineering prowess to the multidisciplinary field of data science.


  • Develop solutions to big data problems utilizing common tools found in the ecosystem.
  • Develop solutions to real-time and offline event collecting from various systems.
  • Develop, maintain, and perform analysis within a real-time architecture supporting large amounts of data from various sources.
  • Analyze massive amounts of data and help drive prototype ideas for new tools and products.
  • Design, build and support APIs and services that are exposed to other internal teams
  • Employ rigorous continuous delivery practices managed under an agile software development approach
  • Ensure a quality transition to production and solid production operation of the software

Skills & Requirements:

  • 5+ years programming experience
  • Bachelors or Masters in Computer Science, Statistics or related discipline
  • Experience in software development of large-scale distributed systems including proven track record of delivering backend systems that participate in a complex ecosystem.
  • Experience working on big data platforms in the cloud or on traditional Hadoop platforms
  • AWS Core
    • Kinesis
    • IAM
    • S3/Glacier
    • Glue
    • DynamoDB
    • SQS
    • Step Functions
    • Lambda
    • API Gateway
    • Cognito
    • EMR
    • RDS/Aurora
    • CloudFormation
    • CloudWatch
  • Languages
    • Python
    • Scala/Java
  • Spark
    • Batch, Streaming, ML
    • Performance tuning at scale
  • Hadoop
    • Hive
    • HiveQL
    • YARN
    • Pig
    • Sqoop
    • Ranger
  • Real-time Streaming
    • Kafka
    • Kinesis
  • Data File Formats
    • Avro, Parquet, JSON, ORC, CSV, XML
  • NoSQL / SQL
  • Microservice development
  • RESTful API development
  • CI/CD pipelines
    • Jenkins / GoCD
    • AWS
      • CodeCommit
      • CodeBuild
      • CodeDeploy
      • CodePipeline
  • Containers
    • Docker / Kubernetes
    • AWS
      • Lambda
      • Fargate
      • EKS
  • Analytics
    • Presto / Athena
    • QuickSight
    • Tableau
  • Test-driven development/test automation, continuous integration, and deployment automation
  • Enjoy working with data: data analysis, data quality, reporting, and visualization
  • Good communicator, able to analyze and clearly articulate complex issues and technologies in an understandable and engaging way.
  • Great design and problem solving skills, with a strong bias for architecting at scale.
  • Adaptable, proactive and willing to take ownership.
  • Keen attention to detail and high level of commitment.
  • Good understanding of any of: advanced mathematics, statistics, and probability.
  • Experience and comfort working in agile/iterative development and delivery environments, where requirements change quickly and our team needs to constantly adapt to moving targets.
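
As a conceptual sketch of the real-time streaming work described above (this is a toy in-process analogue of a stream consumer, not Comcast's actual stack; the class name and timestamps are invented), a sliding-window event counter might look like:

```python
from collections import deque

class SlidingWindowCounter:
    """Count events seen within the last `window` seconds of stream time."""
    def __init__(self, window):
        self.window = window
        self.events = deque()  # event timestamps, assumed non-decreasing

    def add(self, timestamp):
        self.events.append(timestamp)
        # Evict events that have fallen out of the window
        while self.events and self.events[0] <= timestamp - self.window:
            self.events.popleft()

    def count(self):
        return len(self.events)

counter = SlidingWindowCounter(window=60)
for t in [0, 10, 30, 65, 70]:  # made-up event timestamps (seconds)
    counter.add(t)
# After t=70, only events in (10, 70] remain: 30, 65, 70
```

Real systems like Kafka Streams or Spark Structured Streaming handle the same windowing logic at scale, across partitions and with out-of-order arrivals.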

About Comcast DX (Data Experience):

Data Experience (DX) is a results-driven, data platform research and engineering team responsible for the delivery of multi-tenant data infrastructure and platforms necessary to support our data-driven culture and organization. The mission of DX is to gather, organize, and make sense of Comcast data, and to make it universally accessible to empower, enable, and transform Comcast into an insight-driven organization. Members of the DX team define and leverage industry best practices, work on extremely large-scale data problems, design and develop resilient and highly robust distributed data organizing and processing systems and pipelines, as well as research, engineer, and apply data science and machine intelligence disciplines.

Comcast is an EOE/Veterans/Disabled/LGBT employer

Pythian
  • Dallas, TX

Google Cloud Solutions Architect (Pre Sales)

United States | Canada | Remote | Work from Home

Why You?

Are you a US or Canada based Cloud Solutions Architect who likes to operate with a high degree of autonomy and have diverse responsibilities that require strong leadership, deep technology skills and a dedication to customer service? Do you have Big Data and data-centric skills? Do you want to take part in the strategic planning of an organization's data estate with a focus on fulfilling business requirements around cost, scalability and flexibility of the platform? Can you draft technology roadmaps and document best practice gaps with precise steps of how to get there? Can you implement the details of the backlogs you have helped build? Do you demonstrate consistent best practices and deliver strong customer satisfaction? Do you enjoy pre-sales? Can you demonstrate adoption of new technologies and frameworks through the development of proofs of concept?

If you have a passion for solving complex problems and for pre-sales, then this could be the job for you!

What Will You Be Doing?  

  • Collaborating with and supporting Pythian sales teams in the pre-sales & account management process from the technical perspective, remotely and on-site (approx 75%).
  • Defining solutions for current and future customers that efficiently address their needs. Leading through example and influence, as a master of applying technology solutions to solve business problems.
  • Developing Proofs of Concept (PoCs) in order to demonstrate feasibility and value to Pythian's customers (approx 25%).
  • Identifying then executing solutions with a commitment to excellent customer service
  • Collaborating with others in refining solutions presented to customers
  • Conducting technical audits of existing architectures (Infrastructure, Performance, Security, Scalability and more) document best practices and recommendations
  • Providing component or site-wide performance optimizations and capacity planning
  • Recommending best practices & improvements to current operational processes
  • Communicating status and planning activities to customers and team members
  • Participating in periodic overtime (occasionally on short notice) and travelling up to approx. 50%.

What Do We Need From You?

While we realise you might not have everything on the list, to be the successful candidate for the Solutions Architect job you will likely have at least 10 years' experience in a variety of positions in IT. The position requires specialized knowledge and experience in performing the following:

  • Undergraduate degree in computer science, computer engineering, information technology or related field or relevant experience.
  • Systems design experience
  • Understanding and experience with Cloud architectures specifically: Google Cloud Platform (GCP) or Microsoft Azure
  • In-depth knowledge of popular database and data warehouse technologies from Microsoft, Amazon and/or Google (Big Data & conventional RDBMS): Microsoft Azure SQL Data Warehouse, Teradata, Redshift, BigQuery, Snowflake, etc.
  • Fluency in a few languages, preferably Java and Python; familiarity with Scala and Go would be a plus.
  • Proficiency in SQL (experience with Hive and Impala would be great)
  • Proven ability to work with software engineering teams and understand complex development systems, environments and patterns.
  • Experience presenting to high-level executives (VPs, C-suite)
  • This is a North American based opportunity and it is preferred that the candidate live on the West Coast, ideally in San Francisco or the Silicon Valley area but strong candidates may be considered from anywhere in the US or Canada.
  • Ability to travel and work across North America frequently (occasionally on short notice) up to 50% with some international travel also expected.


  • Experience Architecting Big Data platforms using Apache Hadoop, Cloudera, Hortonworks and MapR distributions.
  • Knowledge of real-time Hadoop query engines like Dremel, Cloudera Impala, Facebook Presto or Berkley Spark/Shark.
  • Experience with BI platforms, reporting tools, data visualization products, ETL engines.
  • Experience with any MPP (Oracle Exadata/DW, Teradata, Netezza, etc)
  • Understanding of continuous delivery and deployment patterns and tools (Jenkins, Artifactory, Maven, etc)
  • Prior experience working as/with Machine Learning Engineers, Data Engineers, or Data Scientists.
  • A certification such as Google Cloud Professional Cloud Architect, Google Professional Data Engineer or related AWS Certified Solutions Architect / Big Data or Microsoft Azure Architect
  • Experience in or a strong interest in people management; a player-coach style of leadership longer term would be great.

What Do You Get in Return?

  • Competitive total rewards package
  • Flexible work environment: Why commute? Work remotely from your home; there's no daily travel requirement to the office!
  • Outstanding people: Collaborate with the industrys top minds.
  • Substantial training allowance: Hone your skills or learn new ones; participate in professional development days, attend conferences, become certified, whatever you like!
  • Amazing time off: Start with a minimum 3 weeks' vacation, 7 sick days, and 2 professional development days!
  • Office Allowance: Get a device of your choice and personalise your work environment!
  • Fun, fun, fun: Blog during work hours; take a day off and volunteer for your favorite charity.
SoftClouds LLC
  • San Diego, CA

Job Overview: SoftClouds is looking for a Data Engineer to join our analytics platform team in designing and developing the next generation data and analytics solutions. The candidate should have deep technical skills as well as the ability to understand data and analytics, and an openness to working with disparate platforms, data sources and data formats.

Roles and Responsibilities:
  • Experience with MySQL, MS SQL Server, Hadoop, or MongoDB.
  • Writing SQL queries and table joins.
  • AWS, Python, or Bash shell scripting.
  • Some experience pulling data from Hadoop.
  • Analyze data, systems, and data flows, and develop effective ways to store and present data in BI applications.
  • ETL experience a plus.
  • Work with data from disparate environments including Hadoop, MongoDB, Talend, and other SQL and NoSQL data stores.
  • Help develop the next-generation analytics platform.
  • Proactively ensure data integrity and focus on continuous performance improvements of existing processes.

Required skills and experience:
  • 5 or more years of experience in software development
  • 3 years of experience writing data applications using Spark
  • Experience in Java and Python
  • Familiarity with Agile development methodology
  • Experience with Scala is a plus
  • Experience with NoSQL databases, e.g., Cassandra is a plus
  • Expertise in Apache Spark & Hadoop.
  • Expertise in machine learning algorithms

Education / Experience:

  • Bachelor's Degree in Engineering or Computer Science or related field required.
  • U.S. Citizens/GC/GC EAD are encouraged to apply. We are unable to sponsor at this time. NO C2C or third-party agencies.

Applied Resource Group
  • Atlanta, GA

Applied Resource Group is seeking a talented and experienced Data Engineer for our client, an emerging leader in the transit solutions space. As an experienced Data Engineer on the Data Services team, you will lead the design, development and maintenance of comprehensible data pipelines and distributed systems for data extraction, analysis, transformation, modelling and visualization. They're looking for independent thinkers that are passionate about technology and building solutions that continually improve the customer experience. Excellent communication skills and the ability to work collaboratively with teams is critical.

Job Duties/Responsibilities:

    • Building a unified data services platform from scratch, leveraging the most suitable Big Data tools following technical requirements and needs
    • Exploring and working with cutting edge data processing technologies
    • Work with distributed, scalable cloud-based technologies
    • Collaborating with a talented team of Software Engineers working on product development
    • Designing and delivering BI solutions to meet a wide range of reporting needs across the organization
    • Providing and maintaining up to date documentation to enable a clear outline of solutions
    • Managing task lists and communicating updates to stakeholders and team members following Agile Scrum methodology
    • Working as a key member of the core team to support the timely and efficient delivery of critical data solutions

Experience Needed:

    • Experience with AWS technologies is desired, especially those used for Data Analytics, including some of these: EMR, Glue, Data Pipelines, Lambda, Redshift, Athena, Kinesis, Elasticache, Aurora
    • Minimum of 5 years working in developing and building data solutions
    • Experience as an ETL/Data warehouse developer with knowledge in design, development and delivery of end-to-end data integration processes
    • Deep understanding of data storage technologies for structured and unstructured data
    • Background in programming and knowledge of programming languages such as Java, Scala, Node.js, Python.
    • Familiarity with cloud services (AWS, Azure, Google Cloud)
    • Experience using Linux as a primary development environment
    • Knowledge of Big Data systems (Hadoop, Pig, Hive, Shark/Spark, etc.) a big plus.
    • Knowledge of BI platforms such as Tableau, Jaspersoft, etc.
    • Strong communication and analytical skills
    • Capable of working independently under the direction of the Head of Data Services
    • Excellent communication, analytical and problem-solving skills
    • Ability to initially take direction and then work on own initiative
    • Experience working in AGILE

Nice-to-have experience and skills:

    • Master's in Computer Science, Computer Engineering, or equivalent
    • Building data pipelines to perform real-time data processing using Spark Streaming and Kafka, or similar technologies.
Booz Allen Hamilton - Tagged
  • San Diego, CA
Job Description
Job Number: R0042382
Data Scientist, Mid
The Challenge
Are you excited at the prospect of unlocking the secrets held by a data set? Are you fascinated by the possibilities presented by machine learning, artificial intelligence advances, and IoT? In an increasingly connected world, massive amounts of structured and unstructured data open up new opportunities. As a data scientist, you can turn these complex data sets into useful information to solve global challenges. Across private and public sectors, from fraud detection to cancer research to national intelligence, you know the answers are in the data.
We have an opportunity for you to use your analytical skills to improve the DoD and federal agencies. You'll work closely with your customer to understand their questions and needs, then dig into their data-rich environment to find the pieces of their information puzzle. You'll develop algorithms, write scripts, build predictive analytics, use automation, and apply machine learning to turn disparate data points into objective answers to help our nation's services and leaders make data-driven decisions. You'll provide your customer with a deep understanding of their data, what it all means, and how they can use it. Join us as we use data science for good in the DoD and federal agencies.
Empower change with us.
Build Your Career
At Booz Allen, we know the power of data science and machine intelligence, and we're dedicated to helping you grow as a data scientist. When you join Booz Allen, you can expect:
  • access to online and onsite training in data analysis and presentation methodologies, and tools like Hortonworks, Docker, Tableau, Splunk, and other open source and emerging tools
  • a chance to change the world with the Data Science Bowl, the world's premier data science for social good competition
  • participation in partnerships with data science leaders, like our partnership with NVIDIA to deliver Deep Learning Institute (DLI) training to the federal government
You'll have access to a wealth of training resources through our Analytics University, an online learning portal specifically geared towards data science and analytics skills, where you can access more than 5,000 functional and technical courses, certifications, and books. Build your technical skills through hands-on training on the latest tools and state-of-the-art tech from our in-house experts. Pursuing certifications? Take advantage of our tuition assistance, on-site bootcamps, certification training, academic programs, vendor relationships, and a network of professionals who can give you helpful tips. We'll help you develop the career you want as you chart your own course for success.
You Have
  • Experience with one or more statistical or analytical programming languages, including Python or R
  • Experience with source control and dependency management software, including Git or Maven
  • Experience with using relational databases, including MySQL
  • Experience with identifying analytic insight in data, developing visualizations, and presenting findings to stakeholders
  • Knowledge of object-oriented programming, including Java and C++
  • Knowledge of various machine learning algorithms and their designs, capabilities, and limitations
  • Knowledge of statistical analysis techniques
  • Ability to build complex extraction, transformation, and loading (ETL) pipelines to clean and fuse data together
  • Ability to obtain a security clearance
  • BA or BS degree
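The ETL skills named above can be illustrated with a minimal sketch: extract two record sets, clean them, and fuse them on a shared key. All field names and data here are hypothetical, and real pipelines would read from files or databases rather than inline strings.

```python
# Minimal ETL sketch: extract two sources, clean each field, and fuse
# the records on a shared key. Data and field names are hypothetical.
import csv
import io

raw_readings = "sensor_id,reading\n A1 ,42\nB2,17\nA1,40\n"
raw_sites = "sensor_id,site\nA1,Norfolk\nB2,San Diego\n"

def extract(text):
    # Extraction step: parse delimited text into a list of dict rows.
    return list(csv.DictReader(io.StringIO(text)))

def clean(rows):
    # Transformation step: strip stray whitespace from every field.
    return [{k.strip(): v.strip() for k, v in row.items()} for row in rows]

def fuse(readings, sites, key="sensor_id"):
    # Fuse the two sources with a simple hash join on the shared key.
    site_by_id = {row[key]: row["site"] for row in sites}
    return [{**row, "site": site_by_id.get(row[key])} for row in readings]

readings = clean(extract(raw_readings))
sites = clean(extract(raw_sites))
fused = fuse(readings, sites)
# fused[0] == {"sensor_id": "A1", "reading": "42", "site": "Norfolk"}
```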
Nice If You Have
  • Experience with designing and implementing custom machine learning algorithms
  • Experience with graph algorithms and semantic Web
  • Experience with designing and setting up relational databases
  • Experience with Big Data computing environments, including Hadoop
  • Experience with Navy mission systems
  • MA degree in Mathematics, CS, or a related quantitative field
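As a toy illustration of the graph-algorithm experience listed above, the sketch below finds connected components via breadth-first search over an adjacency list. The graph itself is hypothetical.

```python
# Connected components via breadth-first search. Edges are undirected
# pairs; the example graph is hypothetical.
from collections import deque

edges = [("a", "b"), ("b", "c"), ("d", "e")]

def components(edges):
    # Build an undirected adjacency list.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    seen, parts = set(), []
    for start in adj:
        if start in seen:
            continue
        # BFS outward from each unvisited node to collect its component.
        queue, part = deque([start]), set()
        while queue:
            node = queue.popleft()
            if node in part:
                continue
            part.add(node)
            queue.extend(adj[node] - part)
        seen |= part
        parts.append(part)
    return parts

# components(edges) == [{"a", "b", "c"}, {"d", "e"}]
```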
Applicants selected will be subject to a security investigation and may need to meet eligibility requirements for access to classified information.
We're an EOE that empowers our people, no matter their race, color, religion, sex, gender identity, sexual orientation, national origin, disability, or veteran status, to fearlessly drive change.
phData, Inc.
  • Minneapolis, MN

Title: Big Data Solutions Architect (Minneapolis or US Remote)

Join the Game-Changers in Big Data  

Are you inspired by innovation, hard work and a passion for data?    

If so, this may be the ideal opportunity to leverage your background in Big Data and your Software Engineering, Data Engineering, or Data Analytics experience to design, develop, and innovate big data solutions for a diverse set of clients.

As a Solution Architect on our Big Data Consulting team, your responsibilities include:

    • Design, develop, and innovate Big Data solutions; partner with our internal Managed Services Architects and Data Engineers to build creative solutions to tough big data problems
    • Determine the project road map, select the best tools, assign tasks and priorities, and assume general project management oversight for performance, data integration, ecosystem integration, and security of big data solutions
    • Work across a broad range of technologies, from infrastructure to applications, to ensure the ideal Big Data solution is implemented and optimized
    • Integrate data from a variety of data sources (data warehouse, data marts) utilizing on-prem or cloud-based data structures (AWS); determine new and existing data sources
    • Design and implement streaming, data lake, and analytics big data solutions
    • Create and direct testing strategies, including unit, integration, and full end-to-end tests of data pipelines
    • Select the right storage solution for a project, comparing Kudu, HBase, HDFS, and relational databases based on their strengths
    • Utilize ETL processes to build data repositories; integrate data into the Hadoop data lake using Sqoop (batch ingest), Kafka (streaming), and Spark, Hive, or Impala (transformation)
    • Partner with our Managed Services team to design and install on-prem or cloud-based infrastructure, including networking, virtual machines, containers, and software
    • Determine and select the best tools to ensure optimized data performance; perform data analysis utilizing Spark, Hive, and Impala
    • Mentor and coach Developers and Data Engineers; provide guidance with project creation, application structure, automation, code style, testing, and code reviews

Qualifications:

  • 5+ years of previous experience as a Software Engineer, Data Engineer, or Data Analyst, combined with expertise in Hadoop technologies and Java programming
  • Technical leadership experience leading/mentoring junior software/data engineers, as well as scoping activities on large-scale, complex technology projects
  • Expertise in core Hadoop technologies including HDFS, Hive and YARN.  
  • Deep experience in one or more ecosystem products/languages such as HBase, Spark, Impala, Solr, Kudu, etc
  • Expert programming experience in Java, Scala, or other statically typed programming language
  • Strong working knowledge of SQL and the ability to write, debug, and optimize distributed SQL queries
  • Excellent communication skills including proven experience working with key stakeholders and customers
  • Ability to translate big picture business requirements and use cases into a Hadoop solution, including ingestion of many data sources, ETL processing, data access and consumption, as well as custom analytics
  • Customer relationship management including project escalations, and participating in executive steering meetings
  • Ability to learn new technologies in a quickly changing field
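The "write, debug, and optimize distributed SQL" skill above can be sketched with the stdlib's sqlite3 as a stand-in for a distributed engine like Hive or Impala. Table and column names are hypothetical; the aggregate-before-join shape shown here is a common way to shrink the joined relation, which matters even more on distributed engines.

```python
# Aggregate-then-join in SQL, using sqlite3 as a stand-in for a
# distributed engine such as Hive or Impala. Schema is hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (order_id INTEGER, customer_id INTEGER, amount REAL);
    CREATE TABLE customers (customer_id INTEGER, region TEXT);
    INSERT INTO orders VALUES (1, 10, 250.0), (2, 11, 80.0), (3, 10, 120.0);
    INSERT INTO customers VALUES (10, 'Midwest'), (11, 'West');
""")

# Grouping in the subquery before the join reduces the rows that the
# join must shuffle and match.
rows = conn.execute("""
    SELECT c.region, t.total
    FROM (SELECT customer_id, SUM(amount) AS total
          FROM orders GROUP BY customer_id) AS t
    JOIN customers AS c ON c.customer_id = t.customer_id
    ORDER BY t.total DESC
""").fetchall()
# rows == [('Midwest', 370.0), ('West', 80.0)]
```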