OnlyDataJobs.com

Accenture
  • Detroit, MI
Position: Full time
Travel: 100% travel
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation. Here you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions using machine learning, AI, big data and analytics, cloud, mobility, robotics, and the industrial internet of things. Your work will redefine the way entire industries work in every corner of the globe.
You'll be part of a team with incredible end-to-end digital transformation capabilities that shares your passion for digital technology and takes pride in making a tangible difference. If you want to contribute to an incredible array of the biggest and most complex projects in the digital space, consider a career with Accenture Digital.
Basic Qualifications
    • Bachelor's degree in Computer Science, Engineering, or Technical Science, or 3 years of IT/programming experience.
    • Minimum 2 years of experience designing and implementing large-scale data pipelines for data curation and analysis, operating in production environments, using Spark, PySpark, and Spark SQL with Java, Scala, or Python, on premises or in the cloud (AWS, Google, or Azure); see the illustrative sketch after this list.
    • Minimum 1 year of experience designing and building performant data models at scale using Hadoop, NoSQL, graph, or cloud-native data stores and services.
    • Minimum 1 year of experience designing and building secure big data ETL pipelines, using Talend or Informatica Big Data Editions, for data curation and analysis in large-scale, production-deployed solutions.
    • Minimum 6 months of experience designing and building data models to support large-scale BI, analytics, and AI solutions for big data.
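For illustration only, here is a minimal sketch of the kind of Spark/PySpark curation pipeline the second qualification describes; the bucket paths, column names, and table name are hypothetical, not part of the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("curation-pipeline").getOrCreate()

# Ingest raw events (hypothetical path and columns).
raw = spark.read.option("header", "true").csv("s3://example-bucket/raw/events/")

# Curate: drop exact duplicates and rows missing the primary key.
curated = (raw.dropDuplicates()
              .filter(F.col("event_id").isNotNull())
              .withColumn("event_date", F.to_date("event_ts")))

# Expose the curated set to Spark SQL for analysis.
curated.createOrReplaceTempView("events")
daily = spark.sql("SELECT event_date, COUNT(*) AS n_events FROM events GROUP BY event_date")

# Land results partitioned by date for downstream consumers.
daily.write.mode("overwrite").partitionBy("event_date").parquet("s3://example-bucket/curated/daily_events/")
```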
Preferred Skills
    • Minimum 6 months of implementation experience with Databricks.
    • Experience in machine learning using Python (scikit-learn), Spark ML, H2O, and/or SageMaker.
    • Knowledge of deep learning (CNNs, RNNs, ANNs) using TensorFlow.
    • Knowledge of automated machine learning tools (H2O, DataRobot, Google AutoML).
    • Minimum 2 years of experience designing and implementing large-scale data warehousing and analytics solutions working with RDBMSs (e.g., Oracle, Teradata, DB2, Netezza, SAS) and an understanding of the challenges and limitations of these traditional solutions.
    • Minimum 1 year of experience implementing SQL-on-Hadoop solutions using tools like Presto, AtScale, Jethro, and others.
    • Minimum 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for big data platforms, on premises or on AWS, Google, or Azure cloud.
    • Minimum 1 year of experience re-architecting and rationalizing traditional data warehouses with Hadoop, Spark, or NoSQL technologies, on premises or in transition to AWS or Google clouds.
    • Experience implementing data preparation technologies such as Paxata, Trifacta, or Tamr to enable self-service solutions.
    • Minimum 1 year of experience building business data catalogs or data marketplaces on top of a hybrid data platform containing big data technologies (e.g., Alation, Informatica, or custom portals).
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H-1B visa, F-1 visa (OPT), TN visa, or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.
Accenture
  • Minneapolis, MN
Position: Full time
Travel: 100% travel
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation. Here you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions using machine learning, AI, big data and analytics, cloud, mobility, robotics, and the industrial internet of things. Your work will redefine the way entire industries work in every corner of the globe.
You'll be part of a team with incredible end-to-end digital transformation capabilities that shares your passion for digital technology and takes pride in making a tangible difference. If you want to contribute to an incredible array of the biggest and most complex projects in the digital space, consider a career with Accenture Digital.
Basic Qualifications
    • Bachelor's degree in Computer Science, Engineering, or Technical Science, or 3 years of IT/programming experience.
    • Minimum 2 years of experience designing and implementing large-scale data pipelines for data curation and analysis, operating in production environments, using Spark, PySpark, and Spark SQL with Java, Scala, or Python, on premises or in the cloud (AWS, Google, or Azure).
    • Minimum 1 year of experience designing and building performant data models at scale using Hadoop, NoSQL, graph, or cloud-native data stores and services.
    • Minimum 1 year of experience designing and building secure big data ETL pipelines, using Talend or Informatica Big Data Editions, for data curation and analysis in large-scale, production-deployed solutions.
    • Minimum 6 months of experience designing and building data models to support large-scale BI, analytics, and AI solutions for big data.
Preferred Skills
    • Minimum 6 months of implementation experience with Databricks.
    • Experience in machine learning using Python (scikit-learn), Spark ML, H2O, and/or SageMaker.
    • Knowledge of deep learning (CNNs, RNNs, ANNs) using TensorFlow.
    • Knowledge of automated machine learning tools (H2O, DataRobot, Google AutoML).
    • Minimum 2 years of experience designing and implementing large-scale data warehousing and analytics solutions working with RDBMSs (e.g., Oracle, Teradata, DB2, Netezza, SAS) and an understanding of the challenges and limitations of these traditional solutions.
    • Minimum 1 year of experience implementing SQL-on-Hadoop solutions using tools like Presto, AtScale, Jethro, and others.
    • Minimum 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for big data platforms, on premises or on AWS, Google, or Azure cloud.
    • Minimum 1 year of experience re-architecting and rationalizing traditional data warehouses with Hadoop, Spark, or NoSQL technologies, on premises or in transition to AWS or Google clouds.
    • Experience implementing data preparation technologies such as Paxata, Trifacta, or Tamr to enable self-service solutions.
    • Minimum 1 year of experience building business data catalogs or data marketplaces on top of a hybrid data platform containing big data technologies (e.g., Alation, Informatica, or custom portals).
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H-1B visa, F-1 visa (OPT), TN visa, or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.
Accenture
  • Atlanta, GA
Position: Full time
Travel: 100% travel
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation. Here you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions using machine learning, AI, big data and analytics, cloud, mobility, robotics, and the industrial internet of things. Your work will redefine the way entire industries work in every corner of the globe.
You'll be part of a team with incredible end-to-end digital transformation capabilities that shares your passion for digital technology and takes pride in making a tangible difference. If you want to contribute to an incredible array of the biggest and most complex projects in the digital space, consider a career with Accenture Digital.
Basic Qualifications
    • Bachelor's degree in Computer Science, Engineering, or Technical Science, or 3 years of IT/programming experience.
    • Minimum 2 years of experience designing and implementing large-scale data pipelines for data curation and analysis, operating in production environments, using Spark, PySpark, and Spark SQL with Java, Scala, or Python, on premises or in the cloud (AWS, Google, or Azure).
    • Minimum 1 year of experience designing and building performant data models at scale using Hadoop, NoSQL, graph, or cloud-native data stores and services.
    • Minimum 1 year of experience designing and building secure big data ETL pipelines, using Talend or Informatica Big Data Editions, for data curation and analysis in large-scale, production-deployed solutions.
    • Minimum 6 months of experience designing and building data models to support large-scale BI, analytics, and AI solutions for big data.
Preferred Skills
    • Minimum 6 months of implementation experience with Databricks.
    • Experience in machine learning using Python (scikit-learn), Spark ML, H2O, and/or SageMaker.
    • Knowledge of deep learning (CNNs, RNNs, ANNs) using TensorFlow.
    • Knowledge of automated machine learning tools (H2O, DataRobot, Google AutoML).
    • Minimum 2 years of experience designing and implementing large-scale data warehousing and analytics solutions working with RDBMSs (e.g., Oracle, Teradata, DB2, Netezza, SAS) and an understanding of the challenges and limitations of these traditional solutions.
    • Minimum 1 year of experience implementing SQL-on-Hadoop solutions using tools like Presto, AtScale, Jethro, and others.
    • Minimum 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for big data platforms, on premises or on AWS, Google, or Azure cloud.
    • Minimum 1 year of experience re-architecting and rationalizing traditional data warehouses with Hadoop, Spark, or NoSQL technologies, on premises or in transition to AWS or Google clouds.
    • Experience implementing data preparation technologies such as Paxata, Trifacta, or Tamr to enable self-service solutions.
    • Minimum 1 year of experience building business data catalogs or data marketplaces on top of a hybrid data platform containing big data technologies (e.g., Alation, Informatica, or custom portals).
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H-1B visa, F-1 visa (OPT), TN visa, or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.
Accenture
  • Philadelphia, PA
Position: Full time
Travel: 100% travel
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation. Here you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions using machine learning, AI, big data and analytics, cloud, mobility, robotics, and the industrial internet of things. Your work will redefine the way entire industries work in every corner of the globe.
You'll be part of a team with incredible end-to-end digital transformation capabilities that shares your passion for digital technology and takes pride in making a tangible difference. If you want to contribute to an incredible array of the biggest and most complex projects in the digital space, consider a career with Accenture Digital.
Basic Qualifications
    • Bachelor's degree in Computer Science, Engineering, or Technical Science, or 3 years of IT/programming experience.
    • Minimum 2 years of experience designing and implementing large-scale data pipelines for data curation and analysis, operating in production environments, using Spark, PySpark, and Spark SQL with Java, Scala, or Python, on premises or in the cloud (AWS, Google, or Azure).
    • Minimum 1 year of experience designing and building performant data models at scale using Hadoop, NoSQL, graph, or cloud-native data stores and services.
    • Minimum 1 year of experience designing and building secure big data ETL pipelines, using Talend or Informatica Big Data Editions, for data curation and analysis in large-scale, production-deployed solutions.
    • Minimum 6 months of experience designing and building data models to support large-scale BI, analytics, and AI solutions for big data.
Preferred Skills
    • Minimum 6 months of implementation experience with Databricks.
    • Experience in machine learning using Python (scikit-learn), Spark ML, H2O, and/or SageMaker.
    • Knowledge of deep learning (CNNs, RNNs, ANNs) using TensorFlow.
    • Knowledge of automated machine learning tools (H2O, DataRobot, Google AutoML).
    • Minimum 2 years of experience designing and implementing large-scale data warehousing and analytics solutions working with RDBMSs (e.g., Oracle, Teradata, DB2, Netezza, SAS) and an understanding of the challenges and limitations of these traditional solutions.
    • Minimum 1 year of experience implementing SQL-on-Hadoop solutions using tools like Presto, AtScale, Jethro, and others.
    • Minimum 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for big data platforms, on premises or on AWS, Google, or Azure cloud.
    • Minimum 1 year of experience re-architecting and rationalizing traditional data warehouses with Hadoop, Spark, or NoSQL technologies, on premises or in transition to AWS or Google clouds.
    • Experience implementing data preparation technologies such as Paxata, Trifacta, or Tamr to enable self-service solutions.
    • Minimum 1 year of experience building business data catalogs or data marketplaces on top of a hybrid data platform containing big data technologies (e.g., Alation, Informatica, or custom portals).
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H-1B visa, F-1 visa (OPT), TN visa, or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.
Accenture
  • San Diego, CA
Position: Full time
Travel: 100% travel
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation. Here you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions using machine learning, AI, big data and analytics, cloud, mobility, robotics, and the industrial internet of things. Your work will redefine the way entire industries work in every corner of the globe.
You'll be part of a team with incredible end-to-end digital transformation capabilities that shares your passion for digital technology and takes pride in making a tangible difference. If you want to contribute to an incredible array of the biggest and most complex projects in the digital space, consider a career with Accenture Digital.
Basic Qualifications
    • Bachelor's degree in Computer Science, Engineering, or Technical Science, or 3 years of IT/programming experience.
    • Minimum 2 years of experience designing and implementing large-scale data pipelines for data curation and analysis, operating in production environments, using Spark, PySpark, and Spark SQL with Java, Scala, or Python, on premises or in the cloud (AWS, Google, or Azure).
    • Minimum 1 year of experience designing and building performant data models at scale using Hadoop, NoSQL, graph, or cloud-native data stores and services.
    • Minimum 1 year of experience designing and building secure big data ETL pipelines, using Talend or Informatica Big Data Editions, for data curation and analysis in large-scale, production-deployed solutions.
    • Minimum 6 months of experience designing and building data models to support large-scale BI, analytics, and AI solutions for big data.
Preferred Skills
    • Minimum 6 months of implementation experience with Databricks.
    • Experience in machine learning using Python (scikit-learn), Spark ML, H2O, and/or SageMaker.
    • Knowledge of deep learning (CNNs, RNNs, ANNs) using TensorFlow.
    • Knowledge of automated machine learning tools (H2O, DataRobot, Google AutoML).
    • Minimum 2 years of experience designing and implementing large-scale data warehousing and analytics solutions working with RDBMSs (e.g., Oracle, Teradata, DB2, Netezza, SAS) and an understanding of the challenges and limitations of these traditional solutions.
    • Minimum 1 year of experience implementing SQL-on-Hadoop solutions using tools like Presto, AtScale, Jethro, and others.
    • Minimum 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for big data platforms, on premises or on AWS, Google, or Azure cloud.
    • Minimum 1 year of experience re-architecting and rationalizing traditional data warehouses with Hadoop, Spark, or NoSQL technologies, on premises or in transition to AWS or Google clouds.
    • Experience implementing data preparation technologies such as Paxata, Trifacta, or Tamr to enable self-service solutions.
    • Minimum 1 year of experience building business data catalogs or data marketplaces on top of a hybrid data platform containing big data technologies (e.g., Alation, Informatica, or custom portals).
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H-1B visa, F-1 visa (OPT), TN visa, or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.
Migo
  • Taipei, Taiwan

  • Responsibilities

    • Collaborate with data scientists to move statistical, predictive machine learning, and AI models to production scale, and continuously optimize their performance.

    • Design, build, optimize, launch and support new and existing data models and ETL processes in production based on data products and stakeholder needs.

    • Define and manage SLAs and accuracy targets for all data sets in allocated areas of ownership.

    • Design and continuously improve data infrastructure, identifying infrastructure issues and driving them to resolution.

    • Support the software development team in building and maintaining data collectors in the Migo application ecosystem, based on data warehouse and analytics user requirements.





  • Basic Qualifications:

    • Bachelor's degree in Computer Science, Information Management or related field.

    • 2+ years of hands-on experience in the data warehouse space, including custom ETL design, implementation, and maintenance.

    • 2+ years of hands-on experience with SQL or similar languages, and development experience in at least one scripting language (Python preferred).

    • Strong data architecture, data modeling, schema design and effective project management skills.

    • Excellent communication skills and proven experience leading data-driven projects from definition through interpretation and execution.

    • Experience with large data sets and data profiling techniques.

    • Ability to initiate and drive projects, and communicate data warehouse plans to internal clients/stakeholders.





  • Preferred Qualifications:

    • Experience with big data and distributed computing technologies such as Hive, Spark, Presto, and Parquet.

    • Experience building and maintaining production-level data lakes on a Hadoop cluster or AWS S3.

    • Experience with batch and streaming data pipeline/architecture design patterns such as the lambda or kappa architecture.
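As a rough illustration of the kappa pattern named in the last bullet: a single streaming job is the source of truth, and reprocessing is a replay of the log rather than a separate batch layer. The broker address, topic, schema, and paths below are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("kappa-ingest").getOrCreate()

schema = StructType([
    StructField("user_id", StringType()),
    StructField("action", StringType()),
    StructField("ts", TimestampType()),
])

# Read the event log; replaying from an earlier offset reprocesses history
# through this same code path (kappa), instead of a parallel batch job (lambda).
events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical
          .option("subscribe", "app-events")                 # hypothetical topic
          .load()
          .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

(events.writeStream
       .format("parquet")
       .option("path", "s3://example-lake/events/")              # hypothetical
       .option("checkpointLocation", "s3://example-lake/_chk/")  # required for recovery
       .start()
       .awaitTermination())
```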

Comcast
  • Englewood, CO

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.

Job Summary:

Software engineering skills combined with the demands of a high volume, highly-visible analytics platform make this an exciting challenge for the right candidate.

Are you passionate about digital media, entertainment, and software services? Do you like big challenges and working within a highly motivated team environment?

As a software engineer on the Data Experience (DX) team, you will research, develop, support, and deploy solutions in real-time distributed computing architectures. The DX big data team is a fast-moving team of world-class experts who are innovating to provide user-driven, self-service tools for making sense of, and making decisions with, high volumes of data. We are a team that thrives on big challenges, results, quality, and agility.

Who does the data engineer work with?

Big Data software engineering is a diverse collection of professionals who work with a variety of teams: other software engineering teams whose software integrates with analytics services, service delivery engineers who provide support for our product, testers, operational stakeholders with all manner of information needs, and executives who rely on big data for data-backed decisions.

What are some interesting problems you'll be working on?

Develop systems capable of processing millions of events per second and multiple billions of events per day, providing both a real-time and a historical view into the operation of our wide array of systems. Design collection and enrichment system components for quality, timeliness, scale, and reliability. Work on high-performance real-time data stores and a massive historical data store using best-of-breed and industry-leading technology.
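To make that concrete, here is a hedged sketch of the real-time half of such a pipeline, counting events per minute with Spark Structured Streaming; the broker address and topic are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("event-rates").getOrCreate()

# Read the raw event stream (hypothetical broker and topic).
events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "telemetry")
          .load())

# Real-time view: events per minute, tolerating five minutes of late data.
rates = (events
         .withWatermark("timestamp", "5 minutes")  # Kafka source provides `timestamp`
         .groupBy(F.window("timestamp", "1 minute"))
         .count())

(rates.writeStream
      .outputMode("update")
      .format("console")
      .start()
      .awaitTermination())
```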

Where can you make an impact?

Comcast DX is building the core components needed to drive the next generation of data platforms and data processing capability. Running this infrastructure, identifying trouble spots, and optimizing the overall user experience is a challenge that can only be met with a robust big data architecture capable of providing insights that would otherwise be drowned in an ocean of data.

Success in this role is best enabled by a broad mix of skills and interests ranging from traditional distributed systems software engineering prowess to the multidisciplinary field of data science.

Responsibilities:

  • Develop solutions to big data problems utilizing common tools found in the ecosystem.
  • Develop solutions to real-time and offline event collecting from various systems.
  • Develop, maintain, and perform analysis within a real-time architecture supporting large amounts of data from various sources.
  • Analyze massive amounts of data and help drive prototype ideas for new tools and products.
  • Design, build, and support APIs and services that are exposed to other internal teams.
  • Employ rigorous continuous delivery practices managed under an agile software development approach.
  • Ensure a quality transition to production and solid production operation of the software.

Skills & Requirements:

  • 5+ years of programming experience
  • Bachelor's or Master's in Computer Science, Statistics, or a related discipline
  • Experience in software development of large-scale distributed systems, including a proven track record of delivering backend systems that participate in a complex ecosystem.
  • Experience working on big data platforms in the cloud or on traditional Hadoop platforms
  • AWS Core
    • Kinesis
    • IAM
    • S3/Glacier
    • Glue
    • DynamoDB
    • SQS
    • Step Functions
    • Lambda
    • API Gateway
    • Cognito
    • EMR
    • RDS/Aurora
    • CloudFormation
    • CloudWatch
  • Languages
    • Python
    • Scala/Java
  • Spark
    • Batch, Streaming, ML
    • Performance tuning at scale
  • Hadoop
    • Hive
    • HiveQL
    • YARN
    • Pig
    • Sqoop
    • Ranger
  • Real-time Streaming
    • Kafka
    • Kinesis
  • Data File Formats
    • Avro, Parquet, JSON, ORC, CSV, XML
  • NoSQL / SQL
  • Microservice development
  • RESTful API development
  • CI/CD pipelines
    • Jenkins / GoCD
    • AWS: CodeCommit, CodeBuild, CodeDeploy, CodePipeline
  • Containers
    • Docker / Kubernetes
    • AWS: Lambda, Fargate, EKS
  • Analytics
    • Presto / Athena
    • QuickSight
    • Tableau
  • Test-driven development/test automation, continuous integration, and deployment automation
  • Enjoy working with data: data analysis, data quality, reporting, and visualization
  • Good communicator, able to analyze and clearly articulate complex issues and technologies understandably and engagingly.
  • Great design and problem-solving skills, with a strong bias for architecting at scale.
  • Adaptable, proactive, and willing to take ownership.
  • Keen attention to detail and high level of commitment.
  • Good understanding of any of: advanced mathematics, statistics, and probability.
  • Experience working in agile/iterative development and delivery environments, and comfort working in one: requirements change quickly and our team needs to constantly adapt to moving targets.
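Since the list above names several data file formats, here is a small sketch of moving between them with Spark (paths hypothetical); Parquet, ORC, JSON, and CSV readers/writers are built in, while Avro needs the separate spark-avro package.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("format-conversion").getOrCreate()

# Read line-delimited JSON (hypothetical path), letting Spark infer the schema.
df = spark.read.json("s3://example-bucket/raw/events.json")

# Columnar formats for analytics.
df.write.mode("overwrite").parquet("s3://example-bucket/out/events_parquet/")
df.write.mode("overwrite").orc("s3://example-bucket/out/events_orc/")

# Row-oriented CSV for interchange.
df.write.mode("overwrite").option("header", "true").csv("s3://example-bucket/out/events_csv/")
```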

About Comcast DX (Data Experience):

Data Experience (DX) is a results-driven data platform research and engineering team responsible for delivering the multi-tenant data infrastructure and platforms necessary to support our data-driven culture and organization. The mission of DX is to gather, organize, and make sense of Comcast data, and to make it universally accessible in order to empower, enable, and transform Comcast into an insight-driven organization. Members of the DX team define and leverage industry best practices, work on extremely large-scale data problems, design and develop resilient and highly robust distributed data organizing and processing systems and pipelines, and research, engineer, and apply data science and machine intelligence disciplines.

Comcast is an EOE/Veterans/Disabled/LGBT employer

Pythian
  • Dallas, TX

Google Cloud Solutions Architect (Pre Sales)

United States | Canada | Remote | Work from Home

Why You?

Are you a US- or Canada-based Cloud Solutions Architect who likes to operate with a high degree of autonomy and have diverse responsibilities that require strong leadership, deep technology skills, and a dedication to customer service? Do you have big data and data-centric skills? Do you want to take part in the strategic planning of an organization's data estate, with a focus on fulfilling business requirements around cost, scalability, and flexibility of the platform? Can you draft technology roadmaps and document best-practice gaps with precise steps for how to close them? Can you implement the details of the backlogs you have helped build? Do you demonstrate consistent best practices and deliver strong customer satisfaction? Do you enjoy pre-sales? Can you demonstrate the adoption of new technologies and frameworks through the development of proofs of concept?

If you have a passion for solving complex problems and for pre-sales, then this could be the job for you!

What Will You Be Doing?  

  • Collaborating with and supporting Pythian sales teams in the pre-sales & account management process from the technical perspective, remotely and on-site (approx 75%).
  • Defining solutions for current and future customers that efficiently address their needs. Leading through example and influence, as a master of applying technology solutions to solve business problems.
  • Developing proofs of concept (PoCs) in order to demonstrate feasibility and value to Pythian's customers (approx 25%).
  • Identifying and then executing solutions with a commitment to excellent customer service.
  • Collaborating with others in refining solutions presented to customers.
  • Conducting technical audits of existing architectures (infrastructure, performance, security, scalability, and more) and documenting best practices and recommendations.
  • Providing component or site-wide performance optimizations and capacity planning.
  • Recommending best practices and improvements to current operational processes.
  • Communicating status and planning activities to customers and team members.
  • Participating in periodic overtime (occasionally on short notice) and travelling up to approximately 50%.

What Do We Need From You?

While we realise you might not have everything on the list to be the successful candidate for the Solutions Architect job, you will likely have at least 10 years' experience in a variety of positions in IT. The position requires specialized knowledge and experience in performing the following:

  • Undergraduate degree in computer science, computer engineering, information technology or related field or relevant experience.
  • Systems design experience
  • Understanding and experience with Cloud architectures specifically: Google Cloud Platform (GCP) or Microsoft Azure
  • In-depth knowledge of popular database and data warehouse technologies from Microsoft, Amazon, and/or Google (big data and conventional RDBMS): Microsoft Azure SQL Data Warehouse, Teradata, Redshift, BigQuery, Snowflake, etc. (see the sketch after this list).
  • Fluency in a few languages, preferably Java and Python; familiarity with Scala and Go would be a plus.
  • Proficiency in SQL (experience with Hive and Impala would be great).
  • Proven ability to work with software engineering teams and understand complex development systems, environments and patterns.
  • Experience presenting to high level executives (VPs, C Suite)
  • This is a North American based opportunity and it is preferred that the candidate live on the West Coast, ideally in San Francisco or the Silicon Valley area but strong candidates may be considered from anywhere in the US or Canada.
  • Ability to travel and work across North America frequently (occasionally on short notice) up to 50% with some international travel also expected.
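As a small, hedged illustration of the BigQuery item above (assuming the google-cloud-bigquery client library and application-default credentials; the project, dataset, and columns are hypothetical):

```python
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # hypothetical project

sql = """
    SELECT device_type, COUNT(*) AS sessions
    FROM `example-project.analytics.events`  -- hypothetical table
    GROUP BY device_type
    ORDER BY sessions DESC
    LIMIT 10
"""

# Run the query and stream back the result rows.
for row in client.query(sql).result():
    print(f"{row.device_type}: {row.sessions}")
```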

Nice-to-Haves:

  • Experience architecting big data platforms using Apache Hadoop and the Cloudera, Hortonworks, and MapR distributions.
  • Knowledge of real-time Hadoop query engines like Dremel, Cloudera Impala, Facebook Presto, or Berkeley Spark/Shark.
  • Experience with BI platforms, reporting tools, data visualization products, ETL engines.
  • Experience with any MPP platform (Oracle Exadata/DW, Teradata, Netezza, etc.)
  • Understanding of continuous delivery and deployment patterns and tools (Jenkins, Artifactory, Maven, etc.)
  • Prior experience working as/with Machine Learning Engineers, Data Engineers, or Data Scientists.
  • A certification such as Google Cloud Professional Cloud Architect, Google Professional Data Engineer or related AWS Certified Solutions Architect / Big Data or Microsoft Azure Architect
  • Experience or strong interest in people management, in a player-coach style of leadership longer term would be great.

What Do You Get in Return?

  • Competitive total rewards package
  • Flexible work environment: Why commute? Work remotely from your home; there's no daily travel requirement to the office!
  • Outstanding people: Collaborate with the industry's top minds.
  • Substantial training allowance: Hone your skills or learn new ones; participate in professional development days, attend conferences, become certified, whatever you like!
  • Amazing time off: Start with a minimum of 3 weeks' vacation, 7 sick days, and 2 professional development days!
  • Office allowance: Get a device of your choice and personalise your work environment!
  • Fun, fun, fun: Blog during work hours; take a day off and volunteer for your favorite charity.
HOOQ
  • Singapore
  • Salary: SGD 48k - 96k

Homegrown stories and Hollywood hits. At HOOQ we're telling millions of stories to billions of people across Singapore, the Philippines, Thailand, Indonesia, and India. Just like our content, our team comes from around the world; we're ambitious, driven, and unique, and we embrace difference. There are many paths people take to join us; what links us together is our love of stories.


HOOQ is backed by some of the biggest players in entertainment: we're a joint venture between Singtel, Sony, and Warner Brothers. We build for the customer first to deliver original, local, and international content on their phone, tablet, computer, and television, wherever they are.


It's an exciting time to be at HOOQ! We are looking for a hands-on Lead Data Scientist for the HOOQ Data Science team: someone who is a creative problem solver with a passion for delivering advanced data-driven solutions. Once you're here, you will make important, strategic decisions that shape the product for better retention and a better experience for our customers. More specifically, you will:




  • Dive into HOOQ’s internal and third party data (think Redshift, Python, Tableau, R) to make strategic recommendations (e.g., personalized user flows, segmented marketing audiences, more accurate recommendations, churn prediction, propensity models).

  • Serve as a mentor to other Data Scientists on the team and Analysts across the company by leading learning academies and serving as an available resource for all things related to analytics.

  • Develop machine learning/deep learning models for propensity modeling, user segmentation, text analytics, video analytics, churn prediction, and personalisation/recommendation (a sketch follows this list).

  • Tell stories that describe analytical results and insights in meetings of all sizes with diverse audiences.

  • Provide analytics-based thought leadership across a variety of technical and non-technical audiences to ensure that all levels of HOOQ teams make data-driven decisions.
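For a flavour of the churn-prediction work mentioned above, here is a minimal scikit-learn sketch on synthetic data; in practice the features would come from a warehouse such as Redshift, and the columns here are invented for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for per-user engagement features.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "watch_hours_30d": rng.gamma(2.0, 5.0, 5000),
    "days_since_last_play": rng.integers(0, 60, 5000),
    "titles_started_30d": rng.poisson(8, 5000),
})
# Synthetic label: lapsed, low-engagement users churn more often.
logit = 0.08 * df["days_since_last_play"] - 0.05 * df["watch_hours_30d"] - 1.0
df["churned"] = rng.random(5000) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="churned"), df["churned"], test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```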



Who you are


You will not shy away from complexity or uncertainty. You will develop a keen understanding of our mission, business models, and personas. We want you to use that intuition you've developed (both in business and real life) to find opportunities for growth and cultivate insights from our massive data sets. We're looking for a dynamic data scientist who has:




  • A degree in a quantitative field (e.g. science, engineering, economics, finance, statistics, or similar). A relevant Master's degree (or higher) is highly regarded and can be used in place of some work experience.

  • 4+ years of work experience involving quantitative data analysis and complex problem solving (preferably focused on consumer-facing internet products).

  • Complete command of SQL, and either Python or R, along with some experience with Tableau. Proficiency with similar BI and visualization tools is also transferable.

  • Extensive experience directly querying multi-terabyte-sized data sets including clickstream data, third party data and raw data ingested from non-standard platforms.

  • Strong written, verbal, and visual communication skills to concisely communicate in a way that provides context, offers insights, and minimizes misinterpretation.

  • The skills to work cross-functionally and push business partners to focus on realistic goals and projects.

  • Experience with distributed analytic processing technologies (think Hive, Pig, Presto, Spark)

  • Experience building and deploying analytic solutions as well as machine learning and/or optimization models in production.

  • Solid statistical knowledge and analytic thinking, ideally utilized in modeling and experimentation.

  • Exceptional interpersonal and communication skills coupled with strong business acumen.

  • Proven ability to partner with senior leaders and influence peer organizations.

  • High-energy self-starter with a passion for your work, tolerance for ambiguity in a fast-paced setting, and a positive attitude.

  • Passion for movies/TV would be a plus!

Esri
  • New Delhi, India

OVERVIEW



Are you passionate about applying AI and machine learning to solve some of the world’s biggest challenges? So are we! Esri is the world leader in geographic information systems (GIS) and developer of ArcGIS, the leading mapping and analytics software used in 75 percent of Fortune 500 companies. At the Esri R&D Center-New Delhi, we are applying cutting-edge deep learning techniques to revolutionize geospatial analysis and derive insight from imagery and location data.
 
Join our team of exceptional data scientists and software engineers to deliver a spatial data science platform, develop industry-leading AI models for satellite imagery, and build world-class Geo AI solutions.



RESPONSIBILITIES



  • Develop and train deep learning models for computer vision problems such as object detection, image classification, road detection, building footprint segmentation, and 3D point cloud segmentation

  • Develop processes and tools to monitor model performance and accuracy

  • Integrate ArcGIS with popular machine learning and deep learning libraries such as scikit-learn, TensorFlow/Keras, and PyTorch/FastAI

  • Design, test, release, and support AI capabilities in the ArcGIS platform to enhance overall product quality and applicability for supporting data science and deep learning workflows

  • Author and maintain geospatial data science samples using ArcGIS and machine learning/deep learning libraries like TensorFlow and PyTorch
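To give a concrete sense of the PyTorch work these bullets describe, here is a toy training loop for image classification; the tiny network, the five hypothetical land-cover classes, and the random tensors standing in for labeled image chips are all illustrative, not Esri's actual models.

```python
import torch
from torch import nn

# Toy classifier; real work would start from a pretrained backbone.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 5),  # hypothetical: 5 land-cover classes
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Random tensors stand in for a DataLoader over labeled image chips.
images = torch.randn(8, 3, 64, 64)
labels = torch.randint(0, 5, (8,))

for epoch in range(3):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.3f}")
```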


REQUIREMENTS





  • 3+ years of experience with Python

  • 3+ years of practical machine learning experience, some of which is within established technical organizations

  • Self-learner with coursework in and extensive knowledge of machine learning and deep learning

  • Experience with Python machine learning and deep learning libraries such as Scikit-learn, Pandas, PyTorch/FastAI, or TensorFlow/Keras

  • Understanding of machine learning as well as deep learning techniques and algorithms such as k-NN, Naive Bayes, SVM, Random Forests, CNNs, RNNs, LSTMs

  • Ability to design and implement deep learning models for object detection, semantic and instance segmentation, GANs

  • Experience in data visualization in Jupyter Notebooks using matplotlib and other libraries

  • Experience with hyperparameter tuning and training models to a high level of accuracy

  • Ability to perform data extraction, transformation, loading from multiple data sources and sinks

  • Bachelor's or Master's in computer science, engineering, or related disciplines from IITs or other top-tier engineering colleges




RECOMMENDED QUALIFICATIONS



  • Experience applying deep learning to satellite or medical imagery

  • Familiarity with ArcGIS suite of products and concepts of GIS, including working with ArcGIS API for Python

  • Knowledge of deep learning for natural language processing, probabilistic programming, and reinforcement learning

  • Experience with CUDA/GPU programming

  • Experience with distributed training of deep learning models and big data machine learning using Spark ML

  • Familiarity with one or more of the following: Hadoop HDFS, Spark, Accumulo, Presto, MongoDB, Elastic Search, Cassandra, HBase, R, Mahout, Pig, Hive, DC/OS, Kubernetes

  • Master's or PhD in mathematics, statistics, computer science, or related field, depending on position level

SecurityScorecard
  • Hlavní město Praha, Czechia

New York-headquartered cybersecurity firm SecurityScorecard opened its first office outside of the US in Prague in September 2018.


The new office is growing to become SecurityScorecard's international technology centre. We have 15 developers on the Prague team.


The company has >150 employees and has raised >USD 60mm to fuel its expansion (https://www.crunchbase.com/organization/security-scorecard).


SecurityScorecard is looking for a Lead Backend Engineer (C++) to run a team of 5-6 backend engineers.


You will report to Jasson Casey, CTO & SVP Engineering (https://www.linkedin.com/in/jassoncasey/), and Nick Matviko, Director of Engineering (https://www.linkedin.com/in/nickmatviko/).


What does success look like in this role?


Success in this role will mean leading a team to deliver quality products/features/applications, as well as establishing and growing a positive and productive engineering culture in Prague.


Requirements:



  • Lead, coach, and mentor a team of six backend software engineers, instilling engineering best practices and cultivating a culture that attracts and retains talent

  • Serve in a hands-on development capacity as needed, including reviewing code, participating in the full life-cycle development of product features, and coding proofs of concept

  • Develop an effective team by leading hiring and recruiting initiatives in Prague in partnership with global engineering leaders and the People team

  • Evolve the technical onboarding process for new team members in Prague in alliance with global engineering team

  • Own the technical roadmap and direction, leading projects in partnership with global engineering leaders and cross-functional team members

  • Evaluate and introduce new technologies, processes and policies when necessary

  • Partner with product team and other stakeholders on broader engineering and company initiatives


Your Skills/Qualifications:



  • 3+ years' experience leading an engineering team with a focus on both delivering high-quality products and motivating a team to do their best work aligned with the company vision

  • 5+ years in a software engineering role building high-quality products at scale

  • Fluency with C++ (modern versions a plus) and/or C. Experience with Python is also a big bonus.

  • Experience designing and building high performance, low latency distributed systems

  • Effective project manager, well versed in agile methodology, cross-functional communication (including with remote teams), and cost and time management

  • Motivated to tackle large-scale problems in a fast paced, startup environment

  • Exceptional communication skills with the ability to convey intricate systems and logic to both technical and non-technical audiences

  • Bachelor's Degree or higher in Computer Science or related field


Tools We Use:



  • Data definition, format and interfaces

    • Definitions - Protobuf V3

    • Normalize from - AVRO / JSON / XML / CSV

    • Normalize to - Protobuf / ORC (a normalization sketch follows this tools list)

    • Interfaces - REST API(s), gRPC and object store buckets



  • Cloud Services - Amazon Web Services

  • Databases: Postgres, Presto

  • Cache: Redis, Varnish

  • Languages - Python / C++14 / Scala / Go, JavaScript, React.js, Node.js

  • Job Orchestration - HTCondor / Apache Airflow

  • Analytics - Spark / Bluepipe (C++)

  • Storage - Gluster / NFS / S3 ( AWS ) / EFS ( AWS ) / Postgres

  • Computation - Docker Containers / VMs / Metal / EMR
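A hedged sketch of the normalize-to-Protobuf step referenced in the tools list: generated Protobuf classes require a compiled .proto schema, so a dataclass stands in for the normalized message here, and the field names are invented for illustration.

```python
import csv
import io
import json
from dataclasses import dataclass

@dataclass
class Finding:
    """Stand-in for a Protobuf V3 message; a real pipeline would use a
    class generated by protoc from a .proto definition."""
    ip: str
    port: int
    issue_type: str

def from_json(blob: str) -> Finding:
    d = json.loads(blob)
    return Finding(ip=d["ip"], port=int(d["port"]), issue_type=d["issue_type"])

def from_csv(blob: str) -> list:
    rows = csv.DictReader(io.StringIO(blob))
    return [Finding(r["ip"], int(r["port"]), r["issue_type"]) for r in rows]

print(from_json('{"ip": "198.51.100.7", "port": 443, "issue_type": "tls_weak_cipher"}'))
print(from_csv("ip,port,issue_type\n203.0.113.9,22,ssh_open"))
```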

Comcast
  • Philadelphia, PA

Comcast brings together the best in media and technology. We drive innovation to create the world's best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast.

Job Summary:

This is a hands-on job for a "big data" engineer. The job requires both serious technical chops and effective communication skills. You will provide technical leadership in an environment that moves fast, collaborates between teams and team members, and that expects you to own your projects.

The Digital Home Client Analytics team works closely with our platform engineering team, AI teams, and client development teams to manage all stages of the data pipeline. We make use of technologies like Spark, Hive, Presto, and Snowflake in our ETL pipelines, which expose the data to our analyst and machine learning applications. We design and develop tools that automatically detect data anomalies and enable analysts and engineers to easily build and manage their own ETLs. 2019 will be the year of data for the Digital Home, and we are looking for someone to help design and implement an A/B testing system for the xFi and xHome product lines. Our products are syndicated to other MSOs, so we have to design these systems to be multi-tenant from the beginning.
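One common way to make an A/B testing system both deterministic and multi-tenant is hash-based bucketing, sketched below in Python; the tenant, experiment, and user identifiers are hypothetical, and this illustrates the general technique rather than Comcast's design.

```python
import hashlib

def assign_variant(tenant_id: str, experiment: str, user_id: str,
                   variants=("control", "treatment"), weights=(0.5, 0.5)) -> str:
    """Deterministically map a user to a variant.

    Salting the hash with tenant and experiment keeps assignments stable for
    a user within one experiment, yet independent across tenants (MSOs)
    and across experiments.
    """
    key = f"{tenant_id}:{experiment}:{user_id}".encode()
    bucket = int(hashlib.md5(key).hexdigest(), 16) % 10_000 / 10_000.0
    cumulative = 0.0
    for variant, weight in zip(variants, weights):
        cumulative += weight
        if bucket < cumulative:
            return variant
    return variants[-1]

print(assign_variant("mso-a", "xfi-onboarding-v2", "user-42"))
```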

We are looking for someone with a strong sense of responsibility: taking pride in your work, leveraging others, owning the problem. And you love, and we mean love, data.

Qualifications

- Bachelor of Science in Computer Science, Engineering or equivalent

- 8+ years of large scale, full life cycle development experience

- In-depth technical experience with big data technologies such as Hadoop (HDFS, Hive, Map/Reduce), Spark, Kafka/Samza

- General software engineering and programming experience; most of our larger projects are in Java, but we also have scripts in Python and Bash.

- ETL and processing from disparate data sets using appropriate technologies, including but not limited to Hive, Pig, MapReduce, HBase, Spark, Storm, and Kafka.

- Expert in Hive SQL and ANSI SQL, with great hands-on data analysis skills using SQL.

- Linux experience

- Regular, consistent and punctual attendance.
- Other duties and responsibilities as assigned.

Nice To Have

- Ansible

- Splunk

- Localytics

- Experience with HBase, Kafka, and Spark.

- Ops / DevOps experience is a plus

Job Specification:
- Bachelor's Degree or Equivalent
- Computer Science, Engineering
- Generally requires 11+ years related experience

Comcast is an EOE/Veterans/Disabled/LGBT employer

Accenture
  • Detroit, MI
Position: Full time
Travel: 100% travel
Recruiter: tim.bennett@accenture.com
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation. Here you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions using machine learning, AI, big data and analytics, cloud, mobility, robotics, and the industrial internet of things. Your work will redefine the way entire industries work in every corner of the globe.
You'll be part of a team with incredible end-to-end digital transformation capabilities that shares your passion for digital technology and takes pride in making a tangible difference. If you want to contribute to an incredible array of the biggest and most complex projects in the digital space, consider a career with Accenture Digital.
Basic Qualifications
    • Bachelor's degree in Computer Science, Engineering, or Technical Science, or 3 years of IT/programming experience.
    • Minimum 2 years of experience designing and implementing large-scale data pipelines for data curation and analysis, operating in production environments, using Spark, PySpark, and Spark SQL with Java, Scala, or Python, on premises or in the cloud (AWS, Google, or Azure).
    • Minimum 1 year of experience designing and building performant data models at scale using Hadoop, NoSQL, graph, or cloud-native data stores and services.
    • Minimum 1 year of experience designing and building secure big data ETL pipelines, using Talend or Informatica Big Data Editions, for data curation and analysis in large-scale, production-deployed solutions.
    • Minimum 6 months of experience designing and building data models to support large-scale BI, analytics, and AI solutions for big data.
Preferred Skills
    • Minimum 6 months of implementation experience with Databricks.
    • Experience in machine learning using Python (scikit-learn), Spark ML, H2O, and/or SageMaker.
    • Knowledge of deep learning (CNNs, RNNs, ANNs) using TensorFlow.
    • Knowledge of automated machine learning tools (H2O, DataRobot, Google AutoML).
    • Minimum 2 years of experience designing and implementing large-scale data warehousing and analytics solutions working with RDBMSs (e.g., Oracle, Teradata, DB2, Netezza, SAS) and an understanding of the challenges and limitations of these traditional solutions.
    • Minimum 1 year of experience implementing SQL-on-Hadoop solutions using tools like Presto, AtScale, Jethro, and others.
    • Minimum 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for big data platforms, on premises or on AWS, Google, or Azure cloud.
    • Minimum 1 year of experience re-architecting and rationalizing traditional data warehouses with Hadoop, Spark, or NoSQL technologies, on premises or in transition to AWS or Google clouds.
    • Experience implementing data preparation technologies such as Paxata, Trifacta, or Tamr to enable self-service solutions.
    • Minimum 1 year of experience building business data catalogs or data marketplaces on top of a hybrid data platform containing big data technologies (e.g., Alation, Informatica, or custom portals).
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H-1B visa, F-1 visa (OPT), TN visa, or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.
Accenture
  • Atlanta, GA
Position: Full time
Travel: 100% travel
Recruiter: tim.bennett@accenture.com
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation. Where you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions using machine-learning, AI, big data and analytics, cloud, mobility, robotics and the industrial internet of things. Your work will redefine the way entire industries work in every corner of the globe.
You'll be part of a team with incredible end-to-end digital transformation capabilities that shares your passion for digital technology and takes pride in making a tangible difference. If you want to contribute to an incredible array of the biggest and most complex projects in the digital space, consider a career with Accenture Digital.
Basic Qualifications
    • Bachelor's degree in Computer Science, Engineering, Technical Science or 3 years of IT/Programming experience.
    • Minimum of 2 years of expertise in designing and implementing large-scale data pipelines for data curation and analysis, operating in production environments, using Spark, PySpark, and Spark SQL with Java, Scala, or Python, on premise or in the cloud (AWS, Google, or Azure); a minimal pipeline sketch follows this list.
    • Minimum of 1 year of designing and building performant data models at scale for Hadoop, NoSQL, graph, or cloud-native data stores and services.
    • Minimum of 1 year of designing and building secured Big Data ETL pipelines, using Talend or Informatica Big Data Edition, for data curation and analysis of large-scale, production-deployed solutions.
    • Minimum of 6 months of experience designing and building data models to support large-scale BI, analytics, and AI solutions for Big Data.
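As a rough illustration of the pipeline work described in the first qualification, the following is a minimal PySpark batch-curation sketch: read raw CSV, clean and de-duplicate, write partitioned Parquet. The paths and column names are invented.

```python
# Hypothetical sketch of a batch curation pipeline in PySpark:
# read raw CSV, standardize/clean, and write partitioned Parquet.
# All paths and column names are invented for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("curation-pipeline").getOrCreate()

raw = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("s3://example-bucket/raw/orders/")      # assumed input location
)

curated = (
    raw.dropDuplicates(["order_id"])             # de-duplicate on business key
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())      # drop unparseable rows
       .withColumn("order_date", F.to_date("order_ts"))
)

(curated.write
    .mode("overwrite")
    .partitionBy("order_date")                   # partition for downstream scans
    .parquet("s3://example-bucket/curated/orders/"))
```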
Preferred Skills
    • Minimum of 6 months of expertise in implementation with Databricks.
    • Experience in machine learning using Python (scikit-learn), Spark ML, H2O, and/or SageMaker (see the sketch after this list).
    • Knowledge of deep learning (CNN, RNN, ANN) using TensorFlow.
    • Knowledge of AutoML tools (H2O, DataRobot, Google AutoML).
    • Minimum of 2 years designing and implementing large-scale data warehousing and analytics solutions working with RDBMSs (e.g. Oracle, Teradata, DB2, Netezza, SAS) and understanding of the challenges and limitations of these traditional solutions.
    • Minimum of 1 year of experience implementing SQL-on-Hadoop solutions using tools like Presto, AtScale, Jethro, and others.
    • Minimum of 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for big data platforms on premise or on AWS, Google, or Azure cloud.
    • Minimum of 1 year of re-architecting and rationalizing traditional data warehouses with Hadoop, Spark, or NoSQL technologies on premise, or transitioning them to AWS or Google clouds.
    • Experience implementing data preparation technologies such as Paxata, Trifacta, or Tamr for enabling self-service solutions.
    • Minimum of 1 year of building Business Data Catalogs or Data Marketplaces on top of a hybrid data platform containing Big Data technologies (e.g. Alation, Informatica, or custom portals).
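A minimal sketch of the scikit-learn workflow named in the second preferred skill, using a bundled stand-in dataset; any production feature set would differ.

```python
# Hypothetical sketch of the scikit-learn workflow named above:
# train and evaluate a simple classifier on tabular features.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)      # stand-in dataset
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"hold-out AUC: {auc:.3f}")
```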
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H-1B visa, F-1 visa (OPT), TN visa or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.
JoCo
  • Houston, TX

What is the position:


The Sr. Data Analyst will be responsible for managing master data sets, troubleshooting data issues, and developing reports.


What will you do:


  • Develop, implement, and maintain analytic systems
  • Develop and maintain protocols for handling, processing, and cleaning data
  • Evaluate complicated problems and build simple frameworks
  • Identify trends and opportunities for growth through analysis of complex data sets
  • Work with management and users to gather requirements and provide status updates
  • Work with business owners to understand their analytical needs, including identifying critical metrics and KPIs, and deliver actionable insights to relevant decision-makers
  • Evaluate organizational methods and provide source-to-target mappings and information-model specification documents for data sets
  • Evaluate internal systems for efficiency, inaccuracies, and problems
  • Create best-practice reports based on data mining, analysis, and visualization
  • Use statistical methods to analyze data and create useful reports


What are the requirements:


  • Bachelor's Degree in CS, Statistics, Information Systems, or a related field
  • 3+ years of experience with data mining
  • Experience in the Oil & Gas industry preferred
  • Strong experience working with data discovery, analytics, and BI software tools (e.g. Tableau, Qlik, Power BI)
  • Experience with technical writing in relevant areas, including queries and reports
  • Experience with advanced analytics tools and object-oriented or scripting languages such as R, Python, Java, or C++
  • Experience working with SQL-on-Hadoop tools and technologies (e.g. Hive, Impala, Presto, Hortonworks Data Flow (HDF), Dremio, Informatica, Talend); a minimal example follows this list
  • Experience with database programming languages including SQL, PL/SQL, etc.
  • Knowledge of NoSQL/Hadoop-oriented databases (MongoDB, Cassandra, etc.)
  • Excellent communication skills
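
A minimal sketch of the SQL-on-Hadoop analysis mentioned above: pulling a KPI into pandas via a Hive connection. It assumes a reachable HiveServer2 and uses invented, Oil & Gas flavored table and column names.

```python
# Hypothetical sketch: pull a KPI from a SQL-on-Hadoop engine into pandas
# for reporting. Assumes a reachable HiveServer2; names are invented.
import pandas as pd
from pyhive import hive   # one of several DB-API drivers for Hive

conn = hive.Connection(host="hive.example.com", port=10000,
                       database="production")

kpi = pd.read_sql(
    """
    SELECT well_id,
           AVG(daily_output_bbl) AS avg_daily_output
    FROM   well_production
    WHERE  report_month = '2019-06'
    GROUP  BY well_id
    """,
    conn,
)

# Top 10 producers for the monthly report.
print(kpi.sort_values("avg_daily_output", ascending=False).head(10))
```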


You would be really happy working here if:


  • Roadblocks don't intimidate you. You understand how to successfully evaluate problems and develop appropriate solutions.
  • You can be counted on in crucial times, possessing great focus while completing projects successfully and efficiently.
WeQ Global Tech GmbH
  • Berlin, Germany

If you have a deep understanding of building microservices in PHP and like seeing your services being used by internal and external partners, you will fit right in.



What you will be responsible for in this role



  • From design to operation of autoscaling microservices: you develop our backend services end to end, providing REST APIs, queue consumers/producers, and data managers (a sketch of the queue pattern follows this list).

  • Improve and evolve our existing services and architecture to stay on the cutting edge of technology.

  • Work in an agile environment and contribute your ideas to continuously improve our products, processes, and tools like our internal management system, our publisher and advertiser interfaces, reporting and more...

  • Team up with talented developers, learn as much as you can, apply your expertise to build great products and have fun while doing so.
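
The queue consumer/producer pattern from the first responsibility, sketched in Python with kafka-python purely for illustration (the team's services are PHP); the broker address and topic name are assumptions.

```python
# Hypothetical sketch of the queue producer/consumer pattern in Python
# (the team's services are PHP; this only illustrates the pattern).
# Broker address and topic name are invented.
import json
from kafka import KafkaConsumer, KafkaProducer

# Producer: emit an event for downstream services.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("ad-events", {"campaign_id": 42, "action": "click"})
producer.flush()

# Consumer: process events from the same topic.
consumer = KafkaConsumer(
    "ad-events",
    bootstrap_servers="localhost:9092",
    group_id="reporting",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for message in consumer:
    print(message.value)   # e.g. {'campaign_id': 42, 'action': 'click'}
    break                  # stop after one message in this sketch
```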


To give your best in this role, you should already possess or have accomplished the following



  • 4+ years of relevant work experience

  • Excellent skills in PHP and great knowledge of at least one major framework (Symfony, Phalcon, or Laravel)

  • Expert knowledge of SQL and, ideally, NoSQL databases

  • A DevOps mindset, with experience in Docker, CI/CD, and cloud service providers

  • Experience with queues and in-memory storage is a plus

  • A deep understanding of software engineering best practices

  • A passion for continuous learning and self-improvement



Technologies we are using


PHP, Symfony, Laravel, Swagger, Kafka, Hadoop, Presto, Druid, Spark, Cassandra, MongoDB, Aerospike, Redis, Docker, AWS, Hybrid Cloud, CI/CD, Ansible, Packer, Consul, Terraform, Grafana, Scalyr

WeQ Global Tech GmbH
  • Berlin, Germany

You will own deployment and scaling pipelines that operate on fleets of EC2 instances and keep our distributed services highly available and easy to scale. Knowing how to develop infrastructure as code using Terraform, Packer, and sometimes Elastic Beanstalk is essential. Monitoring and maintaining both the infrastructure and the services, by providing and combining useful metrics in Grafana dashboards, is needed for our uptime guarantees.
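
A small illustration of working with such a fleet: listing the running EC2 instances behind a service tag with boto3. The tag key and value are invented, and AWS credentials are assumed to be configured in the environment.

```python
# Hypothetical sketch: enumerate the EC2 instances behind a service tag,
# the kind of fleet this role operates. Tag key/value are invented;
# assumes AWS credentials are configured in the environment.
import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Service", "Values": ["ad-delivery"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        print(instance["InstanceId"],
              instance["InstanceType"],
              instance["Placement"]["AvailabilityZone"])
```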



What we expect from you in this role



  • From design to operation, you develop our highly available microservices handling billions of events and TBs of data per day.

  • Develop features for our low-latency distributed services using Java, Vert.x, and Spring Boot.

  • Work together with our Machine Learning department to integrate recommender algorithms for better user targeting.

  • Manage and extend deployment and provisioning pipelines using Terraform, Packer, Consul, and Elastic Beanstalk.

  • Improve and evolve our existing services and architecture to stay on the cutting edge of technology.

  • Work in an agile environment and contribute your ideas to continuously improve our products, processes, and tools.

  • Team up with talented developers, learn as much as you can, apply your expertise to build great products and have fun while doing so.


To give your best in this role, you should already possess or have accomplished the following



  • 4 years of relevant work experience

  • Expert knowledge of handling and scaling high-performance networks and big data clusters

  • Excellent software development skills, preferably in Java

  • Strong know-how in cloud services and infrastructure automation

  • Good knowledge of SQL/NoSQL databases, queues, and in-memory storage

  • A deep understanding of software engineering best practices

  • A passion for continuous learning and self-improvement



Technologies we are using


Java, Vertx, Spring/Spring Boot, Maven, Kafka, Hadoop, Presto, Druid, Spark, Cassandra, MongoDB, Aerospike, Redis, Docker, AWS, Hybrid Cloud, CI/CD, Ansible, Packer, Consul, Terraform, Grafana, Scalyr

Accenture
  • Atlanta, GA
Position: Full time
Travel: 100% travel
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation. Where you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions using machine-learning, AI, big data and analytics, cloud, mobility, robotics and the industrial internet of things. Your work will redefine the way entire industries work in every corner of the globe.
You'll be part of a team with incredible end-to-end digital transformation capabilities that shares your passion for digital technology and takes pride in making a tangible difference. If you want to contribute to an incredible array of the biggest and most complex projects in the digital space, consider a career with Accenture Digital.
Basic Qualifications
    • Bachelor's degree in Computer Science, Engineering, Technical Science or 3 years of IT/Programming experience.
    • Minimum of 2 years of designing and implementing large-scale data solutions operating in production environments using the Spark, Hadoop, and NoSQL ecosystem, on premise or in the cloud (AWS, Google, or Azure), using many of the relevant technologies such as NiFi, Spark, Kafka, HBase, Hive, Cassandra, Impala, and graph databases (see the streaming sketch after this list).
    • Minimum of 1 year of architecting data and building performant data models at scale for the Hadoop/NoSQL/graph ecosystem of data stores to support different business consumption patterns (using technologies such as Hive, Impala, Cassandra, HBase, Neo4j, DataStax Graph).
    • Minimum of 1 year of Spark data processing using Java, Python, or Scala, for data curation and analysis of large-scale, production-deployed solutions.
    • Minimum of 1 year of data integration and curation in a Big Data environment, using Talend Big Data Integration or Informatica BDE, for data curation and analysis of large-scale, production-deployed solutions.
    • Minimum of 2 years designing and implementing relational or data warehousing models working with RDBMSs (e.g. Oracle, Teradata, DB2, Netezza) and understanding of the challenges and limitations of these traditional solutions.
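A minimal sketch of the Spark-plus-Kafka processing named in the first qualification: a Structured Streaming job that reads a Kafka topic and keeps a per-minute event count. The broker address and topic are invented, and the spark-sql-kafka package is assumed to be on the classpath.

```python
# Hypothetical sketch of a Spark Structured Streaming job reading Kafka.
# Requires the spark-sql-kafka package; broker and topic are invented.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "clickstream")
    .load()
    .selectExpr("CAST(value AS STRING) AS json_payload",
                "timestamp")
)

# Count events per minute as a simple running aggregate.
counts = (
    events.groupBy(F.window("timestamp", "1 minute"))
          .count()
)

query = (
    counts.writeStream
    .outputMode("complete")
    .format("console")       # stand-in sink for illustration
    .start()
)
query.awaitTermination()
```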
Preferred Skills
    • Minimum of 1 year of experience implementing SQL-on-Hadoop solutions using tools like Presto, AtScale, Jethro, and others.
    • Minimum of 1 year of experience integrating large-scale BI/visualization solutions (e.g. Tableau, Qlik, Spotfire) with Big Data platforms.
    • Minimum of 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for modern data platforms that use Hadoop and NoSQL, on premise or on AWS, Google, or Azure cloud.
    • Minimum of 1 year of experience securing Hadoop/NoSQL-based modern data platforms, on premise or on AWS, Google, or Azure cloud.
    • Minimum of 1 year of re-architecting and rationalizing traditional data warehouses with Hadoop, Spark, or NoSQL technologies on premise, or transitioning them to AWS or Google clouds.
    • Experience implementing data preparation technologies such as Paxata, Trifacta, or Tamr for enabling self-service solutions.
    • Experience integrating enterprise data management toolsets (e.g. Informatica, Talend) with Big Data platforms.
    • Minimum of 1 year of building Business Data Catalogs or Data Marketplaces on top of a hybrid data platform containing Big Data technologies (e.g. Alation, Informatica, or custom portals).
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H-1B visa, F-1 visa (OPT), TN visa or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.
Accenture
  • Minneapolis, MN
Position: Full time
Travel: 100% travel
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation. Where you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions using machine-learning, AI, big data and analytics, cloud, mobility, robotics and the industrial internet of things. Your work will redefine the way entire industries work in every corner of the globe.
You'll be part of a team with incredible end-to-end digital transformation capabilities that shares your passion for digital technology and takes pride in making a tangible difference. If you want to contribute to an incredible array of the biggest and most complex projects in the digital space, consider a career with Accenture Digital.
Basic Qualifications
    • Bachelor's degree in Computer Science, Engineering, Technical Science or 3 years of IT/Programming experience.
    • Minimum of 2 years of designing and implementing large-scale data solutions operating in production environments using the Spark, Hadoop, and NoSQL ecosystem, on premise or in the cloud (AWS, Google, or Azure), using many of the relevant technologies such as NiFi, Spark, Kafka, HBase, Hive, Cassandra, Impala, and graph databases.
    • Minimum of 1 year of architecting data and building performant data models at scale for the Hadoop/NoSQL/graph ecosystem of data stores to support different business consumption patterns (using technologies such as Hive, Impala, Cassandra, HBase, Neo4j, DataStax Graph).
    • Minimum of 1 year of Spark data processing using Java, Python, or Scala, for data curation and analysis of large-scale, production-deployed solutions.
    • Minimum of 1 year of data integration and curation in a Big Data environment, using Talend Big Data Integration or Informatica BDE, for data curation and analysis of large-scale, production-deployed solutions.
    • Minimum of 2 years designing and implementing relational or data warehousing models working with RDBMSs (e.g. Oracle, Teradata, DB2, Netezza) and understanding of the challenges and limitations of these traditional solutions.
Preferred Skills
    • Minimum of 1 year of experience implementing SQL-on-Hadoop solutions using tools like Presto, AtScale, Jethro, and others.
    • Minimum of 1 year of experience integrating large-scale BI/visualization solutions (e.g. Tableau, Qlik, Spotfire) with Big Data platforms.
    • Minimum of 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for modern data platforms that use Hadoop and NoSQL, on premise or on AWS, Google, or Azure cloud.
    • Minimum of 1 year of experience securing Hadoop/NoSQL-based modern data platforms, on premise or on AWS, Google, or Azure cloud.
    • Minimum of 1 year of re-architecting and rationalizing traditional data warehouses with Hadoop, Spark, or NoSQL technologies on premise, or transitioning them to AWS or Google clouds.
    • Experience implementing data preparation technologies such as Paxata, Trifacta, or Tamr for enabling self-service solutions.
    • Experience integrating enterprise data management toolsets (e.g. Informatica, Talend) with Big Data platforms.
    • Minimum of 1 year of building Business Data Catalogs or Data Marketplaces on top of a hybrid data platform containing Big Data technologies (e.g. Alation, Informatica, or custom portals).
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H-1B visa, F-1 visa (OPT), TN visa or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.