OnlyDataJobs.com

ING
  • Amsterdam, Netherlands
ING is looking for experienced hires to help build on our Global Analytics ambition


About ING


Think Forward! Our purpose is to empower people to stay a step ahead in life and in business. We are a strong, industry-recognized brand with positive recognition from customers in many countries, a strong financial position, an omni-channel distribution strategy and an international network. If you want to work at a place where we believe you can make a difference by using machine learning to build data-driven products and solve the most pressing business challenges, please read on.


We are incredibly excited about data analytics and its great potential for progress and innovation. We believe that analytics is a key differentiator in bringing “anytime, anywhere, personalized” services to our customers. We wish to improve our operational processes and create new and innovative data-driven products that go beyond traditional banking, such as platform models. Achieving this vision will require us to build and expand our analytics effort and organize ourselves around focused value buckets, with strong coordination across data, technology, customer journey and UX, as well as external partnerships.



Global Analytics


Global Analytics is a new unit responsible for realizing this vision for ING, differentiating ING as a leading data-driven organization within the banking sector and beyond. The team consists of a number of Global Analytics Centers of Excellence built around the bank’s key capabilities (such as Pricing, Risk Management, Financial Crime and Customer Intelligence), as well as strong coordination areas around data management, technology, customer journey, UX and external partnerships.



Financial Crime & RegTech CoE (Center of Excellence)


Being a compliant and safe bank is a non-negotiable precondition of everything we do.


The purpose of the Financial Crime & RegTech Center of Excellence is to define the strategy and drive the development, implementation and adoption of analytics capabilities in the financial crime domain, making ING a safer and more compliant bank.



Role Profile


As part of the center of excellence you will help create innovative, scalable, data-driven solutions in the financial crime space. You will proactively work with teams to implement these solutions across the organization.


You will work collaboratively with an extended group of stakeholders, including but not limited to Operations, Compliance, Engineering, Legal and Corporate Audit.


A background in anti-money laundering or fraud detection is therefore a plus.



About you



  • Your data science knowledge enables you to build analytics solutions that mitigate financial crime-related risks.

  • You are willing and able to learn and to improve your technical as well as your interpersonal skills.

  • You are a creative and curious Data Scientist who looks forward to working on a wide variety of Financial Economic Crime-related problems.

  • You have a thorough understanding of machine learning algorithms and tooling, and you are able to pass your knowledge on to others.

  • You are proficient in coding and able to deliver production-ready code.

  • You have extensive experience with transforming data into added value for your stakeholders. You are able to see things from a different perspective and to make original solutions work in practice; when suitable, you propose such endeavors to stakeholders.

  • You are able to see where ING can take further steps towards becoming a truly data-driven bank. You’re always thinking one step ahead, for example when advising on the best way of implementation.

  • Your communication skills enable you to work together with many different parties throughout our organization.

  • You have extensive experience with stakeholder management from within data science projects.

  • Your enthusiasm is visible and you are good at mobilizing people for our data-driven purpose.




  • You like working in cross-functional teams to realize an ambitious goal.

  • You are not shy about asking other Data Scientists in the team for help, and you are happy to help them out by sharing your knowledge and capabilities. You are able and willing to guide junior Data Scientists and interns in their work, and you have experience doing so in previous roles.

  • You are a team player who strikes an effective balance between independence and acting in the interest of the team.

  • You are perseverant and do not give up when a problem is hard; you know how to deal with setbacks.

  • Your enthusiasm is contagious and inspires others to act; your actions set an example for others.


Candidate Profile



  • MSc or PhD with excellent academic results in the field of Computer Science, Mathematics, Engineering, Econometrics or similar.

  • At least 3–4 years of work experience in a similar role and/or environment.

  • Machine Learning: Classification, Regression, Clustering, Unsupervised methods, Text Mining. You have an excellent understanding of Random Forests, Gradient Boosting, Neural Networks, Logistic Regression, SVM, KNN, K-Means, etc. (a small illustration follows this list). Knowledge of parametric and non-parametric statistics is a plus.

  • Programming Languages: Python and R. Scala is a plus.

  • Tools: Spark, Hadoop.

  • Database handling: SQL, Hive. Familiarity with Oracle, Netezza, HBase, Cassandra or graph databases is a plus.

  • Visualisation tools: D3.js, Shiny, Angular.
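
To make the expected toolkit concrete, here is a minimal sketch of the kind of supervised model this profile describes, using scikit-learn and Gradient Boosting. The file name, label and features are hypothetical placeholders, not ING data:

    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Hypothetical dataset: one row per transaction, a binary label
    # "is_suspicious" and pre-engineered numeric features.
    df = pd.read_csv("transactions.csv")
    X = df.drop(columns=["is_suspicious"])
    y = df["is_suspicious"]

    # A stratified split preserves the (typically tiny) positive rate.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=42)

    model = GradientBoostingClassifier().fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"hold-out AUC: {auc:.3f}")

In real financial-crime work, class imbalance, feature engineering and model explainability would dominate; this sketch only shows the basic fit-and-evaluate loop.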



Do you recognize yourself in our description? Then please apply and join us to change the future of banking!

Acxiom
  • Austin, TX
As a Senior Hadoop Administrator, you will assist leadership on projects related to Big Data technologies and provide software development support for client research projects. You will analyze the latest Big Data analytic technologies and their innovative applications in both business intelligence analysis and new service offerings, and bring these insights and best practices to Acxiom's Big Data projects. You are able to benchmark systems, analyze system bottlenecks and propose solutions to eliminate them. You will develop a highly scalable and extensible Big Data platform that enables collection, storage, modeling, and analysis of massive data sets from numerous channels. You are also a self-starter, able to continuously evaluate new technologies, innovate and deliver solutions for business-critical applications.


What you will do:


  • Be responsible for implementation and ongoing administration of the Hadoop infrastructure
  • Provide technical leadership and collaborate with the engineering organization; develop key deliverables for the Data Platform Strategy: scalability, optimization, operations, availability and roadmap
  • Lead the platform architecture and drive it to the next level of effectiveness to support current and future requirements
  • Perform cluster maintenance, including creation and removal of nodes, using tools like Cloudera Manager Enterprise
  • Performance-tune Hadoop clusters and Hadoop MapReduce routines
  • Monitor Hadoop cluster job performance and perform capacity planning (a small monitoring sketch follows this list)
  • Help optimize and integrate new infrastructure via continuous integration methodologies (DevOps, Chef)
  • Lead the review of Hadoop log files with the help of log management technologies (ELK)
  • Provide top-level technical help desk support for application developers
  • Diligently team with the infrastructure, network, database, application and business intelligence teams to guarantee high data quality, availability and security
  • Collaborate with application teams to perform Hadoop updates, patches and version upgrades when required
  • Mentor Hadoop engineers and administrators
  • Work with vendor support teams on support tasks
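
As a concrete flavor of the monitoring duties above, here is a hedged sketch that polls the YARN ResourceManager REST API for headline capacity figures; the hostname is a placeholder, while the /ws/v1/cluster/metrics endpoint and its fields are part of the standard YARN API:

    import requests

    RM_URL = "http://resourcemanager.example.com:8088"  # placeholder host

    resp = requests.get(f"{RM_URL}/ws/v1/cluster/metrics", timeout=10)
    metrics = resp.json()["clusterMetrics"]

    # Headline numbers an administrator watches during capacity planning.
    used_pct = 100.0 * metrics["allocatedMB"] / metrics["totalMB"]
    print(f"apps running:    {metrics['appsRunning']}")
    print(f"unhealthy nodes: {metrics['unhealthyNodes']}")
    print(f"memory in use:   {used_pct:.1f}%")

A production version would feed these numbers into an alerting tool such as Nagios or Ganglia rather than printing them.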


Do you have?


  • Bachelor's degree in related field of study, or equivalent experience
  • 6+ years of Big Data Administration Experience
  • Extensive knowledge of and hands-on experience with Hadoop-based data manipulation/storage technologies like HDFS, MapReduce, YARN, Spark/Kafka, HBase, Hive, Pig, Impala, R and Sentry/Ranger/Knox
  • Experience in capacity planning, cluster designing and deployment, troubleshooting and performance tuning
  • Experience supporting Data Science teams and Analytics teams on complex code deployment, debugging and performance optimization problems
  • Great operational expertise, including excellent troubleshooting skills and an understanding of system capacity, bottlenecks and core resource utilization (CPU, OS, storage and network)
  • Experience in Hadoop cluster migrations or upgrades
  • Strong scripting skills in Perl, Python, shell, and/or Ruby
  • Linux/SAN administration skills and RDBMS/ETL knowledge
  • Good experience with Cloudera, Hortonworks, and/or MapR distributions, along with monitoring/alerting tools (Nagios, Ganglia, Zenoss, Cloudera Manager)
  • Strong problem solving and critical thinking skills
  • Excellent verbal and written communication skills


What will set you apart:


  • Solid understanding and hands-on experience of Big Data on private/public cloud technologies (AWS/GCP/Azure)
  • DevOps experience (Chef, Puppet and Ansible)
  • Strong knowledge of Java/J2EE and other web technologies

 
TalentBurst, an Inc 5000 company
  • Austin, TX

Position: Hadoop Administrator

Location: Austin, TX

Duration: 12+ months

Interview: Skype

Job Description:
This person will be responsible for working as part of 24x7 shifts (US hours) to provide Hadoop platform support and perform administrative tasks on production Hadoop clusters.

Must have skills:
3+ years of hands-on administration experience with large distributed Hadoop systems on Linux

Technical Knowledge on YARN, MapReduce, HDFS, HBase, Zookeeper, Pig and Hive

Hands-on Experience as a Linux Sys Admin

Nice to have skills:
Knowledge on Spark and Kafka is a plus / Hadoop Certification is preferred

Roles and responsibilities:
Hadoop cluster set up, performance fine-tuning, monitoring and administration

Skill Requirements:
Minimum 3 years of hands-on experience with large distributed Hadoop systems on Linux.
Strong technical knowledge of the Hadoop ecosystem: YARN, MapReduce, HDFS, HBase, Zookeeper, Pig and Hive.
Hands-on experience in Hadoop cluster setup, performance fine-tuning, monitoring and administration.
Hands-on experience as a Linux sysadmin.
Knowledge of Spark and Kafka is a plus.
Hadoop certification is preferred.

Impetus
  • Phoenix, AZ

      Multiple Positions | Multiple Locations: Phoenix, AZ / Tampa, FL

      Employment Type: Full time || Contract


      As a Big Data Engineer, you will have the opportunity to make a significant impact, both to our organization and those of our Fortune 500 clients. You will work directly with clients at the intersection of business and technology. You will leverage your experience with Hadoop and software engineering to help our clients use insights gleaned from their data to drive value.


      You will also be given substantial opportunities to develop and expand your technical skillset with emerging Big Data technologies so you can continually innovate, learn, and hit the gas pedal on your career.



Required:
  • 4+ years of IT experience
  • Very good experience with Hadoop, Hive and Spark batch processing (streaming experience is good to have)
  • Experience with one NoSQL store (HBase or Cassandra) is good to have
  • Experience with Java/J2EE & Web Services, Scala
  • Writing utilities/programs to enhance product capabilities and fulfill specific customer requirements
  • Learning new technologies/solutions to solve customer problems
  • Providing feedback/learnings to the product team
Aspirent
  • Atlanta, GA

MUST BE A US CITIZEN


Responsibilities

·       Help to create and deliver project proposals and guide our strategic thinking on offering analytics consulting services to our clients

·       Consult with clients to gain an understanding of the current state of the business area(s), their immediate and long term needs, any KPI goals, and to determine what success looks like

·       Consult heavily with business users and stakeholders to 1) identify, capture and leverage appropriate data sources across areas of the business, 2) ensure that analytical solutions are tailored to business needs and will support or result in actionable customer strategies, and 3) determine delivery & implementation options

·       Develop & help implement strategic analytical and data mining solutions to understand key business behaviors such as customer acquisition, product up-sell, customer retention, lifetime value, channel preferences, customer satisfaction, and loyalty drivers, etc.

·       Lead analytics and strategy engagements; Provide project-specific guidance to our team members in performing analyses and delivering strategic recommendations; Create and maintain project plans, project schedules, and other project documentation

·       Provide statistical methodology and project management support for commercial deliverables as well as custom studies

Qualifications:  

·       5+ years of experience in any of the following areas:

·       Statistical/Data Model development leveraging SAS, Python, and R.

·       Data Mining/Financial Analysis

·       Programming strength in a variety of languages: SQL, C/C++, Java, Python, Perl

·       Optional programming strength in the following Hadoop tools: MapReduce, Pig, Hive, HBase

Knowledge, Skills, & Abilities:  

·       Strong written/verbal communication and presentation skills

·       The ability to work with all levels of staff & leadership

·       Ability to self-motivate, adapt, and multi-task in a fast-paced environment

·       Regression (linear, multiplicative, logistic, censored, Cox, etc.)

·       Test Design/Design of Experiments

·       Segmentation and clustering

·       Decision tree analysis

·       Neural networks, genetic algorithms and other computational methods

·       Mathematical programming and optimization

·       Structural equations modeling

·       Conjoint analysis

·       Time series analysis and forecasting, smoothing techniques

·       Information design, info-graphics, scorecard/dashboard/presentation development

EDUCATION REQUIREMENTS:

·       BA or BS required; MS, MBA or PhD preferred

·       Formal training in Economics/Econometrics, Statistics, Operations Research, Finance or Mathematics is a plus; familiarity is necessary.

Expedia, Inc.
  • Bellevue, WA

We are seeking a deeply experienced technical leader to lead the next generation of engineering investments and culture for the GCO Customer Care Platform (CCP). The technical leader in this role will help design, engineer and drive implementation of critical pieces of the EG-wide architecture (platform and applications) for customer care - these areas include, but are not limited to, unified voice support, partner onboarding with configurable rules, a Virtual Agent programming model for all partners, and intelligent fulfillment. In addition, a key focus of this leader's role will be to grow and mentor junior software engineers in GCO with a focus on building out a '2020 world-class engineering excellence' vision / culture.


What you’ll do:



  • Deep Technology Leadership (Design, Implementation, and Execution for the following):

  • Ship a next-gen EG-wide architecture (platform and applications) that enables 90% of automated self-service journeys with voice as a first-class channel from day zero

  • Design and ship a VA (Virtual Agent) Programming Model that enables partners to stand up intelligent virtual agents on CCP declaratively in minutes

  • Enable brand partners to onboard their own identity providers onto CCP

  • Enable partners to configure their workflows and business rules for their Virtual Agents

  • Programming Model for Intelligent actions in the Fulfillment layer

  • Integration of Context and Query as first-class entities into the Virtual Agent

  • Cross-Group Collaboration and Influence

  • Work with company-wide initiatives across AI Labs and BeX to build out a best-of-breed conversational platform for EG-wide apps

  • Engage with and translate internal and external partner requirements into platform investments for effective onboarding of customers

  • Represent GCO's Technical Architecture at senior leadership meetings (eCP and EG) to influence and bring back enhancements to improve CCP



Help land GCO 2020 Engineering and Operational Excellence Goals

Mentor junior developers on platform engineering excellence dimensions (re-usable patterns, extensibility, configurability, scalability, performance, and design / implementation of core platform pieces)

Help develop a level of engineering muscle across GCO that becomes an asset for EG (as a provider of platform service as well as for talent)

Who you are:



  • BS or MS in Computer Science

  • 20 years of experience designing and developing complex, mission-critical, distributed software systems on a variety of platforms in high-tech industries

  • Hands on experience in designing, developing, and delivering (shipping) V1 (version one) MVP enterprise software products and solutions in a technical (engineering and architecture) capacity

  • Experience in building strong relationships with technology partners and customers, and in getting closure on issues, including delivering on time and to specification

  • Skills: Linux/ Windows/VMS, Scala, Java, Python, C#, C++, Object Oriented Design (OOD), Spark, Kafka, REST/Web Services, Distributed Systems, Reliable and scalable transaction processing systems (HBase, Microsoft SQL, Oracle, Rdb)

  • Nice to have: Experience in building highly scalable real-time processing platforms that host machine learning algorithms for Guided / Prescriptive Learning

  • Identifies and solves problems at the company level while influencing product lines

  • Provides technical leadership in difficult times or serious crises

  • Key strategic player to long-term business strategy and vision

  • Recognized as an industry expert and a recognized mentor and leader at the company; provides strategic influence across groups, projects and products

  • Provides long term product strategy and vision through group level efforts

  • Drive for results: Is sought out to lead company-wide initiatives that deliver cross-cutting lift to the organization; provides leadership in a crisis and is a key player in long-term business strategy and vision

  • Technical/Functional skills: Proves credentials as an industry expert by inventing and delivering transformational technology/direction and helping drive change beyond the company and across the industry

  • Has the vision to impact long-term product/technology horizon to transform the entire industry



Why join us:

Expedia Group recognizes our success is dependent on the success of our people.  We are the world's travel platform, made up of the most knowledgeable, passionate, and creative people in our business.  Our brands recognize the power of travel to break down barriers and make people's lives better – that responsibility inspires us to be the place where exceptional people want to do their best work, and to provide them the tools to do so. 


Whether you're applying to work in engineering or customer support, marketing or lodging supply, at Expedia Group we act as one team, working towards a common goal; to bring the world within reach.  We relentlessly strive for better, but not at the cost of the customer.  We act with humility and optimism, respecting ideas big and small.  We value diversity and voices of all volumes. We are a global organization but keep our feet on the ground, so we can act fast and stay simple.  Our teams also have the chance to give back on a local level and make a difference through our corporate social responsibility program, Expedia Cares.


If you have a hunger to make a difference with one of the most loved consumer brands in the world and to work in the dynamic travel industry, this is the job for you.


Our family of travel brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Egencia®, trivago®, HomeAway®, Orbitz®, Travelocity®, Wotif®, lastminute.com.au®, ebookers®, CheapTickets®, Hotwire®, Classic Vacations®, Expedia® Media Solutions, CarRentals.com™, Expedia Local Expert®, Expedia® CruiseShipCenters®, SilverRail Technologies, Inc., ALICE and Traveldoo®.



Expedia is committed to creating an inclusive work environment with a diverse workforce.   All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.  This employer participates in E-Verify. The employer will provide the Social Security Administration (SSA) and, if necessary, the Department of Homeland Security (DHS) with information from each new employee's I-9 to confirm work authorization.

Ultra Tendency
  • Berlin, Deutschland

Big Data Software Engineer


Lead your own development team and our customers to success! Ultra Tendency is looking for someone who convinces not just by writing excellent code, but also through strong presence and leadership. 


At Ultra Tendency you would:



  • Work in our office in Berlin/Magdeburg and on-site at our customer's offices

  • Make Big Data useful (build program code, test and deploy to various environments, design and optimize data processing algorithms for our customers)

  • Develop outstanding Big Data applications following the latest trends and methodologies

  • Be a role model and strong leader for your team and oversee the big picture

  • Prioritize tasks efficiently, evaluating and balancing the needs of all stakeholders


Ideally you have:



  • Strong experience in developing software using Python, Scala or a comparable language

  • Proven experience with data ingestion, analysis, integration, and design of Big Data applications using Apache open-source technologies

  • Profound knowledge of data engineering technology, e.g. Kafka, Spark, HBase, Kubernetes

  • Strong background in developing on Linux

  • Solid computer science fundamentals (algorithms, data structures and programming skills in distributed systems)

  • Languages: Fluent English; German is a plus


We offer:



  • Fascinating tasks and unique Big Data challenges of major players from various industries (automotive, insurance, telecommunication, etc.)

  • Fair pay and bonuses

  • Work with our open-source Big Data gurus, such as our Apache HBase committer and Release Manager

  • International diverse team

  • Possibility to work with the open-source community and become a contributor

  • Work with cutting edge equipment and tools


Confidentiality guaranteed

eBay
  • Kleinmachnow, Germany

About the team:



Core Product Technology (CPT) is a global team responsible for the end-to-end eBay product experience and technology platform. In addition, we are working on the strategy and execution of our payments initiative, transforming payments management on our Marketplace platform which will significantly improve the overall customer experience.


The opportunity

At eBay, we have started a new chapter in our iconic internet history of being the largest online marketplace in the world. With more than 1 billion listings (more than 80% of them selling new items) in over 400 markets, eBay is providing a robust platform where merchants of all sizes can compete and win. Every single day millions of users come to eBay to search for items in our diverse inventory of over a billion items.



eBay is starting a new Applied Research team in Germany and we are looking for a senior technologist to join it. We’re searching for a hands-on person with an applied research background and strong knowledge of machine learning and natural language processing (NLP). The German team’s mission is to improve the German- and other European-language search experience, as well as to enhance our global search platform and machine-learned ranking systems in partnership with our existing teams in San Jose, California, and Shanghai, China.



This team will help customers find what they’re shopping for by developing full-stack solutions from indexing, to query serving and applied research to solve core ranking, query understanding and recall problems in our highly dynamic marketplace. The global search team works closely with the product management and quality engineering teams along with the Search Web and Native Front End and Search services, and Search Mobile. We build our systems using C++, Scala, Java, Hadoop/Spark/HBase, TensorFlow/Caffe, Kafka and other standard technologies. The team believes in agile development with autonomous and empowered teams.






Diversity and inclusion at eBay goes well beyond a moral necessity – it’s the foundation of our business model and absolutely critical to our ability to thrive in an increasingly competitive global landscape. To learn about eBay’s Diversity & Inclusion click here: https://www.ebayinc.com/our-company/diversity-inclusion/.
Impetus
  • Phoenix, AZ

      Multiple Positions | Multiple Locations: Phoenix, AZ / Richmond, VA / Tampa, FL

      Employment Type: Full time || Contract


      As a Big Data Engineer, you will have the opportunity to make a significant impact, both to our organization and those of our Fortune 500 clients. You will work directly with clients at the intersection of business and technology. You will leverage your experience with Hadoop and software engineering to help our clients use insights gleaned from their data to drive value.


      You will also be given substantial opportunities to develop and expand your technical skillset with emerging Big Data technologies so you can continually innovate, learn, and hit the gas pedal on your career.



Required:
  • 4+ years of IT experience
  • Very good experience with Hadoop, Hive and Spark batch processing (streaming experience is good to have)
  • Experience with one NoSQL store (HBase or Cassandra) is good to have
  • Experience with Java/J2EE & Web Services; Scala/Python is good to have
  • AWS (ETL implementation with AWS on Hadoop) is good to have
  • Writing utilities/programs to enhance product capabilities and fulfill specific customer requirements
  • Learning new technologies/solutions to solve customer problems
  • Providing feedback/learnings to the product team


Soft Skills:

    A team player who understands the roles and responsibilities of all team members and facilitates a one-team culture
    Strong communication skills, both verbal and written
    A quick learner who can work independently on assigned tasks after initial hand-holding
American Express
  • Phoenix, AZ

Our Software Engineers not only understand how technology works, but how that technology intersects with the people who count on it every single day. Today, creative ideas, insight and new points of view are at the core of how we craft a more powerful, personal and fulfilling experience for all our customers. So if you're passionate about a career building breakthrough software and making an impact on an audience of millions, look no further.

There are hundreds of chances for you to make your mark on Technology and life at American Express. Here's just some of what you'll be doing:

    • Take your place as a core member of an Agile team driving the latest application development practices.
    • Find your opportunity to execute new technologies, write code and perform unit tests, as well as working with data science, algorithms and automation processing
    • Engage your collaborative spirit by collaborating with fellow engineers to craft and deliver recommendations to Finance, Business, and Technical users on Finance Data Management.


Qualifications:

  

Are you up for the challenge?


    • 4+ years of Software Development experience.
    • BS or MS Degree in Computer Science, Computer Engineering, or other Technical discipline including practical experience effectively interpreting Technical and Business objectives and challenges and designing solutions.
    • Ability to effectively collaborate with Finance SMEs and partners of all levels to understand their business processes and take overall ownership of Analysis, Design, Estimation and Delivery of technical solutions for Finance business requirements and roadmaps, including a deep understanding of Finance and other LOB products and processes. Experience with regulatory reporting frameworks, is preferred.
    • Hands-on expertise with application design and software development across multiple platforms, languages, and tools: Java, Hadoop, Python, Streaming, Flink, Spark, HIVE, MapReduce, Unix, NoSQL and SQL Databases is preferred.
    • Working SQL knowledge and experience working with relational databases, query authoring (SQL), including working familiarity with a variety of databases(DB2, Oracle, SQL Server, Teradata, MySQL, HBASE, Couchbase, MemSQL).
    • Experience in architecting, designing, and building customer dashboards with data visualization tools such as Tableau using accelerator database Jethro.
    • Extensive experience in application, integration, system and regression testing, including demonstration of automation and other CI/CD efforts.
    • Experience with version control software like Git and SVN, and CI/CD testing/automation experience.
    • Proficient with Scaled Agile application development methods.
    • Deals well with ambiguous/under-defined problems; Ability to think abstractly.
    • Willingness to learn new technologies and exploit them to their optimal potential, including substantiated ability to innovate and take pride in quickly deploying working software.
    • Ability to enable business capabilities through innovation is a plus.
    • Ability to get results with an emphasis on reducing time to insights and increased efficiency in delivering new Finance product capabilities into the hands of Finance constituents.
    • Focuses on the Customer and Client with effective consultative skills across a multi-functional environment.
    • Ability to communicate effectively verbally and in writing, including effective presentation skills. Strong analytical skills, problem identification and resolution.
    • Delivering business value using creative and effective approaches
    • Possesses strong business knowledge about the Finance organization, including industry standard methodologies.
    • Demonstrates a strategic/enterprise viewpoint and business insights with the ability to identify and resolve key business impediments.


Employment eligibility to work with American Express in the U.S. is required as the company will not pursue visa sponsorship for these positions.

MINDBODY Inc.
  • Irvine, CA
  • Salary: $96k - 135k

The Senior Data Engineer focuses on designing, implementing and supporting new and existing data solutions: data processing and data sets to support various advanced analytical needs. You will design, build and support data pipelines that consume data from multiple different source systems and transform it into valuable and insightful information. You will have the opportunity to contribute to end-to-end platform design for our cloud architecture and work multi-functionally with operations, data science and the business segments to build batch and real-time data solutions. The role will be part of a team supporting our Corporate, Sales, Marketing, and Consumer business lines.


 
MINIMUM QUALIFICATIONS AND REQUIREMENTS:



  • 7+ years of relevant experience in one of the following areas: Data engineering, business intelligence or business analytics

  • 5-7 years of experience supporting a large data platform and data pipelining

  • 5+ years of experience in scripting languages such as Python

  • 5+ years of experience with AWS services including S3, Redshift, EMR and RDS

  • 5+ years of experience with Big Data Technologies (Hadoop, Hive, HBase, Pig, Spark, etc.)

  • Expertise in database design and architectural principles and methodologies

  • Experienced in Physical data modeling

  • Experienced in Logical data modeling

  • Technical expertise should include data models, database design and data mining



PRINCIPAL DUTIES AND RESPONSIBILITIES:



  • Design, implement, and support a platform providing access to large datasets

  • Create unified enterprise data models for analytics and reporting

  • Design and build robust and scalable data integration (ETL) pipelines using SQL, Python, and Spark (a small sketch follows this list)

  • As part of an Agile development team contribute to architecture, tools and development process improvements

  • Work in close collaboration with product management, peer system and software engineering teams to clarify requirements and translate them into robust, scalable, operable solutions that work well within the overall data architecture

  • Coordinate data models, data dictionaries, and other database documentation across multiple applications

  • Lead design reviews of data deliverables such as models, data flows, and data quality assessments

  • Promote data modeling standardization; define and drive adoption of the standards

  • Work with Data Management to establish governance processes around metadata to ensure an integrated definition of data for enterprise information, and to ensure the accuracy, validity, and reusability of metadata
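
As a hedged illustration of the pipeline work described above, here is a minimal PySpark batch job that reads raw events, derives a daily aggregate and writes a curated Parquet table; bucket names, paths and columns are hypothetical placeholders, not MINDBODY systems:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily-events-etl").getOrCreate()

    # Hypothetical raw event feed landed in S3 as JSON by an upstream source.
    events = spark.read.json("s3://raw-bucket/events/")

    # Derive a per-account daily event count, a typical curated data set.
    daily = (events
             .withColumn("day", F.to_date("event_ts"))
             .groupBy("day", "account_id")
             .agg(F.count("*").alias("event_count")))

    # Write back as Parquet for downstream analytics and BI consumption.
    daily.write.mode("overwrite").parquet("s3://curated-bucket/daily_events/")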

SafetyCulture
  • Surry Hills, Australia
  • Salary: A$120k - 140k

The Role



  • Be an integral member of the team responsible for designing, implementing and maintaining a distributed, big-data-capable system with high-quality components (Kafka, EMR + Spark, Akka, etc.).

  • Embrace the challenge of dealing with big data on a daily basis (Kafka, RDS, Redshift, S3, Athena, Hadoop/HBase), perform data ETL, and build tools for proper data ingestion from multiple data sources (a small loading sketch follows this list).

  • Collaborate closely with data infrastructure engineers and data analysts across different teams to find bottlenecks and solve problems

  • Design, implement and maintain the heterogeneous data processing platform to automate the execution and management of data-related jobs and pipelines

  • Implement automated data workflows in collaboration with data analysts; continue to maintain and improve the system in line with growth

  • Collaborate with Software Engineers on application events, ensuring the right data can be extracted

  • Contribute to resources management for computation and capacity planning

  • Dive deep into code and constantly innovate
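
One concrete flavor of the ETL work above, sketched under assumptions: staged S3 data bulk-loaded into Redshift with a COPY statement issued through psycopg2. The endpoint, credentials, table, bucket and IAM role are all hypothetical placeholders:

    import psycopg2

    conn = psycopg2.connect(host="example-cluster.redshift.amazonaws.com",
                            port=5439, dbname="analytics",
                            user="loader", password="...")  # placeholders

    # COPY is the standard bulk-load path from S3 into Redshift; the
    # connection context manager commits the transaction on success.
    with conn, conn.cursor() as cur:
        cur.execute("""
            COPY events_staging
            FROM 's3://ingest-bucket/events/'
            IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-loader'
            FORMAT AS JSON 'auto';
        """)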


Requirements



  • Experience with AWS data technologies (EC2, EMR, S3, Redshift, ECS, Data Pipeline, etc.) and infrastructure.

  • Working knowledge of big data frameworks such as Apache Spark, Kafka, Zookeeper, Hadoop, Flink, Storm, etc.

  • Rich experience with Linux and database systems

  • Experience with relational and NoSQL databases, query optimization, and data modelling

  • Familiar with one or more of the following: Scala/Java, SQL, Python, Shell, Golang, R, etc.

  • Experience with container technologies (Docker, k8s), Agile development, DevOps and CI tools.

  • Excellent problem-solving skills

  • Excellent verbal and written communication skills 

Ultra Tendency
  • Riga, Lettland

Are you a developer who loves to take a look at infrastructure as well? Are you a systems engineer who likes to write code? Ultra Tendency is looking for you!


Your Responsibilities:



  • Support our customers and development teams in transitioning capabilities from development and testing to operations

  • Deploy and extend large-scale server clusters for our clients

  • Fine-tune and optimize our clusters to process millions of records every day 

  • Learn something new every day and enjoy solving complex problems


Job Requirements:



  • You know Linux like the back of your hand

  • You love to automate all the things – SaltStack, Ansible, Terraform and Puppet are your daily business

  • You can write code in Python, Java, Ruby or similar languages.

  • You are driven by high quality standards and attention to detail

  • Understanding of the Hadoop ecosystem and knowledge of Docker is a plus


We offer:



  • Work with our open-source Big Data gurus, such as our Apache HBase committer and Release Manager

  • Work with the open-source community and become a contributor. Learn from open-source enthusiasts you will find nowhere else in Germany!

  • Work in an English-speaking, international environment

  • Work with cutting edge equipment and tools

Ultra Tendency
  • Berlin, Deutschland

Do you love writing high-quality code? Do you enjoy designing algorithms for large-scale Hadoop clusters? Is Spark your daily business? We have new challenges for you!


Your Responsibilities:



  • Solve Big Data problems for our customers in all phases of the project life cycle

  • Build program code, test and deploy to various environments (Cloudera, Hortonworks, etc.)

  • Enjoy being challenged and solve complex data problems on a daily basis

  • Be part of our newly formed team in Berlin and help drive its culture and work attitude


Job Requirements



  • Strong experience developing software using Java or a comparable language

  • At least 2 years of experience with data ingestion, analysis, integration, and design of Big Data applications using Apache open-source technologies

  • Strong background in developing on Linux

  • Solid computer science fundamentals (algorithms, data structures and programming skills in distributed systems)

  • Sound knowledge of SQL, relational concepts and RDBMS systems is a plus

  • Computer Science (or equivalent degree) preferred or comparable years of experience

  • Being able to work in an English-speaking, international environment 


We offer:



  • Fascinating tasks and unique Big Data challenges in various industries

  • Benefit from 10 years of delivering excellence to our customers

  • Work with our open-source Big Data gurus, such as our Apache HBase committer and Release Manager

  • Work with the open-source community and become a contributor

  • Fair pay and bonuses

  • Work with cutting edge equipment and tools

  • Enjoy our additional benefits such as a free BVG ticket and fresh fruits in the office

Visa
  • Austin, TX
Company Description
Common Purpose, Uncommon Opportunity. Everyone at Visa works with one goal in mind: making sure that Visa is the best way to pay and be paid, for everyone everywhere. This is our global vision and the common purpose that unites the entire Visa team. As a global payments technology company, tech is at the heart of what we do: our VisaNet network processes over 13,000 transactions per second for people and businesses around the world, enabling them to use digital currency instead of cash and checks. We are also global advocates for financial inclusion, working with partners around the world to help those who lack access to financial services join the global economy. Visa's sponsorships, including the Olympics and FIFA World Cup, celebrate teamwork, diversity, and excellence throughout the world. If you have a passion to make a difference in the lives of people around the world, Visa offers an uncommon opportunity to build a strong, thriving career. Visa is fueled by our team of talented employees who continuously raise the bar on delivering the convenience and security of digital currency to people all over the world. Join our team and find out how Visa is everywhere you want to be.
Job Description
The ideal candidate will be responsible for the following:
  • Perform Hadoop administration on production Hadoop clusters
  • Perform tuning and increase operational efficiency on a continuous basis
  • Monitor the health of the platforms, generate performance reports, and provide continuous improvements (a small health-check sketch follows this list)
  • Work closely with development, engineering and operations teams on key deliverables, ensuring production scalability and stability
  • Develop and enhance platform best practices
  • Ensure the Hadoop platform can effectively meet performance & SLA requirements
  • Be responsible for support of the Hadoop production environment, which includes Hive, YARN, Spark, Impala, Kafka, SOLR, Oozie, Sentry, Encryption, HBase, etc.
  • Perform optimization and capacity planning of a large multi-tenant cluster
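
As a hedged illustration of the routine health checks this role involves, the short script below shells out to the standard hdfs dfsadmin -report command and surfaces a few headline lines; it assumes the hdfs CLI is available on the administration host:

    import subprocess

    # Run the standard HDFS admin report and capture its text output.
    report = subprocess.run(["hdfs", "dfsadmin", "-report"],
                            capture_output=True, text=True,
                            check=True).stdout

    # Surface the headline capacity and node-health lines of the report.
    for line in report.splitlines():
        if line.startswith(("DFS Used%", "Live datanodes", "Dead datanodes")):
            print(line.strip())
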
Qualifications
  • Minimum 3 years of work experience in maintaining, optimizing and resolving issues on Hadoop clusters, supporting business users and batch workloads
  • Experience in configuring and setting up Hadoop clusters and providing support for aggregation, lookup & fact table creation criteria
  • MapReduce tuning, DataNode and NameNode recovery, etc.
  • Experience in Linux / Unix OS services, administration, shell and awk scripting
  • Experience in building scalable Hadoop applications
  • Experience in Core Java and Hadoop (MapReduce, Hive, Pig, HDFS, HCatalog, Zookeeper and Oozie)
  • Hands-on experience with SQL (Oracle) and NoSQL databases (HBase/Cassandra/MongoDB)
  • Excellent oral and written communication and presentation skills, analytical and problem solving skills
  • Self-driven, Ability to work independently and as part of a team with proven track record developing and launching products at scale
  • Minimum of a four-year technical degree required
  • Experience on Cloudera distribution preferred
  • Hands-on Experience as a Linux Sys Admin is a plus
  • Knowledge on Spark and Kafka is a plus.
Additional Information
All your information will be kept confidential according to EEO guidelines.
Job Number: REF15232V
phData, Inc.
  • Minneapolis, MN

Title: Big Data Solutions Architect (Minneapolis or US Remote)


Join the Game-Changers in Big Data  


Are you inspired by innovation, hard work and a passion for data?    


If so, this may be the ideal opportunity to leverage your background in Big Data and Software Engineering, Data Engineering or Data Analytics experience to design, develop and innovate big data solutions for a diverse set of clients.  


As a Solution Architect on our Big Data Consulting team, your responsibilities include:


    • Design, develop, and innovate Big Data solutions; partner with our internal Managed Services Architects and Data Engineers to build creative solutions to solve tough big data problems.
    • Determine the project road map, select the best tools, assign tasks and priorities, and assume general project management oversight for performance, data integration, ecosystem integration, and security of big data solutions
    • Work across a broad range of technologies from infrastructure to applications to ensure the ideal Big Data solution is implemented and optimized
    • Integrate data from a variety of data sources (data warehouse, data marts) utilizing on-prem or cloud-based data structures (AWS); determine new and existing data sources
    • Design and implement streaming, data lake, and analytics big data solutions

    • Create and direct testing strategies including unit, integration, and full end-to-end tests of data pipelines

    • Select the right storage solution for a project - comparing Kudu, HBase, HDFS, and relational databases based on their strengths

    • Utilize ETL processes to build data repositories; integrate data into the Hadoop data lake using Sqoop (batch ingest), Kafka (streaming), Spark, and Hive or Impala (transformation); a small streaming sketch follows this list

    • Partner with our Managed Services team to design and install on prem or cloud based infrastructure including networking, virtual machines, containers, and software

    • Determine and select best tools to ensure optimized data performance; perform Data Analysis utilizing Spark, Hive, and Impala

    • Mentor and coach Developers and Data Engineers. Provide guidance with project creation, application structure, automation, code style, testing, and code reviews
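
To ground the Kafka streaming path named above, here is a minimal Spark Structured Streaming sketch that lands Kafka messages in the data lake as Parquet; the broker, topic and paths are hypothetical placeholders, and the job assumes the Spark-Kafka connector package is on the classpath:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("kafka-to-lake").getOrCreate()

    # Hypothetical broker and topic; keep the value as a raw string payload.
    stream = (spark.readStream.format("kafka")
              .option("kafka.bootstrap.servers", "broker1.example.com:9092")
              .option("subscribe", "orders")
              .load()
              .selectExpr("CAST(value AS STRING) AS payload"))

    # Land micro-batches as Parquet; the checkpoint location makes the
    # file output recoverable across restarts.
    query = (stream.writeStream
             .format("parquet")
             .option("path", "hdfs:///data/lake/orders/")
             .option("checkpointLocation", "hdfs:///checkpoints/orders/")
             .start())
    query.awaitTermination()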

Qualifications

  • 5+ years of previous experience as a Software Engineer, Data Engineer or Data Analyst, combined with expertise in Hadoop technologies and Java programming
  • Technical Leadership experience leading/mentoring junior software/data engineers, as well as scoping activities on large scale, complex technology projects
  • Expertise in core Hadoop technologies including HDFS, Hive and YARN.  
  • Deep experience in one or more ecosystem products/languages such as HBase, Spark, Impala, Solr, Kudu, etc
  • Expert programming experience in Java, Scala, or other statically typed programming language
  • Strong working knowledge of SQL and the ability to write, debug, and optimize distributed SQL queries
  • Excellent communication skills including proven experience working with key stakeholders and customers
  • Ability to translate big picture business requirements and use cases into a Hadoop solution, including ingestion of many data sources, ETL processing, data access and consumption, as well as custom analytics
  • Customer relationship management including project escalations, and participating in executive steering meetings
  • Ability to learn new technologies in a quickly changing field
IT People Corporation
  • Raleigh, NC

Senior Big Data Platform Architect w/Data Migration- Direct Hire- Raleigh, NC

Want to take your career to the next level and work for a company that truly cares about their employees and the community around them?

We have a great direct-hire career opportunity for a Senior Big Data Platform Architect with Data Migration expertise.

Our client is one of the most revolutionary and trusted resources for IT and information services. They play a vital role in supporting business processes and provide business intelligence that their clients can truly rely upon to increase productivity and achieve better operational efficiency.

With a generous benefits package- our client is one of the best places to work in the area.  They offer:
Competitive Compensation, Annual Review and Bonus, Employee Assistance Program, On-Site Workout Facility, Recreational Activities, Flexible Work Arrangements, Ergonomic Work Stations, Medical Coverage, Dental Coverage, Vision Coverage, 401(k) Retirement Program with matching, 12 paid holidays, Generous allowance for Vacation and Sick Days, Flexible Spending Accounts, Dependent Care, Life Insurance, Short-Term and Long-Term Disability Insurance, and Supplemental Long-Term Disability Insurance.

Position Summary:

The Senior Big Data Platform Architect will provide thought leadership and technical direction for the data engineering team, and will work with the lead of the advanced analytics capability to develop technical strategies and mature the technical stack towards improving operational outcomes and usability, as well as keeping current with new emerging technologies. They will lead project teams through POC efforts related to new technologies or new uses of existing technologies.

Minimum Requirements

  • Extensive experience troubleshooting issues in complex, distributed systems
  • 5+ years of experience architecting, developing, releasing, and maintaining large-scale enterprise data platforms, both on-premise and in the cloud. 5+ years of experience analyzing data with SQL and implementing large-scale RDBMS. 5+ years of experience designing software for performance, reliability and scalability.
  • 5+ years of programming proficiency in a subset of Python, R, Java, and Scala.
  • 2+ years of experience with building solutions leveraging NoSQL and highly distributed databases such as HBase and Cassandra.
  • 2+ years of experience implementing cloud-based systems (AWS/Azure/GCP)
  • 3+ years proficiency in configuring and deploying applications on Linux-based systems
  • 5+ years of experience implementing data pipelines in large-scale data analysis systems such as Hadoop or MPP databases. 3+ years of experience with Spark or similar engines. 5+ years of experience in data flow and systems integration. 3+ years of experience operationalizing and integrating analytics models and solutions within products and applications
  • Hands-on platform architecture and solution design and implementation experience (5+ years).
  • Deep understanding of algorithms, data structures, performance optimization techniques, and design patterns for building highly scalable Big Data Solutions and distributed applications
  • Machine learning experience is a big plus
  • Experience collaborating with business and IT counterparts, as well as summarizing and presenting complex technical architectures and solutions to a wide variety of stakeholders
  • Ability to manage multiple activities in a deadline-oriented environment
  • Superior problem-solving skills
  • Ability to work independently in unstructured environments in a self-directed way, with accuracy and attention to detail. Ability to take a leadership role on engagements and with customers.
  • Strong teamwork skills and ability to work effectively with multiple internal customers
  • Ability to provide technical expertise to others and explain concepts to technical staff and leadership team
  • Ability to quickly learn and master recent technologies and various business applications
  • Ability to build business acumen and understand the business domain. Experience mentoring other technical resources and leading technical implementations.

Education

Bachelor's degree in Computer Science or an equivalent field and 10+ years of technical experience, or Master's degree in Computer Science or an equivalent field and 7+ years of technical experience

Responsibilities

Provide thought leadership and technical direction for the data engineering team, and work with the lead of the advanced analytics capability to develop technical strategies and mature the technical stack towards improving operational outcomes and usability, as well as keeping current with new emerging technologies. Lead project teams through POC efforts related to new technologies or new uses of existing technologies.

Responsible for assisting product managers and the analytics teams in translating business requirements into solutions that meet business value objectives and are aligned with best practices and industry standards. Document architectural decisions through the depiction of concepts, relationships, constraints, and operations

 

Salary range is negotiable and is contingent upon level of expertise and years of experience.


For immediate consideration for this direct-hire opportunity, please submit your resume to Dianne Lancaster, Technical Recruiter at IT People, at dianne.lancaster@itpeoplecorp.com.

NO 3rd parties please!


NO Sponsorship available at this time