OnlyDataJobs.com

LPL Financial
  • San Diego, CA

As a Development Manager for Digital Experience Technology, you will be responsible for the strategy, analysis, design, and implementation of modern, scalable, cloud-native digital and artificial intelligence solutions. This role will primarily lead the Intelligent Content Delivery program, with a focus on improving the content search engine and implementing a personalization engine and an AI-driven content ranking engine to deliver relevant, personalized, intelligent content across Web, Mobile, and AI Chat channels. This role requires a successful and experienced digital, data, and artificial intelligence practitioner, able to lead discussions with business and technology partners to identify business problems and deliver breakthrough solutions.

The successful candidate will have excellent verbal and written communication skills along with a demonstrated ability to mentor and manage a digital team. You must possess a unique blend of business and technical savvy, big-picture vision, and the drive to make that vision a reality.


Key Responsibilities:
 

·       Define innovative Digital Content, AI, and Automation offerings that solve business problems, with input from customers, business, and product teams

·       Partner with business, product, and technology teams to define problems, develop business cases, build prototypes, and create new offerings

·       Evangelize and promote the adoption of cloud-based microservices architecture and digital technology capabilities across the organization

·       Lead and own the technical design and development of knowledge graph, search engine, personalization engine, and intelligent content engine platforms by applying digital technologies and machine learning and deep learning frameworks (see the sketch after this list)

·       Research, design, and prototype robust, scalable models based on machine learning, data mining, and statistical modeling to answer key business problems

·       Manage onsite and offshore development teams implementing products and platforms using Agile methods

·       Collaborate with business, product, enterprise architecture, and cross-functional teams to ensure the strategic and tactical goals of project efforts are met

·       Work collaboratively with QA and DevOps teams to adopt a CI/CD toolchain and develop automation

·       Work with technical teams to ensure overall support and stability of platforms and assist with troubleshooting when production incidents arise

·       Be a mentor and leader on the team and within the organization
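
As a rough illustration of what an AI-driven content ranking engine involves, here is a minimal, hypothetical sketch using scikit-learn: articles are scored against a query by TF-IDF cosine similarity. The titles and query are made up, and a production personalization engine would add user and behavioral signals.

    # Hypothetical content-ranking sketch; article titles and query are made up.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    articles = [
        "How to roll over your retirement account without penalties",
        "Understanding advisory fees and account types",
        "Retirement income planning for the self-employed",
    ]
    query = ["retirement income planning"]

    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(articles)
    scores = cosine_similarity(vectorizer.transform(query), doc_matrix)[0]

    # Highest-scoring articles first
    for score, title in sorted(zip(scores, articles), reverse=True):
        print(f"{score:.3f}  {title}")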



Basic Qualifications:

·       10+ years of overall experience, with 6+ years of development experience implementing digital and artificial intelligence (NLU/NLP/ML) platforms

·       Solid working experience with Python, R, and knowledge graphs

·       Expertise in at least one AI-related framework (NLTK, spaCy, scikit-learn, TensorFlow)

·       Experience with cloud platforms and products including Amazon AWS (Lex bots, Lambda), Microsoft Azure, or similar cloud technologies

·       Solid working experience implementing the Solr search engine, SQL, Elasticsearch, and Neo4j

·       Experience in data analysis, modeling, and reporting using Power BI or similar tools

·       Experience with enterprise content management systems such as Adobe AEM or any enterprise CMS

·       Experience implementing knowledge graphs using Schema.org, Facebook Open Graph, and Google AMP pages is an added advantage

·       Excellent collaboration and negotiation skills

·       Results-driven with a positive, can-do attitude

·       Experience implementing intelligent automation tools such as WorkFusion, UiPath, or Automation Anywhere is an added advantage

Qualifications:

·       MS or PhD degree in Computer Science, Statistics, Mathematics, Data Science, or a related field

·       Previous industry or research experience solving business problems by applying machine learning and deep learning algorithms

·       Must be a hands-on technologist with prior experience in a similar role

·       Strong experience practicing and executing projects using Agile Scrum or SAFe iterative methodologies

Avaloq Evolution AG
  • Zürich, Switzerland

The position


Are you passionate about data architecture? Are you interested in shaping the next generation of data science driven products for the financial industry? Do you enjoy working in an agile environment involving multiple stakeholders?

You will be responsible for selecting appropriate technologies from open-source, commercial on-premises, and cloud-based offerings, and for integrating a new generation of tools within the existing environment to ensure access to accurate and current data. You will consider not only functional requirements but also the non-functional attributes of platform quality, such as security, usability, and stability.

We want you to help us strengthen and further develop Avaloq's transformation into a data-driven product company: make analytics scalable and accelerate the process of data science innovation.


Your profile


  • PhD, Master's, or Bachelor's degree in Computer Science, Math, Physics, Engineering, Statistics, or another technical field

  • Knowledgeable about big data technologies and architectures (e.g. Hadoop, Spark, data lakes, stream processing)

  • Practical experience with container platforms (OpenShift) and/or containerization software (Kubernetes, Docker)

  • Hands-on experience developing data extraction and transformation pipelines (ETL process); see the sketch after this list

  • Expert knowledge of RDBMS, NoSQL, and data warehousing

  • Familiarity with information retrieval software such as Elasticsearch/Lucene/Solr

  • Firm understanding of major programming/scripting languages such as Java/Scala, PHP, Python, and/or R, plus the Linux environment

  • High integrity, responsibility, and confidentiality, as required when dealing with sensitive data

  • Strong presentation and communication skills

  • Good planning and organisational skills

  • A collaborative mindset toward sharing ideas and finding solutions

  • Fluent in English; German, Italian, and French a plus
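
As a small aside on the ETL point above, here is a minimal PySpark sketch of an extract-transform-load step; the paths and column names are purely illustrative.

    # Minimal ETL sketch in PySpark; paths and columns are illustrative.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

    # Extract: read raw CSV events
    raw = spark.read.option("header", True).csv("/data/raw/events.csv")

    # Transform: normalize types and drop incomplete rows
    clean = (
        raw.withColumn("amount", F.col("amount").cast("double"))
           .filter(F.col("customer_id").isNotNull())
    )

    # Load: write partitioned Parquet for downstream analytics
    clean.write.mode("overwrite").partitionBy("country").parquet("/data/curated/events")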





 Professional requirements


  • Be a thought leader on best practices for developing and deploying data science products and services

  • Provide an infrastructure that makes data-driven insights scalable and agile

  • Liaise and coordinate with stakeholders on setting up and running a big data and analytics platform

  • Lead the evaluation of business and technical requirements

  • Support data-driven activities and a data-driven mindset where needed



Main place of work
Zurich

Contact
Avaloq Evolution AG
Anna Drozdowska, Talent Acquisition Professional
Allmendstrasse 140 - 8027 Zürich - Switzerland

www.avaloq.com/en/open-positions

Please only apply online.

Note to Agencies: All unsolicited résumés will be considered direct applicants and no referral fee will be acknowledged.
Visa
  • Austin, TX
Company Description
Common Purpose, Uncommon Opportunity. Everyone at Visa works with one goal in mind: making sure that Visa is the best way to pay and be paid, for everyone everywhere. This is our global vision and the common purpose that unites the entire Visa team. As a global payments technology company, tech is at the heart of what we do: our VisaNet network processes over 13,000 transactions per second for people and businesses around the world, enabling them to use digital currency instead of cash and checks. We are also global advocates for financial inclusion, working with partners around the world to help those who lack access to financial services join the global economy. Visa's sponsorships, including the Olympics and FIFA World Cup, celebrate teamwork, diversity, and excellence throughout the world. If you have a passion to make a difference in the lives of people around the world, Visa offers an uncommon opportunity to build a strong, thriving career. Visa is fueled by our team of talented employees who continuously raise the bar on delivering the convenience and security of digital currency to people all over the world. Join our team and find out how Visa is everywhere you want to be.
Job Description
The ideal candidate will be responsible for the following:
  • Perform Hadoop administration on production Hadoop clusters
  • Perform tuning and increase operational efficiency on a continuous basis
  • Monitor platform health, generate performance reports, and drive continuous improvement (see the sketch after this list)
  • Work closely with development, engineering, and operations teams on key deliverables, ensuring production scalability and stability
  • Develop and enhance platform best practices
  • Ensure the Hadoop platform can effectively meet performance and SLA requirements
  • Be responsible for support of the Hadoop production environment, which includes Hive, YARN, Spark, Impala, Kafka, Solr, Oozie, Sentry, encryption, HBase, etc.
  • Perform optimization and capacity planning of a large multi-tenant cluster
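
As a hedged sketch of the health-monitoring item above: one basic building block is parsing the standard `hdfs dfsadmin -report` output and flagging dead DataNodes. This assumes the Hadoop CLI is on the PATH; the alerting itself is left out.

    # Flag DataNode status from the standard dfsadmin report.
    import subprocess

    report = subprocess.run(
        ["hdfs", "dfsadmin", "-report"],
        capture_output=True, text=True, check=True,
    ).stdout

    for line in report.splitlines():
        # e.g. "Live datanodes (42):" / "Dead datanodes (0):"
        if line.startswith(("Live datanodes", "Dead datanodes")):
            print(line)
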
Qualifications
  • Minimum 3 years of work experience maintaining and optimizing Hadoop clusters, resolving issues, and supporting business users and batch workloads
  • Experience configuring and setting up Hadoop clusters, and providing support for aggregation, lookup, and fact table creation criteria; MapReduce tuning; and DataNode and NameNode recovery
  • Experience with Linux/Unix OS services and administration, plus shell and awk scripting
  • Experience building scalable Hadoop applications
  • Experience with Core Java and Hadoop (MapReduce, Hive, Pig, HDFS, HCatalog, ZooKeeper, and Oozie)
  • Hands-on experience with SQL (Oracle) and NoSQL databases (HBase, Cassandra, MongoDB)
  • Excellent oral and written communication and presentation skills; strong analytical and problem-solving skills
  • Self-driven, with the ability to work independently and as part of a team, and a proven track record of developing and launching products at scale
  • Minimum of a four-year technical degree required
  • Experience with the Cloudera distribution preferred
  • Hands-on experience as a Linux sysadmin is a plus
  • Knowledge of Spark and Kafka is a plus
Additional Information
All your information will be kept confidential according to EEO guidelines.
Job Number: REF15232V
phData, Inc.
  • Minneapolis, MN

Title: Big Data Solutions Architect (Minneapolis or US Remote)


Join the Game-Changers in Big Data  


Are you inspired by innovation, hard work and a passion for data?    


If so, this may be the ideal opportunity to leverage your background in Big Data and Software Engineering, Data Engineering or Data Analytics experience to design, develop and innovate big data solutions for a diverse set of clients.  


As a Solution Architect on our Big Data Consulting team, your responsibilities include:


    • Design, develop, and innovate Big Data solutions; partner with our internal Managed Services Architects and Data Engineers to build creative solutions to tough big data problems.
    • Determine the project road map, select the best tools, assign tasks and priorities, and assume general project management oversight for performance, data integration, ecosystem integration, and security of big data solutions
    • Work across a broad range of technologies from infrastructure to applications to ensure the ideal Big Data solution is implemented and optimized
    • Integrate data from a variety of data sources (data warehouse, data marts) utilizing on-prem or cloud-based data structures (AWS); determine new and existing data sources
    • Design and implement streaming, data lake, and analytics big data solutions

    • Create and direct testing strategies including unit, integration, and full end-to-end tests of data pipelines

    • Select the right storage solution for a project - comparing Kudu, HBase, HDFS, and relational databases based on their strengths

    • Utilize ETL processes to build data repositories; integrate data into the Hadoop data lake using Sqoop (batch ingest), Kafka (streaming), and Spark, Hive, or Impala (transformation); see the sketch after this list

    • Partner with our Managed Services team to design and install on prem or cloud based infrastructure including networking, virtual machines, containers, and software

    • Determine and select best tools to ensure optimized data performance; perform Data Analysis utilizing Spark, Hive, and Impala

    • Mentor and coach Developers and Data Engineers. Provide guidance with project creation, application structure, automation, code style, testing, and code reviews
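
As a hedged sketch of the streaming-ingest leg described above, here is a minimal Spark Structured Streaming job that reads from Kafka and lands Parquet files in a data lake; the broker, topic, and paths are illustrative.

    # Kafka -> data lake (Parquet) with Spark Structured Streaming.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("kafka-to-lake").getOrCreate()

    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "clickstream")
        .load()
        .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
    )

    query = (
        events.writeStream.format("parquet")
        .option("path", "/lake/raw/clickstream")
        .option("checkpointLocation", "/lake/_checkpoints/clickstream")
        .start()
    )
    query.awaitTermination()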

Qualifications

  • 5+ years of previous experience as a Software Engineer, Data Engineer, or Data Analyst, combined with expertise in Hadoop technologies and Java programming
  • Technical Leadership experience leading/mentoring junior software/data engineers, as well as scoping activities on large scale, complex technology projects
  • Expertise in core Hadoop technologies including HDFS, Hive and YARN.  
  • Deep experience in one or more ecosystem products/languages such as HBase, Spark, Impala, Solr, Kudu, etc.
  • Expert programming experience in Java, Scala, or other statically typed programming language
  • Strong working knowledge of SQL and the ability to write, debug, and optimize distributed SQL queries
  • Excellent communication skills including proven experience working with key stakeholders and customers
  • Ability to translate big picture business requirements and use cases into a Hadoop solution, including ingestion of many data sources, ETL processing, data access and consumption, as well as custom analytics
  • Customer relationship management including project escalations, and participating in executive steering meetings
  • Ability to learn new technologies in a quickly changing field
BTI360
  • Ashburn, VA

Our customers are inundated with information from news articles, video feeds, social media, and more. We're looking to help them parse through it faster and focus on the information that matters most, so they can make better decisions. We're in the process of building a next-generation analysis and exploitation platform for video, audio, documents, and social media data. This platform will help users identify, discover, and triage information via a UI that leverages best-in-class speech-to-text, machine translation, image recognition, OCR, and entity extraction services.


We're looking for data engineers to develop the infrastructure and systems behind our platform. The ideal contributor has experience building and maintaining data and ETL pipelines, works well in a collaborative environment, and communicates well with teammates and customers. This is a great opportunity to work with a high-performing team in a fun environment.


At BTI360, we’re passionate about building great software and developing our people. Software doesn't build itself; teams of people do. That's why our primary focus is on developing better engineers, better teammates, and better leaders. By putting people first, we give our teammates more opportunities to grow and raise the bar of the software we develop.


Interested in learning more? Apply today!


Required Skills/Experience:



  • U.S. Citizenship - Must be able to obtain a security clearance

  • Bachelor's degree in Computer Science, Computer Engineering, Electrical Engineering, or a related field

  • Experience with Java, Kotlin, or Scala

  • Experience with scripting languages (Python, Bash, etc.)

  • Experience with object-oriented software development

  • Experience working within a UNIX/Linux environment

  • Experience working with a message-driven architecture (JMS, Kafka, Kinesis, SNS/SQS, etc.); see the sketch after this list

  • Ability to determine the right tool or technology for the task at hand

  • Works well in a team environment

  • Strong communication skills
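
As a hedged sketch of one message-driven pattern from the list above, here is a minimal SQS polling loop using boto3; the queue URL is a placeholder.

    # Poll an SQS queue and acknowledge processed messages.
    import boto3

    sqs = boto3.client("sqs", region_name="us-east-1")
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/documents"  # placeholder

    resp = sqs.receive_message(
        QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
    )
    for msg in resp.get("Messages", []):
        print("processing", msg["MessageId"])
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])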


Desired Skills:



  • Experience with massively parallel processing systems like Spark or Hadoop

  • Familiarity with data pipeline orchestration tools (Apache Airflow, Apache NiFi, Apache Oozie, etc.); see the sketch after this list

  • Familiarity in the AWS ecosystem of services (EMR, EKS, RDS, Kinesis, EC2, Lambda, CloudWatch)

  • Experience working with recommendation engines (Spark MLlib, Apache Mahout, etc.)

  • Experience building custom machine learning models with TensorFlow

  • Experience with natural language processing tools and techniques

  • Experience with Kubernetes and/or Docker container environments

  • Ability to identify external data specifications for common data representations

  • Experience building monitoring and alerting mechanisms for data pipelines

  • Experience with search technologies (Solr, ElasticSearch, Lucene)
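
As a hedged sketch of the orchestration item above, here is a tiny Airflow (2.x-style) DAG chaining two pipeline steps; the task logic is illustrative.

    # Two-step document pipeline DAG.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pulling new documents...")

    def transform():
        print("running entity extraction...")

    with DAG(
        dag_id="document_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@hourly",
        catchup=False,
    ) as dag:
        t1 = PythonOperator(task_id="extract", python_callable=extract)
        t2 = PythonOperator(task_id="transform", python_callable=transform)
        t1 >> t2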


BTI360 is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status, or any other protected class. 

Farfetch UK
  • London, UK

About the team:



We are a multidisciplinary team of Data Scientists and Software Engineers with a culture of empowerment, teamwork and fun. Our team is responsible for large-scale and complex machine learning projects, directly providing business-critical functionality to other teams and using the latest technologies in the field.



Working collaboratively as a team and with our business colleagues, both here in London and across our other locations, you’ll be shaping the technical direction of a critically important part of Farfetch. We are a team that surrounds ourselves with talented colleagues and we are looking for brilliant Software Engineers who are open to taking on plenty of new challenges.



What you’ll do:



Our team works with vast quantities of messy data, such as unstructured text and images collected from the internet, applying machine learning techniques, such as deep learning, natural language processing and computer vision, to transform it into a format that can be readily used within the business. As an Engineer within our team you will help to shape and deliver the engineering components of the services that our team provides to the business. This includes the following:




  • Work with Project Lead to help design and implement new or existing parts of the system architecture.

  • Work on surfacing the team’s output through the construction of ETLs, APIs, and web interfaces (see the sketch after this list).

  • Work closely with the Data Scientists within the team to enable them to produce clean production quality code for their machine learning solutions.
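
As a hedged sketch of the "surfacing output through APIs" item above, here is a minimal Flask endpoint serving a made-up classification result; the route and in-memory store are illustrative.

    # Tiny API surfacing a team's model output.
    from flask import Flask, jsonify

    app = Flask(__name__)

    RESULTS = {"sku-123": {"category": "sneakers", "confidence": 0.92}}  # stand-in store

    @app.route("/classification/<sku>")
    def classification(sku):
        result = RESULTS.get(sku)
        if result is None:
            return jsonify(error="unknown sku"), 404
        return jsonify(result)

    if __name__ == "__main__":
        app.run(port=5000)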



Who you are:



First and foremost, you’re passionate about solving complex, challenging and interesting business problems. You have solid professional experience with Python and its ecosystem, with a thorough approach to testing.



To be successful in this role you have strong experience with:



  • Python 3

  • Web frameworks, such as Flask or Django.

  • Celery, Airflow, PySpark or other processing frameworks.

  • Docker

  • Elasticsearch, Solr, or a similar technology.



Bonus points if you have experience with:



  • Web scraping frameworks, such as Scrapy.

  • Terraform, Packer

  • Google Cloud Platform services, such as BigQuery or Cloud Storage.



About the department:



We are the beating heart of Farfetch, supporting the running of the business and exploring new and exciting technologies across web, mobile and in-store to help us transform the industry. Split across three main offices - London, Porto and Lisbon - we are among the fastest-growing teams in the business. We're committed to turning the company into the leading multi-channel platform and are constantly looking for brilliant people who can help us shape tomorrow's customer experience.





We are committed to equality of opportunity for all employees. Applications from individuals are encouraged regardless of age, disability, sex, gender reassignment, sexual orientation, pregnancy and maternity, race, religion or belief and marriage and civil partnerships.

Apple Inc.
  • Cupertino, CA
Job Summary:
Imagine what you could do here. At Apple, new ideas have a way of becoming phenomenal products, services, and customer experiences very quickly. Bring passion and dedication to your job and there's no telling what you could accomplish.

Apple Pay Engineering is looking for an experienced DataStax Cassandra Architect who can contribute to strategic decisions about the scaling, maintainability, and reliability of the existing DataStax Cassandra cluster(s), and then lead the team in hands-on implementation. You would join a team responsible for providing scalability solutions for our ever-growing business and data processing needs.

Are you interested in solving the most complex and high scale Big Data challenges in the world today? Do you like the idea of running critical financial services that are used by millions of people all over the world? Do you want to help change how the world uses their wallet and money? If you love to solve internet scale challenges on critical financial systems then this is the right job for you.

Key Qualifications:
* A desire to constantly learn new technologies and stay abreast of the distributed computing landscape and drive digital transformation
* Drive high levels of engagement through discovery sessions, enablement and solution positioning with executives and technical stakeholders
* Ability to configure and tune performance of Cassandra, Solr, and Spark clusters
* Experience with developing and debugging Enterprise J2EE applications integrated with Datastax Cassandra
* Advanced understanding and expertise of Cassandra or equivalent NOSQL development
* Experience with development using Datastax's Enterprise driver for Java
* In-depth knowledge of Cassandra data model design (see the sketch after this list)
* Experience with developing monitoring, automation solutions
* Experience building, scaling and maintaining high volume systems
* Excellent debugging and system analysis skills
* Computer science fundamentals in object-oriented design, data structures, algorithm design, problem solving, and complexity analysis
* Proven track record of taking ownership of challenging problems and successfully delivering results
* Excellent communication and collaboration skills
* Excellent problem solving and analytical thinking skills
* Fast learner who is generous with their knowledge
* Self-directed, demonstrates leadership potential, and is a great teammate
* Experience with sizing projections, recommendations, and capacity planning
* Experience with developing Java / J2EE Applications integrating with Cassandra
* Experience implementing PCI/SOX security on Cassandra (and optionally Spark and Solr) is a plus.
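
As a hedged sketch of the data-model point above, here is a minimal example with the DataStax Python driver: a table partitioned by account and clustered by time, so a wallet's recent transactions are single-partition reads. The keyspace, table, and replication settings are illustrative.

    # Query-driven Cassandra table design.
    from cassandra.cluster import Cluster

    session = Cluster(["127.0.0.1"]).connect()
    session.execute("""
        CREATE KEYSPACE IF NOT EXISTS payments
        WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
    """)
    session.execute("""
        CREATE TABLE IF NOT EXISTS payments.transactions_by_account (
            account_id uuid,
            txn_time   timestamp,
            txn_id     uuid,
            amount     decimal,
            PRIMARY KEY ((account_id), txn_time)
        ) WITH CLUSTERING ORDER BY (txn_time DESC)
    """)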

Description:
This position requires a highly motivated individual who likes large-scale challenges in a fast paced environment. The successful candidate will think outside the box and come up with innovative solutions or architecture to meet business requirements.

Education:
BS/MS in Computer Science or Equivalent

Apple is an Equal Opportunity Employer that is committed to inclusion and diversity. We also take affirmative action to offer employment and advancement opportunities to all applicants, including minorities, women, protected veterans, and individuals with disabilities. Apple will not discriminate or retaliate against applicants who inquire about, disclose, or discuss their compensation or that of other applicants.

Google
  • Tokyo, Japan

Minimum qualifications:


  • Bachelor's degree in Computer Science, Mathematics, or a related technical field, or equivalent practical experience.

  • Experience with any of the following: distributed compute environments, application development, mobile development, big data analytics, or cloud computing, including virtualization, hosted services, multi-tenant cloud infrastructures, storage systems and content delivery networks.

  • Experience in writing software in one or more languages, such as Java, C++, Python, Go, JavaScript.


Preferred qualifications:


  • Experience in big data, information retrieval, data mining or machine learning, as well as experience building multi-tier, high-availability applications with modern web technologies (e.g. Hadoop, NoSQL, MongoDB, Spark, TensorFlow).

  • Experience with cloud storage solutions, SQL/NoSQL datastores, and/or distributed computing technology (e.g. MySQL, Cassandra, MongoDB, Hadoop, Redis, Elasticsearch/Solr).

  • Experience with scalable networking technologies (e.g. Load Balancers, Firewalls) and web standards (e.g. REST APIs, web security mechanisms).

  • Knowledge of container technologies (e.g. Kubernetes, Docker), services, and API models (e.g. Swagger, OpenAPI).

  • Ability to speak and write in English and Japanese, fluently and idiomatically.

About the job

The Google Cloud Platform team helps customers transform and evolve their business through the use of Google's global network, web-scale data centers and software infrastructure. As part of an entrepreneurial team in this rapidly growing business, you will help shape how businesses of all sizes use technology to connect with customers, employees and partners.



As a Strategic Cloud Engineer, you'll guide customers on how to ingest, store, process, analyze and explore/visualize data on the Google Cloud Platform. You will work on data migrations and transformational projects, collaborating with customers to design large-scale data processing systems, to develop data pipelines optimized for scaling, and to troubleshoot potential platform challenges. You will travel to customer sites to deploy solutions and deliver workshops to educate and empower customers. Additionally, you will work closely with Product Management and Product Engineering to build and constantly drive excellence in our products.
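
As a rough illustration of the "analyze and explore" part of that work, here is a minimal google-cloud-bigquery query against a public dataset; it assumes application-default credentials are configured.

    # Explore a public dataset with the BigQuery client.
    from google.cloud import bigquery

    client = bigquery.Client()
    sql = """
        SELECT name, SUM(number) AS total
        FROM `bigquery-public-data.usa_names.usa_1910_2013`
        WHERE state = 'TX'
        GROUP BY name
        ORDER BY total DESC
        LIMIT 5
    """
    for row in client.query(sql).result():
        print(row.name, row.total)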

Google Cloud helps millions of employees and organizations empower their employees, serve their customers, and build what's next for their business, all with technology built in the cloud. Our products are engineered for security, reliability and scalability, running the full stack from infrastructure to applications to devices and hardware. And our teams are dedicated to helping our customers (developers, small and large businesses, educational institutions and government agencies) see the benefits of our technology come to life.

Responsibilities


  • Be a trusted technical advisor to customers and solve complex Cloud technical challenges.

  • Create and deliver best practice recommendations, tutorials, blog articles, sample code, and technical presentations adapting to the different levels of key business and technical stakeholders.

  • Travel regularly up to 30% of the time (we also frequently use video conferencing) for in-region meetings, technical reviews and onsite delivery activities.

At Google, we don't just accept difference, we celebrate it, we support it, and we thrive on it for the benefit of our employees, our products and our community. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by emailing candidateaccommodations@google.com.
PredictSpring
  • Los Altos, CA

As a member of the Backend Engineering team, you will conceptualize, design, and develop our cloud platform services, which enable the world's leading brands and retailers to build their mobile-first, omnichannel ecommerce experiences. You will play an integral role in shaping the direction of our product and bringing new features to market.


Skills & Experience

  • Passion for building highly scalable cloud services
  • At least 10 years of object-oriented software development experience
  • Proficiency in server-side Java programming
  • Proven skills and programming experience in Java, J2EE, REST, data caching services, DB schema design, and data access technologies
  • Deep understanding of Apache Solr and other open-source frameworks (see the sketch after this list)
  • Must be able to brainstorm and communicate technology ideas and issues with peers and IT management
  • Strong problem-solving and analytical skills
  • Self-driven and self-motivated to deliver more with minimal input
  • B.S. or M.S. in computer science or a related field
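
As a hedged sketch of the Solr item above, here is a minimal query against Solr's JSON select API; the core name and fields are hypothetical.

    # Query a Solr core over HTTP.
    import requests

    resp = requests.get(
        "http://localhost:8983/solr/products/select",
        params={"q": "title:sneakers", "rows": 10, "fl": "id,title,price"},
    )
    for doc in resp.json()["response"]["docs"]:
        print(doc["id"], doc.get("title"))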

Nice To Have

  • Experience working on eCommerce platforms and solutions
  • Experience with AWS solutions and deployment processes

Primotus, LLC
  • No office location

Job Description



Primotus is developing a unique enterprise-scale, user-configurable mobile BPM (Business Process Management) platform. We’re looking for an experienced Scala developer with functional programming experience, architectural expertise in data-driven asynchronous applications, and a drive to learn BPM.


Our stack includes many of the most-desired technologies, including:

  • Scala/Akka/Cats, Play Framework, Slick
  • BPMN (Java), DMN (Java)
  • Postgres, Elasticsearch
  • Kafka
  • RESTful API
  • Angular 6, CSS3, React, Ionic Mobile
  • Unit, end-to-end, API, and performance testing tools
  • Jenkins continuous integration, Git
  • AWS

Our Development Team:

  • Is small and growing, with 8 members, so your contribution is immediately appreciated
  • Is divided into frontend and backend teams
  • Separates code into distinct modules and services
  • Uses a JSON API for backend/frontend integration
  • Applies Agile programming and paired development methodology in 3-week sprints
  • Works in a virtual environment

You’d Be:

  • Helping with the Elasticsearch upgrade
  • Extending the WebSockets architecture to a new Progressive Web App (PWA)
  • Architecting BPM, CMMN (Case Management), and supporting systems
  • Adding components to core modules, including the BPM modeler and engine, the Form, Mobile, and Reporting Builders, and Entitlements
  • Extending the Business Rules module using DMN
  • Enhancing Kafka messaging pipelines (see the sketch after this list)
  • Building BI backend tools for maps, charts, and graphs
  • Utilizing backend test tools for unit testing
  • Assisting with DevOps (availability, scalability, and security) in our AWS environment
  • Working on Eastern Time (EST)
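
As a hedged sketch of the Kafka pipeline item above, here is a minimal consume-enrich-produce stage using the kafka-python client; the topic names and the enrichment step are illustrative.

    # One stage of a Kafka messaging pipeline.
    from kafka import KafkaConsumer, KafkaProducer
    import json

    consumer = KafkaConsumer(
        "form.submissions",                      # upstream topic (illustrative)
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    for message in consumer:
        event = message.value
        event["processed"] = True               # stand-in for real enrichment
        producer.send("form.enriched", event)   # downstream topic (illustrative)
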
If you think you’re a good fit and are interested in building something highly configurable and really innovative, please shoot us an email.


Skills & Requirements


You're Expected To Have:

  • A Bachelor's degree in computer science or a related field
  • 5+ years of experience in backend enterprise software development
  • 3+ years of experience with Scala functional development
  • Strong knowledge of Java, J2EE, REST, and JSON
  • 2+ years of experience in cloud DevOps, ideally with AWS
  • Strong knowledge of version control using Git
  • The ability to work well under pressure
  • Experience working in a virtual team environment
  • Knowledge of Agile methodology
  • Strong written and verbal communication skills and a willingness to share knowledge

Nice To Have Some of the Following:

  • Background in BPM (e.g. Activiti, Camunda) and enterprise workflows
  • DMN exposure or decision management with Drools or other platforms
  • Elasticsearch (or Solr), Kibana, BI tools
  • Postgres
  • Kafka
  • Jenkins
  • WebSockets
  • Mobile, PWA, and/or embedded development

Apple Inc.
  • Cupertino, CA
Job Summary:
Would you like to play a part in the next revolution in human-computer interaction? Contribute to a product that is redefining mobile and desktop computing, and work with the people who built the intelligent assistant that helps millions of people get things done just by asking?

The vision for the Siri Analytics, Evaluation, and Data Engineering organization is to improve Siri by using data as the voice of our customers. Within this organization the mission of the Analytics team is to inform the evolution of Siri through measurement and analysis of the user experience. As an Embedded Data Scientist on the team, you will be an ambassador of analytics to product and engineering teams with the ultimate purpose of improving the Siri experience for Apple customers.

Key Qualifications:
* You think about data in terms of statistical distributions and have a big enough analytics toolbox to know how to find patterns in data, identify targets for performance, and identify sources of variance about those targets (see the sketch after this list).
* You use good judgment balancing art and science when visually communicating information (e.g. Tableau, Superset, ggplot, D3).
* You have engineered information out of massive and complex datasets (e.g. Hive, Spark, Druid, Solr, Oozie).
* You have proven experience with at least one programming language (e.g. Python, R, Scala) and are comfortable developing code in a team environment (e.g. git, notebooks, testing).
* You are self-motivated and curious with demonstrated creative and critical thinking capabilities and an innate drive to improve how things work.
* You have a high tolerance for ambiguity. You find a way through. You anticipate. You connect and synthesize.
* You have excellent verbal and written communications skills and experience in influencing decisions with information.
* Your academic background is in a quantitative field such as Computer Science, Statistics, Engineering, Economics or Physics. Advanced degree preferred.
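
As an illustrative sketch of the "targets and variance" framing in the first qualification, here is a toy pandas summary of a per-locale metric against a target; the data and the threshold are made up.

    # Summarize a metric by group: mean vs. target, plus variance.
    import pandas as pd

    df = pd.DataFrame({
        "locale":  ["en_US", "en_US", "de_DE", "de_DE", "ja_JP", "ja_JP"],
        "latency": [0.82, 0.95, 1.40, 0.70, 0.88, 0.91],
    })

    TARGET = 1.0  # hypothetical latency target (seconds)
    summary = df.groupby("locale")["latency"].agg(["mean", "var"])
    summary["meets_target"] = summary["mean"] <= TARGET
    print(summary.sort_values("var", ascending=False))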

 

Description:
* You will partner closely with Siri engineering teams to develop ways of characterizing the user experience in how Siri executes on requests and connecting those characterizations into Siri's development.
* Define metrics that we will measure our efforts against as well as defining the instrumentation required for yielding sufficient data.
* Communicate with and advocate to a wide audience of engineers, managers and executives to inform decisions on how Siri evolves.
* Build analytic, visualization, and other information products with a drive to automate and scale the insights available to science and engineering teams.
* Perform exploratory data analyses to enrich our mental models about Siri usage and identify new questions to pursue.
* Extract information from structured and unstructured data coming from Siri's computational architecture.

We are looking for people with a track record in building insights and relationships to affect decisions. Join us, and impact hundreds of millions of customers across billions of their interactions with a personal, intelligent assistant who is available on iPhone, iPad, HomePod, Watch, TV, and Mac across more than 30 languages.

Apple is an Equal Opportunity Employer that is committed to inclusion and diversity. We also take affirmative action to offer employment and advancement opportunities to all applicants, including minorities, women, protected veterans, and individuals with disabilities. Apple will not discriminate or retaliate against applicants who inquire about, disclose, or discuss their compensation or that of other applicants.

Otravo
  • Amsterdam, Netherlands

Otravo is at the forefront of providing travelers around the world (soon) an enjoyable, hassle-free, and affordable travel experience. We work by empowering small autonomous teams with the ability to influence technical and product directions in an open and friendly culture. We believe in always getting better, and we focus on continuous improvement in all things travel.


Are you passionate about software development? Do you believe in making a difference? We have a challenging role within one of the highly skilled teams at Vakantiediscounter.nl!


What will you be doing?


Because we build all our systems in-house with our own integrated development teams, we could really use your knowledge and skills to help us leverage technologies such as:



• Apache Spark, for distributed data processing;
• Scala and the Typelevel stack, for a scalable stateless REST back-end;
• Leading open-source frameworks (Kafka, Solr, HDFS, etc.) to solve our problems;
• A/B experiments, for sensible decision making (see the sketch below).
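
As a hedged sketch of the A/B-experiments point, here is a two-proportion z-test on made-up conversion counts, using only the standard library.

    # Compare conversion rates of control (A) vs. variant (B).
    from math import sqrt, erf

    conv_a, n_a = 420, 10_000   # conversions / visitors (illustrative)
    conv_b, n_b = 468, 10_000

    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
    print(f"z={z:.2f}, p={p_value:.4f}")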

You will pair to write lots of (clean) code that goes to production continuously, and will spend ±10% of your time on personal projects to find the next big thing.


What are we looking for?


You are a seasoned software developer with solid experience in one of the technologies we use. You know functional programming and have a passion for learning and for sharing your knowledge.


We consider it a plus if you have experience with:



• Big Data, Spark, Solr/Elasticsearch;
• Cats / Scalaz / Shapeless functional programming libraries;
• Kubernetes, Docker, container-based deployments;
• Open source, because we contribute back to the ecosystem.

What do we offer?


Otravo is an informal yet professional company, beautifully situated on the Amsterdam canals. We use agile development and best practices to deliver working software with little overhead, fast time to market, and high quality.


The package we offer contains the following benefits:



• A good salary: €40,000 - €65,000 gross per year;
• 30 paid holidays;
• Opportunities to attend conferences, training sessions, workshops, etc.;
• Discounts on your holiday trips (of course :));
• A vibrant company culture;
• Support with relocation (visa, housing, 30% tax ruling);
• Free snacks and amenities; cake every sprint;
• Free use of the company gym.

About Otravo


Otravo means Online Travel Organisation and is the market leader in online air travel sales in the Benelux, Scandinavia, and Southern Europe. Travellers can book airline tickets to worldwide destinations via self-service online, for both leisure and business trips. Otravo offers travellers a wide selection of options: everything from worldwide flights, rental cars, hotels and sun holidays to all possible dynamic travel combinations.


Otravo is the parent company of several well-known travel brands. Brands like Vliegtickets.nl, Vliegtickets.be, WTC.nl, Schipholtickets.nl, de Vakantiediscounter, Flygstolen, Tripmonster, Greitai, and most recently Travelgenio are all part of Otravo. Every day, hundreds of thousands of visitors are welcomed to our websites, and since 1983 millions of people have travelled through our brands. Otravo is a very healthy and innovative company with a growing yearly turnover of over €1.5 billion and a team of more than 500 professionals.


Questions?


Is this the job for you? Respond quickly to this vacancy! We would like to receive your motivation and CV via the application button below. Do you have questions about this vacancy? Then contact our Recruiter Mirjam Hellinga at recruitment@otravo.eu or +31 6 46376246


N.B. Acquisition based on this vacancy is not appreciated. We do not work with recruiters.

X-Mode Social
  • Reston, VA

X-Mode Social, Inc. is looking for a full-time back-end developer to work on X-Mode's data platform and join our rapidly growing team. For this position, you can either work remotely OR in our Reston, VA Headquarters.


WHAT YOU'LL DO:




    • Use big data technologies, processing frameworks, and platforms to solve complex problems related to location

    • Build, improve, and maintain data pipelines that ingest billions of data points on a daily basis (see the sketch after this list)

    • Efficiently query data and provide data sets to help the Sales and Client Success teams with data evaluation requests

    • Ensure high data quality through analysis, testing, and usage of machine learning algorithms
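
As a hedged sketch of one step in such a pipeline, here is a PySpark job that drops invalid coordinates and counts pings per device per day; the schema and paths are illustrative.

    # Daily aggregation of location pings.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("pings-daily").getOrCreate()
    pings = spark.read.parquet("/data/pings")  # illustrative path

    daily = (
        pings.filter(F.col("lat").between(-90, 90) & F.col("lon").between(-180, 180))
             .withColumn("day", F.to_date("timestamp"))
             .groupBy("device_id", "day")
             .count()
    )
    daily.write.mode("overwrite").parquet("/data/pings_daily")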



WHO YOU ARE:




    • 1+ years of Spark and Scala experience

    • Experience working with very large databases and batch processing datasets with hundreds of millions of records

    • Experience with Hadoop ecosystem, e.g. Spark, Hive, or Presto/Athena

    • Real-time streaming experience with Kinesis, Kafka, or similar libraries

    • 4+ years working with SQL and relational databases

    • 4+ years Linux experience

    • 2 years working with cloud services, ideally in AWS

    • Self-motivated learner who is willing to self-teach

    • Self-starter who can maintain a team-centered outlook

    • BONUS: Experience with Python, machine learning, and Elasticsearch or Apache Solr

    • BONUS: GIS/Geospatial tools/analysis and any past experience with geolocation data



WHAT WE OFFER:




    • Competitive Salary

    • Medical, Dental and Vision

    • 15 Days of PTO (Paid Time Off)

    • Lunch provided 2x a week 

    • Snacks, snacks, snacks!

    • Casual dress code

    • Free Parking on-site


phData
  • Minneapolis, MN

If you're inspired by innovation, hard work and a passion for data, this may be the ideal opportunity to leverage your background in Big Data and Software Engineering, Data Engineering or Data Analytics experience to design, develop and innovate big data solutions for a diverse set of global and enterprise clients.  


At phData, our proven success has skyrocketed the demand for our services, resulting in quality growth at our company headquarters conveniently located in Downtown Minneapolis and expanding throughout the US. Notably we've also been voted Best Company to Work For in Minneapolis for the last 2 years.   


As the world’s largest pure-play Big Data services firm, our team includes Apache committers, Spark experts and the most knowledgeable Scala development team in the industry. phData has earned the trust of customers by demonstrating our mastery of Hadoop services and our commitment to excellence.


In addition to a phenomenal growth and learning opportunity, we offer competitive compensation and excellent perks including base salary, annual bonus, extensive training, paid Cloudera certifications - in addition to generous PTO and employee equity. 


As a Solution Architect on our Big Data Consulting Team, your responsibilities will include:

  • Design, develop, and innovate Hadoop solutions; partner with our internal Infrastructure Architects and Data Engineers to build creative solutions to tough big data problems.
  • Determine the technical project road map, select the best tools, assign tasks and priorities, and assume general project management oversight for performance, data integration, ecosystem integration, and security of big data solutions. Mentor and coach Developers and Data Engineers; provide guidance with project creation, application structure, automation, code style, testing, and code reviews
  • Work across a broad range of technologies – from infrastructure to applications – to ensure the ideal Hadoop solution is implemented and optimized
  • Integrate data from a variety of data sources (data warehouse, data marts) utilizing on-prem or cloud-based data structures (AWS); determine new and existing data sources
  • Design and implement streaming, data lake, and analytics big data solutions
  • Create and direct testing strategies including unit, integration, and full end-to-end tests of data pipelines
  • Select the right storage solution for a project, comparing Kudu, HBase, HDFS, and relational databases based on their strengths
  • Utilize ETL processes to build data repositories; integrate data into the Hadoop data lake using Sqoop (batch ingest), Kafka (streaming), and Spark, Hive, or Impala (transformation)
  • Partner with our Managed Services team to design and install on-prem or cloud-based infrastructure including networking, virtual machines, containers, and software
  • Determine and select the best tools to ensure optimized data performance; perform data analysis utilizing Spark, Hive, and Impala (see the sketch after this list)

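As a hedged sketch of the Impala analysis item above, here is a minimal query using the impyla client; the host, table, and columns are illustrative.

    # Ad-hoc analysis against Impala.
    from impala.dbapi import connect

    conn = connect(host="impala-coordinator", port=21050)
    cur = conn.cursor()
    cur.execute("""
        SELECT region, COUNT(*) AS orders, AVG(total) AS avg_total
        FROM sales.orders
        WHERE order_date >= '2024-01-01'
        GROUP BY region
        ORDER BY orders DESC
    """)
    for region, orders, avg_total in cur.fetchall():
        print(region, orders, round(avg_total, 2))
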
Technical Leadership Qualifications

  • 5+ years of previous experience as a Software Engineer, Data Engineer, or Data Analyst
  • Expertise in core Hadoop technologies including HDFS, Hive, and YARN
  • Deep experience in one or more ecosystem products/languages such as HBase, Spark, Impala, Solr, Kudu, etc.
  • Expert programming experience in Java, Scala, or another statically typed programming language
  • Ability to learn new technologies in a quickly changing field
  • Strong working knowledge of SQL and the ability to write, debug, and optimize distributed SQL queries
  • Excellent communication skills including proven experience working with key stakeholders and customers

Leadership

  • Ability to translate “big picture” business requirements and use cases into a Hadoop solution, including ingestion of many data sources, ETL processing, data access and consumption, as well as custom analytics
  • Experience scoping activities on large-scale, complex technology infrastructure projects
  • Customer relationship management, including project escalations and participation in executive steering meetings
  • Coaching and mentoring data or software engineers

Oracle
  • Austin, TX

DevOps System Administrator


We are a small, dynamic R&D team, and we work on the fun parts of big data processing and social media. We are part of Oracle, a large company with global reach and the resources to fund our exciting work. Our team consists of engineers who ask questions, think critically, and take the initiative to solve problems.

We are looking for a modern Systems Administrator: someone who enjoys building, tuning, and maintaining complex environments in multiple datacenters, and communicating with diverse teams across time zones. Our goal is to build on a solid foundation of systems administration, incorporate best practices from site reliability engineering and cloud automation, and continue to scale our platform and evolve into a GitOps-driven team.

The ideal candidate will have a strong Red Hat/CentOS Linux systems administration background and experience working in a production environment. Excellent communication skills and a desire to work effectively with different teams (e.g., QA, IT, development) are a requirement. Experience with Chef, automated deployment, and containerization technologies is a plus. Comfort working with Linux CLI tools is critical.

The Product

Our platform helps clients understand what's being said about them online. This means that we ingest and analyze huge quantities of social media data. We've developed a custom data processing pipeline and a proprietary text-analytics engine, but to meet customer demands, we're continually evolving these technologies. As we increase the amount of data processed and the sophistication of our analytics services, exciting challenges lie ahead.

The Technology Stack                                                                  

Our team depends on multiple environments running in multiple datacenters, using technologies like Solr/Lucene, a variety of NLP and ML tools, HBase, YARN, Kafka, and other common "big data" tools. You'll need to get up to speed on supporting Java and Scala applications that reliably process lots of data, 24x7, in a Linux environment.

Qualifications


• Linux administration using the CLI and basic scripting

• Red Hat-based Linux distributions (CentOS, etc.)

• Chef (or other patch/configuration tools) and automated deployment

• TCP/IP networking, load balancers, and proxies

• Git-based version control


Preferred Qualifications:

• RPM and YUM expertise

• Java / Scala / Ruby / JRuby runtime tuning and debugging

• Jenkins / Hudson and CI/CD pipelines, managing deployment artifacts

• Virtualization/containerization (Vagrant, Docker, VirtualBox, Xen)



As part of Oracle's employment process, candidates will be required to complete a pre-employment screening process after a conditional offer has been extended.

phData, Inc.
  • Minneapolis, MN

Are you inspired by innovation, hard work and a passion for data?    


If so, this may be the ideal opportunity to leverage your background in Big Data and Software Engineering, Data Engineering or Data Analytics experience to design, develop and innovate big data solutions for a diverse set of clients.  


At phData, our proven success has skyrocketed the demand for our services, resulting in quality growth and an expanded presence at our company headquarters, conveniently located in Downtown Minneapolis, and in Bangalore, India.


As the world's largest pure-play Big Data services firm, our team includes Apache committers, Spark experts and the most knowledgeable Scala development team in the industry. phData has earned the trust of customers by demonstrating our mastery of Hadoop services and our commitment to excellence.


In addition to a phenomenal growth and learning opportunity, we offer competitive compensation and excellent perks including base salary, annual bonus, extensive training, paid Cloudera certifications - in addition to generous PTO and employee equity.


As a Solution Architect on our Consulting Services Team, your responsibilities will include:


    • Design, develop, and innovate Hadoop solutions; partner with our internal Infrastructure Architects and Data Engineers to build creative solutions to tough big data problems.
    • Determine the project road map, select the best tools, assign tasks and priorities, and assume general project management oversight for performance, data integration, ecosystem integration, and security of big data solutions
    • Work across a broad range of technologies from infrastructure to applications to ensure the ideal Hadoop solution is implemented and optimized
    • Integrate data from a variety of data sources (data warehouse, data marts) utilizing on-prem or cloud-based data structures (AWS); determine new and existing data sources
    • Design and implement streaming, data lake, and analytics big data solutions

    • Create and direct testing strategies including unit, integration, and full end-to-end tests of data pipelines

    • Select the right storage solution for a project - comparing Kudu, HBase, HDFS, and relational databases based on their strengths

    • Utilize ETL processes to build data repositories; integrate data into Hadoop data lake using Sqoop (batch ingest), Kafka (streaming), Spark, Hive or Impala (transformation)

    • Partner with our Managed Services team to design and install on prem or cloud based infrastructure including networking, virtual machines, containers, and software

    • Determine and select best tools to ensure optimized data performance; perform Data Analysis utilizing Spark, Hive, and Impala

    • Mentor and coach Developers and Data Engineers. Provide guidance with project creation, application structure, automation, code style, testing, and code reviews

Qualifications

  • 5+ years of previous experience as a Software Engineer, Data Engineer, or Data Analyst, combined with expertise in Hadoop technologies and Java programming
  • Technical Leadership experience leading/mentoring junior software/data engineers, as well as scoping activities on large scale, complex technology projects
  • Expertise in core Hadoop technologies including HDFS, Hive and YARN.  
  • Deep experience in one or more ecosystem products/languages such as HBase, Spark, Impala, Solr, Kudu, etc.
  • Expert programming experience in Java, Scala, or other statically typed programming language
  • Strong working knowledge of SQL and the ability to write, debug, and optimize distributed SQL queries
  • Excellent communication skills including proven experience working with key stakeholders and customers
  • Ability to translate big picture business requirements and use cases into a Hadoop solution, including ingestion of many data sources, ETL processing, data access and consumption, as well as custom analytics
  • Customer relationship management including project escalations, and participating in executive steering meetings
  • Ability to learn new technologies in a quickly changing field


Keywords: Technical Leadership, Software Engineer, Big Data, Cloudera, Hive, Apache Spark, Java, Apache Kafka, Spark, Solution Architecture, Apache Pig, Hadoop, NoSQL, Cloudera Impala, Scala, Python, Data Engineering, Big Data Analytics, Large Scale Data Analysis, ETL, Linux, Kudu

Apple Inc.
  • Cupertino, CA
Job Summary:
Imagine what you could do here. At Apple, new ideas have a way of becoming extraordinary products, services, and customer experiences very quickly. Bring passion and dedication to your job and there's no telling what you could accomplish.

The Apple News team is looking for a Sr. Machine Learning Engineer with a real passion for building scalable services that enable an excellent user experience and delight millions of users every single day.

Key Qualifications:
* 5+ years of industry experience
* 5+ years experience developing in Java
* Experience building scalable backend services on platforms like Mesos and Hadoop
* Proficiency with scalable technologies such as Cassandra, Solr, and Kafka
* Knowledge of machine learning techniques for problems such as named entity labeling, classification, and clustering is a plus (see the sketch after this list)
* Experience implementing complex algorithms in software that shipped to production
* Experience building stable, high performance server-side systems using distributed processing algorithms
* Excellent oral and written English communication skills
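
As an illustrative toy of one technique from the list above (text classification), here is a scikit-learn pipeline on a tiny made-up dataset.

    # TF-IDF + logistic regression text classifier.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    train_texts = [
        "stocks rally on strong bank earnings",
        "team wins championship game",
        "central bank raises interest rates",
        "star striker scores twice in final",
    ]
    train_labels = ["business", "sports", "business", "sports"]

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit(train_texts, train_labels)
    print(clf.predict(["bank earnings beat forecasts"]))  # likely ['business']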

Description:
Apple's News team is seeking a high-energy and self-driven Sr. Machine Learning Engineer who will play a central role in the delivery of scalable services. The team leverages machine learning to tackle challenging problems in the News domain, including text extraction, named entity recognition, duplicate detection, search, and ranking. As a member of our dynamic group, you'll have the rare and rewarding opportunity to craft upcoming products that will delight and encourage millions of Apple's customers every day.

Responsibilities
* Design, develop and deploy systems and algorithms that process data in real time
* Design and build stable and scalable production systems
* Design and develop services using technologies such as Mesos, Cassandra, Solr, and Kafka
* Collaborate with engineering and operations teams to deliver scalable, robust, and high-performance systems

Education:
BS in Computer Science or Equivalent

Additional Requirements:
A strong sense of responsibility and obsession with quality

Apple is an Equal Opportunity Employer that is committed to inclusion and diversity. We also take affirmative action to offer employment and advancement opportunities to all applicants, including minorities, women, protected veterans, and individuals with disabilities. Apple will not discriminate or retaliate against applicants who inquire about, disclose, or discuss their compensation or that of other applicants.
