OnlyDataJobs.com

FCA Fiat Chrysler Automobiles
  • Detroit, MI

Fiat Chrysler Automobiles is looking to fill the full-time position of a Data Scientist. This position is responsible for delivering insights to the commercial functions in which FCA operates.


The Data Scientist role sits in the Business Analytics & Data Services (BA) department and reports through the CIO. They will play a pivotal role in the planning, execution and delivery of data science and machine learning-based projects. The bulk of the work will be in the areas of data exploration and preparation, data collection and integration, machine learning (ML) and statistical modelling, and data pipelining and deployment.

The newly hired data scientist will be a key interface between the ICT Sales & Marketing team, the business and the BA team. Candidates need to be self-driven, curious and creative.

Primary Responsibilities:

    • Problem Analysis and Project Management:
      • Guide and inspire the organization about the business potential and strategy of artificial intelligence (AI)/data science
      • Identify data-driven/ML business opportunities
      • Collaborate across the business to understand IT and business constraints
      • Prioritize, scope and manage data science projects and the corresponding key performance indicators (KPIs) for success
    • Data Exploration and Preparation:
      • Apply statistical analysis and visualization techniques, such as hierarchical clustering, T-distributed Stochastic Neighbor Embedding (t-SNE) and principal component analysis (PCA), to a variety of data
      • Generate and test hypotheses about the underlying mechanics of the business process.
      • Network with domain experts to better understand the business mechanics that generated the data.
    • Data Collection and Integration:
      • Understand new data sources and process pipelines. Catalog and document their use in solving business problems.
      • Create data pipelines and assets that enable greater efficiency and repeatability of data science activities.
    • Machine Learning and Statistical Modelling:
      • Apply various ML and advanced analytics techniques to perform classification or prediction tasks
      • Integrate domain knowledge into the ML solution; for example, from an understanding of financial risk, customer journey, quality prediction, sales, marketing
      • Test ML models using techniques such as cross-validation, A/B testing, and bias and fairness checks (see the sketch after this list)
    • Operationalization:
      • Collaborate with ML operations (MLOps), data engineers, and IT to evaluate and implement ML deployment options
      • (Help to) integrate model performance management tools into the current business infrastructure
      • (Help to) implement champion/challenger test (A/B tests) on production systems
      • Continuously monitor execution and health of production ML models
      • Establish best practices around ML production infrastructure
    • Other Responsibilities:
      • Train other business and IT staff on basic data science principles and techniques
      • Train peers on specialist data science topics
      • Promote collaboration with the data science COE within the organization.
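
To make the model-testing bullet above concrete, here is a minimal sketch of k-fold cross-validation with scikit-learn; the synthetic dataset and random-forest model are placeholders, not FCA's actual stack.

```python
# Hypothetical sketch: k-fold cross-validation of a classifier, one of
# the ML testing practices named above (data and model are placeholders).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
model = RandomForestClassifier(n_estimators=200, random_state=42)

# 5-fold CV yields a spread of scores, not just a single point estimate
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"ROC AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```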

Basic Qualifications:

    • A bachelor's degree in computer science, data science, operations research, statistics, applied mathematics, or a related quantitative field is required; equivalent experience and education in areas such as economics, engineering or physics is also acceptable. Experience in more than one area is strongly preferred.
    • Candidates should have three to six years of relevant project experience successfully planning, launching and executing data science projects, preferably in the automotive or customer-behavior-prediction domains.
    • Coding knowledge and experience in several languages: for example, R, Python, SQL, Java, C++, etc.
    • Experience working across multiple deployment environments, including cloud, on-premises and hybrid, multiple operating systems, and containerization techniques such as Docker, Kubernetes, AWS Elastic Container Service, and others.
    • Experience with distributed data/computing and database tools: MapReduce, Hadoop, Hive, Kafka, MySQL, Postgres, DB2 or Greenplum, etc.
    • All candidates must be self-driven, curious and creative.
    • They must demonstrate the ability to work in diverse, cross-functional teams.
    • Should be confident, energetic self-starters, with strong moderation and communication skills.

Preferred Qualifications:

    • A master's degree or PhD in statistics, ML, computer science or the natural sciences, especially physics or any engineering disciplines or equivalent.
    • Experience in one or more of the following commercial/open-source data discovery/analysis platforms: RStudio, Spark, KNIME, RapidMiner, Alteryx, Dataiku, H2O, SAS Enterprise Miner (SAS EM) and/or SAS Visual Data Mining and Machine Learning, Microsoft AzureML, IBM Watson Studio or SPSS Modeler, Amazon SageMaker, Google Cloud ML, SAP Predictive Analytics.
    • Knowledge and experience in statistical and data mining techniques: generalized linear model (GLM)/regression, random forest, boosting, trees, text mining, hierarchical clustering, deep learning, convolutional neural network (CNN), recurrent neural network (RNN), T-distributed Stochastic Neighbor Embedding (t-SNE), graph analysis, etc.
    • A specialization in text analytics, image recognition, graph analysis or other specialized ML techniques such as deep learning, etc., is preferred.
    • Ideally, the candidates are adept in agile methodologies and well-versed in applying DevOps/MLOps methods to the construction of ML and data science pipelines.
    • Knowledge of industry standard BA tools, including Cognos, QlikView, Business Objects, and other tools that could be used for enterprise solutions
    • Should exhibit superior presentation skills, including storytelling and other techniques, to guide, inspire and explain analytics capabilities and techniques to the organization.
FlixBus
  • Berlin, Germany

Your Tasks – Paint the world green



  • Holistic cloud-based infrastructure automation

  • Distributed data processing clusters as well as data streaming platforms based on Kafka, Flink and Spark

  • Microservice platforms based on Docker

  • Development infrastructure and QA automation

  • Continuous Integration/Delivery/Deployment


Your Profile – Ready to hop on board



  • Experience in building and operating complex infrastructure

  • Expert-level: Linux, System Administration

  • Experience with Cloud Services, Expert-Level with either AWS or GCP  

  • Experience with server and operating-system-level virtualization is a strong plus, in particular practical experience with Docker and cluster technologies like Kubernetes, AWS ECS, OpenShift

  • Mindset: "Automate Everything", "Infrastructure as Code", "Pipelines as Code", "Everything as Code"

  • Hands-on experience with "Infrastructure as Code" tools: Terraform, CloudFormation, Packer (see the sketch after this list)

  • Experience with provisioning/configuration management tools (Ansible, Chef, Puppet, Salt)

  • Experience designing, building and integrating systems for instrumentation, metrics/log collection, and monitoring: CloudWatch, Prometheus, Grafana, DataDog, ELK

  • At least basic knowledge in designing and implementing Service Level Agreements

  • Solid knowledge of Network and general Security Engineering

  • At least basic experience with systems and approaches for Test, Build and Deployment automation (CI/CD): Jenkins, TravisCI, Bamboo

  • At least basic hands-on DBA experience, experience with data backup and recovery

  • Experience with JVM-based build automation is a plus: Maven, Gradle, Nexus, JFrog Artifactory
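
To illustrate the "Infrastructure as Code" item above: the tools named there (Terraform, CloudFormation, Packer) are declarative, but the same idea can be sketched in Python with the troposphere library, which programmatically emits a CloudFormation template. The AMI ID and resource names are placeholders, not FlixBus configuration.

```python
# Hypothetical "Infrastructure as Code" sketch: generate a CloudFormation
# template in Python with troposphere (AMI ID and names are placeholders).
from troposphere import Tags, Template
from troposphere.ec2 import Instance

template = Template()
template.add_resource(Instance(
    "DemoWebServer",
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.micro",
    Tags=Tags(Name="iac-demo"),
))

# The emitted JSON would be deployed via CloudFormation (e.g. the AWS CLI)
print(template.to_json())
```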

AXA Schweiz
  • Winterthur, Switzerland

Do agility, product-driven IT, cloud computing and machine learning appeal to you?
Are you performance-oriented, and do you have the courage to try new things?

We have anchored digital transformation in our DNA!


Your contribution:



  • The role primarily covers engineering (IBM MQ on Linux, z/OS) and the operation of middleware components (file transfer, web service infrastructure).

  • In detail, that means component ownership (including lifecycle management, provision of APIs and self-services, automation of workflows, and creation and maintenance of documentation), keeping operations running (you autonomously take the necessary measures and are prepared for occasional weekend/on-call duty), as well as maintaining and sharing knowledge.

  • In an agile environment, helping with the migration of our components to the cloud.


Your skills and talents:



  • You have a completed degree in computer science or comparable experience.

  • Your know-how covers messaging middleware components, ideally IBM MQ on Linux enriched with z/OS know-how; knowledge of RabbitMQ and Kafka would also be cool.

  • Other middleware components (file transfer and web services) are not entirely unknown to you, and you are familiar with transfer protocols and with the Linux world in particular.

  • You bring solid automation experience to the table (Bash, Python), and REST, APIs and Java(Script) are no foreign words to you. Initial programming experience in an object-oriented language, preferably Java, rounds out your profile.

  • You are inclusive, look at challenges from different perspectives, and ask uncomfortable questions when it matters.

  • You are confident in both German and English.

The HT Group
  • Austin, TX

Full Stack Engineer, Java/Scala Direct Hire Austin

Do you have a track record of building both internal- and external-facing software services in a dynamic environment? Are you passionate about introducing disruptive and innovative software solutions for the shipping and logistics industry? Are you ready to deliver immediate impact with the software you create?

We are looking for Full Stack Engineers to craft, implement and deploy new features, services, platforms, and products. If you are curious, driven, and naturally explore how to build elegant and creative solutions to complex technical challenges, this may be the right fit for you. If you value a sense of community and shared commitment, you'll collaborate closely with others in a full-stack role to ship software that delivers immediate and continuous business value. Are you up for the challenge?

Tech Tools:

  • Application stack runs entirely on Docker, frontend and backend
  • Infrastructure is 100% Amazon Web Services and we use AWS services whenever possible. Current examples: EC2 Elastic Container Service (Docker), Kinesis, SQS, Lambda and Redshift
  • Java and Scala are the languages of choice for long-lived backend services
  • Python for tooling and data science
  • Postgres is the SQL database of choice
  • Actively migrating to a modern JavaScript-centric frontend built on Node, React/Relay, and GraphQL as some of our core UI technologies

Responsibilities:

  • Build both internal and external REST/JSON services running on our 100% Docker-based application stack or within AWS Lambda
  • Build data pipelines around event-based and streaming-based AWS services and application features
  • Write deployment, monitoring, and internal tooling to operate our software with as much efficiency as we build it
  • Share ownership of all facets of software delivery, including development, operations, and test
  • Mentor junior members of the team and coach them to be even better at what they do

Requirements:

  • Embrace the AWS + DevOps philosophy and believe this is an innovative approach to creating and deploying products and technical solutions that require software engineers to be truly full-stack
  • Have high-quality standards, pay attention to details, and love writing beautiful, well-designed and tested code that can stand the test of time
  • Have built high-quality software, solved technical problems at scale and believe in shipping software iteratively and often
  • Proficient in and have delivered software in Java, Scala, and possibly other JVM languages
  • Developed a strong command over Computer Science fundamentals
ettain group
  • Raleigh, NC

Role: Network Engineer R/S

Location: RTP, primarily onsite but some flexibility for remote after initial ramp-up

Pay Rate: $35-60/hr depending on experience.

Interview Process:
Video WebEx (30-minute screen)
Panel interview with 3-4 CPOC engineers - an in-depth technical screen

Personality:

·         Customer facing

·         Experience dealing with high pressure situations

·         Be able to handle technology at the level the customer will throw at them

·         Customers test the engineers to see if tech truly is working

·         Have to be able to figure out how to make it work

Must have Tech:

·         Core R/S (routing and switching)

·         VMware


Who You'll Work With:

The POV Services Team (dCloud, CPOC, CXC, etc) provides services, tools and content for Cisco field sales and channel partners, enabling them to highlight Cisco solutions and technologies to customers.

What You'll Do

As a Senior Engineer, you are responsible for the development, delivery, and support of a wide range of Enterprise Networking content and services for Cisco Internal, Partner and Customer audiences.

Content Roadmap, Design and Project Management 25%

    • You will document and scope all projects prior to entering project build phase.
    • You'll work alongside our platform/automation teams to review applicable content to be hosted on Cisco dCloud.
    • You specify and document virtual and hardware components, resources, etc. required for content delivery.
    • You can identify and prioritize all project-related tasks while working with the Project Manager to develop a timeline, with high expectations to meet project deadlines.
    • You will successfully collaborate and work with a globally-dispersed team using collaboration tools, such as email, instant messaging (Cisco Jabber/Spark), and teleconferencing (WebEx and/or TelePresence).

Content Engineering and Documentation 30%

    • Document device connectivity requirements of all components (virtual and HW) and build as part of pre-work.
    • Work with the NetOps team on the racking, cabling, imaging, and access required for the content project.
    • As part of the development cycle, the developer will work collaboratively with the business unit technical marketing engineers (TME) and WW EN Sales engineers to configure solution components, including Cisco routers, switches, wireless LAN controllers (WLC), SD-Access, DNA Center, Meraki, SD-WAN (Viptela), etc.
    • Work with BU, WW EN Sales and marketing resources to draft, test and troubleshoot compelling demo/lab/story guides that contribute to the field sales teams and generate high interest and utilization.
    • Work with POV Services Technical Writer to format/edit/publish content and related documents per POV Services standards.
    • Work as the liaison to the operations and support teams to resolve issues identified during the development and testing process, providing technical support and making design recommendations for fixes.
    • Perform resource studies using VMware vCenter to ensure an optimal balance of content performance, efficiency and stability before promoting/publishing production content.

Content Delivery 25%

    • SD-Access POV, SD-WAN POV Presentations, Webex and Video recordings, TOI, SE Certification Proctor, etc.
    • Customer engagement at customer location, Cisco office, remote delivering proof of value and at Cisco office delivering Test Drive and or Technical Solutions Workshop content.
    • Deliver training, TOI, and presentations at events (Cisco Live, GSX, SEVT, Partner VT, etc).
    • Work with the POV Services owners, architects, and business development team to market, train, and increase global awareness of new/revised content releases.

Support and Other 20%

    • You provide transfer of information and technical support to Level 1 & 2 support engineers, program managers and others ensuring that content is understood and in working order.
    • You will test and replicate issues, isolate the root cause, and provide timely workarounds and/or short/long term fixes.
    • You will be monitoring any support trends for assigned content. Track and log critical issues effectively using Jira.
    • You provide Level 3 user support directly/indirectly to Cisco and Partner sales engineers while supporting and mentoring peer/junior engineers as required.

Who You Are

    • You are well versed in the use of standard design templates and tools (Microsoft Office including Visio, Word, Excel, PowerPoint, and Project).
    • You bring an uncanny ability to multitask between multiple projects, user support, training, events, etc. and shifting priorities.
    • Demonstrated, in-depth working knowledge/certification of routing, switching and WLAN design, configuration and deployment. Cisco Certifications including CCNA, CCNP and or CCIE (CCIE preferred) in R&S.
    • You possess professional or expert knowledge/experience with Cisco Service Provider solutions.
    • You are an Associate or have professional knowledge with Cisco Security including Cisco ISE, Stealthwatch, ASA, Firepower, AMP, etc.
    • You have the ability to travel to Cisco internal, partner and customer events, roadshows, etc. to train and raise awareness to drive POV Services adoption and sales. Up to 40% travel.
    • You bring VMWare/ESXi experience building servers, install VMware, deploying virtual appliances, etc.
    • You have Linux experience or certifications including CompTIA Linux+, Red Hat, etc.
    • You're experienced using Tool Command Language (Tcl), Perl, Python, etc., as well as Cisco and 3rd-party traffic, event and device generation applications/tools/hardware: IXIA, Sapro, Pagent, etc.
    • You've used Cisco and 3rd-party management/monitoring/troubleshooting solutions; Cisco: DNA Center, Cisco Prime, Meraki, Viptela, CMX.
    • 3rd party solutions: Solarwinds, Zenoss, Splunk, LiveAction or other to monitor and/or manage an enterprise network.
    • Experience using Wireshark and PCAP files.

Why Cisco

At Cisco, each person brings their unique talents to work as a team and make a difference.

Yes, our technology changes the way the world works, lives, plays and learns, but our edge comes from our people.

    • We connect everything: people, process, data and things, and we use those connections to change our world for the better.
    • We innovate everywhere - from launching a new era of networking that adapts, learns and protects, to building Cisco Services that accelerate businesses and business results. Our technology powers entertainment, retail, healthcare, education and more, from Smart Cities to your everyday devices.
    • We benefit everyone - We do all of this while striving for a culture that empowers every person to be the difference, at work and in our communities.
GrubHub Seamless
  • New York, NY

Got a taste for something new?

We’re Grubhub, the nation’s leading online and mobile food ordering company. Since 2004 we’ve been connecting hungry diners to the local restaurants they love. We’re moving eating forward with no signs of slowing down.

With more than 90,000 restaurants and over 15.6 million diners across 1,700 U.S. cities and London, we’re delivering like never before. Incredible tech is our bread and butter, but amazing people are our secret ingredient. Rigorously analytical and customer-obsessed, our employees develop the fresh ideas and brilliant programs that keep our brands going and growing.

Long story short, keeping our people happy, challenged and well-fed is priority one. Interested? Let’s talk. We’re eager to show you what we bring to the table.

About the Opportunity: 

Senior Site Reliability Engineers are embedded in Big Data-specific Dev teams to focus on the operational aspects of our services, and our SREs run their respective products and services from conception to continuous operation. We're looking for engineers who want to be a part of developing infrastructure software, maintaining it and scaling it. If you enjoy focusing on reliability, performance, capacity planning, and automating everything, you'd probably like this position.





Some Challenges You’ll Tackle





TOOLS OUR SRE TEAM WORKS WITH:



  • Python – our primary infrastructure language

  • Cassandra

  • Docker (in production!)

  • Splunk, Spark, Hadoop, and PrestoDB

  • AWS

  • Python and Fabric for automation and our CD pipeline (see the sketch after this list)

  • Jenkins for builds and task execution

  • Linux (CentOS and Ubuntu)

  • DataDog for metrics and alerting

  • Puppet
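
A minimal sketch of the "Python and Fabric for automation" item above, assuming the Fabric 2 API; the host names and service name are invented.

```python
# Hypothetical automation sketch with Fabric 2: run a health check
# across a group of hosts (host names and service are placeholders).
from fabric import Connection

HOSTS = ["app1.example.com", "app2.example.com"]

for host in HOSTS:
    conn = Connection(host)
    # warn=True keeps going even if the command exits non-zero
    result = conn.run("systemctl is-active myservice", warn=True, hide=True)
    status = result.stdout.strip() or "unknown"
    print(f"{host}: myservice is {status}")
```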





You Should Have






  • Experience in AWS services like Kinesis, IAM, EMR, Redshift, and S3

  • Experience managing Linux systems

  • Configuration management tool experiences like Puppet, Chef, or Ansible

  • Continuous integration, testing, and deployment using Git, Jenkins, Jenkins DSL

  • Exceptional communication and troubleshooting skills.


NICE TO HAVE:



  • Python or Java / Scala development experience

  • Bonus points for deploying/operating large-ish Hadoop clusters in AWS/GCP and use of EMR, DC/OS, Dataproc.

  • Experience in Streaming data platforms, (Spark streaming, Kafka)

  • Experience developing solutions leveraging Docker

Avaloq Evolution AG
  • Zürich, Switzerland

The position


Are you passionate about data architecture? Are you interested in shaping the next generation of data science driven products for the financial industry? Do you enjoy working in an agile environment involving multiple stakeholders?

You will be responsible for selecting appropriate technologies from open-source, commercial on-premises and cloud-based offerings, and for integrating a new generation of tools within the existing environment to ensure access to accurate and current data. You will consider not only the functional requirements, but also the non-functional attributes of platform quality, such as security, usability and stability.

We want you to help us to strengthen and further develop the transformation of Avaloq to a data driven product company. Make analytics scalable and accelerate the process of data science innovation.


Your profile


  • PhD, Master or Bachelor degree in Computer Science, Math, Physics, Engineering, Statistics or other technical field

  • Knowledgeable in Big Data technologies and architectures (e.g. Hadoop, Spark, data lakes, stream processing)

  • Practical experience with container platforms (OpenShift) and/or containerization software (Kubernetes, Docker)

  • Hands-on experience developing data extraction and transformation pipelines (ETL process)

  • Expert knowledge in RDBMS, NoSQL and Data Warehousing

  • Familiar with information retrieval software such as Elasticsearch/Lucene/Solr (see the sketch after this list)

  • Firm understanding of major programming/scripting languages such as Java/Scala, PHP, Python and/or R, as well as Linux

  • High integrity, responsibility and confidentiality, a requirement for dealing with sensitive data

  • Strong presentation and communication skills

  • Good planning and organisational skills

  • Collaborative mindset to sharing ideas and finding solutions

  • Fluent in English; German, Italian and French a plus
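
To make the information-retrieval item above concrete: a minimal sketch of indexing and querying with the elasticsearch Python client. The index name and document are invented, and the calls assume the 8.x client API.

```python
# Hypothetical Elasticsearch sketch (8.x-style client API; index name
# and document are made up for illustration).
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Index a document, then search for it with a full-text match query
es.index(index="research-notes", id="1",
         document={"title": "Data lake ingestion",
                   "body": "Spark stream processing"})
es.indices.refresh(index="research-notes")

hits = es.search(index="research-notes",
                 query={"match": {"body": "stream"}})
for hit in hits["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])
```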





 Professional requirements


  • Be a thought leader on best practices for developing and deploying data science products & services

  • Provide an infrastructure to make data driven insights scalable and agile

  • Liaise and coordinate with stakeholders regarding setting up and running a BigData and analytics platform

  • Lead the evaluation of business and technical requirements

  • Support data-driven activities and a data-driven mindset where needed



Main place of work
Zurich

Contact
Avaloq Evolution AG
Anna Drozdowska, Talent Acquisition Professional
Allmendstrasse 140 - 8027 Zürich - Switzerland

www.avaloq.com/en/open-positions

Please only apply online.

Note to Agencies: All unsolicited résumés will be considered direct applications and no referral fee will be acknowledged.
SafetyCulture
  • Surry Hills, Australia
  • Salary: A$120k - 140k

The Role



  • Be an integral member of the team responsible for designing, implementing and maintaining a distributed, big-data-capable system with high-quality components (Kafka, EMR + Spark, Akka, etc).

  • Embrace the challenge of dealing with big data on a daily basis (Kafka, RDS, Redshift, S3, Athena, Hadoop/HBase), perform data ETL, and build tools for proper data ingestion from multiple data sources (see the sketch after this list)

  • Collaborate closely with data infrastructure engineers and data analysts across different teams to find bottlenecks and solve problems

  • Design, implement and maintain the heterogeneous data processing platform to automate the execution and management of data-related jobs and pipelines

  • Implement automated data workflows in collaboration with data analysts, and continue to maintain and improve the system in line with growth

  • Collaborate with Software Engineers on application events, ensuring the right data can be extracted

  • Contribute to resources management for computation and capacity planning

  • Diving deep into code and constantly innovating
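
A minimal sketch of the ingestion pattern described above, assuming kafka-python; the topic, brokers and event fields are placeholders.

```python
# Hypothetical ingestion sketch with kafka-python: consume JSON events
# from a topic (topic name and brokers are placeholders).
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "inspection-events",                       # placeholder topic
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # Downstream: validate, enrich, and land in S3/Redshift for analysts
    print(event.get("event_type"), event.get("occurred_at"))
```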


Requirements



  • Experience with AWS data technologies (EC2, EMR, S3, Redshift, ECS, Data Pipeline, etc) and infrastructure.

  • Working knowledge in big data frameworks such as Apache Spark, Kafka, Zookeeper, Hadoop, Flink, Storm, etc

  • Rich experience with Linux and database systems

  • Experience with relational and NoSQL databases, query optimization, and data modelling

  • Familiar with one or more of the following: Scala/Java, SQL, Python, Shell, Golang, R, etc

  • Experience with container technologies (Docker, k8s), Agile development, DevOps and CI tools.

  • Excellent problem-solving skills

  • Excellent verbal and written communication skills 

Riccione Resources
  • Dallas, TX

Sr. Data Engineer – Hadoop, Spark, Data Pipelines – Growing Company

One of our clients is looking for a Sr. Data Engineer in the Fort Worth, TX area! Build your data expertise with projects centering on large Data Warehouses and new data models! Think outside the box to solve challenging problems! Thrive in the variety of technologies you will use in this role!

Why should I apply here?

    • Culture built on creativity and respect for engineering expertise
    • Nominated as one of the Best Places to Work in DFW
    • Entrepreneurial environment, growing portfolio and revenue stream
    • One of the fastest growing mid-size tech companies in DFW
    • Executive management with past successes in building firms
    • Leader of its technology niche, setting the standards
    • A robust, fast-paced work environment
    • Great technical challenges for top-notch engineers
    • Potential for career growth, emphasis on work/life balance
    • A remodeled office with a bistro, lounge, and foosball

What will I be doing?

    • Building data expertise and owning data quality for the transfer pipelines that you create to transform and move data to the company's large Data Warehouse
    • Architecting, constructing, and launching new data models that provide intuitive analytics to customers
    • Designing and developing new systems and tools to enable clients to optimize and track advertising campaigns
    • Using your expert skills across a number of platforms and tools such as Ruby, SQL, Linux shell scripting, Git, and Chef
    • Working across multiple teams in high visibility roles and owning the solution end-to-end
    • Providing support for existing production systems
    • Broadly influencing the company's clients and internal analysts

What skills/experiences do I need?

    • B.S. or M.S. degree in Computer Science or a related technical field
    • 5+ years of experience working with Hadoop and Spark
    • 5+ years of experience with Python or Ruby development
    • 5+ years of experience with efficient SQL (Postgres, Vertica, Oracle, etc.)
    • 5+ years of experience building and supporting applications on Linux-based systems
    • Background in engineering Spark data pipelines (see the sketch after this list)
    • Understanding of distributed systems
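
A minimal PySpark sketch of the kind of batch pipeline the skills list refers to; the S3 paths, columns and aggregation are invented for illustration.

```python
# Hypothetical Spark pipeline sketch: read raw events, aggregate, and
# write to a warehouse-friendly format (paths and columns are made up).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("campaign-rollup").getOrCreate()

events = spark.read.json("s3a://example-bucket/raw/events/")  # placeholder

daily = (events
         .withColumn("day", F.to_date("event_ts"))
         .groupBy("day", "campaign_id")
         .agg(F.count("*").alias("impressions"),
              F.sum("click").alias("clicks")))

daily.write.mode("overwrite").partitionBy("day").parquet(
    "s3a://example-bucket/warehouse/daily_campaign/")
```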

What will make my résumé stand out?

    • Ability to customize an ETL or ELT
    • Experience building an actual data warehouse schema

Location: Fort Worth, TX

Citizenship: U.S. citizens and those authorized to work in the U.S. are encouraged to apply. This company is currently unable to provide sponsorship (e.g., H1B).

Salary: $115-130k + 401(k) match

---------------------------------------------------


~SW1317~

Gravity IT Resources
  • Miami, FL

Overview of Position:

We are undertaking an ambitious digital transformation across Sales, Service, Marketing, and eCommerce. We are looking for a web data analytics wizard with prior experience in digital data preparation, discovery, and predictive analytics.

The data scientist/web analyst will work with external partners, digital business partners, enterprise analytics, and the technology team to strategically plan and develop datasets, measure web analytics, and execute on predictive and prescriptive use cases. The role demands the ability to (1) learn quickly; (2) work in a fast-paced, team-driven environment; (3) manage multiple efforts simultaneously; (4) use large datasets and models to test the effectiveness of different courses of action; (5) promote data-driven decision making throughout the organization; and (6) define and measure the success of the capabilities we provide the organization.


Primary Duties and Responsibilities

    • Analyze data captured through Google Analytics and develop meaningful, actionable insights on digital behavior.
    • Put together a customer 360 data frame by connecting CRM Sales, Service, Marketing cloud data with Commerce web behavior data and wrangle the data into a usable form.
    • Use predictive modelling to increase and optimize customer experiences across online & offline channels.
    • Evaluate customer experience and conversions to provide insights & tactical recommendations for web optimization
    • Execute on digital predictive use cases and collaborate with enterprise analytics team to ensure use of best tools and methodologies.
    • Lead support for enterprise voice of customer feedback analytics.
    • Enhance and maintain digital data library and definitions.

Minimum Qualifications

  • Bachelor's degree in Statistics, Computer Science, Marketing, Engineering or equivalent
  • 3 years or more of working experience in building predictive models.
  • Experience in Google Analytics or similar web behavior tracking tools is required.
  • Experience in R is a must, with working knowledge of connecting to multiple data sources such as Amazon Redshift, Salesforce, Google Analytics, etc.
  • Working knowledge in machine learning algorithms such as Random Forest, K-means, Apriori, Support Vector machine, etc.
  • Experience in A/B testing or multivariate testing (see the sketch after this list).
  • Experience in media tracking tags and pixels, UTM, and custom tracking methods.
  • Microsoft Office Excel & PowerPoint (advanced).
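
To illustrate the A/B-testing item above: the posting's primary stack is R, but the underlying logic is the same in any language; this Python sketch runs a two-proportion z-test with statsmodels on invented counts.

```python
# Hypothetical A/B-test readout: two-proportion z-test on invented
# conversion counts (the posting's stack is R; the logic is identical).
from statsmodels.stats.proportion import proportions_ztest

conversions = [312, 286]   # variant A, variant B (made-up numbers)
visitors    = [5120, 5093]

stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Conversion rates differ at the 5% level")
else:
    print("No significant difference detected")
```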

Preferred Qualifications

  • Master's degree in statistics or equivalent.
  • Google Analytics 360 experience/certification.
  • SQL workbench, Postgres.
  • Alteryx experience is a plus.
  • Tableau experience is a plus.
  • Experience in HTML, JavaScript.
  • Experience in SAP Analytics Cloud or the SAP desktop predictive tool is a plus.
Signify Health
  • Dallas, TX

Position Overview:

Signify Health is looking for a savvy Data Engineer to join our growing team of deep learning specialists. This position is responsible for evolving and optimizing data and data pipeline architectures, as well as optimizing data flow and collection for cross-functional teams. The Data Engineer will support software developers, database architects, data analysts, and data scientists. The ideal candidate is self-directed, passionate about optimizing data, and comfortable supporting the data wrangling needs of multiple teams, systems and products.

If you enjoy providing expert-level IT technical services, including the direction, evaluation, selection, configuration, implementation, and integration of new and existing technologies and tools, while working closely with IT team members, data scientists, and data engineers to build our next generation of AI-driven solutions, we will give you the opportunity to grow personally and professionally in a dynamic environment. Our projects are built on cooperation and teamwork, and you will find yourself working together with other talented, passionate and dedicated team members, all working towards a shared goal.

Essential Job Responsibilities:

  • Assemble large, complex data sets that meet functional / non-functional business requirements
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing data models for greater scalability, etc.
  • Leverage Azure for extraction, transformation, and loading of data from a wide variety of data sources in support of AI/ML Initiatives
  • Design and implement high performance data pipelines for distributed systems and data analytics for deep learning teams
  • Create tool-chains for analytics and data scientist team members that assist them in building and optimizing AI workflows
  • Work with data and machine learning experts to strive for greater functionality in our data and model life cycle management capabilities
  • Communicate results and ideas to key decision makers in a concise manner
  • Comply with applicable legal requirements, standards, policies and procedures including, but not limited to, compliance requirements and HIPAA.


Qualifications:

Education/Licensing Requirements:
  • High school diploma or equivalent.
  • Bachelor's degree in Computer Science, Electrical Engineering, Statistics, Informatics, Information Systems, or another quantitative field, or equivalent work experience.


Experience Requirements:
  • 5+ years of experience in a Data Engineer role.
  • Experience using the following software/tools preferred:
    • Experience with big data tools: Hadoop, Spark, Kafka, etc.
    • Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
    • Experience with AWS or Azure cloud services.
    • Experience with stream-processing systems: Storm, Spark-Streaming, etc.
    • Experience with object-oriented/object function scripting languages: Python, Java, C#, etc.
  • Strong work ethic, the ability to work both collaboratively and independently without a lot of direct supervision, and solid problem-solving skills
  • Must have strong communication skills (written and verbal), and possess good one-on-one interpersonal skills.
  • Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases
  • A successful history of manipulating, processing and extracting value from large disconnected datasets.
  • Working knowledge of message queuing, stream processing, and highly scalable big data stores.
  • 2 years of experience in data modeling, ETL development, and data warehousing (see the sketch after this list)
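
A minimal sketch of the extract-transform-load pattern named in the final item above, using pandas; the file paths and columns are placeholders, not Signify's actual data model.

```python
# Hypothetical ETL sketch with pandas: extract a CSV, apply a couple of
# transformations, and load to Parquet (paths/columns are placeholders).
import pandas as pd

# Extract
visits = pd.read_csv("raw/member_visits.csv", parse_dates=["visit_date"])

# Transform: normalize codes, drop bad rows, derive a monthly grain
visits["provider_id"] = visits["provider_id"].str.upper().str.strip()
visits = visits.dropna(subset=["member_id", "visit_date"])
visits["visit_month"] = visits["visit_date"].dt.to_period("M").astype(str)

monthly = (visits.groupby(["visit_month", "provider_id"])
                 .size().rename("visit_count").reset_index())

# Load
monthly.to_parquet("curated/monthly_visits.parquet", index=False)
```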
 

Essential Skills:

  • Fluently speak, read, and write English
  • Fantastic motivator and leader of teams with a demonstrated track record of mentoring and developing staff members
  • Strong point of view on who to hire and why
  • Passion for solving complex system and data challenges and desire to thrive in a constantly innovating and changing environment
  • Excellent interpersonal skills, including teamwork and negotiation
  • Excellent leadership skills
  • Superior analytical abilities, problem solving skills, technical judgment, risk assessment abilities and negotiation skills
  • Proven ability to prioritize and multi-task
  • Advanced skills in MS Office

Essential Values:

  • In Leadership: Do what's right, even if it's tough
  • In Collaboration: Leverage our collective genius, be a team
  • In Transparency: Be real
  • In Accountability: Recognize that if it is to be, it's up to me
  • In Passion: Show commitment in heart and mind
  • In Advocacy: Earn trust and business
  • In Quality: Ensure what we do, we do well
Working Conditions:
  • Fast-paced environment
  • Requires working at a desk and use of a telephone and computer
  • Normal sight and hearing ability
  • Use office equipment and machinery effectively
  • Ability to ambulate to various parts of the building
  • Ability to bend, stoop
  • Work effectively with frequent interruptions
  • May require occasional overtime to meet project deadlines
  • Lifting requirements of
DISYS
  • Minneapolis, MN
Client: Banking/Financial Services
Location: 100% Remote
Duration: 12 month contract-to-hire
Position Title: NLU/NLP Predictive Modeling Consultant


***Client requirements will not allow OPT/CPT candidates for this position, or any other visa type requiring sponsorship. 

This is a new team within the organization set up specifically to perform analyses and gain insights into the "voice of the customer" through the following activities:
Review inbound customer emails, phone calls, survey results, etc.
Review unstructured "natural language" text and speech data
Maintain focus on customer complaint identification and routing
Build machine learning models to scan customer communication (emails, voice, etc)
Identify complaints from non-complaints.
Classify complaints into categories
Identify escalated/high-risk complaints, e.g. claims of bias, discrimination, bait/switch, lying, etc...
Ensure routed to appropriate EO for special

Responsible for:
Focused on inbound retail (home mortgage/equity) emails
Email cleansing: removal of extraneous information (disclaimers, signatures, headers, PII)
Modeling: training models using state-of-the-art techniques (see the sketch after this list)
Scoring: "productionalizing" models to be consumed by the business
Governance: model documentation and Q/A with model risk group.
Implementation of model monitoring processes
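
A hedged sketch of the modeling step above: a TF-IDF plus logistic-regression complaint classifier in scikit-learn. The four toy emails stand in for the real labeled "voice of the customer" corpus.

```python
# Hypothetical complaint-classification sketch: TF-IDF + logistic
# regression on toy emails (real training data would come from the
# labeled customer-communication corpus).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

emails = [
    "I was charged twice on my mortgage payment and nobody will help",
    "Please send me the payoff statement for my home equity loan",
    "Your agent lied to me about the rate, this is bait and switch",
    "What documents do I need to refinance?",
]
labels = [1, 0, 1, 0]  # 1 = complaint, 0 = non-complaint

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=1),
                      LogisticRegression(max_iter=1000))
model.fit(emails, labels)

print(model.predict(["I keep getting the runaround on my escrow dispute"]))
```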

Desired Qualifications:
Real-world experience building/deploying predictive models, any industry (must)
SQL background (must)
Self-starter, able to excel in fast-paced environment w/o a ton of direction (must)
Good communication skills (must)
Experience in text/speech analytics (preferred)
Python, SAS background (preferred)
Linux (nice to have)
Spark (Scala or PySpark) (nice to have)

Sentek Global
  • San Diego, CA

Sentek Global is seeking a Software Engineer to provide support to PMW 150 in San Diego, CA!


Responsibilities
  • Design, build and maintain software, develop software infrastructure and development environments, and transition older products and capabilities to the new architectures.
  • Produce effective and powerful solutions to complex problems in areas such as software engineering, data analytics, automation, and cybersecurity.
  • Perform analysis of existing and emerging operational and functional requirements to support the current and future systems capabilities and requirements.
  • Provide technical expertise, guidance, architecture, development and support in many different technologies directly to government customers.
  • Perform schedule planning and program management tasks as required.
  • Perform Risk Analysis for implementation of program requirements.
  • Assist in the development of requirements documents.
  • Other duties as required.


Qualifications
  • A current, active Secret clearance is required to be considered for this role.
  • A Bachelor's degree in data science, data analytics, computer science, or a related technical discipline is required.
  • Three to five (3-5) years of experience providing software engineering support to a DoD program office.
  • Experience working with data rich problems through research or programs.
  • Experience with computer programming or user experience/user interface.
  • Demonstrated knowledge completing projects with large or incomplete data and ability to recommend solutions.
  • Experience with Machine Learning algorithms including convolutional neural networks (CNN), regression, classification, clustering, etc.
  • Experience using deep learning frameworks (preferably TensorFlow); see the sketch after this list.
  • Experience designing and developing professional software using Linux, Python, C++, JAVA, etc.
    • Experience applying Deep/Machine Learning technology to solve real-world problems:
    • Selecting features, building and optimizing classifiers using machine learning techniques.
    • Data mining using state-of-the-art methods.
    • Extending the company's data with third-party sources of information when needed.
    • Enhancing data collection procedures to include information that is relevant for building analytic systems.
  • Experience processing, cleansing, and verifying the integrity of data used for analysis.
  • Experience performing ad-hoc analyses and presenting results in a clear manner.
  • Experience creating automated anomaly detection systems and constantly tracking their performance.
  • Must be able to travel one to three (1-3) times per year.
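
To make the deep-learning item above concrete: a minimal CNN skeleton in tf.keras (the posting prefers TensorFlow). The input shape and ten output classes are placeholders; real training data would be program-specific.

```python
# Hypothetical CNN skeleton in tf.keras (input shape and number of
# classes are placeholders; no claim about the program's actual models).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 placeholder classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```
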
ettain group
  • Raleigh, NC

Role: R/S Network Engineer

Pay: 50-60/hr

Location: Raleigh, NC (some flexibility with remote after initial ramp-up)

18 month contract


Who You'll Work With:

The POV Services Team (dCloud, CPOC, CXC, etc) provides services, tools and content for Cisco field sales and channel partners, enabling them to highlight Cisco solutions and technologies to customers.

What You'll Do

As a Senior Engineer, you are responsible for the development, delivery, and support of a wide range of Enterprise Networking content and services for Cisco Internal, Partner and Customer audiences.

Content Roadmap, Design and Project Management 25%

  • You will document and scope all projects prior to entering project build phase.
  • You'll work alongside our platform/automation teams to review applicable content to be hosted on Cisco dCloud.
  • You specify and document virtual and hardware components, resources, etc. required for content delivery.
  • You can identify and prioritize all project-related tasks while working with the Project Manager to develop a timeline, with high expectations to meet project deadlines.
  • You will successfully collaborate and work with a globally-dispersed team using collaboration tools, such as email, instant messaging (Cisco Jabber/Spark), and teleconferencing (WebEx and/or TelePresence).

Content Engineering and Documentation 30%

  • Document device connectivity requirements of all components (virtual and HW) and build as part of pre-work.
  • Work with the NetOps team on the racking, cabling, imaging, and access required for the content project.
  • As part of the development cycle, the developer will work collaboratively with the business unit technical marketing engineers (TME) and WW EN Sales engineers to configure solution components, including Cisco routers, switches, wireless LAN controllers (WLC), SD-Access, DNA Center, Meraki, SD-WAN (Viptela), etc.
  • Work with BU, WW EN Sales and marketing resources to draft, test and troubleshoot compelling demo/lab/story guides that contribute to the field sales teams and generate high interest and utilization.
  • Work with POV Services Technical Writer to format/edit/publish content and related documents per POV Services standards.
  • Work as the liaison to the operations and support teams to resolve issues identified during the development and testing process, providing technical support and making design recommendations for fixes.
  • Perform resource studies using VMware vCenter to ensure an optimal balance of content performance, efficiency and stability before promoting/publishing production content.

Content Delivery 25%

  • SD-Access POV, SD-WAN POV Presentations, Webex and Video recordings, TOI, SE Certification Proctor, etc.
  • Customer engagement at customer location, Cisco office, remote delivering proof of value and at Cisco office delivering Test Drive and or Technical Solutions Workshop content.
  • Deliver training, TOI, and presentations at events (Cisco Live, GSX, SEVT, Partner VT, etc).
  • Work with the POV Services owners, architects, and business development team to market, train, and increase global awareness of new/revised content releases.

Support and Other 20%

  • You provide transfer of information and technical support to Level 1 & 2 support engineers, program managers and others ensuring that content is understood and in working order.
  • You will test and replicate issues, isolate the root cause, and provide timely workarounds and/or short/long term fixes.
  • You will be monitoring any support trends for assigned content. Track and log critical issues effectively using Jira.
  • You provide Level 3 user support directly/indirectly to Cisco and Partner sales engineers while supporting and mentoring peer/junior engineers as required.

Who You Are

  • You are well versed in the use of standard design templates and tools (Microsoft Office including Visio, Word, Excel, PowerPoint, and Project).
  • You bring an uncanny ability to multitask between multiple projects, user support, training, events, etc. and shifting priorities.
  • Demonstrated, in-depth working knowledge/certification of routing, switching and WLAN design, configuration and deployment. Cisco Certifications including CCNA, CCNP and or CCIE (CCIE preferred) in R&S.
  • You possess professional or expert knowledge/experience with Cisco Service Provider solutions.
  • You are an Associate or have professional knowledge with Cisco Security including Cisco ISE, Stealthwatch, ASA, Firepower, AMP, etc.
  • You have the ability to travel to Cisco internal, partner and customer events, roadshows, etc. to train and raise awareness to drive POV Services adoption and sales. Up to 40% travel.
  • You bring VMWare/ESXi experience building servers, install VMware, deploying virtual appliances, etc.
  • You have Linux experience or certifications including CompTIA Linux+, Red Hat, etc.
  • You're experienced using Tool Command Language (Tcl), Perl, Python, etc., as well as Cisco and 3rd-party traffic, event and device generation applications/tools/hardware: IXIA, Sapro, Pagent, etc.
  • You've used Cisco and 3rd-party management/monitoring/troubleshooting solutions; Cisco: DNA Center, Cisco Prime, Meraki, Viptela, CMX.
  • 3rd party solutions: Solarwinds, Zenoss, Splunk, LiveAction or other to monitor and/or manage an enterprise network.
  • Experience using Wireshark and PCAP files.

Why Cisco

At Cisco, each person brings their unique talents to work as a team and make a difference.

Yes, our technology changes the way the world works, lives, plays and learns, but our edge comes from our people.

  • We connect everything: people, process, data and things, and we use those connections to change our world for the better.
  • We innovate everywhere - from launching a new era of networking that adapts, learns and protects, to building Cisco Services that accelerate businesses and business results. Our technology powers entertainment, retail, healthcare, education and more, from Smart Cities to your everyday devices.
  • We benefit everyone - We do all of this while striving for a culture that empowers every person to be the difference, at work and in our communities.
KELZAL (QELZAL CORPORATION)
  • San Diego, CA

Challenge:

As Kelzal's Machine Learning Engineer, you will be part of an innovative team that designs and develops algorithms and software for the next generation of AI-enabled visual systems. You will develop power-efficient machine learning and adaptive signal processing algorithms to solve real-world imaging and video classification problems.


Responsibilities:

  • Develop algorithms for the fast, low-complexity and accurate detection and tracking of objects in real-world environments
  • Develop algorithms for event-based spatio-temporal signal processing
  • Contribute to our machine learning tool sets for curating data and training models
  • Inform sensor decisions for optimal approaches to classification for product requirements
  • Follow and drive research on state-of-the-art approaches in the areas described above, as applied to the problems we're solving


Requirements:

·      Experience in event-based signal processing

·      Experience in continuous-time signal processing techniques

·      Experience in some deep neural network packages (e.g. TensorFlow, NVIDIA DIGITS, Caffe/Caffe2)

·      Experience with OpenCV (see the sketch after this list)

·      Experience with traditional computer vision approaches to image processing

·      Experience with developing machine-learning algorithms for multi-modal object detection, scene understanding, semantic classification, face verification, human pose estimation, activity recognition, or anomaly detection

·      Strong experience with classification and regression algorithms

·      Strong coding skills with Python and/or C/C++ in Linux environment

·      Track record of research excellence and/or experience converting publications into actual implementations

·      Experience with commercial development processes such as continuous integration, deployment and release management tools a plus.

·      Experience launching products containing machine learning algorithms a plus

·      Experience with fixed point implementation a plus

·      3+ years hands-on experience working in industry

·      MS or PhD Degree in Computer Science, Electrical Engineering or a related field

·      Current US work authorization
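
A minimal OpenCV sketch of the detection theme above: background subtraction plus contour-based bounding boxes over a video. The file name and thresholds are placeholders, not Kelzal's algorithms.

```python
# Hypothetical OpenCV sketch: background subtraction + contour-based
# motion detection on a video file (path and thresholds are placeholders).
import cv2

cap = cv2.VideoCapture("traffic.mp4")            # placeholder video
subtractor = cv2.createBackgroundSubtractorMOG2()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Threshold away shadow pixels, then find moving blobs
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:             # ignore tiny specks
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```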

Ultra Tendency
  • Riga, Latvia

Are you a developer who loves to take a look at infrastructure as well? Are you a systems engineer who likes to write code? Ultra Tendency is looking for you!


Your Responsibilities:



  • Support our customers and development teams in transitioning capabilities from development and testing to operations

  • Deploy and extend large-scale server clusters for our clients

  • Fine-tune and optimize our clusters to process millions of records every day 

  • Learn something new every day and enjoy solving complex problems


Job Requirements:



  • You know Linux like the back of your hand

  • You love to automate all the things – SaltStack, Ansible, Terraform and Puppet are your daily business

  • You can write code in Python, Java, Ruby or similar languages.

  • You are driven by high quality standards and attention to detail

  • Understanding of the Hadoop ecosystem and knowledge of Docker is a plus


We offer:



  • Work with our open-source Big Data gurus, such as our Apache HBase committer and Release Manager

  • Work in the open-source community and become a contributor. Learn from open-source enthusiasts you will find nowhere else in Germany!

  • Work in an English-speaking, international environment

  • Work with cutting edge equipment and tools

Ultra Tendency
  • Berlin, Germany

You love writing high quality code? You enjoy designing algorithms for large-scale Hadoop clusters? Spark is your daily business? We have new challenges for you!


Your Responsibilities:



  • Solve Big Data problems for our customers in all phases of the project life cycle

  • Build program code, test and deploy to various environments (Cloudera, Hortonworks, etc.)

  • Enjoy being challenged and solve complex data problems on a daily basis

  • Be part of our newly formed team in Berlin and help drive its culture and work attitude


Job Requirements



  • Strong experience developing software using Java or a comparable language

  • At least 2 years of experience with data ingestion, analysis, integration, and design of Big Data applications using Apache open-source technologies

  • Strong background in developing on Linux

  • Solid computer science fundamentals (algorithms, data structures and programming skills in distributed systems)

  • Sound knowledge of SQL, relational concepts and RDBMS systems is a plus

  • A Computer Science degree (or equivalent) preferred, or comparable years of experience

  • Being able to work in an English-speaking, international environment 


We offer:



  • Fascinating tasks and unique Big Data challenges in various industries

  • Benefit from 10 years of delivering excellence to our customers

  • Work with our open-source Big Data gurus, such as our Apache HBase committer and Release Manager

  • Work in the open-source community and become a contributor

  • Fair pay and bonuses

  • Work with cutting edge equipment and tools

  • Enjoy our additional benefits such as a free BVG ticket and fresh fruits in the office

inovex GmbH
  • Karlsruhe, Germany

Together with your advisor, you will work on your project from one of the subject areas mentioned above and write your thesis independently with us. In addition, various experts from the IT Engineering & Operations team will be at your side.

During the application process you are welcome to work out a topic with your future advisor, or you can choose from one of the following topics:



  • Interplay of classical networks and SDNs

  • VM workloads on container clusters (scheduling, orchestration)

  • Trusted container computing

  • Encrypted persistent volumes for containers

  • Service scale-out based on anycast

  • Automated performance optimization of Hadoop clusters

  • Multi-tenant container clusters (overlay/underlay clusters)

  • Hadoop in the cloud (containers, HaaS, PaaS)

  • Kubernetes-as-a-Service (KaaS)

  • Bursting Hadoop load peaks into the cloud

  • Hadoop hybrid cloud (data on premises, compute in the cloud)

  • Intel QuickAssist in an open-source context

  • The impact of agile development on IT operations and the company (DevOps)

  • Application performance management in the context of microservice architectures


Who would be a good fit:



  • Initial private, school or university experience and knowledge in the area of Linux and networks in general

  • A strong willingness to learn and perform

  • Passion and enthusiasm for new technologies and topics around Linux

  • Good communication skills and very good written and spoken German and English

Visa
  • Austin, TX
Company Description
Common Purpose, Uncommon Opportunity. Everyone at Visa works with one goal in mind: making sure that Visa is the best way to pay and be paid, for everyone everywhere. This is our global vision and the common purpose that unites the entire Visa team. As a global payments technology company, tech is at the heart of what we do: our VisaNet network processes over 13,000 transactions per second for people and businesses around the world, enabling them to use digital currency instead of cash and checks. We are also global advocates for financial inclusion, working with partners around the world to help those who lack access to financial services join the global economy. Visa's sponsorships, including the Olympics and FIFA World Cup, celebrate teamwork, diversity, and excellence throughout the world. If you have a passion to make a difference in the lives of people around the world, Visa offers an uncommon opportunity to build a strong, thriving career. Visa is fueled by our team of talented employees who continuously raise the bar on delivering the convenience and security of digital currency to people all over the world. Join our team and find out how Visa is everywhere you want to be.
Job Description
The ideal candidate will be responsible for the following:
  • Perform Hadoop Administration on Production Hadoop clusters
  • Perform Tuning and Increase Operational efficiency on a continuous basis
  • Monitor the health of the platforms, generate performance reports, and provide continuous improvements
  • Work closely with development, engineering and operations teams on key deliverables, ensuring production scalability and stability
  • Develop and enhance platform best practices
  • Ensure the Hadoop platform can effectively meet performance & SLA requirements
  • Responsible for support of Hadoop Production environment which includes Hive, YARN, Spark, Impala, Kafka, SOLR, Oozie, Sentry, Encryption, Hbase, etc.
  • Perform optimization and capacity planning of a large multi-tenant cluster
Qualifications
  • Minimum 3 years of work experience in maintaining and optimizing Hadoop clusters and resolving issues, and in supporting business users and batch jobs
  • Experience in configuring and setting up Hadoop clusters and providing support for aggregation, lookup & fact table creation criteria
  • MapReduce tuning; DataNode and NameNode (NN) recovery, etc.
  • Experience in Linux/Unix OS services, administration, and shell and awk scripting
  • Experience in building scalable Hadoop applications
  • Experience in Core Java and Hadoop (MapReduce, Hive, Pig, HDFS, HCatalog, ZooKeeper and Oozie)
  • Hands-on experience in SQL (Oracle) and NoSQL databases (HBase/Cassandra/MongoDB)
  • Excellent oral and written communication and presentation skills, analytical and problem solving skills
  • Self-driven, Ability to work independently and as part of a team with proven track record developing and launching products at scale
  • Minimum of a four-year technical degree required
  • Experience on Cloudera distribution preferred
  • Hands-on Experience as a Linux Sys Admin is a plus
  • Knowledge on Spark and Kafka is a plus.
Additional Information
All your information will be kept confidential according to EEO guidelines.
Job Number: REF15232V
Applied Resource Group
  • Atlanta, GA

Applied Resource Group is seeking a talented and experienced Data Engineer for our client, an emerging leader in the transit solutions space. As an experienced Data Engineer on the Data Services team, you will lead the design, development and maintenance of comprehensible data pipelines and distributed systems for data extraction, analysis, transformation, modelling and visualization. They're looking for independent thinkers that are passionate about technology and building solutions that continually improve the customer experience. Excellent communication skills and the ability to work collaboratively with teams is critical.
 

Job Duties/Responsibilities:

    • Building a unified data services platform from scratch, leveraging the most suitable Big Data tools following technical requirements and needs
    • Exploring and working with cutting edge data processing technologies
    • Work with distributed, scalable cloud-based technologies
    • Collaborating with a talented team of Software Engineers working on product development
    • Designing and delivering BI solutions to meet a wide range of reporting needs across the organization
    • Providing and maintaining up to date documentation to enable a clear outline of solutions
    • Managing task lists and communicating updates to stakeholders and team members following Agile Scrum methodology
    • Working as a key member of the core team to support the timely and efficient delivery of critical data solutions

 
Experience Needed:
 

    • Experience with AWS technologies are desired, especially those used for Data Analytics, including some of these: EMR, Glue, Data Pipelines, Lambda, Redshift, Athena, Kinesis, Elasticache, Aurora
    • Minimum of 5 years working in developing and building data solutions
    • Experience as an ETL/Data warehouse developer with knowledge in design, development and delivery of end-to-end data integration processes
    • Deep understanding of data storage technologies for structured and unstructured data
    • Background in programming and knowledge of programming languages such as Java, Scala, Node.js, Python.
    • Familiarity with cloud services (AWS, Azure, Google Cloud)
    • Experience using Linux as a primary development environment
    • Knowledge of Big Data systems (Hadoop, Pig, Hive, Shark/Spark, etc.) a big plus.
    • Knowledge of BI platforms such as Tableau, Jaspersoft etc.
    • Strong communication and analytical skills
    • Capable of working independently under the direction of the Head of Data Services
    • Excellent communication, analytical and problem-solving skills
    • Ability to initially take direction and then work on own initiative
    • Experience working in AGILE

 
Nice-to-have experience and skills:

    • Master's in Computer Science, Computer Engineering or equivalent
    • Building data pipelines to perform real-time data processing using Spark Streaming and Kafka, or similar technologies (see the sketch below).
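
A minimal Spark Structured Streaming sketch of the final item above: consuming events from Kafka and maintaining a running count per route. The topic, brokers and schema are invented, and Spark must be launched with the spark-sql-kafka package on the classpath.

```python
# Hypothetical Structured Streaming sketch: read rider events from Kafka
# and keep a running count per route (topic/brokers/schema made up).
# Requires the spark-sql-kafka-0-10 package on the Spark classpath.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StringType, StructField, StructType

spark = SparkSession.builder.appName("transit-stream").getOrCreate()

schema = StructType([StructField("route_id", StringType()),
                     StructField("event_type", StringType())])

events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "rider-events")
          .load()
          .select(F.from_json(F.col("value").cast("string"),
                              schema).alias("e"))
          .select("e.*"))

counts = events.groupBy("route_id").count()

query = (counts.writeStream.outputMode("complete")
         .format("console").start())
query.awaitTermination()
```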