OnlyDataJobs.com

Brain Corporation
  • San Diego, CA
Brain Corp is a San Diego-based AI company that specializes in the development of self-driving technology. Our AI tech represents the next generation of artificial brains for robots - it enables machines to perceive, learn, and navigate complex environments while avoiding people and obstacles. We partner with commercial equipment manufacturers and global consumer electronics brands to transform their products into self-driving robots.
We are looking for candidates with advanced knowledge in the fields of machine learning and adaptive data processing.
As a core member of our AI Division, you will work with our world-class team of engineers and scientists to build a platform for the next generation of intelligent machines. Your experience will be pivotal in advancing our mission: safe and reliable robots everywhere.
What You Need
    • MS or PhD Degree in Computer Science, Software Engineering or a related field
    • Strong experience with classification and regression algorithms (a minimal classification sketch follows this list)
    • Strong experience in machine learning techniques (unsupervised, supervised, reinforcement, demonstration) and optimization algorithms
    • Strong coding skills in Python and/or C++ in a Linux environment
    • Extensive experience converting publications to actual implementations
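
For illustration only (not a Brain Corp artifact): a minimal, hypothetical sketch of the kind of classification workflow referenced above, using Python and scikit-learn with synthetic data standing in for real perception features.

```python
# Hypothetical sketch: train and evaluate a simple classifier on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real sensor features and labels.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```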

Things that make the difference
    • 2+ years of experience (industry experience preferred)
    • Experience in deep neural network packages (e.g. TensorFlow, Theano)
    • Experience designing and developing robotic systems using a robotic middleware (such as ROS), and existing libraries and tools
    • Track record of research excellence and/or experience converting publications to actual implementations
    • Experience with continuous integration, deployment and release management tools
    • Experience with Agile and Scrum methodology
    • Proven system integration and software architecture skills
    • Experience launching products containing machine learning algorithms

This position is located in our San Diego headquarters and reports to the VP of Engineering & AI.
Novartis Institutes for BioMedical Research
  • Cambridge, MA

20 years of untapped data are waiting for a new Principal Scientific Computing Engineer / Scientific Programmer (Imaging and Machine Learning) to unlock the next breakthrough in innovative medicines for patients in need. You will be at the forefront of the life sciences, tackling incredible challenges that cure diseases and improve patients’ lives.

Your responsibilities include, but are not limited to:
Collaborating with scientists to create, optimize and accelerate workflows through the application of high-performance computing techniques. You will integrate algorithm and application development with state-of-the-art technologies to create scalable platforms that accelerate scientific research in a reproducible and standardized manner.
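
As a loose illustration of the workflow-acceleration idea (a sketch, not Novartis code): an embarrassingly parallel image-analysis step fanned out across CPU cores with Python's standard library. The `analyze` body and the `images/` directory are hypothetical placeholders.

```python
# Minimal sketch: run an analysis step over many images in parallel.
from multiprocessing import Pool
from pathlib import Path

def analyze(path):
    # Placeholder for a real step (e.g. segmentation or feature extraction);
    # here we just report each file's size.
    return path.name, path.stat().st_size

if __name__ == "__main__":
    images = sorted(Path("images").glob("*.tif"))  # hypothetical input directory
    with Pool() as pool:  # one worker per CPU core by default
        for name, size in pool.map(analyze, images):
            print(name, size)
```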

Key responsibilities:
• Collaborate with scientists and Research IT peers to provide consulting services around parallel algorithm development and workflow optimization for the High Performance Computing (HPC) platform.
• Teach and train the NIBR scientific and informatics community in your areas of expertise.
• Research, develop and integrate new technologies and computational approaches to enhance and accelerate scientific research.
• Establish and maintain the technical partnership with one or more scientific functions in NIBR.

Minimum Requirements

What you will bring to this role:
• BSc in Computer Science or a related field, or equivalent experience
• A minimum of 5 years of relevant experience, including strong competencies in data structures, algorithms, and software design
• Experience with High Performance Computing and Cloud Computing
• Demonstrated proficiency in Python, C, C++, CUDA or OpenCL
• Demonstrated proficiency in Signal Processing, Advanced Imaging and Microscopy techniques.
• Solid project management skills and process analysis skills
• Demonstration of strong collaboration skills, effective communication skills, and creative problem-solving abilities

Preferred Qualifications
• MSc degree
• Demonstrated proficiency in 2 or more advanced machine learning frameworks and their application to natural language processing, action recognition, and micro- and macro-scale tracking.
• Interest in drug discovery and knowledge of the life sciences is a strong plus
• Knowledge of Deep visualization, Deep transfer learning and Generative Adversarial Networks is a plus.
• Demonstrated proficiency in MPI in a large-scale Linux HPC environment
• Experience with CellProfiler, Matlab, ImageJ and R is a plus.

The position will be filled at a level commensurate with experience.

Sixt SE
  • München, Deutschland

Sixt is one of the biggest technology-driven mobility companies in the world, and we will add a number of attractive services to our mobility portfolio in the near future. Our SIXT App is the central interface for our services: it allows customers to plan their journeys, open and close cars via telematics, review their usage data and much more. It is built on a state-of-the-art infrastructure: cloud-native, microservices-based, event-driven – you name it. We are looking for an experienced Go developer (m/f/d) to drive the extension, partial refactoring and rebuilding of existing services, and to add new ones that allow seamless integration. Scalability, very high availability and low operational cost are the key requirements. Defining slim yet powerful interfaces and maturing them with our partners will be an important part of the work. We are engineering the future of mobility! Sixt stands for success, agility, ownership and an intercultural team. Apply now to join our journey and become part of the Sixt family.


Do what you love:



  • You will be responsible for designing, building and evolving customer-facing distributed systems for our platform on the cloud.

  • You will work on event-driven systems and keep yourself up to date with the latest technologies to help you create scalable and resilient software.

  • You will work in a diverse, international team using agile methodologies, constantly striving to improve team collaboration and processes.

  • You will participate in on-call rotations and be available for escalations ensuring that our team's critical services are always up.

  • You will collaborate effectively with teammates, technical and business partners, internally and externally.


Come as you are:



  • You have experience in a technology environment, building cloud-native applications, solving scalability challenges, designing event-driven solutions - ideally in a dynamic startup setting.

  • You have at least 3 years of development experience in Go, building distributed applications.

  • You have already designed distributed systems, worked on microservices architectures and developed large-scale interfaces used by many partners.

  • You are passionate about constantly learning new skills and improving yourself; you also bring a coaching mentality and share your knowledge with the team.

  • You have excellent communication skills and enjoy working with developers as well as business stakeholders, from college graduates to board members and external partners.


Feel Good:


In addition to the obligatory table football, driving simulators and ping-pong table, we have two dedicated Feel Good Managers who focus on the wellbeing of our Sixt Tech colleagues. Amongst other things, they will support you with your relocation, finding accommodation and visa issues, and help you to organize team events, Meetups and Townhalls. Our offices come in different sizes, so you will definitely find something that suits your preferences. We follow no dress code. Fun at work and beyond: as a member of the Sixt family you get attractive car rental and leasing offers, as well as access to our large employee benefit portal with a multitude of offers for travelling, shopping and more. During breakfast, lunch and dinner our high-quality restaurant offers a daily changing menu. Meat, fish and vegetarian dishes are available, as well as a salad bar and freshly baked pizza. Our coffee lounge for meeting and relaxing is open the whole day. Come as you are, and do what you love: join our 320 IT colleagues and apply now (English or German preferred)!

Acxiom
  • Austin, TX
As a Senior Hadoop Administrator, you will assist leadership on projects related to Big Data technologies and provide software development support for client research projects. You will analyze the latest Big Data analytic technologies and their innovative applications in both business intelligence analysis and new service offerings, and bring these insights and best practices to Acxiom's Big Data projects. You are able to benchmark systems, analyze system bottlenecks and propose solutions to eliminate them. You will develop a highly scalable and extensible Big Data platform that enables collection, storage, modeling and analysis of massive data sets from numerous channels. You are also a self-starter, able to continuously evaluate new technologies, innovate and deliver solutions for business-critical applications.


What you will do:


  • Responsible for implementation and ongoing administration of Hadoop infrastructure
  • Provide technical leadership and collaboration with engineering organization, develop key deliverables for Data Platform Strategy - Scalability, optimization, operations, availability, roadmap.
  • Lead the platform architecture and drive it to the next level of effectiveness to support current and future requirements
  • Cluster maintenance as well as creation and removal of nodes using tools like Cloudera Manager Enterprise, etc.
  • Performance tuning of Hadoop clusters and Hadoop MapReduce routines
  • Screen Hadoop cluster job performance and plan capacity (a toy health-check sketch follows this list)
  • Help optimize and integrate new infrastructure via continuous integration methodologies (DevOps, Chef)
  • Review Hadoop log files with the help of log management technologies (ELK)
  • Provide top-level technical help desk support for the application developers
  • Diligently team with the infrastructure, network, database, application and business intelligence teams to guarantee high data quality, availability and security
  • Collaborate with application teams to perform Hadoop updates, patches and version upgrades when required
  • Mentor Hadoop engineers and administrators
  • Work with vendor support teams on support tasks
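
As promised above, a toy health-check sketch (illustrative only; it assumes the standard `hdfs` CLI is on the PATH and simply surfaces capacity lines from `hdfs dfsadmin -report`):

```python
# Toy cluster check: print capacity-related lines from the HDFS admin report.
import subprocess

report = subprocess.run(
    ["hdfs", "dfsadmin", "-report"],
    capture_output=True, text=True, check=True,
).stdout

for line in report.splitlines():
    if "Capacity" in line or "DFS Used" in line:
        print(line)
```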


Do you have?


  • Bachelor's degree in related field of study, or equivalent experience
  • 6+ years of Big Data Administration Experience
  • Extensive knowledge of, and hands-on experience with, Hadoop-based data manipulation/storage technologies such as HDFS, MapReduce, YARN, Spark/Kafka, HBase, Hive, Pig, Impala, R and Sentry/Ranger/Knox
  • Experience in capacity planning, cluster designing and deployment, troubleshooting and performance tuning
  • Experience supporting Data Science teams and Analytics teams on complex code deployment, debugging and performance optimization problems
  • Great operational expertise, including excellent troubleshooting skills and an understanding of system capacity, bottlenecks and core resource utilization (CPU, OS, storage and network)
  • Experience in Hadoop cluster migrations or upgrades
  • Strong scripting skills in Perl, Python, shell scripting and/or Ruby
  • Linux/SAN administration skills and RDBMS/ETL knowledge
  • Good experience with Cloudera, Hortonworks and/or MapR distributions, along with monitoring/alerting tools (Nagios, Ganglia, Zenoss, Cloudera Manager)
  • Strong problem solving and critical thinking skills
  • Excellent verbal and written communication skills


What will set you apart:


  • Solid understanding and hands-on experience of Big Data on private/public cloud technologies (AWS/GCP/Azure)
  • DevOps experience (CHEF, Puppet and Ansible)
  • Strong knowledge of Java/J2EE and other web technologies

 
TalentBurst, an Inc 5000 company
  • Austin, TX

Position: Hadoop Administrator

Location: Austin, TX

Duration: 12+ months

Interview: Skype

Job Description:
The person will be responsible for working as part of 24x7 shifts (US hours) to provide Hadoop platform support and perform administrative tasks on production Hadoop clusters.

Must have skills:
3+ years of hands-on administration experience with large distributed Hadoop systems on Linux

Technical knowledge of YARN, MapReduce, HDFS, HBase, ZooKeeper, Pig and Hive

Hands-on Experience as a Linux Sys Admin

Nice to have skills:
Knowledge of Spark and Kafka is a plus; Hadoop certification is preferred

Roles and responsibilities:
Hadoop cluster setup, performance fine-tuning, monitoring and administration

Skill Requirements:
Minimum 3 years of hands-on experience with large distributed Hadoop systems on Linux.
Strong technical knowledge of the Hadoop ecosystem, including YARN, MapReduce, HDFS, HBase, ZooKeeper, Pig and Hive.
Hands-on experience in Hadoop cluster setup, performance fine-tuning, monitoring and administration.
Hands-on experience as a Linux sysadmin.
Knowledge of Spark and Kafka is a plus.
Hadoop certification is preferred.

ASML
  • Veldhoven, Netherlands

Introduction



When people think ‘software’, they often think of companies like Google or Microsoft. Even though ASML is classified as a hardware company, we in fact have one of the world’s largest and most pioneering Java communities. The ASML Java environment is extremely attractive for prospective Java engineers because it combines big data with extreme complexity. From Hadoop retrieval to machine learning to full-stack development, the possibilities are endless. At ASML, our Java teams create and implement software designs that run in the most modern semiconductor fabs in the world, helping our customers like Samsung, Intel and TSMC make computer chips faster, smaller and more efficient. Here, we’re always pushing the boundaries of what’s possible.


We are always looking for talented Java developers who know how to apply the latest Java SE or Java EE technologies, to join the teams responsible for creating software for high volume manufacturing automation in semiconductor fabs. Could this be your next job? Apply now!



Job Mission



As a Java developer you will join one of our multinational Scrum teams to create state-of-the-art software solutions. Teams are composed of five to ten developers, a Scrum Master and a Product Owner. We are committed to following a (scaled) Agile way of working, with sprints and demos every two weeks, aiming for frequent releases of working software. In all teams we cooperate with internal and external experts from different knowledge domains to discover and build the best solutions possible. We use tools like Continuous Integration with Git, Jira and Bamboo. We move fast to help our customers reach their goals, and we strive to create reliable and well-tested software, because failures in our software stack can severely impact customers' operations.



Job Description



All these dedicated Java teams work in unison on different products and platforms across ASML. Here’s a brief description of what the different Java teams do: 



  • Create software infrastructure using Java EE, which provides access to SQL and NoSQL storage, reliably manages job queues with switch-over and fail-over features, periodically collects information from networked systems in the fab, and offers big-data-like storage and computational capabilities;

  • Create on-site solutions that continuously monitor all scanners in a customer’s domain. The server can detect system failures before they happen and identify needed corrective actions;

  • Provide industrial automation tasks that take care of unattended complex adjustments to the manufacturing process, in order to enable highest yields in high volume manufacturing;

  • Implement and validate algorithms that give our customers the power to reach optimal results during manufacturing;

  • Create applications that help fine-tune the manufacturing process, helping process engineers to navigate the complexities of process set-up through excellent UX design.

  • Create visualization and analytics applications for visual performance monitoring, which also help to sift through huge amounts of data to pinpoint potential issues and solutions. For these applications, we use Spotfire and R as main technologies.

  • Select and manage IT infrastructure that helps us run the software on a multi-blade server with plenty of storage. In this area, we use virtualization technologies, Linux, Python and Splunk, in addition to Java;

  • Use emerging technologies to turn vision into reality, e.g. big data and machine learning;


Responsibilities:



  • Designing and implementing software, working on the product backlog defined by the Product Owner;

  • Ensuring the quality of personal deliverables by designing and implementing automated tests on unit and integration levels;

  • Cooperating with other teams to ensure consistent implementation of the architecture, and agreeing on interfaces and timing of cross-team deliveries;

  • Troubleshooting, analyzing, and solving integration issues both from internal alpha and beta tests as well as those reported by our customers;

  • Writing or updating product documentation in accordance with company processes;

  • Suggesting improvements to our technical solutions and way of working, and implementing them in alignment with your team and their stakeholders.


Main technologies: Java SE and EE, ranging from version 6 to the latest. JUnit, Mockito, XML, SQL, Linux, Hibernate, Git, JIRA.



Education



A relevant BSc or MSc in the area of IT, electronics or computer engineering.



Experience



If you already have Java software development experience in the high-tech industry, you should bring the following:



  • At least 4 years of Java SE or Java EE development experience;

  • Design and development of server-side software using object-oriented paradigm;

  • Creation of automated unit and integration tests using mocks and stubs;

  • Working with Continuous Integration;

  • Working as a part of a Scrum team;

  • Experience with the following is a plus:

  • Creation of web-based user interface;

  • Affinity with math, data science or machine learning.

  • Experience with Hadoop, Spark and MongoDB is also nice to have.




Personal skills




  • First of all, you’re passionate about technology and are excited by the idea that your work will impact millions of end-users worldwide;

  • You’re analytical, and product- and quality-oriented;

  • You like to investigate issues, and you’re a creative problem solver;

  • You’re open-minded, you like to discuss technical challenges, and you want to push the boundaries of technology;

  • You’re an innovator and you constantly seek to improve your knowledge and your work;

  • You take ownership and you support your team - you’re the backbone of your group;

  • You’re client and quality oriented – you don’t settle for second-best solutions, but strive to find the best ways to acquire top results.


Important information


So, you’ve read the vacancy and are about to click the “apply for this job” button. That’s great! To make everything go as smoothly as possible, we would really appreciate it if you could take a look at the
following tips from us.
Although you have the option to upload your LinkedIn profile during the application process, we would like to ask you to upload a detailed CV and a personalized cover letter. Adding your LinkedIn, GitHub, or any other relevant profile to your CV is always a good idea!

Before uploading your CV, can you please make sure that the following information is present:



  • Brief information about your experience: company / sector / domain / product.

  • Context of the project (how big / complex were the projects / what were the goals).

  • What was the size of the team / who was involved (think of developers, testers, architect, product owners, scrum master) / what was your role in the team.

  • Which languages / tools / frameworks / versions (recent) were used during the project (full stack, server side, client side).

  • Which projects used an agile methodology.

  • What were the results of the projects (as a team and individually)  / your contribution and achievements.

Jacobs
  • Houston, TX

FAST GN&C Modeling and Simulation Engineer


Are you passionate about human space exploration and understanding the origins of the universe? Are you seeking a position that will offer you the opportunity to work with a dynamic and diverse team where you will make a difference? If so, we need you.  


We are excited about what we do, and we need you on our team as we take on new challenges for Johnson Space Center (JSC) and NASA's pursuits in deep space exploration.


Bring your GN&C modeling and simulation skills to the Flight Analysis and Simulation (FAST) tool. FAST is a generic 3- to 6-DOF multi-body ascent, aerocapture, entry, descent, and landing (A2EDL) simulation code based in the state-of-the-art, object-oriented Trick simulation environment. FAST is currently used to develop Orion trajectory designs, GN&C algorithms, and validate their implementation in flight software. FAST requires improvements to provide flight performance analysis, algorithm development, and flight software contributions for future programs including the Commercial Crew Program, Orion, and missions to the Moon and Mars.
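
To give a flavor of the domain (an illustrative sketch only, not the FAST/Trick code base): a planar 3-DOF point-mass descent with simple quadratic drag, integrated with SciPy. All constants and the scenario are made up.

```python
# Illustrative 3-DOF (planar point-mass) descent with quadratic drag.
import numpy as np
from scipy.integrate import solve_ivp

G = 9.81     # gravity, m/s^2 (made-up Earth scenario)
BETA = 2e-3  # lumped drag coefficient, 1/m (made up)

def dynamics(t, s):
    x, z, vx, vz = s  # downrange, altitude, and their rates
    v = np.hypot(vx, vz)
    return [vx, vz, -BETA * v * vx, -G - BETA * v * vz]

def touchdown(t, s):
    return s[1]  # altitude crossing zero terminates the run
touchdown.terminal = True

sol = solve_ivp(dynamics, (0.0, 120.0), [0.0, 1000.0, 150.0, 0.0], events=touchdown)
print(f"touchdown at t = {sol.t[-1]:.1f} s, downrange x = {sol.y[0, -1]:.0f} m")
```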

As a FAST GN&C Modeling and Simulation Engineer you will:

    • Implement improvements and enhance FAST capabilities
      • Improve the usability of the FAST user interface to simplify prototyping of algorithms and models
      • Enhance 3- and 6-DOF powered ascent capabilities for lunar and Mars missions.
      • Create an improved generic optimization capability
      • Incorporate new vehicle, environment, and aerodynamic models for Commercial Crew and lunar/Mars
      • Implement heritage and new flight guidance algorithms, control systems, and navigation models
      • Perform and document model/simulation verification and validation.
    • Achieve Class C certification for use in developing/verifying flight software
      • Expand unit testing and improve model documentation.
    • Improve on-boarding of new users
      • Create a detailed user's guide with example cases of historical vehicles.
    • Directly interact with NASA customers, along with various NASA support contractors, during technical meetings and working groups.
    • Perform other duties as required.


Required Education/Experience/Skills:

    • BS degree in engineering from an accredited engineering school
    • A minimum of five (5) years of related engineering experience, or an MS degree from an accredited engineering school and a minimum of four (4) years of related engineering experience, or Ph.D. from an accredited engineering school and a minimum of zero (0) years of related experience
    • Expertise in C/C++, Linux, and Python
    • Experience with the development, verification and validation of 6-DOF GN&C spacecraft simulations
    • Ability to apply knowledge and experience towards timely completion of technical products and services
    • Excellent communication skills and the ability to work in a team environment consisting of NASA civil servants and various contractor employees
    • Strong leadership skills


Preferences:

    • MS degree in Aerospace Engineering with an emphasis in GN&C principles and orbital/flight mechanics
    • Experience with:
      • JSC TRICK Simulation Environment
      • JEOD (JSC Engineering Orbital Dynamics) modeling package
      • LaTeX
    • Previous experience with simulation visualization and graphics packages, particularly JSC EDGE or DOUG
    • Familiarity with MATLAB/Simulink
    • Experience in product delivery and management utilizing source code management tools such as Jenkins and Git/GitLab


If you have the qualifications and passion for this amazing opportunity, please send your updated resume directly to Kim Cordray at Kimberly.Cordray@jacobs.com.


    • Must be a U.S. Citizen and successfully complete a U.S. government background investigation.
    • Management has the prerogative to select at any level for which this position has been advertised.

For more information on our partnership with NASA at Johnson Space Center, please visit http://www.wehavespaceforyou.com


Essential Functions

Work Environment
Generally an office environment, but can involve inside or outside work depending on task.

Physical Requirements
Work may involve sitting or standing for extended periods (90% of time). May require lifting and carrying up to 25 lbs (5% of time).

Equipment and Machines
Standard office equipment (PC, telephone, printer, etc.).

Attendance
Regular attendance in accordance with established work schedule is critical. Ability to work outside normal schedule and adjust schedule to meet peak periods and surge requirements when required.

Other Essential Functions
Must be able to work in a team atmosphere. Must put forward a professional behavior that enhances productivity and promotes teamwork and cooperation. Grooming and dress must be appropriate for the position and must not impose a safety risk/hazard to the employee or others.

USU Gruppe
  • Karlsruhe, Deutschland

We have exciting tasks for you:



  • Design, implementation, deployment and operation of cloud-first products for our data science platform

  • You work in a team-oriented way in our self-organized, cross-functional team

  • You use modern technologies and methods, e.g. cloud computing with Kubernetes, Spark and NoSQL databases; DevOps, continuous integration and deployment, and infrastructure as code




What you bring:



  • A completed degree (e.g. in computer science, business informatics or physics) or comparable training

  • Routine experience with Linux systems, container solutions (e.g. Docker, Kubernetes) and (No)SQL databases

  • Experience in automating and orchestrating the operation of distributed systems, for example with Ansible, SaltStack and Helm

  • Ideally, you have software engineering skills in Go, Scala or Python

  • Team spirit and enthusiasm for developing innovative Big Data solutions for Industry 4.0 round off your profile

Wallethub
  • No office location

Company details


WalletHub is one of the leading personal finance destinations in the US, and it is growing rapidly. We’re looking for a highly skilled and motivated Senior Systems Administrator for a full-time, permanent position.


Requirements


You are the ideal candidate for this job if you have:



  • At least 5 years of experience in supporting AWS based production infrastructure.

  • Bachelor's or Master’s degree in Computer Science or equivalent work experience.

  • 3+ years of experience administering UNIX/Linux servers, or equivalent work experience.

  • 3+ years of experience with Apache, Tomcat or other Java application servers, and relational database servers like MySQL (LAMP experience is highly preferred).

  • Experience with monitoring tools like Nagios, Tripwire and AIDE, as well as custom monitoring tools.

  • Experience with configuring and securing mission critical production servers.

  • Experience with configuring load balancers.

  • Experience in shell scripting or Perl, including implementing automation and monitoring using shell scripting (a toy monitoring check follows this list).

  • Experience in analysis and system performance tuning.

  • Critical thinking skills in a complex IT environment to analyze, troubleshoot, and resolve problems without direction.

  • Outstanding organizational skills and the ability to handle multiple projects simultaneously while meeting deadlines.

  • Excellent verbal and written communication skills.

  • Willingness to work hard (50 hrs per week).
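
As promised above, a toy monitoring check (shown in Python rather than shell purely for illustration; the 90% threshold is an invented example):

```python
# Toy monitoring check: warn when the root filesystem is nearly full.
import shutil

total, used, _free = shutil.disk_usage("/")
pct = used / total * 100
if pct > 90:  # invented alert threshold
    print(f"ALERT: root filesystem {pct:.0f}% full")
else:
    print(f"OK: root filesystem {pct:.0f}% full")
```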


Responsibilities



  • Ensure proper security, monitoring, alerting and reporting for the infrastructure, and serve as the on-call contact for production servers.

  • Develop security monitoring and other tools to ensure the integrity and availability of our applications and server resources, including reviewing system and application logs.

  • Work with the incident team to diagnose and recover from hardware or software failures, working with or as the Incident Commander to coordinate and communicate with our internal customers.

  • Assist project teams with technical issues during development efforts.

  • Gather system requirements and support several project teams in evolving, testing and rolling out new products and services, then transition the site or product to post-launch operations activities throughout the life of the product or service.

  • Work with the application development team and other systems engineers to make improvements to current infrastructure.

  • Document processes and procedures and follow a formal change management procedure.


Our Offer



  • Very competitive salary based on prior experience and qualifications

  • Potential for stock options after the first year

  • Raise and advancement opportunities based on periodic evaluations

  • Health benefits (if you will be working from our office in Washington, DC)

  • Visa sponsorship



Note: This position requires candidates to be living in the US. The position can be performed remotely if you don't live in the Washington DC area.



More about WalletHub


WalletHub is a high-growth fintech company based in Washington, DC that is looking for talented, hard-working individuals to help us reshape personal finance. More specifically, we are harnessing the power of data analytics and artificial intelligence to build the brain of a smart financial advisor, whose services we’re offering to everyone for free. The WalletHub brain enables users to make better financial decisions in a fraction of the time with three unique features:


1) Customized Credit-Improvement Tips: WalletHub identifies improvement opportunities and guides you through the necessary corrections.


2) Personalized Money-Saving Advice: WalletHub’s savings brain constantly scours the market for load-lightening opportunities, bringing you only the best deals.


3) Wallet Surveillance: Personal finance isn’t as scary with 24/7 credit monitoring providing backup, notifying you of important credit-report changes.


In addition to the valuable intelligence the brain provides, WalletHub is the first and only service to offer free credit scores and full credit reports that are updated on a daily basis without user interaction, rather than weekly or monthly and only when a user logs in. Other services hang their hats on free credit scores and reports, yet those are still inferior to what WalletHub considers minor pieces of a much larger puzzle.



How to Apply


To get our attention, all you need to do is send us a resume. If we believe that you will be a good match, we'll contact you to arrange the next steps. You can apply directly on Stack Overflow or email your application to jobs.dev@wallethub.com.


Findify
  • Stockholm, Sweden

As part of our engineering team you will be responsible for building our product, an advanced machine-learning-based search personalization engine for e-commerce. As demand for our product continues to increase, we are on a journey to grow the team substantially in 2019. We’d love for you to join us.



About the role:


You will be part of a small team that moves fast and iterates. We do weekly sprints, code reviews, testing, and put a lot of emphasis on code style, cleanliness and robustness. You will get to work with amazing engineers specializing in machine learning and distributed systems.



Your responsibilities include:



  • Managing and improving Findify’s data pipelines - a crucial responsibility for us as Findify collects millions of data points every day to feed our machine learning algorithms

  • Enhancing some of the critical components of our system to successfully integrate our customers’ products and improve our search capabilities

  • Actively contributing to the overall design of our infrastructure and the application of our product vision



About you:


You are a creative problem-solver with passion for programming and building scalable architectures.



You are:



  • Initiative-taking; you are self-motivated, a doer, and can drive projects from start to finish

  • A team-player; you are comfortable working with different styles and believe (like us) that together we achieve much more than alone

  • Driven; you are no stranger to working hard for a goal you care about or to running several projects in parallel

  • A great communicator; you are comfortable communicating in English, both written and oral, including explaining your technical work to internal and external partners

  • Located within time zones GMT to GMT+3



You have:



  • A BSc or MSc in Computer Science or related technical discipline

  • At least 3 years of Scala work experience

  • Experience with relational database systems such as PostgreSQL

  • Familiarity with Akka Stream, Akka HTTP or Flink

  • Experience with Git

  • Working knowledge of Linux/Unix



We’d be extra impressed if you also have:



  • Experience with AWS / key-value databases such as Cassandra / data-mining / machine learning / search frameworks such as Lucene / e-commerce platforms

  • Dev-ops skills

  • Experience in working in/with remote teams

  • Experience in working in agile/lean methodologies

  • A side project or blog that showcases your passion



We believe that the more inclusive we are, the better products we build and the better we are able to serve our customers. Women and other minorities under-represented in tech are strongly encouraged to apply.

Limelight Networks
  • Phoenix, AZ

Job Purpose:

The Sr. Data Services Engineer assists in maintaining the operational aspects of Limelight Networks' platforms, provides guidance to the Operations group and acts as an escalation point for advanced troubleshooting of systems issues. The Sr. Data Services Engineer assists in the execution of tactical and strategic operational infrastructure initiatives by building and managing complex computing systems and processes that facilitate the introduction of new products and services while allowing existing services to scale.


Qualifications: Experience and Education (minimums)

  • Bachelor's degree or equivalent experience.
  • 2+ years of experience working with MySQL (or other databases: MongoDB, Cassandra, Hadoop, etc.) in a large-scale enterprise environment.
  • 2+ years of Linux systems administration experience.
  • 2+ years of version control and shell scripting experience, plus one or more scripting languages including Python, Perl, Ruby and PHP.
  • 2+ years with configuration management systems such as Puppet, Chef or Salt.
  • Experience with MySQL HA/clustering solutions; Corosync, Pacemaker and DRBD preferred.
  • Experience supporting open-source messaging solutions such as RabbitMQ or ActiveMQ preferred.

Knowledge, Skills & Abilities

  • Collaborative in a fast-paced environment while providing exceptional visibility to management and end-to-end ownership of incidents, projects and tasks.
  • Ability to implement and maintain complex datastores.
  • Knowledge of configuration management and release engineering processes and methodologies.
  • Excellent coordination, planning and written and verbal communication skills.
  • Knowledge of Agile project management methodologies preferred.
  • Knowledge of a NoSQL/Big Data platform; Hadoop, MongoDB or Cassandra preferred.
  • Ability to participate in a 24/7 on call rotation.
  • Ability to travel when necessary.

Essential Functions:

  • Develop and maintain core competencies of the team in accordance with applicable architectures and standards.
  • Participate in capacity management of services and systems.
  • Maintain plans, processes and procedures necessary for the proper deployment and operation of systems and services.
  • Identify gaps in the operation of products and services and drive enhancements.
  • Evaluate release processes and tools to find areas for improvement.
  • Contribute to the release and change management process by collaborating with the developers and other Engineering groups.
  • Participate in development meetings and implement required changes to the operational architecture, standards, processes or procedures and ensure they are in place prior to release (e.g., monitoring, documentation and metrics).
  • Maintain a positive demeanor and a high level of professionalism at all times.
  • Implement proactive monitoring capabilities that ensure minimal disruption to the user community including: early failure detection mechanisms, log monitoring, session tracing and data capture to aid in the troubleshooting process.
  • Implement HA and DR capabilities to support business requirements.
  • Troubleshoot and investigate database related issues.
  • Maintain migration plans and data refresh mechanisms to keep environments current and in sync with production.
  • Implement backup and recovery procedures utilizing various methods to provide flexible data recovery capabilities.
  • Work with management and security team to assist in implementing and enforcing security policies.
  • Create and manage user and security profiles ensuring application security policies and procedures are followed.

Vector Consulting, Inc
  • Atlanta, GA
 

Our Government client is looking for an experienced ETL Developer for a renewable contract in Atlanta, GA.

Position: ETL Developer

The desired candidate will be responsible for design, development, testing, maintenance and support of complex data extract, transformation and load (ETL) programs for an Enterprise Data Warehouse. An understanding of how complex data should be transformed from the source and loaded into the data warehouse is a critical part of this job.
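
As a toy illustration of the extract-transform-load pattern described above (a sketch with made-up rows and an in-memory SQLite target; the production stack here is Informatica and Oracle):

```python
# Toy ETL: extract raw rows, transform types and codes, load a warehouse table.
import sqlite3

# Extract: stand-in for rows pulled from a source system.
source_rows = [("2019-01-05", "ga", "1200.50"),
               ("2019-01-06", "tx", "980.00")]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fact_sales (sale_date TEXT, region TEXT, amount REAL)")

# Transform: normalize region codes and cast amounts to numbers.
clean = [(day, region.upper(), float(amount)) for day, region, amount in source_rows]

# Load, then verify with a simple aggregate.
conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)", clean)
print(conn.execute("SELECT region, SUM(amount) FROM fact_sales GROUP BY region").fetchall())
```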

  • Deep hands-on experience with OBIEE RPD & BIP reporting data models and development, for seamless cross-functional and cross-system data reporting
  • Expertise and solid experience in BI Tools OBIEE, Oracle Data Visualization and Power BI
  • Strong Informatica technical knowledge in the design, development and management of complex Informatica mappings, sessions and workflows, using the Informatica Designer components: Source Analyzer, Warehouse Designer, Mapping Designer & Mapplet Designer, Transformation Developer, Workflow Manager and Workflow Monitor.
  • Strong programming skills, relational database skills with expertise in Advanced SQL and PL/SQL, indexing and query tuning
  • Experience implementing advanced analytical models in Python or R
  • Experience in Business Intelligence and data warehousing concepts and methodologies.
  • Extensive experience in data analysis and root cause analysis and proven problem solving and analytical thinking capabilities.
  • Analytical capabilities to slice and dice data and display data in reports for best user experience.
  • Demonstrated ability to review business processes and translate into BI reporting and analysis solutions.
  • Ability to follow Software Development Lifecycle (SDLC) process and should be able to work under any project management methodologies used.
  • Ability to follow best practices and standards.
  • Ability to identify BI application performance bottlenecks and tune them.
  • Ability to work quickly and accurately under pressure and project time constraints
  • Ability to prioritize workload and work with minimal supervision
  • Basic understanding of software engineering principles and skills working on Unix/Linux/Windows Operating systems, Version Control and Office software
  • Exposure to data modeling using star/snowflake schema design, data marts, relational and dimensional data modeling, slowly changing dimensions, fact and dimension tables, physical and logical data modeling, and big data technologies
  • Experience with Big Data Lake / Hadoop implementations

 Required Qualifications:

  • A bachelor's degree in Computer Science or a related field
  • 6 to 10 years of experience working with OBIEE / Data Visualization / Informatica / Python
  • Ability to design and develop complex Informatica mappings, sessions, workflows and identify areas of optimizations
  • Experience with Oracle RDBMS 12c
  • Effective communication skills (both oral and written) and the ability to work effectively in a team environment are required
  • Proven ability and desire to mentor/coach others in a team environment
  • Strong analytical, problem solving and presentation skills.

Preferred Qualifications:

  • Working knowledge with Informatica Change Data Capture installed on DB2 z/OS
  • Working knowledge of Informatica Power Exchange
  • Experience with relational, multidimensional and OLAP techniques and technology
  • Experience with OBIEE tools version 10.X
  • Experience with Visualization tools like MS Power BI, Tableau, Oracle DVD
  • Experience with Python building predictive models

Soft Skills:

  • Strong written and oral communication skills in English Language
  • Ability to work with Business and communicate technical solution to solve business problems

About Vector:

Vector Consulting, Inc. (headquartered in Atlanta) is an IT talent acquisition solutions firm committed to delivering results. Since our founding in 1990, we have been partnering with our customers, understanding their business, and developing solutions with a commitment to quality, reliability and value. Our continuing growth has been, and continues to be, built around successful relationships that are based on our organization's operating philosophy and commitment to People, Partnerships, Purpose and Performance - THE VECTOR WAY.

Expedia, Inc.
  • Bellevue, WA

We are seeking a deeply experienced technical leader to lead the next generation of engineering investments and culture for the GCO Customer Care Platform (CCP). The technical leader in this role will help design, engineer and drive implementation of critical pieces of the EG-wide architecture (platform and applications) for customer care. These areas include, but are not limited to, unified voice support, partner onboarding with configurable rules, a virtual agent programming model for all partners, and intelligent fulfillment. In addition, a key focus of this leader's role will be to grow and mentor junior software engineers in GCO, with a focus on building out a '2020 world-class engineering excellence' vision and culture.


What you’ll do:



  • Deep technology leadership (design, implementation and execution for the following):

  • Ship next-gen EG-wide architecture (platform and applications) that enables 90% automated self-service journeys, with voice as a first-class channel from day zero

  • Design and ship a VA (Virtual Agent) Programming Model that enables partners to stand up intelligent virtual agents on CCP declaratively in minutes

  • Enable brand partners to onboard their own identity providers onto CCP

  • Enable partners to configure their workflows and business rules for their Virtual Agents

  • Programming Model for Intelligent actions in the Fulfillment layer

  • Integration of Context and Query as first-class entities into the Virtual Agent

  • Cross-Group Collaboration and Influence

  • Work with company-wide initiatives across AI Labs and BeX to build out a best-of-breed conversational platform for EG-wide apps

  • Engage with and translate internal and external partner requirements into platform investments for effective onboarding of customers

  • Represent GCO's Technical Architecture at senior leadership meetings (eCP and EG) to influence and bring back enhancements to improve CCP



Help land GCO 2020 Engineering and Operational Excellence Goals

Mentor junior developers on platform engineering excellence dimensions (re-usable patterns, extensibility, configurability, scalability, performance, and design / implementation of core platform pieces)

Help develop a level of engineering muscle across GCO that becomes an asset for EG (as a provider of platform service as well as for talent)

Who you are:



  • BS or MS in Computer Science

  • 20 years of experience designing and developing complex, mission-critical, distributed software systems on a variety of platforms in high-tech industries

  • Hands on experience in designing, developing, and delivering (shipping) V1 (version one) MVP enterprise software products and solutions in a technical (engineering and architecture) capacity

  • Experience in building strong relationships with technology partners, customers, and getting closure on issues including delivering on time and to specification

  • Skills: Linux/Windows/VMS, Scala, Java, Python, C#, C++, Object Oriented Design (OOD), Spark, Kafka, REST/Web Services, Distributed Systems, Reliable and scalable transaction processing systems (HBase, Microsoft SQL, Oracle, Rdb)

  • Nice to have: experience in building highly scalable real-time processing platforms that host machine learning algorithms for guided / prescriptive learning

  • Identifies and solves problems at the company level while influencing product lines

  • Provides technical leadership in difficult times or serious crises

  • Key strategic player to long-term business strategy and vision

  • Recognized as an industry expert, and a recognized mentor and leader at the company

  • Provides strategic influence across groups, projects and products

  • Provides long term product strategy and vision through group level efforts

  • Drive for results: is sought out to lead company-wide initiatives that deliver cross-cutting lift to the organization, provides leadership in a crisis, and is a key player in long-term business strategy and vision

  • Technical/functional skills: proves credentials as an industry expert by inventing and delivering transformational technology/direction, and helps drive change beyond the company and across the industry

  • Has the vision to impact long-term product/technology horizon to transform the entire industry



Why join us:

Expedia Group recognizes our success is dependent on the success of our people.  We are the world's travel platform, made up of the most knowledgeable, passionate, and creative people in our business.  Our brands recognize the power of travel to break down barriers and make people's lives better – that responsibility inspires us to be the place where exceptional people want to do their best work, and to provide them the tools to do so. 


Whether you're applying to work in engineering or customer support, marketing or lodging supply, at Expedia Group we act as one team, working towards a common goal; to bring the world within reach.  We relentlessly strive for better, but not at the cost of the customer.  We act with humility and optimism, respecting ideas big and small.  We value diversity and voices of all volumes. We are a global organization but keep our feet on the ground, so we can act fast and stay simple.  Our teams also have the chance to give back on a local level and make a difference through our corporate social responsibility program, Expedia Cares.


If you have a hunger to make a difference with one of the most loved consumer brands in the world and to work in the dynamic travel industry, this is the job for you.


Our family of travel brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Egencia®, trivago®, HomeAway®, Orbitz®, Travelocity®, Wotif®, lastminute.com.au®, ebookers®, CheapTickets®, Hotwire®, Classic Vacations®, Expedia® Media Solutions, CarRentals.com™, Expedia Local Expert®, Expedia® CruiseShipCenters®, SilverRail Technologies, Inc., ALICE and Traveldoo®.



Expedia is committed to creating an inclusive work environment with a diverse workforce.   All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.  This employer participates in E-Verify. The employer will provide the Social Security Administration (SSA) and, if necessary, the Department of Homeland Security (DHS) with information from each new employee's I-9 to confirm work authorization.

Ultra Tendency
  • Berlin, Deutschland

Big Data Software Engineer


Lead your own development team and our customers to success! Ultra Tendency is looking for someone who convinces not just by writing excellent code, but also through strong presence and leadership. 


At Ultra Tendency you would:



  • Work in our office in Berlin/Magdeburg and on-site at our customer's offices

  • Make Big Data useful (build program code, test and deploy to various environments, design and optimize data processing algorithms for our customers)

  • Develop outstanding Big Data applications following the latest trends and methodologies

  • Be a role model and strong leader for your team and oversee the big picture

  • Prioritize tasks efficiently, evaluating and balancing the needs of all stakeholders


Ideally you have:



  • Strong experience in developing software using Python, Scala or a comparable language

  • Proven experience with data ingestion, analysis, integration, and design of Big Data applications using Apache open-source technologies

  • Profound knowledge of data engineering technologies, e.g. Kafka, Spark, HBase, Kubernetes (a minimal consumer sketch follows this list)

  • Strong background in developing on Linux

  • Solid computer science fundamentals (algorithms, data structures and programming skills in distributed systems)

  • Languages: fluent English; German is a plus
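
As promised above, a minimal consumer sketch (illustrative only; it assumes the third-party kafka-python package, a broker on localhost:9092, and an invented topic name):

```python
# Minimal Kafka consumer sketch using the kafka-python package.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "events",                            # invented topic name
    bootstrap_servers="localhost:9092",  # assumed local broker
    auto_offset_reset="earliest",
)
for message in consumer:  # blocks, polling for new messages
    print(message.value)
```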


We offer:



  • Fascinating tasks and unique Big Data challenges of major players from various industries (automotive, insurance, telecommunication, etc.)

  • Fair pay and bonuses

  • Work with our open-source Big Data gurus, such as our Apache HBase committer and Release Manager

  • International diverse team

  • Possibility to work with the open-source community and become a contributor

  • Work with cutting edge equipment and tools


Confidentiality guaranteed

Man AHL
  • London, UK

The Role


As a Quant Platform Developer at AHL you will be building the tools, frameworks, libraries and applications which power our Quantitative Research and Systematic Trading. This includes responsibility for the continued success of “Raptor”, our in-house Quant Platform, next generation Data Engineering, and evolution of our production Trading System as we continually expand the markets and types of assets we trade, and the styles in which we trade them. Your challenges will be varied and might involve building new high performance data acquisition and processing pipelines, cluster-computing solutions, numerical algorithms, position management systems, visualisation and reporting tools, operational user interfaces, continuous build systems and other developer productivity tools.


The Team


Quant Platform Developers at AHL are all part of our broader technology team, members of a group of over sixty individuals representing eighteen nationalities. We have varied backgrounds including Computer Science, Mathematics, Physics, Engineering – even Classics – but what unifies us is a passion for technology and writing high-quality code.



Our developers are organised into small cross-functional teams, with our engineering roles broadly of two kinds: “Quant Platform Developers” otherwise known as our “Core Techs”, and “Quant Developers” which we often refer to as “Sector Techs”. We use the term “Sector Tech” because some of our teams are aligned with a particular asset class or market sector. People often rotate teams in order to learn more about our system, as well as find the position that best matches their interests.


Our Technology


Our systems are almost all running on Linux and most of our code is in Python, with the full scientific stack: numpy, scipy, pandas, scikit-learn to name a few of the libraries we use extensively. We implement the systems that require the highest data throughput in Java. For storage, we rely heavily on MongoDB and Oracle.
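
For flavour, here is a minimal, self-contained sketch of the sort of work that stack implies, using pandas and scikit-learn on synthetic data; the series, feature and model are invented for illustration and are not AHL code:

    # Illustrative only: fit a toy regression on a synthetic daily price series.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    prices = pd.Series(
        100 + rng.normal(0, 1, 250).cumsum(),
        index=pd.bdate_range("2018-01-02", periods=250),
        name="price",
    )

    # Predict the next day's price from a 5-day rolling mean (a toy feature, not a strategy).
    features = prices.rolling(5).mean().dropna().to_frame("ma5")
    target = prices.shift(-1).reindex(features.index).dropna()
    features = features.loc[target.index]

    model = LinearRegression().fit(features, target)
    print(model.coef_, model.intercept_)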



We use Airflow for workflow management, Kafka for data pipelines, Bitbucket for source control, Jenkins for continuous integration, Grafana + Prometheus for metrics collection, ELK for log shipping and monitoring, Docker for containerisation, OpenStack for our private cloud, Ansible for architecture automation, and HipChat for internal communication. But our technology list is never static: we constantly evaluate new tools and libraries.
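
As a purely hypothetical example of how a workflow on such a stack might be declared, a small Airflow DAG (Airflow 2-style imports) could look like the sketch below; the dag_id, schedule and task bodies are invented:

    # Toy Airflow DAG; dag_id, schedule and task logic are invented for illustration.
    from datetime import datetime
    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("pull raw market data")

    def load():
        print("write curated data to the store")

    with DAG(
        dag_id="toy_market_data_pipeline",
        start_date=datetime(2019, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)
        extract_task >> load_task  # run extract before load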


Working Here


AHL has a small-company, no-attitude feel. It is flat-structured, open, transparent and collaborative, and you will have plenty of opportunity to grow and have enormous impact on what we do. We are actively engaged with the broader technology community.



  • We host and sponsor London’s PyData and Machine Learning Meetups

  • We open-source some of our technology. See https://github.com/manahl

  • We regularly talk at leading industry conferences, and tweet about relevant technology and how we’re using it. See @manahltech



We’re fortunate enough to have a fantastic open-plan office overlooking the River Thames, and continually strive to make our environment a great place in which to work.



  • We organise regular social events, everything from photography and climbing to karting and wine tasting, plus monthly team lunches

  • We have annual away days and off-sites for the whole team

  • We have a canteen with a daily allowance for breakfast and lunch, and an on-site bar for the evenings

  • As well as PCs and Macs, in our office you’ll also find numerous pieces of cool tech such as light cubes and 3D printers, guitars, ping-pong and table-football, and a piano.



We offer competitive compensation, a generous holiday allowance, various health and other flexible benefits. We are also committed to continuous learning and development via coaching, mentoring, regular conference attendance and sponsoring academic and professional qualifications.


Technology and Business Skills


At AHL we strive to hire only the brightest, most highly skilled and passionate technologists.



Essential



  • Exceptional technology skills; recognised by your peers as an expert in your domain

  • A proponent of strong collaborative software engineering techniques and methods: agile development, continuous integration, code review, unit testing, refactoring and related approaches

  • Expert knowledge in one or more programming languages, preferably Python, Java and/or C/C++

  • Proficient on Linux platforms with knowledge of various scripting languages

  • Strong knowledge of one or more relevant database technologies e.g. Oracle, MongoDB

  • Proficient with a range of open source frameworks and development tools e.g. NumPy/SciPy/Pandas, Pyramid, AngularJS, React

  • Familiarity with a variety of programming styles (e.g. OO, functional) and in-depth knowledge of design patterns.



Advantageous



  • An excellent understanding of financial markets and instruments

  • Experience of front office software and/or trading systems development e.g. in a hedge fund or investment bank

  • Expertise in building distributed systems with service-based or event-driven architectures, and concurrent processing

  • A knowledge of modern practices for data engineering and stream processing

  • An understanding of financial market data collection and processing

  • Experience of web based development and visualisation technology for portraying large and complex data sets and relationships

  • Relevant mathematical knowledge e.g. statistics, asset pricing theory, optimisation algorithms.


Personal Attributes



  • Strong academic record and a degree with high mathematical and computing content e.g. Computer Science, Mathematics, Engineering or Physics from a leading university

  • Craftsman-like approach to building software; takes pride in engineering excellence and instils these values in others

  • Demonstrable passion for technology e.g. personal projects, open-source involvement

  • Intellectually robust with a keenly analytic approach to problem solving

  • Self-organised with the ability to effectively manage time across multiple projects and with competing business demands and priorities

  • Focused on delivering value to the business with relentless efforts to improve process

  • Strong interpersonal skills; able to establish and maintain a close working relationship with quantitative researchers, traders and senior business people alike

  • Confident communicator; able to argue a point concisely and deal positively with conflicting views.

TRA Robotics
  • Berlin, Germany

We are engineers, designers and technologists, united by the idea of shaping the future. Our mission is to reimagine the manufacturing process. It will be fully software-defined and driven entirely by AI. This means new products will get to market much quicker.


Now we are working on creating a flexible robotic factory managed by AI. We are developing and integrating a stack of products that will facilitate the whole production process from design to manufacturing. Our goal is complex and deeply rooted in science. We understand that it is only achievable in collaboration across diverse disciplines and knowledge domains.


Currently, we are looking for a Senior Software Engineer, experienced in Java or Scala, to join our Operations Control System team.


About the project:


The main goal is to create Distributed Fault-Tolerant Middleware. It will coordinate all robotic operations in the factory: from the interaction between a robotic arm and various sensors to projecting requirements onto the actual distributed sequence of robot actions.


In our work, we use cutting-edge technologies and approaches: Scala/Akka, Apache Ignite, Apache Spark, Akka Streams. As for data analysis, it is entirely up to the team to choose which methods and tools to use. We are in continuous research and development; nothing is set in stone yet. You can influence the decisions and technologies we use.


To date, the Operations Control System supports several architecture patterns (a toy sketch of the blackboard pattern follows this list):



  • BlackBoard – storage based on modern in-memory approaches

  • A multi-agent system for managing operations

  • A knowledge base of technological operations with declarative semantics

  • A rule-engine-based system for algorithm selection

  • Factory monitoring tools (Complex Event Processing)
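
For reference only, the blackboard pattern named above can be caricatured in a few lines of Python; this is a textbook toy with invented agents, not TRA's middleware:

    # Textbook toy of the blackboard pattern: agents read shared state and post results.
    class Blackboard:
        def __init__(self):
            self.state = {}

    class Agent:
        def can_act(self, bb):   # does this agent have something to contribute?
            raise NotImplementedError
        def act(self, bb):       # write a contribution back to the blackboard
            raise NotImplementedError

    class SensorAgent(Agent):
        def can_act(self, bb):
            return "reading" not in bb.state
        def act(self, bb):
            bb.state["reading"] = 17.3  # pretend sensor value

    class PlannerAgent(Agent):
        def can_act(self, bb):
            return "reading" in bb.state and "plan" not in bb.state
        def act(self, bb):
            bb.state["plan"] = f"move arm, reading={bb.state['reading']}"

    # Control loop: repeatedly let the first agent that can act do so.
    bb, agents = Blackboard(), [PlannerAgent(), SensorAgent()]
    while any(a.can_act(bb) for a in agents):
        next(a for a in agents if a.can_act(bb)).act(bb)
    print(bb.state["plan"])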


Therefore, as part of the team, you will create a new core technology for all our projects and for the whole industry.


Your Qualifications:



  • Proficiency with Java or Scala Core (3+ years)

  • Strong knowledge of SQL (2+ years)

  • Extensive experience in enterprise development (2+ years)

  • Excellent knowledge of algorithms and data structures

  • Experience with git/maven

  • Fluency in English


Will be an advantage:



  • Experience in Akka, GridGain/Ignite, Hadoop/Spark

  • Understanding of the main concepts of distributed systems

  • Knowledge of multi-agent systems

  • Basic knowledge of rule-based engines

  • Experience in building high-load systems

  • Experience in Linux (as a power user)


What we offer:



  • A highly science-intensive culture and the chance to take part in developing a unique product

  • The ability to choose the technology stack and approaches

  • Yearly educational budget - we support your ambitions to learn

  • Relocation package - we would like to make your start as smooth as possible

  • Flexible working environment - choose your working hours and equipment

  • Cosy co-working space in Berlin-Mitte with access to a terrace

Comcast
  • Philadelphia, PA

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.

As a Data Science Engineer in Comcast dx, you will research, model, develop, and support data pipelines and deliver insights for key strategic initiatives. You will develop or utilize complex programmatic and quantitative methods to find patterns and relationships in data sets; lead statistical modeling, or other data-driven problem-solving analysis to address novel or abstract business operation questions; and incorporate insights and findings into a range of products.

Assist in design and development of collection and enrichment components focused on quality, timeliness, scale and reliability. Work on real-time data stores and a massive historical data store using best-of-breed and industry leading technology.

Responsibilities:

-Develop and support data pipelines

-Analyze massive amounts of data, both in real time and in batch, using Spark, Kafka, and AWS technologies such as Kinesis, S3, Elasticsearch, and Lambda (see the streaming sketch after this list)

-Create detailed write-ups of the processes, logic, and methodologies used for creation, validation, analysis, and visualizations. Write-ups are due within a week of a process being created and must be updated in writing when changes occur.

-Prototype ideas for new ML/AI tools, products and services

-Centralize data collection and synthesis, including survey data, enabling strategic and predictive analytics to guide business decisions

-Provide expert and professional data analysis to implement effective and innovative solutions meshing disparate data types to discover insights and trends.

-Employ rigorous continuous delivery practices managed under an agile software development approach

-Support DevOps practices to deploy and operate our systems

-Automate and streamline our operations and processes

-Troubleshoot and resolve issues in our development, test and production environments
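
Purely as an illustration of the first responsibility above, a Kafka-fed Spark Structured Streaming job in PySpark might begin like the sketch below; the broker, topic and console sink are placeholders, not Comcast infrastructure, and the spark-sql-kafka connector package is assumed to be on the classpath:

    # Placeholder sketch: count events per minute from a Kafka topic with PySpark.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("toy-event-counts").getOrCreate()

    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")  # assumed broker
        .option("subscribe", "events")                        # hypothetical topic
        .load()
    )

    # Tumbling one-minute windows over the Kafka record timestamps.
    counts = (
        events.selectExpr("CAST(value AS STRING) AS value", "timestamp")
        .groupBy(F.window("timestamp", "1 minute"))
        .count()
    )

    query = counts.writeStream.outputMode("complete").format("console").start()
    query.awaitTermination()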

Here are some of the specific technologies and concepts we use:

-Spark Core and Spark Streaming

-Machine learning techniques and algorithms

-Java, Scala, Python, R

-Artificial Intelligence

-AWS services including EMR, S3, Lambda, Elasticsearch

-Predictive Analytics

-Tableau, Kibana

-Git, Maven, Jenkins

-Linux

-Kafka

-Hadoop (HDFS, YARN)

Skills & Requirements:

-5-8 years of Java experience, Scala and Python experience a plus

-3+ years of experience as an analyst, data scientist, or related quantitative role.

-3+ years of relevant quantitative and qualitative research and analytics experience. Solid knowledge of statistical techniques.

-Bachelor's in Statistics, Math, Engineering, Computer Science, or a related discipline. Master's degree preferred.

-Experience in software development of large-scale distributed systems including proven track record of delivering backend systems that participate in a complex ecosystem

-Experience with more advanced modeling techniques (e.g. ML).

-Distinctive problem solving and analysis skills and impeccable business judgement.

-Experience working with imperfect data sets that, at times, will require improvements to process, definition and collection

-Experience with real-time data pipelines and components including Kafka, Spark Streaming

-Proficient in Unix/Linux environments

-Test-driven development/test automation, continuous integration, and deployment automation

-Excellent communicator, able to analyze and clearly articulate complex issues and technologies understandably and engagingly

-Team player is a must

-Great design and problem-solving skills

-Adaptable, proactive and willing to take ownership

-Attention to detail and high level of commitment

-Thrives in a fast-paced agile environment

About Comcast dx:

Comcast dx is a results-driven big data engineering team responsible for delivering the multi-tenant data infrastructure and platforms necessary to support our data-driven culture and organization. dx has an overarching objective to gather, organize, and make sense of Comcast data with the intention of revealing business and operational insight, discovering actionable intelligence, enabling experimentation, empowering users, and delighting our stakeholders. Members of the dx team define and leverage industry best practices, work on large-scale data problems, design and develop resilient and highly robust distributed data organizing and processing systems and pipelines, and research, engineer, and apply data science and machine intelligence disciplines.

Comcast is an EOE/Veterans/Disabled/LGBT employer

Comcast
  • Philadelphia, PA

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.

As a Data Science Engineer in Comcast dx, you will research, model, develop, and support data pipelines and deliver insights for key strategic initiatives. You will develop or utilize complex programmatic and quantitative methods to find patterns and relationships in data sets; lead statistical modeling, or other data-driven problem-solving analysis to address novel or abstract business operation questions; and incorporate insights and findings into a range of products.

Assist in design and development of collection and enrichment components focused on quality, timeliness, scale and reliability. Work on real-time data stores and a massive historical data store using best-of-breed and industry leading technology.

Responsibilities:

-Develop and support data pipelines

-Analyze massive amounts of data, both in real time and in batch, using Spark, Kafka, and AWS technologies such as Kinesis, S3, Elasticsearch, and Lambda

-Create detailed write-ups of the processes, logic, and methodologies used for creation, validation, analysis, and visualizations. Write-ups are due within a week of a process being created and must be updated in writing when changes occur.

-Prototype ideas for new ML/AI tools, products and services

-Centralize data collection and synthesis, including survey data, enabling strategic and predictive analytics to guide business decisions

-Provide expert and professional data analysis to implement effective and innovative solutions meshing disparate data types to discover insights and trends.

-Employ rigorous continuous delivery practices managed under an agile software development approach

-Support DevOps practices to deploy and operate our systems

-Automate and streamline our operations and processes

-Troubleshoot and resolve issues in our development, test and production environments

Here are some of the specific technologies and concepts we use:

-Spark Core and Spark Streaming

-Machine learning techniques and algorithms

-Java, Scala, Python, R

-Artificial Intelligence

-AWS services including EMR, S3, Lambda, Elasticsearch

-Predictive Analytics

-Tableau, Kibana

-Git, Maven, Jenkins

-Linux

-Kafka

-Hadoop (HDFS, YARN)

Skills & Requirements:

-3-5 years of Java experience, Scala and Python experience a plus

-2+ years of experience as an analyst, data scientist, or related quantitative role.

-2+ years of relevant quantitative and qualitative research and analytics experience. Solid knowledge of statistical techniques.

-Bachelor's in Statistics, Math, Engineering, Computer Science, or a related discipline. Master's degree preferred.

-Experience in software development of large-scale distributed systems including proven track record of delivering backend systems that participate in a complex ecosystem

-Experience with more advanced modeling techniques (e.g. ML).

-Distinctive problem solving and analysis skills and impeccable business judgement.

-Experience working with imperfect data sets that, at times, will require improvements to process, definition and collection

-Experience with real-time data pipelines and components including Kafka, Spark Streaming

-Proficient in Unix/Linux environments

-Test-driven development/test automation, continuous integration, and deployment automation

-Excellent communicator, able to analyze and clearly articulate complex issues and technologies understandably and engagingly

-Team player is a must

-Great design and problem-solving skills

-Adaptable, proactive and willing to take ownership

-Attention to detail and high level of commitment

-Thrives in a fast-paced agile environment

About Comcast dx:

Comcast dx is a results-driven big data engineering team responsible for delivering the multi-tenant data infrastructure and platforms necessary to support our data-driven culture and organization. dx has an overarching objective to gather, organize, and make sense of Comcast data with the intention of revealing business and operational insight, discovering actionable intelligence, enabling experimentation, empowering users, and delighting our stakeholders. Members of the dx team define and leverage industry best practices, work on large-scale data problems, design and develop resilient and highly robust distributed data organizing and processing systems and pipelines, and research, engineer, and apply data science and machine intelligence disciplines.

Comcast is an EOE/Veterans/Disabled/LGBT employer

FlixBus
  • Berlin, Germany

Your Tasks – Paint the world green



  • Holistic cloud-based infrastructure automation

  • Distributed data processing clusters as well as data streaming platforms based on Kafka, Flink and Spark

  • Microservice platforms based on Docker

  • Development infrastructure and QA automation

  • Continuous Integration/Delivery/Deployment


Your Profile – Ready to hop on board



  • Experience in building and operating complex infrastructure

  • Expert-level: Linux, System Administration

  • Experience with Cloud Services, Expert-Level with either AWS or GCP  

  • Experience with server and operating-system-level virtualization is a strong plus, in particular practical experience with Docker and cluster technologies like Kubernetes, AWS ECS, OpenShift

  • Mindset: "Automate Everything", "Infrastructure as Code", "Pipelines as Code", "Everything as Code"

  • Hands-on experience with "Infrastructure as Code" tools: Terraform, CloudFormation, Packer

  • Experience with provisioning/configuration management tools (Ansible, Chef, Puppet, Salt)

  • Experience designing, building and integrating systems for instrumentation, metrics/log collection, and monitoring: CloudWatch, Prometheus, Grafana, DataDog, ELK (see the sketch at the end of this posting)

  • At least basic knowledge of designing and implementing Service Level Agreements

  • Solid knowledge of Network and general Security Engineering

  • At least basic experience with systems and approaches for Test, Build and Deployment automation (CI/CD): Jenkins, TravisCI, Bamboo

  • At least basic hands-on DBA experience, including data backup and recovery

  • Experience with JVM-based build automation is a plus: Maven, Gradle, Nexus, JFrog Artifactory
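
To illustrate the instrumentation item above, a service can expose metrics for Prometheus to scrape using the official Python client (prometheus_client); the metric name and port below are placeholders:

    # Placeholder sketch: expose a request counter for Prometheus to scrape.
    import time
    from prometheus_client import Counter, start_http_server

    REQUESTS = Counter("toy_requests_total", "Requests handled by the toy service")

    if __name__ == "__main__":
        start_http_server(8000)  # metrics served at http://localhost:8000/metrics
        while True:
            REQUESTS.inc()       # pretend we handled a request
            time.sleep(1.0)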