OnlyDataJobs.com

Limelight Networks
  • Phoenix, AZ

Job Purpose:

The Sr. Data Services Engineer assists in maintaining the operational aspects of Limelight Networks platforms, provides guidance to the Operations group and acts as an escalation point for advanced troubleshooting of systems issues. The Sr. Data Services Engineer assists in the execution of tactical and strategic operational infrastructure initiatives by building and managing complex computing systems and processes that facilitate the introduction of new products and services while allowing existing services to scale.


Qualifications: Experience and Education (minimums)

  • Bachelor's Degree or equivalent experience.
  • 2+ years' experience working with MySQL (or other databases such as MongoDB, Cassandra, Hadoop, etc.) in a large-scale enterprise environment.
  • 2+ years' Linux Systems Administration experience.
  • 2+ years' experience with version control, shell scripting, and one or more scripting languages, including Python, Perl, Ruby, and PHP.
  • 2+ years' experience with configuration management systems such as Puppet, Chef, or Salt.
  • Experience with MySQL HA/clustering solutions; Corosync, Pacemaker, and DRBD preferred.
  • Experience supporting open-source messaging solutions such as RabbitMQ or ActiveMQ preferred.

Knowledge, Skills & Abilities

  • Collaborative in a fast-paced environment while providing exceptional visibility to management and end-to-end ownership of incidents, projects and tasks.
  • Ability to implement and maintain complex datastores.
  • Knowledge of configuration management and release engineering processes and methodologies.
  • Excellent coordination, planning and written and verbal communication skills.
  • Knowledge of the Agile project management methodologies preferred.
  • Knowledge of a NoSQL/Big Data platform; Hadoop, MongoDB or Cassandra preferred.
  • Ability to participate in a 24/7 on call rotation.
  • Ability to travel when necessary.

Essential Functions:

  • Develop and maintain core competencies of the team in accordance with applicable architectures and standards.
  • Participate in capacity management of services and systems.
  • Maintain plans, processes and procedures necessary for the proper deployment and operation of systems and services.
  • Identify gaps in the operation of products and services and drive enhancements.
  • Evaluate release processes and tools to find areas for improvement.
  • Contribute to the release and change management process by collaborating with the developers and other Engineering groups.
  • Participate in development meetings and implement required changes to the operational architecture, standards, processes or procedures and ensure they are in place prior to release (e.g., monitoring, documentation and metrics).
  • Maintain a positive demeanor and a high level of professionalism at all times.
  • Implement proactive monitoring capabilities that ensure minimal disruption to the user community including: early failure detection mechanisms, log monitoring, session tracing and data capture to aid in the troubleshooting process.
  • Implement HA and DR capabilities to support business requirements.
  • Troubleshoot and investigate database related issues.
  • Maintain migration plans and data refresh mechanisms to keep environments current and in sync with production.
  • Implement backup and recovery procedures utilizing various methods to provide flexible data recovery capabilities (a minimal sketch of one such method follows this list).
  • Work with management and security team to assist in implementing and enforcing security policies.
  • Create and manage user and security profiles ensuring application security policies and procedures are followed.
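As a hedged illustration of the backup-and-recovery item above: a minimal Python sketch of a logical-backup wrapper around mysqldump. Every name here (backup directory, database list) is a hypothetical stand-in; a production setup would add remote copies, retention, and restore testing.

    #!/usr/bin/env python3
    """Minimal MySQL logical-backup sketch; all names are hypothetical."""
    import datetime
    import pathlib
    import subprocess

    BACKUP_DIR = pathlib.Path("/var/backups/mysql")   # assumed destination
    DATABASES = ["appdb", "metrics"]                  # assumed database names

    def dump_database(name: str) -> pathlib.Path:
        """Run mysqldump with a consistent snapshot and gzip the output."""
        stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
        outfile = BACKUP_DIR / f"{name}-{stamp}.sql.gz"
        dump = subprocess.Popen(
            ["mysqldump", "--single-transaction", "--routines", name],
            stdout=subprocess.PIPE,
        )
        with open(outfile, "wb") as fh:
            subprocess.run(["gzip", "-c"], stdin=dump.stdout, stdout=fh, check=True)
        if dump.wait() != 0:
            raise RuntimeError(f"mysqldump failed for {name}")
        return outfile

    if __name__ == "__main__":
        BACKUP_DIR.mkdir(parents=True, exist_ok=True)
        for db in DATABASES:
            print("wrote", dump_database(db))

The --single-transaction flag gives a consistent snapshot of InnoDB tables without locking; recovery is then a matter of streaming the dump back through the mysql client.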

VidMob
  • Pittsfield, MA
  • Salary: $65k - 85k

Who We’re Seeking


VidMob’s Ads Integration Engineer is a highly technical position that works with our strategic ad platform partners, integrating their APIs into the VidMob platform. You enjoy digging into complex campaign management and reporting frameworks, which allows you to build elegant and scalable integrations. Your experience with ad tech makes you the expert on a team when talking about metrics, dimensions, KPIs, campaigns, squads, and formats.


What You’ll Do


You will engage with some of the world's largest companies to extend their campaign and media performance API offerings through the VidMob platform. You will build tools and automation to pull and report on data, and keep those integrations up to date. You’ll work closely with our data engineers to maximize our data pipelines and write clear documentation so our front-end engineers can quickly build features around each integration.


This position is full time and is based in Pittsfield, MA.


Responsibilities



  • Define and implement API integrations with our strategic partners

  • Work closely with our strategic partners staying up to date on product changes

  • Be an ads/reporting integration technical expert, and have a strategic influence on partners and internal teams at VidMob

  • Support VidMob engineering efforts in other areas as needed


Minimum Qualifications



  • 3+ years of previous experience as a software engineer

  • Ad Tech experience is a must, with a strong understanding of campaign management tools across the major platforms (Facebook, Google, Snapchat, Twitter, etc.)

  • Experience with DSPs a plus

  • Solid software development skills, with experience building software in Java

  • Additional experience in at least one of Python, PHP, C/C++, Ruby, or Scala

  • Excellent communication skills including experience presenting to technical and business audiences

  • BA/BS in Computer Science or equivalent degree/experience

Citizens Advice
  • London, UK
  • Salary: £40k - 45k

As a Database engineer in the DevOps team here at Citizens Advice you will help us develop and implement our data strategy. You will have the opportunity to work with both core database technologies and big data solutions.


Past


Starting from scratch, we have built a deep tech-stack with AWS services at its core. We created a new CRM system, migrated a huge amount of data to AWS Aurora PG and used AWS RDS to run some of our business critical databases.


You will have gained a solid background and in-depth knowledge of AWS RDS and SQL/administration against DBMSs such as PostgreSQL / MySQL / SQL Server and Dynamo / Aurora. You will have dealt with Data Warehousing, ETL, DB mirroring/replication, and DB security mechanisms and techniques.


Present


We use AWS RDS including Aurora as the standard DB implementation for our applications. We parse data in S3 using Spark jobs and we are planning to implement a data lake solution in AWS.


Our tools and technologies include:



  • Postgres on AWS RDS

  • SQL Server for our Data Warehouse

  • Liquibase for managing the DW schema

  • Jenkins 2 for task automation

  • Spark / Parquet / AWS Glue for parsing raw data (see the sketch after this list)

  • Docker / docker-compose for local testing
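To give a flavor of the Spark / Parquet item above, here is a minimal PySpark sketch that parses raw JSON from S3 and writes partitioned Parquet. The bucket names, paths, and the timestamp column are hypothetical; a real job would run under AWS Glue or a managed cluster with proper credentials.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Hypothetical buckets/paths; credentials come from the cluster environment.
    RAW_PATH = "s3a://example-raw-bucket/events/"
    CURATED_PATH = "s3a://example-curated-bucket/events_parquet/"

    spark = SparkSession.builder.appName("parse-raw-events").getOrCreate()

    events = (
        spark.read.json(RAW_PATH)                          # parse raw JSON lines
        .withColumn("event_date", F.to_date("timestamp"))  # assumes a 'timestamp' column
    )

    # Partitioned Parquet is a common layout for downstream SQL and Glue consumers.
    events.write.mode("overwrite").partitionBy("event_date").parquet(CURATED_PATH)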


You will be developing, supporting and maintaining automation tools to drive database, reporting and maintenance tasks.


As part of our internal engineering platform offering, R&D time will give you the opportunity to develop POC solutions to integrate with the rest of the business.


Future


You will seek continuous improvement and implement solutions to help Citizens Advice deliver digital products better and quicker.


You will be helping us implement a data lake solution to improve operations and to offer innovative services.


You will have dedicated investment time at Citizens Advice to learn new skills, technologies, research topics or work on tools that make this possible.

FCA Fiat Chrysler Automobiles
  • Detroit, MI

Fiat Chrysler Automobiles is looking to fill the full-time position of a Data Scientist. This position is responsible for delivering insights to the commercial functions in which FCA operates.


The Data Scientist is a role in the Business Analytics & Data Services (BA) department and reports through the CIO. They will play a pivotal role in the planning, execution and delivery of data science and machine learning-based projects. The bulk of the work will be in the areas of data exploration and preparation, data collection and integration, machine learning (ML) and statistical modelling, and data pipelining and deployment.

The newly hired data scientist will be a key interface between the ICT Sales & Marketing team, the Business and the BA team. Candidates need to be very much self-driven, curious and creative.

Primary Responsibilities:

    • Problem Analysis and Project Management:
      • Guide and inspire the organization about the business potential and strategy of artificial intelligence (AI)/data science
      • Identify data-driven/ML business opportunities
      • Collaborate across the business to understand IT and business constraints
      • Prioritize, scope and manage data science projects and the corresponding key performance indicators (KPIs) for success
    • Data Exploration and Preparation:
      • Apply statistical analysis and visualization techniques to various data, such as hierarchical clustering, T-distributed Stochastic Neighbor Embedding (t-SNE), and principal components analysis (PCA) (see the sketch after this list)
      • Generate and test hypotheses about the underlying mechanics of the business process.
      • Network with domain experts to better understand the business mechanics that generated the data.
    • Data Collection and Integration:
      • Understand new data sources and process pipelines. Catalog and document their use in solving business problems.
      • Create data pipelines and assets that enable more efficiency and repeatability of data science activities.
    • Machine Learning and Statistical Modelling:
      • Apply various ML and advanced analytics techniques to perform classification or prediction tasks
      • Integrate domain knowledge into the ML solution; for example, from an understanding of financial risk, customer journey, quality prediction, sales, marketing
      • Test ML models using techniques such as cross-validation and A/B testing, and check for bias and fairness
    • Operationalization:
      • Collaborate with ML operations (MLOps), data engineers, and IT to evaluate and implement ML deployment options
      • (Help to) integrate model performance management tools into the current business infrastructure
      • (Help to) implement champion/challenger test (A/B tests) on production systems
      • Continuously monitor execution and health of production ML models
      • Establish best practices around ML production infrastructure
    • Other Responsibilities:
      • Train other business and IT staff on basic data science principles and techniques
      • Train peers on specialist data science topics
      • Promote collaboration with the data science COE within the organization.
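For the exploration bullet referenced above, here is a minimal scikit-learn sketch of the two named projection techniques (PCA and t-SNE) run on synthetic data; it is an illustration only, with made-up data standing in for prepared business features.

    from sklearn.datasets import make_blobs
    from sklearn.decomposition import PCA
    from sklearn.manifold import TSNE
    from sklearn.preprocessing import StandardScaler

    # Synthetic stand-in for a prepared business dataset.
    X, _ = make_blobs(n_samples=500, n_features=20, centers=4, random_state=0)
    X = StandardScaler().fit_transform(X)  # scale features before projecting

    # PCA: linear projection, useful for variance diagnostics.
    pca = PCA(n_components=2)
    X_pca = pca.fit_transform(X)
    print("explained variance ratio:", pca.explained_variance_ratio_)

    # t-SNE: nonlinear embedding for visualizing cluster structure.
    X_tsne = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
    print("t-SNE embedding shape:", X_tsne.shape)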

Basic Qualifications:

    • A bachelor's degree in computer science, data science, operations research, statistics, applied mathematics, or a related quantitative field is required; alternate experience and education in equivalent areas such as economics, engineering, or physics is acceptable. Experience in more than one area is strongly preferred.
    • Candidates should have three to six years of relevant project experience in successfully launching, planning, and executing data science projects, preferably in the domains of automotive or customer behavior prediction.
    • Coding knowledge and experience in several languages: for example, R, Python, SQL, Java, C++, etc.
    • Experience of working across multiple deployment environments including cloud, on-premises and hybrid, multiple operating systems and through containerization techniques such as Docker, Kubernetes, AWS Elastic Container Service, and others.
    • Experience with distributed data/computing and database tools: MapReduce, Hadoop, Hive, Kafka, MySQL, Postgres, DB2 or Greenplum, etc.
    • All candidates must be self-driven, curious and creative.
    • They must demonstrate the ability to work in diverse, cross-functional teams.
    • Should be confident, energetic self-starters, with strong moderation and communication skills.

Preferred Qualifications:

    • A master's degree or PhD in statistics, ML, computer science, or the natural sciences (especially physics) or any engineering discipline, or equivalent.
    • Experience in one or more of the following commercial/open-source data discovery/analysis platforms: RStudio, Spark, KNIME, RapidMiner, Alteryx, Dataiku, H2O, SAS Enterprise Miner (SAS EM) and/or SAS Visual Data Mining and Machine Learning, Microsoft AzureML, IBM Watson Studio or SPSS Modeler, Amazon SageMaker, Google Cloud ML, SAP Predictive Analytics.
    • Knowledge and experience in statistical and data mining techniques: generalized linear model (GLM)/regression, random forest, boosting, trees, text mining, hierarchical clustering, deep learning, convolutional neural network (CNN), recurrent neural network (RNN), T-distributed Stochastic Neighbor Embedding (t-SNE), graph analysis, etc.
    • A specialization in text analytics, image recognition, graph analysis or other specialized ML techniques such as deep learning, etc., is preferred.
    • Ideally, the candidates are adept in agile methodologies and well-versed in applying DevOps/MLOps methods to the construction of ML and data science pipelines.
    • Knowledge of industry standard BA tools, including Cognos, QlikView, Business Objects, and other tools that could be used for enterprise solutions
    • Should exhibit superior presentation skills, including storytelling and other techniques, to guide, inspire, and explain analytics capabilities and techniques to the organization.
BIZX, LLC / Slashdot Media / SourceForge.net
  • San Diego, CA

Job Description (your role):


The Senior Data Engineer position is a challenging role that bridges the gap between data management and software development. This role reports directly to and works closely with the Director of Data Management while teaming with our software development group. You will work with the team that is designing and implementing the next generation of our internal systems, replacing legacy technical debt with state-of-the-art design to enable faster product and feature creation in our big data environment.


Our Industry and Company Environment:

The candidate must have the desire to work and collaborate in a fast-paced entrepreneurial environment in the B2B technology marketing and big data space, working with highly motivated co-workers in our downtown San Diego office.


Responsibilities


  • Design interfaces allowing the operations department to fully utilize large data sets
  • Implement machine learning algorithms to sort and organize large data sets
  • Participate in the research, design, and development of software tools
  • Identify, design, and implement process improvements: automating manual processes
  • Optimize data delivery, re-designing infrastructure for greater scalability
  • Analyze and interpret large data sets
  • Build reliable services for gathering & ingesting data from a wide variety of sources
  • Work with peers and stakeholders to plan approach and define success
  • Create efficient methods to clean and curate large data sets


Qualifications

    • Have a B.S., M.S. or Ph.D. in Computer Science or equivalent degree and work experience
    • Deep understanding of developing high-efficiency data processing systems
    • Experience with development of applications in mission-critical environments
    • Experience with our stack:
      • 3+ years' experience developing in JavaScript, PHP, Symfony
      • 3+ years' experience developing and implementing machine learning algorithms
      • 4+ years' experience with data science tool sets
      • 3+ years' experience with MySQL
      • Experience with ElasticSearch a plus
      • Experience with Ceph a plus
 

About  BIZX, LLC / Slashdot Media / SourceForge.net


BIZX, including its Slashdot Media division, is a global leader in online professional technology communities such as SourceForge.net, serving over 40M website visitors and over 150M page views each month to an enthusiastic and engaged audience of IT professionals, decision makers, developers and enthusiasts around the world. Our Passport demand generation platform leverages our huge B2B database and is considered best in class by our list of Fortune 1000 customers. Our impressive growth in the demand generation space is fueled through our use of AI, big data technologies, sophisticated systems automation - and great people.


Location - 101 W Broadway, San Diego, CA

The HT Group
  • Austin, TX

Full Stack Engineer, Java/Scala Direct Hire Austin

Do you have a track record of building both internal- and external-facing software services in a dynamic environment? Are you passionate about introducing disruptive and innovative software solutions for the shipping and logistics industry? Are you ready to deliver immediate impact with the software you create?

We are looking for Full Stack Engineers to craft, implement and deploy new features, services, platforms, and products. If you are curious, driven, and naturally explore how to build elegant and creative solutions to complex technical challenges, this may be the right fit for you. If you value a sense of community and shared commitment, you'll collaborate closely with others in a full-stack role to ship software that delivers immediate and continuous business value. Are you up for the challenge?

Tech Tools:

  • Application stack runs entirely on Docker, frontend and backend
  • Infrastructure is 100% Amazon Web Services and we use AWS services whenever possible. Current examples: EC2, Elastic Container Service (Docker), Kinesis, SQS, Lambda, and Redshift
  • Java and Scala are the languages of choice for long-lived backend services
  • Python for tooling and data science
  • Postgres is the SQL database of choice
  • Actively migrating to a modern JavaScript-centric frontend built on Node, React/Relay, and GraphQL as some of our core UI technologies

Responsibilities:

  • Build both internal and external REST/JSON services running on our 100% Docker-based application stack or within AWS Lambda (a minimal sketch follows this list)
  • Build data pipelines around event-based and streaming-based AWS services and application features
  • Write deployment, monitoring, and internal tooling to operate our software with as much efficiency as we build it
  • Share ownership of all facets of software delivery, including development, operations, and test
  • Mentor junior members of the team and coach them to be even better at what they do
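As a sketch of the REST/JSON-on-Lambda pattern in the first bullet (the posting's service languages are Java and Scala; Python is used here purely for brevity, and the route and payload fields are hypothetical):

    import json

    def handler(event, context):
        """Minimal API Gateway-style Lambda: look up a hypothetical shipment."""
        # The API Gateway proxy integration delivers the request body as a string.
        body = json.loads(event.get("body") or "{}")
        shipment_id = body.get("shipment_id")  # hypothetical field

        if shipment_id is None:
            return {
                "statusCode": 400,
                "headers": {"Content-Type": "application/json"},
                "body": json.dumps({"error": "shipment_id is required"}),
            }

        # A real service would query a datastore here.
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"shipment_id": shipment_id, "status": "in_transit"}),
        }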

Requirements:

  • Embrace the AWS + DevOps philosophy and believe this is an innovative approach to creating and deploying products and technical solutions that require software engineers to be truly full-stack
  • Have high-quality standards, pay attention to details, and love writing beautiful, well-designed and tested code that can stand the test of time
  • Have built high-quality software, solved technical problems at scale and believe in shipping software iteratively and often
  • Proficient in and have delivered software in Java, Scala, and possibly other JVM languages
  • Developed a strong command over Computer Science fundamentals
Avaloq Evolution AG
  • Zürich, Switzerland

The position


Are you passionate about data architecture? Are you interested in shaping the next generation of data science driven products for the financial industry? Do you enjoy working in an agile environment involving multiple stakeholders?

You will be responsible for selecting appropriate technologies from open-source, commercial on-premises, and cloud-based offerings, and for integrating a new generation of tools within the existing environment to ensure access to accurate and current data. You will consider not only the functional requirements, but also the non-functional attributes of platform quality such as security, usability, and stability.

We want you to help us strengthen and further develop the transformation of Avaloq into a data-driven product company: make analytics scalable and accelerate the process of data science innovation.


Your profile


  • PhD, Master or Bachelor degree in Computer Science, Math, Physics, Engineering, Statistics or other technical field

  • Knowledgeable with BigData technologies and architectures (e.g. Hadoop, Spark, data lakes, stream processing)

  • Practical experience with container platforms (OpenShift) and/or containerization software (Kubernetes, Docker)

  • Hands-on experience developing data extraction and transformation pipelines (ETL process)

  • Expert knowledge in RDBMS, NoSQL and Data Warehousing

  • Familiar with information retrieval software such as Elasticsearch/Lucene/Solr

  • Firm understanding of major programming/scripting languages such as Java/Scala, PHP, Python, and/or R, plus Linux shell scripting

  • High integrity, responsibility and confidentiality a requirement for dealing with sensitive data

  • Strong presentation and communication skills

  • Good planning and organisational skills

  • Collaborative mindset to sharing ideas and finding solutions

  • Fluent in English; German, Italian and French a plus





 Professional requirements


  • Be a thought leader for best practices on how to develop and deploy data science products & services

  • Provide an infrastructure to make data driven insights scalable and agile

  • Liaise and coordinate with stakeholders regarding setting up and running a BigData and analytics platform

  • Lead the evaluation of business and technical requirements

  • Support data-driven activities and a data-driven mindset where needed



Main place of work
Zurich

Contact
Avaloq Evolution AG
Anna Drozdowska, Talent Acquisition Professional
Allmendstrasse 140 - 8027 Zürich - Switzerland

www.avaloq.com/en/open-positions

Please only apply online.

Note to Agencies: All unsolicited résumés will be considered direct applicants and no referral fee will be acknowledged.
Riccione Resources
  • Dallas, TX

Sr. Data Engineer - Hadoop, Spark, Data Pipelines, Growing Company

One of our clients is looking for a Sr. Data Engineer in the Fort Worth, TX area! Build your data expertise with projects centering on large Data Warehouses and new data models! Think outside the box to solve challenging problems! Thrive in the variety of technologies you will use in this role!

Why should I apply here?

    • Culture built on creativity and respect for engineering expertise
    • Nominated as one of the Best Places to Work in DFW
    • Entrepreneurial environment, growing portfolio and revenue stream
    • One of the fastest growing mid-size tech companies in DFW
    • Executive management with past successes in building firms
    • Leader of its technology niche, setting the standards
    • A robust, fast-paced work environment
    • Great technical challenges for top-notch engineers
    • Potential for career growth, emphasis on work/life balance
    • A remodeled office with a bistro, lounge, and foosball

What will I be doing?

    • Building data expertise and owning data quality for the transfer pipelines that you create to transform and move data to the company's large Data Warehouse
    • Architecting, constructing, and launching new data models that provide intuitive analytics to customers
    • Designing and developing new systems and tools to enable clients to optimize and track advertising campaigns
    • Using your expert skills across a number of platforms and tools such as Ruby, SQL, Linux shell scripting, Git, and Chef
    • Working across multiple teams in high visibility roles and owning the solution end-to-end
    • Providing support for existing production systems
    • Broadly influencing the company's clients and internal analysts

What skills/experiences do I need?

    • B.S. or M.S. degree in Computer Science or a related technical field
    • 5+ years of experience working with Hadoop and Spark
    • 5+ years of experience with Python or Ruby development
    • 5+ years of experience with efficient SQL (Postgres, Vertica, Oracle, etc.)
    • 5+ years of experience building and supporting applications on Linux-based systems
    • Background in engineering Spark data pipelines
    • Understanding of distributed systems

What will make my résumé stand out?

    • Ability to customize an ETL or ELT
    • Experience building an actual data warehouse schema

Location: Fort Worth, TX

Citizenship: U.S. citizens and those authorized to work in the U.S. are encouraged to apply. This company is currently unable to provide sponsorship (e.g., H1B).

Salary: $115-130k + 401k Match

---------------------------------------------------


~SW1317~

Gravity IT Resources
  • Miami, FL

Overview of Position:

We are undertaking an ambitious digital transformation across Sales, Service, Marketing, and eCommerce. We are looking for a web data analytics wizard with prior experience in digital data preparation, discovery, and predictive analytics.

The data scientist/web analyst will work with external partners, digital business partners, enterprise analytics, and the technology team to strategically plan and develop datasets, measure web analytics, and execute on predictive and prescriptive use cases. The role demands the ability to (1) learn quickly, (2) work in a fast-paced, team-driven environment, (3) manage multiple efforts simultaneously, (4) use large datasets and models to test the effectiveness of different courses of action, (5) promote data-driven decision making throughout the organization, and (6) define and measure the success of the capabilities we provide the organization.


Primary Duties and Responsibilities

    • Analyze data captured through Google Analytics and develop meaningful actionable insights on digital behavior.
    • Put together a customer 360 data frame by connecting CRM Sales, Service, Marketing cloud data with Commerce Web behavior data and wrangle the data into a usable form.
    • Use predictive modelling to increase and optimize customer experiences across online & offline channels.
    • Evaluate customer experience and conversions to provide insights & tactical recommendations for web optimization.
    • Execute on digital predictive use cases and collaborate with enterprise analytics team to ensure use of best tools and methodologies.
    • Lead support for enterprise voice of customer feedback analytics.
    • Enhance and maintain digital data library and definitions.

Minimum Qualifications

  • Bachelor's degree in Statistics, Computer Science, Marketing, Engineering or equivalent
  • 3 years or more of working experience in building predictive models.
  • Experience in Google Analytics or similar web behavior tracking tools is required.
  • Experience in R is a must, with working knowledge of connecting to multiple data sources such as Amazon Redshift, Salesforce, Google Analytics, etc.
  • Working knowledge of machine learning algorithms such as Random Forest, K-means, Apriori, Support Vector Machines, etc. (see the sketch after this list)
  • Experience in A/B testing or multivariate testing.
  • Experience in media tracking tags and pixels, UTM, and custom tracking methods.
  • Microsoft Office Excel & PPT (advanced).
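For the algorithms bullet referenced above, a minimal scikit-learn sketch of one named technique (Random Forest) with a train/test split; the synthetic data is a stand-in for real web-behavior features and a conversion label.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for web-behavior features and a conversion label.
    X, y = make_classification(n_samples=2000, n_features=15, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0
    )

    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_train, y_train)

    # AUC is a common success metric for conversion/propensity models.
    probs = model.predict_proba(X_test)[:, 1]
    print("test AUC:", round(roc_auc_score(y_test, probs), 3))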

Preferred Qualifications

  • Master's degree in statistics or equivalent.
  • Google Analytics 360 experience/certification.
  • SQL workbench, Postgres.
  • Alteryx experience is a plus.
  • Tableau experience is a plus.
  • Experience in HTML, JavaScript.
  • Experience in SAP analytics cloud or SAP desktop predictive tool is a plus
Signify Health
  • Dallas, TX

Position Overview:

Signify Health is looking for a savvy Data Engineer to join our growing team of deep learning specialists. This position would be responsible for evolving and optimizing data and data pipeline architectures, as well as optimizing data flow and collection for cross-functional teams. The Data Engineer will support software developers, database architects, data analysts, and data scientists. The ideal candidate would be self-directed, passionate about optimizing data, and comfortable supporting the data wrangling needs of multiple teams, systems and products.

If you enjoy providing expert-level IT technical services, including the direction, evaluation, selection, configuration, implementation, and integration of new and existing technologies and tools, while working closely with IT team members, data scientists, and data engineers to build our next generation of AI-driven solutions, we will give you the opportunity to grow personally and professionally in a dynamic environment. Our projects are built on cooperation and teamwork, and you will find yourself working together with other talented, passionate and dedicated team members, all working towards a shared goal.

Essential Job Responsibilities:

  • Assemble large, complex data sets that meet functional / non-functional business requirements
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing data models for greater scalability, etc.
  • Leverage Azure for extraction, transformation, and loading of data from a wide variety of data sources in support of AI/ML initiatives (a rough sketch follows this list)
  • Design and implement high performance data pipelines for distributed systems and data analytics for deep learning teams
  • Create tool-chains for analytics and data scientist team members that assist them in building and optimizing AI workflows
  • Work with data and machine learning experts to strive for greater functionality in our data and model life cycle management capabilities
  • Communicate results and ideas to key decision makers in a concise manner
  • Comply with applicable legal requirements, standards, policies and procedures including, but not limited to, Compliance requirements and HIPAA.
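As a rough sketch of the Azure ETL bullet above, using the azure-storage-blob SDK and pandas; the connection string, container, blob, and column names are all assumptions, not details from the posting.

    import io
    import os

    import pandas as pd
    from azure.storage.blob import BlobServiceClient

    # Hypothetical names; the connection string would come from a secret store.
    CONN_STR = os.environ["AZURE_STORAGE_CONNECTION_STRING"]
    CONTAINER, BLOB = "raw-data", "claims/2024-01.csv"

    service = BlobServiceClient.from_connection_string(CONN_STR)
    blob = service.get_blob_client(container=CONTAINER, blob=BLOB)

    # Extract: download the raw CSV into a DataFrame.
    frame = pd.read_csv(io.BytesIO(blob.download_blob().readall()))

    # Transform: trivial example cleanup steps.
    frame.columns = [c.strip().lower() for c in frame.columns]
    frame = frame.dropna(subset=["member_id"])  # hypothetical key column

    # Load: write a local Parquet file for a downstream pipeline stage.
    frame.to_parquet("claims_2024_01.parquet", index=False)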


Qualifications:

Education/Licensing Requirements:
  • High school diploma or equivalent.
  • Bachelor's degree in Computer Science, Electrical Engineering, Statistics, Informatics, Information Systems, or another quantitative or related field, or equivalent work experience.


Experience Requirements:
  • 5+ years of experience in a Data Engineer role.
  • Experience using the following software/tools preferred:
    • Experience with big data tools: Hadoop, Spark, Kafka, etc.
    • Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
    • Experience with AWS or Azure cloud services.
    • Experience with stream-processing systems: Storm, Spark-Streaming, etc.
    • Experience with object-oriented/object function scripting languages: Python, Java, C#, etc.
  • Strong work ethic, ability to work both collaboratively and independently without a lot of direct supervision, and solid problem-solving skills
  • Must have strong communication skills (written and verbal) and possess good one-on-one interpersonal skills.
  • Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases
  • A successful history of manipulating, processing and extracting value from large disconnected datasets.
  • Working knowledge of message queuing, stream processing, and highly scalable big data data stores.
  • 2 years of experience in data modeling, ETL development, and Data warehousing
 

Essential Skills:

  • Fluently speak, read, and write English
  • Fantastic motivator and leader of teams with a demonstrated track record of mentoring and developing staff members
  • Strong point of view on who to hire and why
  • Passion for solving complex system and data challenges and desire to thrive in a constantly innovating and changing environment
  • Excellent interpersonal skills, including teamwork and negotiation
  • Excellent leadership skills
  • Superior analytical abilities, problem solving skills, technical judgment, risk assessment abilities and negotiation skills
  • Proven ability to prioritize and multi-task
  • Advanced skills in MS Office

Essential Values:

  • In Leadership: Do what's right, even if it's tough
  • In Collaboration: Leverage our collective genius, be a team
  • In Transparency: Be real
  • In Accountability: Recognize that if it is to be, it's up to me
  • In Passion: Show commitment in heart and mind
  • In Advocacy: Earn trust and business
  • In Quality: Ensure what we do, we do well
Working Conditions:
  • Fast-paced environment
  • Requires working at a desk and use of a telephone and computer
  • Normal sight and hearing ability
  • Use office equipment and machinery effectively
  • Ability to ambulate to various parts of the building
  • Ability to bend, stoop
  • Work effectively with frequent interruptions
  • May require occasional overtime to meet project deadlines
  • Lifting requirements of
HelloFresh US
  • New York, NY

HelloFresh is hiring a Data Scientist to join our Supply Chain Analytics Team! In this exciting role, you will develop cutting edge insights using a wealth of data about our suppliers, ingredients, operations, and customers to improve the customer experience, drive operational efficiencies and build new supply chain capabilities. To succeed in this role, you’ll need to have a genuine interest in using data and analytic techniques to solve real business challenges, and a keen interest to make a big impact on a fast-growing organization.


You will...



  • Own the development and deployment of quantitative models to make routine and strategic operational decisions to plan the fulfillment of orders and identify the supply chain capabilities we need to build to continue succeeding in the business

  • Solve complex optimization problems with linear programming techniques (a minimal sketch follows this list)

  • Collaborate across operational functions (e.g. supply chain planning, logistics, procurement, production, etc) to identify and prioritize projects

  • Communicate results and recommendations to stakeholders in a business oriented manner with clear guidelines which can be implemented across functions in the supply chain

  • Work with complex datasets across various platforms to perform descriptive, prescriptive, predictive, and exploratory analyses
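To ground the linear-programming bullet above: a minimal scipy sketch of the kind of fulfillment allocation such techniques solve; the two fulfillment centers, costs, capacities, and demands are all made up.

    from scipy.optimize import linprog

    # Decision variables: units shipped center -> region, in the order
    # [c1->r1, c1->r2, c2->r1, c2->r2]; all numbers are hypothetical.
    cost = [4, 6, 5, 3]                      # per-unit shipping cost

    A_eq = [[1, 0, 1, 0],                    # region 1 demand met exactly
            [0, 1, 0, 1]]                    # region 2 demand met exactly
    b_eq = [80, 60]

    A_ub = [[1, 1, 0, 0],                    # center 1 capacity
            [0, 0, 1, 1]]                    # center 2 capacity
    b_ub = [100, 70]

    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq)
    print("optimal cost:", res.fun)
    print("shipment plan:", res.x)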


At a minimum, you have...



  • Advanced degree in Statistics, Economics, Applied Mathematics, Computer Science, Data Science, Engineering or a related field

  • 2 - 5 years’ experience delivering analytical solutions to complex business problems

  • Knowledge of linear programming optimization techniques (familiarity with software like CPLEX, AMPL, etc is a plus)

  • Fluency in managing and analyzing large data sets with advanced tools, such as R, Python, etc.

  • Experience extracting and transforming data from structured databases such as MySQL, PostgreSQL, etc.


You are...



  • Results-oriented - You love transforming data into meaningful outcomes

  • Gritty - When you encounter obstacles you find solutions, not excuses

  • Intellectually curious – You love to understand why things are the way they are, how things work, and challenge the status quo

  • A team player – You favor team victories over individual success

  • A structured problem solver – You possess strong organizational skills and consistently demonstrate a methodical approach to all your work

  • Agile – You thrive in fast-paced and dynamic environments and are comfortable working autonomously

  • A critical thinker – You use logic to identify opportunities, evaluate alternatives, and synthesize and present critical information to solve complex problems



Our team is diverse, high-performing and international, helping us to create a truly inspiring work environment in which you will thrive!


It is the policy of HelloFresh not to discriminate against any employee or applicant for employment because of race, color, religion, sex, sexual orientation, gender identity, national origin, age, marital status, genetic information, disability or because he or she is a protected veteran.

Computer Staff
  • Fort Worth, TX

We have been retained by our client, located in Fort Worth, Texas (south Ft Worth area), to deliver a Risk Modeler on a regular full-time basis. We prefer SAS experience but are interviewing candidates with R, SPSS, WPS, MATLAB or similar statistical package experience if the candidate has experience in the financial loan credit risk analysis industry. Enjoy all the resources of a big company with none of the problems that small companies have. This company has doubled in size in 3 years. We have a keen interest in finding a business-minded statistical modeling candidate with some credit risk experience to build statistical models within the marketing and direct mail areas of financial services, lending, and loans. We are seeking a candidate with statistical modeling and data analysis skills who is interested in creating better ways to solve problems in order to increase loan originations, decrease loan defaults, and more. Our client is in business to find prospective borrowers, originate loans, provide loans, service loans, process loans and collect loan payments. The team works with third-party data vendors, credit reporting agencies and data service providers on data augmentation, address standardization, fraud detection, decision sciences, and analytics, and this position includes the creation of statistical models. They support one of the largest, if not the largest, decision management profiles in the US.


We require experience with statistical analysis tools such as SAS, MATLAB, R, WPS, SPSS, or Python if used for statistical analysis. This is a statistical modeling, risk modeling, model building, decision science, data analysis and statistical analysis type of role, requiring SQL and/or SQL Server experience and critical thinking skills to solve problems. We prefer candidates with experience in data analysis, SQL queries, joins (left, inner, outer, right), and reporting from data warehouses with tools such as Tableau, Cognos, Looker, or Business Objects. We prefer candidates with financial and loan experience, especially knowledge of loan originations, borrower profiles or demographics, modeling loan defaults, and statistical analysis, i.e. Gini coefficients and the K-S (Kolmogorov-Smirnov) test for credit scoring and default prediction and modeling (a minimal sketch of the K-S computation appears after the next paragraph).


However, critical thinking, statistical modeling, and math/statistics skills are the primary requirements for this very interesting and important role, which includes growing your skills within this small risk/modeling team. Take on challenges in the creation and use of statistical models. There is no use for Hadoop or any NoSQL databases in this position; this is not a "big data" type of position, and there is no machine learning or artificial intelligence needed in this role. Your role is to create and use statistical models: create statistical models for direct mail in the financial lending space to reach the right customers with the right profiles / demographics / credit ratings, etc. Take credit risk, credit analysis, and loan data and build a new model, validate the existing model, recalibrate it, or rebuild it completely. The models are focused on delivering answers to questions or solutions to problems within these areas of financial loan lending: risk analysis, credit analysis, direct marketing, direct mail, and defaults. Expect logistic regression in SAS or Knowledge Studio, and some light use of Looker as the B.I. tool on top of SQL Server data. Deliver solutions or ways for this business to make improvements in these areas and help the business be more profitable. Seek answers to questions; seek solutions to problems; create models; dig into the data; explore and find opportunities to improve the business. You will be expected to work within the boundaries of defaults and loan values and help drive the business with ideas to get better models in place, or explore data sources to get better models in place. Use critical thinking to solve problems.
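Since the posting leans on the K-S test for credit scoring, here is a minimal scipy sketch of the two-sample Kolmogorov-Smirnov statistic on synthetic model scores for "good" and "bad" loans; in credit scoring, a larger KS means better separation between the two groups. The score distributions are invented for illustration.

    import numpy as np
    from scipy.stats import ks_2samp

    rng = np.random.default_rng(0)

    # Synthetic model scores: defaulters ("bads") skew lower than "goods".
    good_scores = rng.beta(5, 2, size=5000)  # non-defaulting borrowers
    bad_scores = rng.beta(2, 5, size=800)    # defaulting borrowers

    # KS statistic = maximum distance between the two score CDFs.
    ks_stat, p_value = ks_2samp(good_scores, bad_scores)
    print(f"KS = {ks_stat:.3f} (p = {p_value:.2e})")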


Answer questions or solve problems such as:

What are the statistical models needed to produce the answers to solve risk analysis and credit analysis problems?

Which customer profiles have the best demographics or credit risk for loans, to target with direct mail marketing pieces?

Why are loan defaults increasing or decreasing? What is impacting the increase or decrease of loan defaults?  



Required Skills

Bachelor's degree in Statistics, Finance, Economics, Management Information Systems, Math, Quantitative Business Analysis, Analytics, or another related math, science, or finance degree. Some loan/lending business domain work experience.

Master's degree preferred, but not required.

Critical thinking skills.

Must have SQL skills (any database: SQL Server, MS Access, Oracle, PostgreSQL) and the ability to write queries and joins (inner, left, right, outer). SQL Server is highly preferred.

Experience with any statistical analysis systems/packages, including statistical modeling experience and excellent math skills: SAS, MATLAB, R, WPS, SPSS, or Python if used for statistical analysis. Must have significant statistical modeling skills and experience.



Preferred Skills:
Loan credit analysis highly preferred. SAS highly preferred.
Experience with Tableau, Cognos, Business Objects, Looker or similar data warehouse slicing-and-dicing and reporting tools; creating reports from data warehouse data, or data warehouse reporting. SQL Server SSAS, but only to pull reports. Direct marketing, direct mail marketing, and loan/lending to somewhat higher-risk borrowers.



Employment Type:   Regular Full-Time

Salary Range: $85,000-130,000 / year

Benefits: health, medical, dental, and vision coverage costs the employee only about $100 per month.
401k with 4% matching after 1 year, bonus structure, paid vacation, paid holidays, paid sick days.

Relocation assistance is an option that can be provided, for a very well qualified candidate. Local candidates are preferred.

Location: Fort Worth, Texas
(area south of downtown Fort Worth, Texas)

Immigration: US citizens and those authorized to work in the US are encouraged to apply. We are unable to sponsor H1b candidates at this time.

Please apply with resume (MS Word format preferred), and also Apply with your Resume or apply with your Linked In Profile via the buttons on the bottom of this Job Posting page:  

http://www.computerstaff.com/?jobIdDescription=314  


Please call 817-424-1411 or send a text to 817-601-7238 to inquire or to follow up on your resume application. We recommend you call to leave a message, or at least send a text with your name. Thank you for your attention and efforts.

Mercedes-Benz USA
  • Atlanta, GA

Job Overview

Mercedes-Benz USA is recruiting a Big Data Architect, a newly created position within the Information Technology Infrastructure Department. This position is responsible for refining and creating the next step in technology for our organization. In this role you will act as the contact person and agile enabler for all questions regarding new IT infrastructure and services in the context of Big Data solutions.

Responsibilities

  • Leverage sophisticated Big Data technologies into current and future business applications

  • Lead infrastructure projects for the implementation of new Big Data solutions

  • Design and implement modern, scalable data center architectures (on premise, hybrid or cloud) that meet the requirements of our business partners

  • Ensure the architecture is optimized for large dataset acquisition, analysis, storage, cleansing, transformation and reclamation

  • Create the requirements analysis, the platform selection and the design of the technical architecture

  • Develop IT infrastructure roadmaps and implement strategies around data science initiatives

  • Lead the research and evaluation of emerging technologies, industry and market trends to assist in project development and operational support activities

  • Work closely together with the application teams to exceed our business partners' expectations

Qualifications 

Education

Bachelor's Degree (accredited school) with emphasis in:

Computer/Information Science

Information Technology

Engineering

Management Information Systems (MIS)

Must have 5-7 years of experience in the following:

  • Architecture, design, implementation, operation and maintenance of Big Data solutions

  • Hands-on experience with major Big Data technologies and frameworks including Hadoop, MapReduce, Pig, Hive, HBase, Oozie, Mahout, Flume, ZooKeeper, MongoDB, and Cassandra.

  • Experience with Big Data solutions deployed in large cloud computing infrastructures such as AWS, GCE and Azure

  • Strong knowledge of programming and scripting languages such as Java, PHP, Ruby, and Python, as well as Linux

  • Big Data query tools such as Pig, Hive and Impala

  • Project Management Skills:

  • Ability to develop plans/projects from conceptualization to implementation

  • Ability to organize workflow and direct tasks as well as document milestones and ROIs and resolve problems

Proven experience with the following:

  • Open source software such as Hadoop and Red Hat

  • Shell scripting

  • Servers, storage, networking, and data archival/backup solutions

  • Industry knowledge and experience in areas such as Software Defined Networking (SDN), IT infrastructure and systems security, and cloud or network systems management

Additional Skills
Focus on problem resolution and troubleshooting
Knowledge of hardware capabilities and software interfaces and applications
Ability to produce quality digital assets/products
 
EEO Statement
Mercedes-Benz USA is committed to fostering an inclusive environment that appreciates and leverages the diversity of our team. We provide equal employment opportunity (EEO) to all qualified applicants and employees without regard to race, color, ethnicity, gender, age, national origin, religion, marital status, veteran status, physical or other disability, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local law.

Apporchid Inc
  • Philadelphia, PA

Java Technical Lead

Job description:

We are looking for an experienced Java/J2EE technical lead with proven expertise in implementing and managing enterprise-scale Hadoop architectures and environments. You will set up a highly available App Orchid Java product platform in AWS with industry-standard security frameworks, and collaborate with application developers on support, management, enhancements and tactical roadmaps to support large and highly visible product environment deployments.

Roles and Responsibilities:

  • Work with Solution Architects and Business leaders to understand the architectural roadmaps that support and fulfill business strategy.
  • Lead and design custom solutions on our App Orchid Product Platform
  • Act as a Tech Lead and Engineer mentoring colleagues with less experience
  • Collaborate with a high-performing, forward-focused team, Product Owner(s) and Business stakeholders
  • Enable and influence the timely and successful delivery of business data capabilities and/or technology objectives
  • Opportunity to expand your communication, analytical, interpersonal, and organization capabilities
  • Experience working in a fast-paced environment, driving business outcomes and leveraging Agile to its fullest
  • Enhance your entrepreneurial mindset, your network, and your ability to influence outcomes
  • A supportive environment that fosters a can-do attitude and opportunity for growth and advancement based on consistent demonstrated performance.
  • Expertise in system administration and programming skills; storage, performance tuning and capacity management of Big Data.
  • Good understanding of Hadoop ecosystem components such as HDFS, YARN, MapReduce, HBase, Spark, and Hive.
  • Experience in setup of SSL and integration with Active Directory.
  • Good exposure to CI/CD
  • Oversee technical deliverables for invest and maintenance projects through the software development life cycle, including validating the completeness of estimates, quality and accuracy of technical designs, build and implementation.
  • Proactively address technical issues and risks that could impact project schedule and/or budget
  • Work closely with stakeholders to design and document automation solutions that align with the business needs and also consistent with the architectural vision.
  • Facilitate continuity between Sourcing Partners, other IT Groups and Enterprise Architecture.
  • Work closely with the architecture team to ensure that the technical solution designs and implementation are consistent with the architectural vision, as well as to drive the business forward through technical innovation, using newly identified and leading technologies.
  • Own and drive adoption of DevOps tools and best practices (including conducting (automated) code reviews, reducing/eliminating technical debt, and delivering vulnerability free code) across the application portfolio.

Qualifications

  • Bachelor's degree or equivalent work experience
  • Eight to ten years (or more) of experience as a Java/J2EE technical lead or senior developer in a large production environment.
  • A deep understanding of Big Data, Java, Elasticsearch, Kibana, PostgreSQL, TestNG, Gradle
  • Good verbal and written communication skills
  • Demonstrated experience in working on large projects or small teams
  • Working knowledge of Red Hat Linux and Windows operating systems
  • Expert knowledge in Java programming language, SQL and microservices  
  • Good understanding of Cloud technologies, especially AWS stack
  • At least 8 years of experience with developing and implementing applications

Desired Skills and Experience

  • Proficient with Java development
  • Ability to quickly learn new technologies and enable/train other analysts
  • Ability to work independently as well as in a team environment on moderate to highly complex issues
  • High technical aptitude and demonstrated progression of technical skills - continuous improvement
  • Ability to automate software/application installations and configurations hosted on Linux servers.
GE Capital
  • Ann Arbor, MI
  • ***Please Note: This Role is in Van Buren, MI (30 minutes drive from Ann Arbor)


Role Summary

Serve as analytics & visualization developer to build innovative solutions to support a broad range of analysis and outcomes. Partner with teams to create wing-to-wing transactional views, trends and anomalies leveraging GE's data lake. Look for new ways to harness the data we have for insights and actionable outcomes. 

 
In This Role, You Will

Essential Responsibilities: 


  • Develop Spotfire reports utilizing advanced data visualization techniques and related SQL.
  • Leverage Treasury Data Lake and data virtualization technologies (Denodo) to deliver new capabilities on tablet and mobile platforms.
  • Work on an agile team using Rally to quickly prototype and iterate on ideas
  • Lead the research and evaluation of emerging technology, industry and market trends to assist in project development and/or operational support activities
  • Technical analysis working with PostgreSQL & AWS native services
  • Partner with business teams to define requirements & user stories
  • Building and implementing analytical models with R and Python


Qualifications/Requirements
  • Bachelor's degree from an accredited university or college in Computer Science or Information Systems
  • One or more years of experience in the design & development of data-centric applications leveraging data from enterprise data warehouses.


Eligibility Requirements:

  • Legal authorization to work in the U.S. is required. We will not sponsor individuals for employment visas, now or in the future, for this job


Technical Expertise

Desired Characteristics:
  • 1+ years' experience with BI visualization and/or reporting tools (expert-level knowledge of modern BI platforms like Spotfire, Qlik, Tableau, etc.); a data and reporting guru.
  • Experience with web technologies such as ASP, HTML, and CSS; integration of same with data visualization tools (e.g. extensions) a plus
  • Experience with scripting languages like JavaScript, Python, etc.
  • Exposure to advanced analytic & data science applications
  • Excellent BI application development skills, as demonstrated by having led, designed and implemented successful web and mobile projects
  • Ability to clearly articulate creative ideas to senior leaders
  • Ability to guide and direct technical team members through the SDLC
  • Ability to hit tight deadlines and work under pressure
  • SAP and/or Oracle ERP systems exposure a plus
  • Passion for learning new technologies and eagerness to collaborate with other creative minds
  • Strong desire for exploring, evaluating and understanding new technologies
Strong Analytics
  • Chicago, IL
  • Salary: $80k - 125k

Strong Analytics is seeking a data scientist to join our team in developing machine learning pipelines, building  statistical models, and generally helping our clients discover value in their data.


At Strong, we pride ourselves not only in building the right solutions for our clients through research and development, but in implementing and scaling up those solutions through strong engineering. This role thus requires a deep expertise in applying statistics and machine learning to real-world problems where data must be gathered, transformed, cleaned, and integrated into some larger architecture.


We offer a comprehensive compensation package, including:



  • Competitive salary

  • Profit sharing or equity, based on experience

  • Health insurance

  • Four weeks paid vacation

  • Work-from-Home Wednesdays



Requirements



Candidates will be evaluated based on their experience in the following areas (though no one is expected to be an expert in each of these):



  • Statistical modeling and hypothesis testing

  • Designing, training, and validating results from a breadth of machine learning algorithms

  • Writing clean, efficient SQL

  • Integrating with various RDBMS (e.g., Postgres, MySQL) and distributed data stores (e.g., Hadoop)

  • Building Python applications

  • Deploying applications into cloud-based infrastructures (e.g., AWS)

  • Building deep neural networks with modern tools, such as PyTorch or Tensorflow (a minimal sketch follows this list)

  • Creating and interacting with RESTful APIs

  • Managing *nix servers

  • Writing unit tests

  • Collaborating via Git
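For the deep-learning bullet in the list above, a minimal PyTorch sketch: a small feed-forward network trained for a few steps on synthetic regression data. Shapes and hyperparameters are arbitrary.

    import torch
    from torch import nn

    torch.manual_seed(0)

    # Synthetic regression data: 256 samples, 10 features.
    X = torch.randn(256, 10)
    y = X @ torch.randn(10, 1) + 0.1 * torch.randn(256, 1)

    # A small feed-forward network.
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.MSELoss()

    for step in range(200):
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        optimizer.step()

    print("final MSE:", loss.item())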


Applicants with a PhD in a quantitative field are preferred; however, all applicants will be considered based on their experience and demonstrated skill/aptitude.


Applicants should have the ability to travel infrequently (<5% of your time) for team meetings, conferences, and occasional client site visits.

Giesecke+Devrient Currency Technology GmbH
  • Munich, Germany

Work with us on a future that moves. For our Currency Management Solutions division, we are looking for you as a



Big Data Engineer (m/f/d)



Your responsibilities:




  • You are responsible for the design, implementation and operation of effective data processing architectures within our innovative microservices- and cloud-based data analytics, IIoT and digital solutions

  • Ingestion, integration, organization, batch and stream processing, and lifecycle management of data

  • Ensuring the quality, integrity and privacy of data

  • Setup, monitoring and tuning of the Hadoop clusters and databases ("you build it, you run it")

  • Close collaboration in agile teams with data scientists, development teams and product owners on data modeling, data analysis and technology consulting



Your profile:




  • Degree (Master's; university or university of applied sciences) in computer science or a comparable field

  • Very good knowledge of data ingestion/integration (Flume, Sqoop, NiFi), data storage (PostgreSQL, MongoDB), distributed storage (Hadoop, Cloudera), messaging (Kafka, MQTT), data processing (Spark, Scala) and scheduling (Oozie, Pig)

  • Practical experience in developing and operating large-scale data processing pipelines in scalable microservice/REST architectures

  • Experience with cloud environments such as Microsoft Azure or AWS desirable

  • Very good written and spoken German and English






We look forward to receiving your online application at www.gi-de.com/karriere.




Giesecke+Devrient Currency Technology GmbH · Prinzregentenstraße 159 · 81677 München

Giesecke+Devrient Currency Technology GmbH
  • Munich, Germany

Work with us on a future that moves. For our Currency Management Solutions division, we are looking for you as a



Java Software Architect (m/f/d)



Your responsibilities:




  • You design and take ownership of implementing complex IIoT machine operations, data analytics, and digital solutions for us and for our customers' cash centers. Cash centers are fully automated, turnkey facilities developed by us; they provide services ranging from production through sorting and fitness checking to the destruction of banknotes

  • You develop scalable, forward-looking architectures suitable for both cloud-based and on-premise deployment

  • You are thoroughly familiar with the necessary technologies and methods, and you continuously develop them further within the team

  • With your experience in interdisciplinary, distributed collaboration and in agile practices such as Scrum and Lean Startup, you ensure the product's success across its entire lifecycle ("you build it, you run it")

  • You communicate and present your solution clearly, including the technical roadmap and vision

  • Together with a team of architects and product owners, you advance the overall solution, securing our innovation leadership and our customers' success



Your profile:




  • Degree (Master's; university or university of applied sciences) in computer science or a comparable field

  • Proven experience as a software architect (m/f/d) of scalable solutions, with deep knowledge of modern design patterns (such as microservices, REST, web APIs, IIoT, digital twins, cloud, data analytics)

  • Hands-on experience with data storage technologies (NoSQL, SQL, caching, Hadoop) and initial experience with cloud environments (such as Microsoft Azure or AWS)

  • Experience with technologies such as Linux, Java, Python, Spring Boot, Traefik, Docker, Kubernetes, Hazelcast, Redis, Kafka, MQTT, PostgreSQL, MongoDB, ElasticSearch, Prometheus, Kibana, Angular, Vue, OAuth, and DevOps

  • Very good written and spoken German and English






We look forward to receiving your online application at www.gi-de.com/karriere.




Giesecke+Devrient Currency Technology GmbH · Prinzregentenstraße 159 · 81677 München

InnoGames
  • Hamburg, Germany

Software Engineer - Big Data


Development · Hamburg, Germany · Full-Time


Join our agile, cross-functional development team to push forward our Big Data ambitions, using the Hadoop ecosystem and technologies like MapReduce, Flink, Hive and Kafka. You will design and implement systems in a distributed environment to prepare, provide and analyze huge amounts of data.


Your mission:



  • Real-time and near-real-time processing of our more than 1,500,000,000 daily game events (see the consumer sketch after this list)

  • Preparing very large amounts of data (ETL) as a basis for further processing in other systems (e.g. Business Intelligence, CRM) using a range of technologies, including SQL (Hive), Flink and Kafka

  • Providing high-availability, centralized systems and services (REST APIs) for our games as well as solutions for subject-specific analysis requirements (e.g. multivariate testing), using technologies like Dropwizard or the Play framework.
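
A minimal sketch of a near-real-time event consumer, assuming the kafka-python client; the topic name, broker address, and event schema are hypothetical:

    import json
    from collections import Counter
    from kafka import KafkaConsumer

    # Subscribe to a hypothetical game-event topic
    consumer = KafkaConsumer(
        "game-events",
        bootstrap_servers=["broker1:9092"],
        auto_offset_reset="latest",
        value_deserializer=lambda m: json.loads(m.decode("utf-8")),
    )

    # Tally event types as they arrive; a real pipeline would hand off
    # to Flink jobs or Hive tables for further processing
    counts = Counter()
    for message in consumer:
        counts[message.value.get("type", "unknown")] += 1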



Your profile:



  • You have completed training as an IT specialist, hold a university degree in computer science, or have an equivalent qualification

  • Experience in object-oriented programming in Java and in at least one other scripting or programming language (e.g. Bash, PHP, Scala, Go), ideally in an agile environment

  • You enjoy working with large amounts of data in distributed systems and you have very good knowledge of SQL

  • You know your way around the command line of UNIX operating systems

  • Ideally, you develop in a test-driven manner and use the advantages of a continuous integration server (e.g. Jenkins) for deployment

  • As the perfect candidate, you have already gained some experience with Big Data technologies, especially with the Hadoop ecosystem

  • You like to share a good laugh with your colleagues

  • Good English language skills complete your profile



Why join us?



  • Shape the success story of InnoGames with a great team of driven experts in an international culture

  • You will work in a multicultural Kanban-based team with daily stand-ups and regular retrospectives

  • Work in a relaxed but solution-oriented and productive environment with clear goals and a focus on professional development

  • A great company culture: we strive for long-term, sustainable success, and we give and accept feedback to help ourselves and others improve

  • Professional education through developer conferences, unlimited free textbooks and frequent specialist presentations from your colleagues

  • We have regular team events (e.g. curling, cooking, paintball), BBQ together on our roof terrace and have a team lunch each Wednesday

  • You can set up your workspace as you wish: Mac or Linux, two or more monitors, a free choice of IDE as well as other software

  • We pay a share of your local public transportation ticket and even contribute to your company retirement plan

  • Exceptional benefits ranging from flawless relocation support to company gym, smartphone or tablet of your own choice for personal use, roof terrace with BBQ and much more

  • We have fun at work!



Excited to start your journey with InnoGames and join our dynamic team as a Software Engineer – Big Data? We look forward to receiving your application as well as your salary expectations and earliest possible start date through our online application form. Isabella Dettlaff would be happy to answer any questions you may have.

InnoGames, based in Hamburg, is one of the leading developers and publishers of online games with more than 200 million registered players around the world. Currently, more than 400 people from 30 nations are working in the Hamburg-based headquarters. We have been characterized by dynamic growth ever since the company was founded in 2007. In order to further expand our success and to realize new projects, we are constantly looking for young talents, experienced professionals, and creative thinkers.

BigCommerce
  • Austin, TX

BigCommerce is disrupting the e-commerce industry as the SaaS leader for fast-growing, mid-market businesses. We enable our customers to build intuitive and engaging stores to support every stage of their growth.

Be a leader on our Ecosystem Team that powers our billing, partner, & identity platforms. You'll work with team members to extend our products and integrate with a broad array of external services. BigCommerce offers a heavily collaborative environment that helps you expand your skill set and take ideas from inception to delivery. This role requires balancing an aggressive product roadmap, improving the performance and stability of our system, introducing engineering best practices into the organization, and leading/mentoring other engineers.

What you'll do

  • Use Ruby, Rails, gRPC, JavaScript, RabbitMQ, Docker, Resque, MySQL, Redis, and a slew of other technologies to help power our platform
  • Help design/architect/execute the building of services for the BigCommerce platform.
  • Build highly-available, distributed systems
  • Build integrations with 3rd party SOAP/REST APIs that can span multiple code sets/services, fail gracefully, and be highly extensible (see the sketch after this list)
  • Coach team towards code that is performant, fault-tolerant, maintainable, testable, and concise
  • Drive our technical roadmap and direction of our stack
  • Encourage innovation and foster an environment of continuous improvement
  • Collaborate with stakeholders and other teams to promote communication & coordination
  • Support and coach 4-6 members of your team
  • Foster a culture that is open, positive, energized and innovative
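
The posting's stack is Ruby/Rails; as a language-neutral illustration, this Python sketch shows the "fail gracefully" integration pattern named above, using requests with bounded retries; the endpoint URL is hypothetical:

    import requests
    from requests.adapters import HTTPAdapter
    from urllib3.util.retry import Retry

    # Retry transient upstream failures with exponential backoff
    retry = Retry(total=3, backoff_factor=0.5, status_forcelist=[502, 503, 504])
    session = requests.Session()
    session.mount("https://", HTTPAdapter(max_retries=retry))

    try:
        resp = session.get("https://api.example.com/v1/invoices", timeout=5)
        resp.raise_for_status()
        data = resp.json()
    except requests.RequestException:
        # Degrade gracefully instead of crashing the caller
        data = None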


Who You Are

  • 5-7+ years of experience building systems using at least two different languages: Ruby/Rails (required), Scala, Java, PHP, Python, Node, etc. We currently work primarily in Ruby, PHP, and Scala
  • 2+ years managing, driving and retaining a high-performance team.
  • Experience building integrations with 3rd party SOAP/REST API providers that can span multiple code sets, fail gracefully, and be highly extensible.
  • Knowledge of TDD, BDD, DDD
  • Nice to Have: Experience with OAuth and/or SAML workflows and permissions. 
  • Nice to Have: DevOps experience, GCP experience, and/or Docker or other containerization technologies
  • Desire to work in a collaborative, open environment on an Agile team
  • Highly proactive and results-oriented with excellent critical thinking skills
  • Excited to learn new technologies
  • Experience with e-commerce, distributed queuing systems, and SaaS platforms is highly desirable


Diversity & Inclusion at BigCommerce


We have the opportunity to build not only a great business, but a great company, with soul. Our beliefs and commitment to diversity and inclusion are a central part of achieving that.

Our dedication to diversity and inclusion is grounded in two things: a moral belief in the dignity, value, and potential of every individual, and a practical belief that diverse, inclusive teams will create the best outcomes for our customers, partners, employees, and company. We welcome everyone to be a part of our journey.


Perks & Benefits


    • An amazing company culture that doesn't just talk values, but lives them
    • Our Think Big Program encourages and rewards employee-led innovation
    • Employees are empowered to go above and beyond their daily duties to act on ideas that help our customers and/or improve the BigCommerce platform
    • Competitive compensation packages and meaningful stock grants for every employee
    • Comprehensive health insurance coverage that starts on day one
    • A free online store to help you live out your entrepreneurial dreams
    • Time off for volunteering and employee-driven charity events