OnlyDataJobs.com

U.S. Bank
  • Minneapolis, MN

SUMMARY

The successful candidate for this Data Analyst position will be responsible for day-to-day reporting on the mobile app's voice capability. This is a business line support function that provides reporting of voice adoption and performance metrics, used in communication plans with development and data science groups to target enhancements. The successful candidate should be able to provide and communicate reporting findings to peers and senior stakeholders.

RESPONSIBILITIES

This position's job functions will include:

- Reporting/KPIs: Agile reporting dashboards; CapEx, Epic, and tactical performance measurement.

- Raw data requests: active customers and customers by age, duration, income, etc.

- Adoption/Engagement: user metric tracking, engagement frequency, volume measurement, segmentation reporting and geo-location reporting.

- Digital voice funnel insights: success rates, processing and response timing, and engagement and fallout (an illustrative sketch follows this list).

- A/B testing: front-end test design with technical partners and test performance measurement, covering quality tracking (e.g., fallout, success rates, usage rates, and response time) and conversion (tracking to goal).

- Satisfaction tracking: CSAT, sentiment and leading satisfaction indicator identification.

- Visualization: self-service dashboard development and proven ability to develop in Excel and Tableau.
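
For illustration only, a minimal pandas sketch of the kind of adoption and funnel reporting listed above; the event extract, column names, and flags are assumptions, not actual bank data:

import pandas as pd

# Hypothetical event-level extract: one row per voice interaction.
# Assumed columns: customer_id, event_date, intent_recognized (0/1), completed (0/1), response_ms
events = pd.read_csv("voice_events.csv", parse_dates=["event_date"])

# Adoption: distinct customers using the voice capability each month.
monthly_adoption = (
    events.groupby(events["event_date"].dt.to_period("M"))["customer_id"]
          .nunique()
          .rename("active_voice_users")
)

# Funnel: recognition rate, success rate, fallout, and response timing.
total = len(events)
funnel = pd.Series({
    "recognition_rate": events["intent_recognized"].sum() / total,
    "success_rate": events["completed"].sum() / total,
    "fallout_rate": 1 - events["completed"].sum() / total,
    "median_response_ms": events["response_ms"].median(),
})

print(monthly_adoption)
print(funnel)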

This position will help drive and contribute to the overall team success measures and goals related to:

- KPI Metrics (LOB Dashboards)

- Descriptive Statistics

- Raw Data Requests

- CapEx Planning / Forecasting

- Benchmarking

- Funnel Analysis

- Forecasting (Plan)

- Optimization (Operational)

REQUIRED

- Bachelor's degree in a quantitative field such as econometrics, computer science, engineering or applied mathematics, or equivalent work experience.

- 6+ years of statistics or analytics experience.

PREFERRED

- Working knowledge of analytics and statistical software such as SQL, R, Python, Excel, Hadoop, SAS, SPSS and others to perform analysis and interpret data.

- Mastery of Microsoft Excel.

- Experience in report development, automation, and visualization.

- Strong reporting skills with the ability to extract, collect, organize, and interpret trends or patterns in complex data sets.

- Basic understanding of data science capabilities and language.

- Demonstrated project management skills.

- Demonstrated leadership and proactivity skills.

- Effective interpersonal, verbal and written communication skills.

- Considerable knowledge of assigned business line or functional area.

- Working knowledge of customer self-service technology capabilities.

- Basic understanding of data architecture and development concepts.

Novartis Institutes for BioMedical Research
  • Cambridge, MA

20 years of untapped data are waiting for a new Principal Scientific Computing Engineer/Scientific Programmer, Imaging and Machine Learning, to unlock the next breakthrough in innovative medicines for patients in need. You will be at the forefront of life sciences, tackling incredible challenges that are curing diseases and improving patients' lives.

Your responsibilities include, but are not limited to: 
Collaborating with scientists to create, optimize, and accelerate workflows through the application of High Performance Computing (HPC) techniques. You will integrate algorithm and application development with state-of-the-art technologies to create scalable platforms that accelerate scientific research in a reproducible and standardized manner.
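
As a rough, hedged illustration of that kind of workflow acceleration (not a Novartis system; the image directory and the per-file analysis step are placeholders), an embarrassingly parallel batch using only the Python standard library:

from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

def process_image(path):
    # Placeholder for a real analysis step (segmentation, feature extraction, ...).
    # Reporting the file size keeps the sketch self-contained and runnable.
    return path.name, path.stat().st_size

def run_batch(image_dir, workers=8):
    paths = sorted(Path(image_dir).glob("*.tif"))
    # Fan the per-image work out across processes; on an HPC cluster the same
    # pattern is typically scaled further with a scheduler array job or MPI.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_image, paths))

if __name__ == "__main__":
    for name, size in run_batch("./images"):
        print(name, size)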

Key responsibilities:
• Collaborate with scientists and Research IT peers to provide consulting services around parallel algorithm development and workflow optimization for the High Performance Computing (HPC) platform.
• Teach and train the NIBR Scientific and Informatics community in areas of expertise
• Research, develop and integrate new technologies and computational approaches to enhance and accelerate scientific research.
• Establish and maintain the technical partnership with one or more scientific functions in NIBR.




Minimum Requirements


What you will bring to this role:
• BSc in computer science or a related field, or equivalent experience
• 5+ years of relevant experience, including strong competencies in data structures, algorithms, and software design
• Experience with High Performance Computing and Cloud Computing
• Demonstrated proficiency in Python, C, C++, CUDA or OpenCL
• Demonstrated proficiency in Signal Processing, Advanced Imaging and Microscopy techniques.
• Solid project management skills and process analysis skills
• Demonstration of strong collaboration skills, effective communication skills, and creative problem-solving abilities

Preferred Qualifications
• MSc degree
• Demonstrated proficiency in 2 or more advanced machine learning frameworks and their application to natural language processing, action recognition, and micro and macro tracking.
• Interest in drug discovery and knowledge of the life sciences is a strong plus
• Knowledge of Deep visualization, Deep transfer learning and Generative Adversarial Networks is a plus.
• Demonstrated proficiency in MPI in a large-scale Linux HPC environment
• Experience with CellProfiler, Matlab, ImageJ and R is a plus.

Position will be filled commensurate with experience

Accenture
  • Detroit, MI
Position: Full time
Travel: 100% travel
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation. Here you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions using machine learning, AI, big data and analytics, cloud, mobility, robotics, and the industrial internet of things. Your work will redefine the way entire industries work in every corner of the globe.
You'll be part of a team with incredible end-to-end digital transformation capabilities that shares your passion for digital technology and takes pride in making a tangible difference. If you want to contribute to an incredible array of the biggest and most complex projects in the digital space, consider a career with Accenture Digital.
Basic Qualifications
    • Bachelor's degree in Computer Science, Engineering, Technical Science or 3 years of IT/Programming experience.
    • Minimum of 2 years of expertise in designing and implementing large-scale data pipelines for data curation and analysis, operating in production environments, using Spark, PySpark, and Spark SQL with Java, Scala, or Python, on premises or in the cloud (AWS, Google, or Azure). A minimal PySpark sketch follows this list.
    • Minimum 1 year of designing and building performant data models at scale using Hadoop, NoSQL, graph, or cloud-native data stores and services.
    • Minimum 1 year of designing and building secured Big Data ETL pipelines using Talend or Informatica Big Data Editions for data curation and analysis in large-scale, production-deployed solutions.
    • Minimum 6 months of experience designing and building data models to support large scale BI, Analytics and AI solutions for Big Data.
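
For illustration only, a minimal sketch of the kind of Spark-based curation pipeline described in the basic qualifications; the paths, schema, and aggregation rules below are invented for the example and are not part of this posting:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curation-sketch").getOrCreate()

# Hypothetical raw-to-curated flow: deduplicate, clean, and aggregate landing-zone data.
raw = spark.read.json("s3://example-bucket/raw/transactions/")

curated = (
    raw.dropDuplicates(["transaction_id"])
       .filter(F.col("amount").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("customer_id", "event_date")
       .agg(F.sum("amount").alias("daily_spend"),
            F.count("*").alias("txn_count"))
)

curated.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_spend/")
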
Preferred Skills
    • Minimum 6 months of expertise in implementation with Databricks.
    • Experience in machine learning using Python (sklearn), SparkML, H2O, and/or SageMaker.
    • Knowledge of Deep Learning (CNN, RNN, ANN) using TensorFlow.
    • Knowledge of automated machine learning tools (H2O, DataRobot, Google AutoML).
    • Minimum 2 years designing and implementing large-scale data warehousing and analytics solutions working with RDBMSs (e.g., Oracle, Teradata, DB2, Netezza, SAS), with an understanding of the challenges and limitations of these traditional solutions.
    • Minimum 1 year of experience implementing SQL-on-Hadoop solutions using tools like Presto, AtScale, Jethro, and others.
    • Minimum 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for big data platforms, on premises or on AWS, Google, and Azure clouds.
    • Minimum 1 year of re-architecting and rationalizing traditional data warehouses with Hadoop, Spark, or NoSQL technologies on premises, or transitioning them to AWS or Google clouds.
    • Experience implementing data preparation technologies such as Paxata, Trifacta, or Tamr for enabling self-service solutions.
    • Minimum 1 year of building business data catalogs or data marketplaces on top of a hybrid data platform containing Big Data technologies (e.g., Alation, Informatica, or custom portals).
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H-1B visa, F-1 visa (OPT), TN visa, or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.
Accenture
  • Minneapolis, MN
Position: Full time
Travel: 100% travel
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation. Here you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions using machine learning, AI, big data and analytics, cloud, mobility, robotics, and the industrial internet of things. Your work will redefine the way entire industries work in every corner of the globe.
You'll be part of a team with incredible end-to-end digital transformation capabilities that shares your passion for digital technology and takes pride in making a tangible difference. If you want to contribute to an incredible array of the biggest and most complex projects in the digital space, consider a career with Accenture Digital.
Basic Qualifications
    • Bachelor's degree in Computer Science, Engineering, Technical Science or 3 years of IT/Programming experience.
    • Minimum of 2 years of expertise in designing and implementing large-scale data pipelines for data curation and analysis, operating in production environments, using Spark, PySpark, and Spark SQL with Java, Scala, or Python, on premises or in the cloud (AWS, Google, or Azure).
    • Minimum 1 year of designing and building performant data models at scale using Hadoop, NoSQL, graph, or cloud-native data stores and services.
    • Minimum 1 year of designing and building secured Big Data ETL pipelines using Talend or Informatica Big Data Editions for data curation and analysis in large-scale, production-deployed solutions.
    • Minimum 6 months of experience designing and building data models to support large scale BI, Analytics and AI solutions for Big Data.
Preferred Skills
    • Minimum 6 months of expertise in implementation with Databricks.
    • Experience in machine learning using Python (sklearn), SparkML, H2O, and/or SageMaker.
    • Knowledge of Deep Learning (CNN, RNN, ANN) using TensorFlow.
    • Knowledge of automated machine learning tools (H2O, DataRobot, Google AutoML).
    • Minimum 2 years designing and implementing large-scale data warehousing and analytics solutions working with RDBMSs (e.g., Oracle, Teradata, DB2, Netezza, SAS), with an understanding of the challenges and limitations of these traditional solutions.
    • Minimum 1 year of experience implementing SQL-on-Hadoop solutions using tools like Presto, AtScale, Jethro, and others.
    • Minimum 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for big data platforms, on premises or on AWS, Google, and Azure clouds.
    • Minimum 1 year of re-architecting and rationalizing traditional data warehouses with Hadoop, Spark, or NoSQL technologies on premises, or transitioning them to AWS or Google clouds.
    • Experience implementing data preparation technologies such as Paxata, Trifacta, or Tamr for enabling self-service solutions.
    • Minimum 1 year of building business data catalogs or data marketplaces on top of a hybrid data platform containing Big Data technologies (e.g., Alation, Informatica, or custom portals).
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H-1B visa, F-1 visa (OPT), TN visa, or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.
Accenture
  • Atlanta, GA
Position: Full time
Travel: 100% travel
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation. Here you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions using machine learning, AI, big data and analytics, cloud, mobility, robotics, and the industrial internet of things. Your work will redefine the way entire industries work in every corner of the globe.
You'll be part of a team with incredible end-to-end digital transformation capabilities that shares your passion for digital technology and takes pride in making a tangible difference. If you want to contribute to an incredible array of the biggest and most complex projects in the digital space, consider a career with Accenture Digital.
Basic Qualifications
    • Bachelor's degree in Computer Science, Engineering, Technical Science or 3 years of IT/Programming experience.
    • Minimum of 2 years of expertise in designing and implementing large-scale data pipelines for data curation and analysis, operating in production environments, using Spark, PySpark, and Spark SQL with Java, Scala, or Python, on premises or in the cloud (AWS, Google, or Azure).
    • Minimum 1 year of designing and building performant data models at scale using Hadoop, NoSQL, graph, or cloud-native data stores and services.
    • Minimum 1 year of designing and building secured Big Data ETL pipelines using Talend or Informatica Big Data Editions for data curation and analysis in large-scale, production-deployed solutions.
    • Minimum 6 months of experience designing and building data models to support large scale BI, Analytics and AI solutions for Big Data.
Preferred Skills
    • Minimum 6 months of expertise in implementation with Databricks.
    • Experience in machine learning using Python (sklearn), SparkML, H2O, and/or SageMaker.
    • Knowledge of Deep Learning (CNN, RNN, ANN) using TensorFlow.
    • Knowledge of automated machine learning tools (H2O, DataRobot, Google AutoML).
    • Minimum 2 years designing and implementing large-scale data warehousing and analytics solutions working with RDBMSs (e.g., Oracle, Teradata, DB2, Netezza, SAS), with an understanding of the challenges and limitations of these traditional solutions.
    • Minimum 1 year of experience implementing SQL-on-Hadoop solutions using tools like Presto, AtScale, Jethro, and others.
    • Minimum 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for big data platforms, on premises or on AWS, Google, and Azure clouds.
    • Minimum 1 year of re-architecting and rationalizing traditional data warehouses with Hadoop, Spark, or NoSQL technologies on premises, or transitioning them to AWS or Google clouds.
    • Experience implementing data preparation technologies such as Paxata, Trifacta, or Tamr for enabling self-service solutions.
    • Minimum 1 year of building business data catalogs or data marketplaces on top of a hybrid data platform containing Big Data technologies (e.g., Alation, Informatica, or custom portals).
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H-1B visa, F-1 visa (OPT), TN visa, or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.
Accenture
  • Philadelphia, PA
Position: Full time
Travel: 100% travel
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation. Here you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions using machine learning, AI, big data and analytics, cloud, mobility, robotics, and the industrial internet of things. Your work will redefine the way entire industries work in every corner of the globe.
You'll be part of a team with incredible end-to-end digital transformation capabilities that shares your passion for digital technology and takes pride in making a tangible difference. If you want to contribute to an incredible array of the biggest and most complex projects in the digital space, consider a career with Accenture Digital.
Basic Qualifications
    • Bachelor's degree in Computer Science, Engineering, Technical Science or 3 years of IT/Programming experience.
    • Minimum of 2 years of expertise in designing and implementing large-scale data pipelines for data curation and analysis, operating in production environments, using Spark, PySpark, and Spark SQL with Java, Scala, or Python, on premises or in the cloud (AWS, Google, or Azure).
    • Minimum 1 year of designing and building performant data models at scale using Hadoop, NoSQL, graph, or cloud-native data stores and services.
    • Minimum 1 year of designing and building secured Big Data ETL pipelines using Talend or Informatica Big Data Editions for data curation and analysis in large-scale, production-deployed solutions.
    • Minimum 6 months of experience designing and building data models to support large scale BI, Analytics and AI solutions for Big Data.
Preferred Skills
    • Minimum 6 months of expertise in implementation with Databricks.
    • Experience in machine learning using Python (sklearn), SparkML, H2O, and/or SageMaker.
    • Knowledge of Deep Learning (CNN, RNN, ANN) using TensorFlow.
    • Knowledge of automated machine learning tools (H2O, DataRobot, Google AutoML).
    • Minimum 2 years designing and implementing large-scale data warehousing and analytics solutions working with RDBMSs (e.g., Oracle, Teradata, DB2, Netezza, SAS), with an understanding of the challenges and limitations of these traditional solutions.
    • Minimum 1 year of experience implementing SQL-on-Hadoop solutions using tools like Presto, AtScale, Jethro, and others.
    • Minimum 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for big data platforms, on premises or on AWS, Google, and Azure clouds.
    • Minimum 1 year of re-architecting and rationalizing traditional data warehouses with Hadoop, Spark, or NoSQL technologies on premises, or transitioning them to AWS or Google clouds.
    • Experience implementing data preparation technologies such as Paxata, Trifacta, or Tamr for enabling self-service solutions.
    • Minimum 1 year of building business data catalogs or data marketplaces on top of a hybrid data platform containing Big Data technologies (e.g., Alation, Informatica, or custom portals).
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H-1B visa, F-1 visa (OPT), TN visa, or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.
Accenture
  • San Diego, CA
Position: Full time
Travel: 100% travel
Join Accenture Digital and leave your digital mark on the world, enhancing millions of lives through digital transformation. Here you can create the best customer experiences and be a catalyst for first-to-market digital products and solutions using machine learning, AI, big data and analytics, cloud, mobility, robotics, and the industrial internet of things. Your work will redefine the way entire industries work in every corner of the globe.
You'll be part of a team with incredible end-to-end digital transformation capabilities that shares your passion for digital technology and takes pride in making a tangible difference. If you want to contribute to an incredible array of the biggest and most complex projects in the digital space, consider a career with Accenture Digital.
Basic Qualifications
    • Bachelor's degree in Computer Science, Engineering, Technical Science or 3 years of IT/Programming experience.
    • Minimum of 2 years of expertise in designing and implementing large-scale data pipelines for data curation and analysis, operating in production environments, using Spark, PySpark, and Spark SQL with Java, Scala, or Python, on premises or in the cloud (AWS, Google, or Azure).
    • Minimum 1 year of designing and building performant data models at scale using Hadoop, NoSQL, graph, or cloud-native data stores and services.
    • Minimum 1 year of designing and building secured Big Data ETL pipelines using Talend or Informatica Big Data Editions for data curation and analysis in large-scale, production-deployed solutions.
    • Minimum 6 months of experience designing and building data models to support large scale BI, Analytics and AI solutions for Big Data.
Preferred Skills
    • Minimum 6 months of expertise in implementation with Databricks.
    • Experience in machine learning using Python (sklearn), SparkML, H2O, and/or SageMaker.
    • Knowledge of Deep Learning (CNN, RNN, ANN) using TensorFlow.
    • Knowledge of automated machine learning tools (H2O, DataRobot, Google AutoML).
    • Minimum 2 years designing and implementing large-scale data warehousing and analytics solutions working with RDBMSs (e.g., Oracle, Teradata, DB2, Netezza, SAS), with an understanding of the challenges and limitations of these traditional solutions.
    • Minimum 1 year of experience implementing SQL-on-Hadoop solutions using tools like Presto, AtScale, Jethro, and others.
    • Minimum 1 year of experience building data management (metadata, lineage, tracking, etc.) and governance solutions for big data platforms, on premises or on AWS, Google, and Azure clouds.
    • Minimum 1 year of re-architecting and rationalizing traditional data warehouses with Hadoop, Spark, or NoSQL technologies on premises, or transitioning them to AWS or Google clouds.
    • Experience implementing data preparation technologies such as Paxata, Trifacta, or Tamr for enabling self-service solutions.
    • Minimum 1 year of building business data catalogs or data marketplaces on top of a hybrid data platform containing Big Data technologies (e.g., Alation, Informatica, or custom portals).
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H-1B visa, F-1 visa (OPT), TN visa, or any other non-immigrant status). Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration. Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law. Job Candidates will not be obligated to disclose sealed or expunged records of conviction or arrest as part of the hiring process.
Accenture is committed to providing veteran employment opportunities to our service men and women.
TRA Robotics
  • Berlin, Germany

We are engineers, designers, and technologists, united by the idea of shaping the future. Our mission is to reimagine the manufacturing process: it will be fully software-defined and driven entirely by AI, which will mean new products get to market much more quickly.


Now we are working on creating a flexible robotic factory managed by AI. We are developing and integrating a stack of products that will facilitate the whole production process from design to manufacturing. Our goal is complex and deeply rooted in science. We understand that it is only achievable in collaboration across diverse disciplines and knowledge domains.


We're looking for a Computer Vision Lead to become part of the team in our new Berlin office.


About the project:


We want our robots to have perfect vision. As a team leader, you will manage a distributed team of highly skilled engineers as well as create algorithms for identification, localization, and tracking of objects based on both classic computer vision and deep learning.
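
For illustration, a minimal sketch of the classic computer vision half of that work: detecting and tracking moving objects with OpenCV background subtraction. The camera index, thresholds, and minimum blob area are assumptions for the example:

import cv2

cap = cv2.VideoCapture(0)  # assumed camera index
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=32)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)  # suppress speckle noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 500:  # ignore small blobs
            continue
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()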


Your Qualifications:



  • Proficiency with C/C++, Python

  • Strong knowledge of CV algorithms, ML/DL algorithms

  • Extensive experience in OpenCV, TensorFlow, CUDA

  • Deep understanding of optimization methods, machine learning, linear algebra, probability theory, mathematical statistics, and real-time systems

  • Experience managing distributed teams

  • Fluency in English


Will be an advantage:



  • IoT experience

  • Matlab

  • Java

  • DSP/FPGA

  • Location systems


Your tasks:



  • Working with sensors of various types (2D / 3D cameras, lidars, 3D scanners)

  • Development of computer vision algorithms

  • Solving the problem of identification and localization of the object

  • Development of visual quality control system


What we offer:



  • Join a highly science-intensive culture and take part in developing a unique product

  • The ability to choose the technology stack and approaches

  • Yearly educational budget - we support your ambitions to learn

  • Relocation package - we would like to make your start as smooth as possible

  • Flexible working environment - choose your working hours and equipment

  • Cozy co-working space in Berlin-Mitte with access to a terrace

KORE1
  • Austin, TX

Unfortunately, no H-1B transfers or C2C.


Kore1, Inc., the world leader in the recruitment of creative and information technology professionals, has an immediate Pricing Analyst opening for a tech company in the Austin, Texas area. The Pricing Analyst must be detail-oriented with strong analytical skills. This position will have responsibility for day-to-day pricing support, analytics, and metric development for multiple business segments. The Pricing Analyst will help develop dashboard reporting, pricing model design, and impact analysis tools, as well as support day-to-day operations. The Pricing Team works closely with Sales, Product Management, Marketing, Finance, and other functional teams on strategic pricing.


Every day millions of people in more than 100 countries use our products and services to securely access physical and digital places. We make it possible for people to transact safely, work productively and travel freely.

What You Will Be Doing:

  • Development of reporting and analytics related to tracking of pricing, discounting, and contribution margin
  • Document global pricing process and workflows
  • Price Book Production and Support
  • Pricing dashboard development, including pricing deviations
  • Special Quote tracking, pricing promotions and other reports as needed by the business
  • Impact and gap analysis modeling
  • Execute ERP pricing, promotions, and OTC maintenance and development
  • Modeling of Pricing Programs, and Revenue impacts
  • FX evaluation and impact analysis
  • Business partner pricing verification and compliance
  • ERP Enhancement process documentation, testing and verification
  • Daily monitoring, implementation and troubleshooting of pricing requests
  • Day-to-day management of pricing on company platforms, i.e., Salesforce and Oracle
  • Provide operational pricing support for complex pricing requests (e.g., regional modifiers, currency exchange, Global Accounts) as defined by the business
  • Data analytics and reporting for partner pricing and partner assessments, within Cubeware and Excel


What You Need:

  • Advanced level Microsoft Excel Skills
  • Ability to document and maintain complex process mapping and desk procedures
  • Comfortable driving change with a service oriented perspective
  • Comfortable in a collaborative team environment
  • Data management and database management experience preferred.
  • Must possess excellent organization and time management skills.
  • Strong analytical skills required.
  • Excellent verbal and written communication.
  • Exceptional attention to detail
  • Ability to adapt and deliver to fluid deadlines
  • Proficient with reading and creating process flow charts and diagrams.
  • Strong teamwork orientation; works well with cross-functional departments and ability to build and maintain strong relationships; willingness to help larger team meet goals and responsibilities.
  • Ability to think strategically, absorb complex business issues and assess business requirements to ensure alignment to corporate objectives.
  • Ability to establish priorities and follow through on projects, paying close attention to detail with minimal supervision.
  • Ability to interact effectively with all levels within the organization, including Executive, management and individual contributors.


Education and/or Experience:

  • 5+ years Pricing Administration Experience
  • Bachelor's degree in Finance preferred


Language Skills:

  • Ability to effectively communicate in the English language, both verbally and in writing.
  • Ability to construct succinct and efficient executive level communication
  • Ability to read and interpret technical journals, specifications, international technical standards, etc.              

Customer Expectations:

  • Collaborate with cross functional teams
  • Team Player, key contributor


Computer Skills:

  • Advanced Microsoft Excel and PowerPoint capabilities
  • Intermediate database construction and maintenance capability
  • Proficient with MS-Windows operating system.
  • ERP system experience preferred (Oracle EBS, SAP or comparable)
  • 5 years of experience in ERP systems operation, Oracle R12 preferred
  • Knowledge of Oracle Advanced Pricing Module
  • Cubeware reporting experience a plus
  • Comfortable using cutting edge technological work tools and communication mediums
  • Experience with Salesforce preferred

Gravity IT Resources
  • Miami, FL

Overview of Position:

We are undertaking an ambitious digital transformation across Sales, Service, Marketing, and eCommerce. We are looking for a web data analytics wizard with prior experience in digital data preparation, discovery, and predictive analytics.

The data scientist/web analyst will work with external partners, digital business partners, enterprise analytics, and the technology team to strategically plan and develop datasets, measure web analytics, and execute on predictive and prescriptive use cases. The role demands the ability to (1) learn quickly, (2) work in a fast-paced, team-driven environment, (3) manage multiple efforts simultaneously, (4) use large datasets and models to test the effectiveness of different courses of action, (5) promote data-driven decision-making throughout the organization, and (6) define and measure the success of the capabilities we provide the organization.


Primary Duties and Responsibilities

    • Analyze data captured through Google Analytics and develop meaningful, actionable insights on digital behavior.
    • Put together a customer 360 data frame by connecting CRM Sales, Service, and Marketing cloud data with Commerce web behavior data and wrangle the data into a usable form (a minimal sketch follows this list).
    • Use predictive modelling to increase and optimize customer experiences across online & offline channels.
    • Evaluate customer experience and conversions to provide insights & tactical recommendations for web optimization.
    • Execute on digital predictive use cases and collaborate with enterprise analytics team to ensure use of best tools and methodologies.
    • Lead support for enterprise voice of customer feedback analytics.
    • Enhance and maintain digital data library and definitions.
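
For illustration only, a minimal pandas sketch of the customer-360 assembly step referenced in the list above; the source extracts, columns, and join key are hypothetical:

import pandas as pd

# Assumed extracts: CRM (sales/service), marketing cloud touches, and web behavior data.
crm = pd.read_csv("crm_accounts.csv")            # customer_id, segment, lifetime_value
marketing = pd.read_csv("campaign_touches.csv")  # customer_id, last_campaign, email_opens
web = pd.read_csv("web_sessions.csv")            # customer_id, sessions, conversions

customer_360 = (
    crm.merge(marketing, on="customer_id", how="left")
       .merge(web, on="customer_id", how="left")
       .fillna({"email_opens": 0, "sessions": 0, "conversions": 0})
)

# Simple derived metric that downstream predictive models could use.
customer_360["conversion_rate"] = (
    customer_360["conversions"] / customer_360["sessions"].where(customer_360["sessions"] > 0)
)

print(customer_360.head())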

Minimum Qualifications

  • Bachelor's degree in Statistics, Computer Science, Marketing, Engineering, or equivalent
  • 3 years or more of working experience in building predictive models.
  • Experience in Google Analytics or similar web behavior tracking tools is required.
  • Experience in R is a must, with working knowledge of connecting to multiple data sources such as Amazon Redshift, Salesforce, Google Analytics, etc.
  • Working knowledge in machine learning algorithms such as Random Forest, K-means, Apriori, Support Vector machine, etc.
  • Experience in A/B testing or multivariate testing.
  • Experience in media tracking tags and pixels, UTM, and custom tracking methods.
  • Microsoft Office Excel & PPT (advanced).

Preferred Qualifications

  • Master's degree in statistics or equivalent.
  • Google Analytics 360 experience/certification.
  • SQL workbench, Postgres.
  • Alteryx experience is a plus.
  • Tableau experience is a plus.
  • Experience in HTML, JavaScript.
  • Experience in SAP Analytics Cloud or the SAP desktop predictive tool is a plus.

Sentek Global
  • San Diego, CA

Sentek Global is seeking a Software Engineer to provide support to PMW 150 in San Diego, CA!


Responsibilities
  • Design, build and maintain software, develop software infrastructure and development environments, and transition older products and capabilities to the new architectures.
  • Produce effective and powerful solutions to complex problems in areas such as software engineering, data analytics, automation, and cybersecurity.
  • Perform analysis of existing and emerging operational and functional requirements to support the current and future systems capabilities and requirements.
  • Provide technical expertise, guidance, architecture, development and support in many different technologies directly to government customers.
  • Perform schedule planning and program management tasks as required.
  • Perform Risk Analysis for implementation of program requirements.
  • Assist in the development of requirements documents.
  • Other duties as required.


Qualifications
  • A current active secret clearance is required to be considered for this role.
  • A Bachelor's degree in data science, data analytics, computer science, or a related technical discipline is required.
  • Three to five (3-5) years providing software engineering support to a DoD program office.
  • Experience working with data rich problems through research or programs.
  • Experience with computer programming or user experience/user interface.
  • Demonstrated knowledge completing projects with large or incomplete data and ability to recommend solutions.
  • Experience with Machine Learning algorithms including convolutional neural networks (CNN), regression, classification, clustering, etc.
  • Experience using deep learning frameworks (preferably TensorFlow); an illustrative sketch follows this list.
  • Experience designing and developing professional software using Linux, Python, C++, JAVA, etc.
    • Experience applying Deep/Machine Learning technology to solve real-world problems:
    • Selecting features, building and optimizing classifiers using machine learning techniques.
    • Data mining using state-of-the-art methods.
    • Extending company data with third-party sources of information when needed.
    • Enhancing data collection procedures to include information that is relevant for building analytic systems.
  • Experience processing, cleansing, and verifying the integrity of data used for analysis.
  • Experience performing ad-hoc analyses and presenting results in a clear manner.
  • Experience creating automated anomaly detection systems and constantly tracking their performance.
  • Must be able to travel one to three (1-3) times per year.
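
As a hedged, minimal sketch of the kind of TensorFlow classifier mentioned in the qualifications above (the input shape, layer sizes, and 10-class output are placeholders, not program specifics):

import tensorflow as tf

# Toy CNN image classifier; architecture and data shapes are assumptions.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# model.fit(x_train, y_train, validation_split=0.1, epochs=10)  # training data supplied elsewhere
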
KELZAL (QELZAL CORPORATION)
  • San Diego, CA

Challenge:

As Kelzal's Machine Learning Engineer, you will be part of an innovative team that designs and develops algorithms and software for the next generation of AI-enabled visual systems. You will develop power-efficient machine learning and adaptive signal processing algorithms to solve real-world imaging and video classification problems.


Responsibilities:

  • Develop algorithms for the fast, low-complexity and accurate detection and tracking of objects in real-world environments
  • Develop algorithms for event-based spatio-temporal signal processing
  • Contribute to our machine learning tool sets for curating data and training models
  • Inform sensor decisions for optimal approaches to classification for product requirements
  • Follow and drive research on state-of-the-art approaches in the areas described above, as applied to the problems we're solving


Requirements:

·      Experience in event-based signal processing

·      Experience in continuous-time signal processing techniques

·      Experience in some deep neural network packages (e.g., TensorFlow, NVIDIA DIGITS, Caffe/Caffe2)

·      Experience with OpenCV

·      Experience with traditional computer vision approaches to image processing

·      Experience with developing machine-learning algorithms for multi-modal object detection, scene understanding, semantic classification, face verification, human pose estimation, activity recognition, or anomaly detection

·      Strong experience with classification and regression algorithms

·      Strong coding skills with Python and/or C/C++ in Linux environment

·      Track record of research excellence or/and experience converting publications to actual implementations

·      Experience with commercial development processes such as continuous integration, deployment and release management tools a plus.

·      Experience launching products containing machine learning algorithms a plus

·      Experience with fixed point implementation a plus

·      3+ years hands-on experience working in industry

·      MS or PhD Degree in Computer Science, Electrical Engineering or a related field

·      Current US work authorization

UST Global
  • San Diego, CA

KEY SKILLSETS

- 7+ years experience with Python

- 4+ years experience with Java


General Responsibilities
- Selecting features, building and optimizing classifiers using machine learning techniques
- Data mining using state of the art methods
- Extending business data with third party sources of information when needed
- Enhancing data collection procedures to include information that is relevant for building analytic systems
- Processing, cleansing, and verifying the integrity of data used for analysis
- Doing ad hoc analysis and presenting results in a clear manner
- Creating automated anomaly detection systems and constantly tracking their performance (an illustrative sketch follows this list)
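
For illustration only, a minimal scikit-learn sketch of the automated anomaly detection responsibility above; the metrics file, feature names, and contamination rate are assumptions:

import pandas as pd
from sklearn.ensemble import IsolationForest

# Assumed daily metrics extract with numeric features to monitor.
metrics = pd.read_csv("daily_metrics.csv")
features = metrics[["request_count", "error_rate", "latency_p95"]]

detector = IsolationForest(contamination=0.01, random_state=0)
metrics["anomaly"] = detector.fit_predict(features) == -1  # -1 marks outliers

print(metrics.loc[metrics["anomaly"], ["request_count", "error_rate", "latency_p95"]])
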
Skills and Qualifications
- Min 8 yrs of experience
- Hands on experience in Python
- Excellent understanding of machine learning techniques and algorithms.
- Experience with common data science toolkits, such as R, Weka, NumPy, MATLAB, etc. Excellence in at least one of these is highly desirable
- Great communication skills
- Experience with data visualization tools, such as GGplot, etc.
- Proficiency in using query languages such as SQL, Hive, Pig
- Experience with NoSQL databases, such as MongoDB
- Good applied statistics skills, such as distributions, statistical testing, regression, etc.

UST Global
  • Atlanta, GA

KEY SKILLSETS

- 7+ years experience with Python

- 4+ years experience with Java


General Responsibilities
- Selecting features, building and optimizing classifiers using machine learning techniques
- Data mining using state of the art methods
- Extending business data with third party sources of information when needed
- Enhancing data collection procedures to include information that is relevant for building analytic systems
- Processing, cleansing, and verifying the integrity of data used for analysis
- Doing ad hoc analysis and presenting results in a clear manner
- Creating automated anomaly detection systems and constantly tracking their performance
Skills and Qualifications
- Min 8 yrs of experience
- Hands on experience in Python
- Excellent understanding of machine learning techniques and algorithms.
- Experience with common data science toolkits, such as R, Weka, NumPy, MATLAB, etc. Excellence in at least one of these is highly desirable
- Great communication skills
- Experience with data visualization tools, such as GGplot, etc.
- Proficiency in using query languages such as SQL, Hive, Pig
- Experience with NoSQL databases, such as MongoDB
- Good applied statistics skills, such as distributions, statistical testing, regression, etc.

National Pen
  • San Diego, CA

National Pen is looking for an enthusiastic and passionate Digital Marketing Analyst to accelerate the growth of our ecommerce business utilizing Google Analytics (GA 360). In this role, you'll work closely with the Channel Managers to provide valuable insight into how we measure, analyze, and attribute our digital marketing efforts. You will be expected to cull actionable insights from the data and work with the business leaders to implement and monitor your suggestions. This is a direct hire position with benefits, PTO, 401(k), plus much more!


Responsibilities

  • Create dashboards and ad hoc reports supporting your analysis using Google Data Studio
  • Monitor trends to identify funnel drop-off points using custom GA reports and Google Sheets (an illustrative sketch follows this list)
  • Work with Channel Managers to identify tracking opportunities on the website
  • Facilitate any changes to analytics tagging by working with front-end engineering team to test and deploy new tags
  • Track Assisted Conversions for Paid Media campaigns
  • Synthesize quantitative data (GA) with the qualitative data (Full Story) to identify areas of improvement
  • Monitor and analyze Lifetime Value of all digital campaigns using in-house reporting
  • Analyze Brand and Non Brand keyword performance for SEO
  • Ensure data integrity for the Web Analytics dataset
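
For illustration only, a minimal pandas sketch of the funnel drop-off reporting described in the list above; the stage names and the exported file are assumptions (e.g., from a custom GA report):

import pandas as pd

# Assumed export of sessions by funnel stage, ordered from top to bottom of the funnel.
funnel = pd.read_csv("funnel_export.csv")  # columns: stage, sessions
funnel = funnel.set_index("stage").loc[["landing", "product", "cart", "checkout", "purchase"]]

funnel["step_conversion"] = funnel["sessions"] / funnel["sessions"].shift(1)
funnel["drop_off"] = 1 - funnel["step_conversion"]
funnel["overall_conversion"] = funnel["sessions"] / funnel["sessions"].iloc[0]

print(funnel)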


Qualifications

  • 5+ years of experience in web analytics with ecommerce focus
  • 5+ years of exposure to digital marketing efforts (including Paid Media, SEO, or Email Marketing)
  • Deep subject matter expertise with Google Analytics (and GA 360)
  • Thorough understanding of Google Tag Manager
  • Basic SQL knowledge
  • Strong mathematical and problem-solving skills
  • Comfortable working with Web Analytics data outside of the GA interface (using R or Python)
  • Familiarity with A/B testing tools, including Monetate
  • Proven ability to turn data into actionable recommendations to achieve business goals
  • Outstanding communication skills with both technical and non-technical colleagues

National Pen is committed to equal opportunity for all employees and applicants without regard to race, religion, color, sex, gender, gender identity, gender expression, sexual orientation, national origin, ancestry, citizenship status, military or veteran status, marital status, pregnancy, age, protected medical condition, physical or mental disability, genetic information or any other status protected by applicable federal, state and local laws

Daimler AG
  • Stuttgart, Germany

Responsibilities

Automated driving is one of the major challenges facing the automotive industry over the next few years. The Environment Perception department is responsible both for the further development of suitable sensor technology and for developing the algorithms needed to understand traffic scenes.

Radar is a very important technology in this field. To develop a mature radar solution for use in automated driving, we are looking for a software developer for radar understanding to strengthen the Radar team.

Your responsibilities in detail:
Software-side integration and commissioning of sensors (AUTOSAR).
Development and maintenance of software infrastructure and base software.
Definition and testing of sensor interfaces.
Implementation of new algorithms and migration of existing algorithms to the target architecture (ARM, embedded GPU).
Specification and review of requirements documents.
Coordination of requirements with the supplier.
Release testing and production releases.
Close collaboration with the team's internal and external partners.


Qualifications

Education:
Completed degree in computer science, computer engineering, engineering, or another STEM subject

Experience and specific knowledge:
Very good knowledge of object-oriented programming with C++ and CUDA under Linux
Experience using common development tools (Eclipse CDT, gdb, gcc, CMake)
Several years of experience programming real-time systems
Knowledge of common design patterns and UML
Good knowledge of scripting languages (Python, Linux shell)
Knowledge of common automotive bus systems (FlexRay, CAN, BroadR-Reach)
Several years of experience developing large software projects and applying agile methods (Scrum, Atlassian tools, git workflow, issue tracking)
Experience programming electronic control units (ECUs) is an advantage
Knowledge of ISO 26262 and MISRA C is an advantage
Good knowledge of MATLAB Simulink
Practical experience using MATLAB, DOORS, MS Project, MS Office

Language skills:
Business-fluent German and English

Personal competencies:
Strong goal and results orientation
Strong communication skills
Team spirit
Entrepreneurial approach and personal initiative

Other:
Class B driver's license
Willingness to take part in test drives and occasional travel

Splice Machine
  • Atlanta, GA

Splice Machine, an AI predictive platform startup company, is looking for a Solutions Architect with experience working with complex distributed systems and large data sets using Spark and Hadoop.  Work from anywhere in the US.

Splice Machine's predictive platform solution helps companies turn their Big Data into actionable business decisions. Our predictive platform eliminates the complexity of integrating the multiple compute engines and databases necessary to power next-generation enterprise predictive AI and Machine Learning applications.

Some of our use-cases include:

  • At a leading credit card company, Splice Machine powers a customer service application that returns sub-20ms record lookups on 7 PB of data
  • At a Fortune 50 bank, Splice Machine is replacing a leading RDBMS and data warehouse with one platform in a customer profitability application
  • At an RFID tag company, Splice Machine is replacing a complex architecture for a retail IoT solution
  • At a leading financial service company, Splice Machine powers an enterprise data hub for 10,000 users
  • At a leading healthcare solution provider, Splice Machine powers a predictive application to learn models and use them to save lives in hospitals

Splice Machine's CEO/Co-Founder, Monte Zweben, is a serial entrepreneur in AI, selling his first company, Red Pepper, to PeopleSoft/Oracle for $225M and taking his second company, Blue Martini, through one of the largest IPOs of the early 2000s ($2.9B). He started Splice Machine to disrupt the $30 billion traditional database market with the first open-source dual-engine database and predictive platform to power Big Data, AI, and Machine Learning applications.

Splice Machine has recruited a team of legendary Big Data advisors, including Roger Bamford, father of Oracle RAC; Michael Franklin, former Director of AMPLab at UC Berkeley; Ken Rudin, Head of Growth and Analytics for Google Search; Andy Pavlo, Assistant Professor of Computer Science at Carnegie Mellon University; and Ray Lane, former COO of Oracle, to collaborate with the Splice Machine team as we blaze new trails in Big Data.

Solution Architect

About You

  • You have implemented several large (40-50 node) Hadoop projects and have demonstrated successful outcomes.
  • You take pride in working to understand, quantify and verify the business needs of customers and their specific use cases, translating these needs into big data, DB, or ML capabilities.
  • You are comfortable engaging both business and engineering leadership, team leads and individual contributors to drive successful business outcomes.
  • Your project leadership style emphasizes collaboration and follow-through.
  • You are very technical and are accustomed to working with architects, developers, project managers, and C-level experts to ensure the best implementation practices and use of the product.

About What You'll Work On

  • Build the customer's trust by maintaining a deep understanding of our solutions and their business.
  • Speak with customers about Splice Machine's most relevant features/functionality for their specific business needs.
  • Manage all post-sales technical activity, working on a cross-functional team of Splice Machine and customer resources for solution implementation.
  • Ensure that a plan is in place for each customer deployment, change management and adoption and communicated to all contributors.
  • Act as the voice of the customer and provide internal feedback on how Splice Machine can better serve our customers while working closely with Product and Engineering on identification and tracking of new feature and enhancement requests.
  • Help Sales identify new business opportunities within the customer in other departments.
  • Increase customer retention and renewals by conducting regular check-in calls and perform quarterly business reviews that drive renewals, upsells, adoption and customer references.

Requirements

  • Expertise in Cloudera and/or Hortonworks Hadoop solutions.
  • 7+ years of experience in architecting complex database and big data solutions for enterprise software.
  • Experience working in a complex multi-functional environment
  • Hands-on experience with SQL, Java and tuning databases
  • Experience with scalable and highly available distributed systems
  • BS in Computer Science / B.A. or equivalent work experience

Our people enjoy access to the best tools available, an open and collaborative work environment and a supportive culture inspiring them to do their very best.  We offer great salaries, generous equity, employee & family health coverage, flexible time off, and an environment that gives you the flexibility to seize moments of inspiration among other perks.

We encourage you to learn more about working here!

The Rocket Science Group LLC
  • Atlanta, GA
Mailchimp is a leading marketing platform for small business. We empower millions of customers around the world to build their brands and grow their companies with a suite of marketing automation, multichannel campaign, CRM, and analytics tools.
We're looking for an enthusiastic, driven and skilled leader who can bring an understanding of analytics and the ability to apply those analytics to solve real customer and business problems. In this role, you'll be building and leading a team of product analysts and cultivating a culture of data-driven decision making. As Director of Product Analytics you will be responsible for leading a team of Product Analysts to help product teams define success metrics, build data-driven roadmaps and track progress towards solving customer problems. Your role will be part people manager (grow and develop a team), part strategist (help shape product strategy) and part evangelist (drive data-driven decision-making). You will work cross-functionally with other leaders in Marketing Analytics, Data Science and Finance to ensure alignment in tracking and reporting on what matters, as well as data governance. The ideal candidate will be able to combine deep knowledge of sophisticated analytics with effective storytelling, strong people development and the ability to influence and partner cross-functionally.
Responsibilities
    • Grow the Product Analytics team both through hiring and coaching, training and mentoring existing team members
    • Cultivate an understanding of how product analytics connects to business strategy and product strategy and how analysts can effectively partner with product teams to build data-driven roadmaps
    • Lead a team of Product Analysts in successfully partnering with cross-functional teams to define success metrics, craft approaches to track the data and identify errors, quantify and evaluate the data, develop a common language for colleagues to understand KPIs and ensure consistency of data-driven decision-making throughout the org
    • Translate data into consumable, actionable narratives; regularly update the organization, including senior leadership
    • Maintain a customer-centric focus: strive to be a domain and product authority through data, develop trust amongst your peers and leaders, and ensure that your team has access to data to make decisions
    • Be an advocate for data-driven everything with the ability to focus on the big picture takeaways of any analysis while being able to speak to the underlying details
    • Partner with Marketing Analytics, Data Science, Business Operations and Finance to synthesize data from across the organization
    • Provide a leading voice in the ongoing work across the company to establish and maintain consistent data definitions and data/analytics strategy
    • Evangelize the use of experimentation (A/B testing) to validate hypotheses and assumptions and measure impact (an illustrative sketch follows this list)
    • Identify gaps in existing data and work with Engineering teams to implement data tracking
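
As a concrete illustration of the experimentation item above, here is a minimal sketch of how a completed A/B test might be read out; the two_proportion_ztest helper and the conversion counts are illustrative assumptions, not Mailchimp tooling or data.

```python
# Minimal A/B-test readout sketch; counts and names are hypothetical.
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return p_b - p_a, z, p_value

# Hypothetical counts: (conversions, visitors) for control and treatment.
lift, z, p = two_proportion_ztest(480, 10_000, 540, 10_000)
print(f"absolute lift={lift:.4f}  z={z:.2f}  p={p:.4f}")
```

In practice the significance threshold and sample size would be agreed with the product team before launch, not chosen after inspecting the results.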

Requirements
    • 7+ years of work experience in data analytics or a related discipline and 2+ years of management experience, preferably within a digital business (SaaS or e-commerce preferred)
    • A passion for analytics, for using data to inform decision-making, and for telling stories with data
    • Proven track record of leading a team, specifically hiring, building, and developing one
    • Working knowledge of managing large data sets and of using analytics and data science tools (e.g., SQL, Python, R)
    • Ability to work and communicate with non-technical colleagues to analyze data and solve problems
    • Strong collaboration skills and ability to gain trust and influence others; proven track record of creating buy-in for ideas and initiatives
    • Confidence working with ambiguity, unknown variables, and missing context (and finding ways to bridge those gaps, when necessary)
    • Strong verbal and written communication skills, with ability to present and communicate data to senior management

Mailchimp is a founder-owned and highly profitable company headquartered in the heart of Atlanta. Our purpose is to empower the underdog, and our mission is to democratize cutting edge marketing technology for small business. We offer our employees an exceptional workplace, extremely competitive compensation, fully paid benefits (for employees and their families), and generous profit sharing. We hire humble, collaborative, and ambitious people, and give them endless opportunities to grow and succeed.
We love our hometown and support sustainable urban renewal. Our headquarters is in the historic Ponce City Market, right on the Atlanta Beltline. If you'd like to be considered for this position, please apply below. We look forward to meeting you!
Mailchimp is an equal opportunity employer, and we value diversity at our company. We don't discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
Cypress HCM
  • Atlanta, GA

**Local Candidates only**

Data Engineer


Cypress is partnering with a very successful, growing technology startup in Atlanta that is looking for a strong data engineer to join its talented, small team. This is the company's first hire of this kind, so the role will be critical to their success. The role focuses on supporting the company's data science work to improve its product and marketing efforts.

What you'll be doing:

    • Using Python to develop datasets and algorithms for our client's data science initiatives
    • In a highly collaborative role, you'll be in constant communication with product, marketing, data science, and software teams, serving as the company's analytics subject matter expert
    • Designing and building endpoints for data consumption
    • Helping scale data processing flow to meet data growth targets
    • Building analytics cohorts, tracking user behavior, understanding product performance across different marketing channels
    • Delivering analytics-based insights/recommendations to drive product and marketing decisions
    • Maintaining and extending ETL processes and internal data warehouses (see the sketch after this list)
    • Recommending new data tools and overseeing their implementation
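
To make the ETL item above concrete, here is a minimal Python sketch of an extract-transform-load step; the API endpoint, connection string, and user_events table are hypothetical placeholders rather than details of the client's stack.

```python
# Minimal ETL sketch: pull JSON events from an API and load them into
# a PostgreSQL warehouse. All endpoints and names are hypothetical.
import requests
import psycopg2

def extract(url: str) -> list[dict]:
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.json()

def transform(events: list[dict]) -> list[tuple]:
    # Keep only the fields the warehouse table needs.
    return [(e["user_id"], e["event"], e["occurred_at"]) for e in events]

def load(rows: list[tuple]) -> None:
    with psycopg2.connect("dbname=analytics") as conn, conn.cursor() as cur:
        cur.executemany(
            "INSERT INTO user_events (user_id, event, occurred_at) "
            "VALUES (%s, %s, %s)",
            rows,
        )

if __name__ == "__main__":
    load(transform(extract("https://api.example.com/events")))
```

A production pipeline would add idempotent loads, retries, and scheduling, but the extract/transform/load split stays the same.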

A little about you:

    • You have 3+ years of professional experience with a focus on data engineering
    • You know and love working with Python and SQL
    • You are very interested in data science
    • You have experience working with APIs
    • You have experience with relational databases like PostgreSQL
    • You have a background/education in analytics and/or statistics
    • You know a thing or two about modern ETL
    • You have cloud experience and have worked with things like Redshift, Looker, Tableau, Hadoop, Hive, Spark
REsurety
  • Boston, MA
Company Overview:
REsurety is a mission-driven organization solving the challenge of resource intermittency for renewable energy.  We work in partnership with the world’s leading energy and risk management providers to enable renewable energy consumers and producers to manage the fuel risk of the future: the weather.  As a high-growth, profitable company that has already supported over 5,000 MW of clean energy transactions, we are a small team making a big impact! Our culture is open and collaborative.  We expect excellence from our team members and reward it with high ownership and flexibility.  If you’re a high-achiever with a passion for clean energy, then we look forward to receiving your application.

Position Overview:
As a Quality Assurance Engineer, you will support and maintain the verification and validation processes for the REsurety software suite to ensure unwavering accuracy of our results, which accelerate the development of renewable energy worldwide.

Key Responsibilities

  • Test new software features for quality and accuracy before they enter production, identify root causes of issues, and stress-test for unusual conditions

  • Validate input and output datasets across the spectrum of power, weather and risk metrics

  • Develop, maintain and monitor automated unit tests, regression tests, evaluation metrics and holistic verification frameworks, taking action when issues arise (see the sketch after this list)

  • Investigate live quality issues in active customer quoting and settlement operations

  • Advocate for quality needs throughout the software lifecycle, including requirements gathering and development

  • Identify sources of technical debt, reducing and mitigating its accumulation
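
As a sketch of what an automated regression test in this role might look like, the example below uses pytest to compare a computed settlement against a stored baseline; the settle_hourly function and baseline file are hypothetical stand-ins, not REsurety's actual suite, and an equivalent test could just as well live in R's testthat given the team's stated preference for R.

```python
# Minimal regression-test sketch with pytest; the module, function, and
# baseline path are hypothetical placeholders.
import json
import pytest

from pricing import settle_hourly  # hypothetical module under test

def load_baseline(path="tests/baselines/settlement_2023.json"):
    with open(path) as f:
        return json.load(f)

def test_settlement_matches_baseline():
    baseline = load_baseline()
    result = settle_hourly(baseline["inputs"])
    # Compare against the previously validated output within a tolerance,
    # so floating-point noise does not cause spurious failures.
    assert result == pytest.approx(baseline["expected"], rel=1e-9)
```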


 
Required Qualifications:

  • A bachelor’s or master’s degree in computer science or engineering, or a related field in the sciences

  • Professional experience with software quality processes and DevOps, including testing frameworks

  • Experience in scientific computing and statistical analysis, with R (preferred), Matlab, Python, or similar



Preferred Qualifications:

  • Proven results with R package development, S3/RC object-oriented programming, Roxygen documentation, testthat unit testing, and data.table frameworks

  • Experience with agile development methodologies and operational tools such as bug tracking, source control and continuous integration

  • Experience with SQL database systems, particularly in monitoring and addressing data quality (see the sketch after this list)

  • Exposure to applications using time series data

  • Attention to detail and a devious mind capable of surfacing deep bugs
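
The SQL data-quality item above could look something like the following minimal monitoring sketch; the table, columns, and connection string are assumptions made for illustration only.

```python
# Minimal data-quality check sketch against a PostgreSQL database;
# table and column names are hypothetical.
import psycopg2

CHECKS = {
    "null_prices": "SELECT COUNT(*) FROM hourly_settlement WHERE price IS NULL",
    "duplicate_hours": """
        SELECT COUNT(*) FROM (
            SELECT node_id, hour_ending
            FROM hourly_settlement
            GROUP BY node_id, hour_ending
            HAVING COUNT(*) > 1
        ) AS dupes
    """,
}

def run_checks(dsn="dbname=qa"):
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        for name, query in CHECKS.items():
            cur.execute(query)
            (violations,) = cur.fetchone()
            status = "OK" if violations == 0 else f"FAILED ({violations} rows)"
            print(f"{name}: {status}")

if __name__ == "__main__":
    run_checks()
```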



Details:

  • Location: Boston, MA



Benefits:

  • Unlimited Vacation Policy

  • Medical Insurance

  • Dental Insurance

  • Health Savings Account (HSA)

  • 401(k)

  • Gym Membership Reimbursement

  • Blue Bikes Gold Membership



REsurety, Inc.  is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, gender identity, sexual orientation or any other characteristic protected by law.