OnlyDataJobs.com

State Farm
  • Dallas, TX

WHAT ARE THE DUTIES AND RESPONSIBILITIES OF THIS POSITION?

    • Performs improved visual representation of data to allow clearer communication, viewer engagement, and faster/better decision-making
    • Investigates, recommends, and initiates acquisition of new data resources from internal and external sources
    • Works with IT teams to support data collection, integration, and retention requirements based on business need
    • Identifies critical and emerging technologies that will support and extend quantitative analytic capabilities
    • Manages work efforts which require the use of sophisticated project planning techniques
    • Applies complex principles, theories, and concepts in a specific field to provide solutions to a wide range of difficult problems
    • Develops and maintains an effective network of both scientific and business contacts/knowledge, obtaining relevant information and intelligence around the market and emergent opportunities
    • Contributes data to State Farm's internal and external publications, writes articles for leading journals, and participates in academic and industry conferences
    • Collaborates with business subject matter experts to select relevant sources of information
    • Develops breadth of knowledge in programming (R, Python); descriptive, inferential, and experimental-design statistics; advanced mathematics; and database functionality (SQL, Hadoop)
    • Develops expertise with multiple machine learning algorithms and data science techniques, such as exploratory data analysis, generative and discriminative predictive modeling, graph theory, recommender systems, text analytics, computer vision, deep learning, optimization, and validation
    • Develops expertise with State Farm datasets, data repositories, and data movement processes
    • Assists on projects/requests and may lead specific tasks within the project scope
    • Prepares and manipulates data for use in development of statistical models
    • Develops fundamental understanding of insurance and financial services operations and uses this knowledge in decision making


Additional Details:


For over 95 years, data has been key to State Farm. As a member of our data science team in the Enterprise Data & Analytics department, under our Chief Data & Analytics Officer, you will work across the organization to solve business problems and help achieve business strategies. You will employ sophisticated statistical approaches and state-of-the-art technology. You will build and refine our tools/techniques and engage with internal stakeholders across the organization to improve our products & services.


Implementing solutions is critical for success. You will identify problems and propose and present solutions to a wide variety of management and technical audiences. This challenging career requires you to work on multiple concurrent projects in a community setting, developing yourself and others, and advancing data science both at State Farm and externally.


Skills & Professional Experience

·        Develop hypotheses, design experiments, and test feasibility of proposed actions to determine probable outcomes using a variety of tools & technologies

·        Master's or other advanced degree, or five years' experience in an analytical field such as data science, quantitative marketing, statistics, operations research, management science, industrial engineering, economics, etc., or equivalent practical experience preferred.

·        Experience with SQL, Python, R, Java, SAS, MapReduce, or Spark

·        Experience with unstructured data sets: text analytics, image recognition etc.

·        Experience working with numerous large data sets/data warehouses and the ability to pull from such data sets using relevant programs and coding, including files, RDBMS, and Hadoop-based storage systems

·        Knowledge of machine learning methods including at least one of the following: time series analysis, hierarchical Bayes, or learning techniques such as decision trees, boosting, and random forests.

·        Excellent communication skills and the ability to manage multiple diverse stakeholders across businesses & leadership levels.

·        Exercise sound judgment to diagnose & resolve problems within area of expertise

·        Familiarity with CI/CD development methods, Git and Docker a plus
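The tree-based methods mentioned in the skills above can be illustrated with a toy example. The sketch below is plain Python with invented data, not State Farm tooling: it fits a one-level decision tree (a "stump"), the building block behind decision trees, boosting, and random forests.

```python
# Toy sketch of a decision "stump": find the single threshold on one
# numeric feature that minimises misclassifications. All data invented.

def fit_stump(xs, ys):
    """Return the threshold t minimising errors for the rule: 1 if x >= t."""
    best_t, best_err = None, None
    for t in sorted(set(xs)):
        err = sum(1 for x, y in zip(xs, ys) if (x >= t) != bool(y))
        if best_err is None or err < best_err:
            best_t, best_err = t, err
    return best_t

def predict(threshold, x):
    return 1 if x >= threshold else 0

# Toy data: a numeric feature (say, a claim amount) and a binary label.
xs = [1.0, 2.0, 3.0, 10.0, 12.0, 15.0]
ys = [0, 0, 0, 1, 1, 1]
t = fit_stump(xs, ys)
```

Boosting and random forests combine many such weak learners, trained on reweighted or resampled data, into a stronger model.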


Multiple location opportunity. Locations offered are: Atlanta, GA, Bloomington, IL, Dallas, TX and Phoenix, AZ


Remote work option is not available.


There is no sponsorship for an employment visa for the position at this time.


Competencies desired:
Critical Thinking
Leadership
Initiative
Resourcefulness
Relationship Building
Elev8 Hire Solutions
  • Atlanta, GA

Sr. Python Developer/Data Scientist

We build AI parts inventory optimization software that makes it easy for enterprises to manage their maintenance and repair operations (MRO). We've raised millions in venture funding from both Silicon Valley and Boston and are growing fast. Are you a driven software engineer interested in being a part of that growth?

Role Expectations/Responsibilities:

  • Write clean, maintainable, thoroughly tested code
  • Participate in product, design, and code reviews
  • QA and ship code daily
  • Identify, incorporate, and communicate best practices

Required Skills:

  • 10+ years of software engineering experience
  • Proficient in Python
  • Proficient in SciPy and scikit-learn
  • Experience with a Python testing framework
  • Proficient in neural network theory
  • Proficient in general machine learning algorithms
  • Proficient in using TensorFlow
  • Proficient in using Keras
  • Proficient in architecting, testing, optimizing, and deploying deep learning models
  • Competency in Git
  • Experience with data structures, algorithm design, problem-solving, and complexity analysis
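The "neural network theory" requirement above can be illustrated with the classic textbook case. The sketch below is hand-wired plain Python (weights chosen by hand, not learned, and no TensorFlow/Keras): a two-layer network computing XOR, the standard example of a function a single neuron cannot represent.

```python
# Toy sketch of neural network basics: a hand-wired two-layer network
# that computes XOR. Frameworks like TensorFlow/Keras learn such weights
# automatically; here they are set by hand for illustration.

def neuron(inputs, weights, bias):
    # Weighted sum followed by a step activation.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def xor(x1, x2):
    h1 = neuron([x1, x2], [1, 1], -0.5)    # hidden unit: fires on x1 OR x2
    h2 = neuron([x1, x2], [1, 1], -1.5)    # hidden unit: fires on x1 AND x2
    return neuron([h1, h2], [1, -2], -0.5)  # output: OR and not AND
```

The hidden layer is what makes XOR representable; a single neuron draws only one linear boundary.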

Nice to have:

  • Experience in a startup environment
  • Experience working on a small team with high visibility
  • Ability to handle a high volume of projects

Benefits:

  • Health/dental/vision coverage
  • 15 Days PTO
  • Option for 1 day remote
The Wellcome Trust Sanger Institute
  • Cambridge, UK


Salary £36,737 to £44,451 (dependent on experience) plus excellent benefits. Fixed term for 3 years.

The Wellcome Sanger Institute is seeking an experienced Bioinformatician to provide computational analysis for the new international Cancer Dependency Map consortium and for other projects engaged in the analysis of data from genome-editing and functional-genomics screens, in collaboration with Open Targets.

You will join the Cancer Dependency Map Analytics team, actively interacting with the Cancer Dependency Map consortium, whose broad goal is to identify vulnerabilities and dependencies that could be exploited therapeutically in every cancer cell.

You will implement and use new computational pipelines for pre-processing and quality-control assessment of data from genome-editing screens and for individual project requirements. This will include extending existing software and writing, documenting, and maintaining code packages on public/internal repositories. We encourage applications from candidates with a background in genomic data curation and familiarity with the management of data from large-scale in-vitro drug/functional-genomic screens.

Finally, you will interact with Open Targets partners and collaborators, and with web development teams to coordinate data/results flows on the public domain.

This is an exciting opportunity to work at one of the world's leading genomic centres at the forefront of genomic research. You will have access to Sanger's computational resources, including a 15000+ core computational cluster, the largest in life science research in Europe, and multiple petabytes of high-speed cluster file systems.

We are part of a dynamic and collaborative environment at the Genome Campus and, although we seek someone who can work independently, you will have the opportunity to interact with researchers across many programmes at the Institute.

Essential Skills

  • PhD in a relevant subject area (Physics, Computer Science, Engineering, Statistics, Mathematics, Computational Biology, Bioinformatics)
  • Full working proficiency in a scripting language (e.g. R, Python, Perl)
  • Full working proficiency with software versioning systems (e.g. Git, GitHub, SVN)
  • Previous experience in creating, documenting, and maintaining finished software
  • Previous experience with implementing omics-data analysis pipelines
  • Basic knowledge of statistics and combinatorics
  • Full working proficiency in UNIX/Linux
  • Ability to communicate ideas and results effectively
  • Ability to work independently and organise own workload


Ideal Skills

  • Ability to devise novel quantitative models and use relevant mathematics-heavy literature
  • Experience formulating real-world problems as statistical models and applying them to real data
  • Basic knowledge of genomics and molecular biology
  • Previous experience with data from high throughput assays
  • Previous experience with data from genetic screens
  • Full working proficiency in a compiled language (e.g. C, C++, Java, Fortran)


Other information



Open Targets is a pioneering public-private initiative between GlaxoSmithKline (GSK), Biogen, Takeda, Celgene, Sanofi, EMBL-EBI (European Bioinformatics Institute) and the WSI (Wellcome Sanger Institute), located on the Wellcome Genome Campus in Hinxton, near Cambridge, UK.

Open Targets aims to generate evidence on the biological validity of therapeutic targets and provide an initial assessment of the likely effectiveness of pharmacological intervention on these targets, using genome-scale experiments and analysis. Open Targets aims to provide an R&D framework that applies to all aspects of human disease, to improve the success rate for discovering new medicines and share its data openly in the interests of accelerating drug discovery.

Our Campus: Set over 125 acres, the stunning and dynamic Wellcome Genome Campus is the biggest aggregate concentration of people in the world working on the common theme of Genomes and BioData. It brings together a diverse and exceptional scientific community, committed to delivering life-changing science with the reach, scale and imagination to pursue some of humanity's greatest challenges.

Our Benefits: Our employees have access to a comprehensive range of benefits and facilities

Genome Research Limited hold an Athena SWAN Bronze Award and will consider all individuals without discrimination and are committed to creating an inclusive environment for all employees, where everyone can thrive.

Please include a covering letter and CV with your application. Closing date: 21st April 2019

Contact for informal enquiries

Luxoft Global Operations GmbH
  • Ingolstadt, Germany

Project Description


Are you tired of unclear directions from customers or constantly changing requirements? Then this project is for you! You will work in a cross-location team of professionals on function integration for a new generation of Autonomous Driving solutions for highways for our customer (one of the top German OEMs).
You will work with other professionals, where broad-mindedness, skills in a wide range of technologies, and creativity are highly appreciated. We cherish a spirit of collaboration and a dynamic working environment.
Used technologies: C/C++, Matlab/Simulink, TPT, ADTF, ADS2, Autobox, Jira, Python






Responsibilities


Planning and implementing specific solutions for the integration of existing Highly Automated Driving modules, e.g. guaranteeing the interaction of different paths in the overall configuration (main path, side path, supervisor, visualization).
Coordinating and expanding the integration process with the platform development.
Implementing converter modules to ensure interface compatibility.
Describing requirements for different effect chains of autonomous driving.
Testing and analyzing data and control flows of the effect chains.






Skills



Several years of experience in developing and testing embedded systems in the automotive industry.
Experience in integrating software modules into complex systems
Strong skills in Matlab/Simulink and C/C++
Experience in at least one of the following tools: ADTF, TPT, ADS2, Autosar
General knowledge of FlexRay, CAN, and Ethernet communication
Sound experience with version-control systems (Git, SVN)
Profound knowledge of agile software development
English: fluent
German: Intermediate+

NICE TO HAVE


Python
Jira
ASPICE
Driver's license







Languages



  • German: Upper-intermediate

  • English: Advanced/Fluent

OverDrive Inc.
  • Garfield Heights, OH

The Data Integration team at OverDrive provides data for other teams to analyze and build their systems upon. We are plumbers, building a company-wide pipeline of clean, usable data for others to use. We typically don’t analyze the data, but instead we make the data available to others. Your job, if you choose to join us, is to help us build a real-time data platform that connects our applications and makes available a data stream of potentially anything happening in our business.


Why Apply:


We are looking for someone who can help us wire up the next step. Help us create something from the ground up (almost a green field). Someone who can help us move large data from one team to the next and come up with ideas and solutions around how we go about looking at data. You'll use technologies like Kafka, Scala, Clojure, and F#.


About You:



  • You always keep up with the latest in distributed systems. You're extremely depressed each summer when the guy who runs highscalability.com hangs out the "Gone Fishin" sign.

  • You’re humble. Frankly, you’re in a supporting role. You help build infrastructure to deliver and transform data for others. (E.g., someone else gets the glory because of your effort, but you don’t care.)

  • You’re patient. Because nothing works the first time, when it comes to moving data around.

  • You hate batch. Real-time is your thing.

  • Scaling services is easy. You realize that the hardest part is scaling your data, and you want to help with that.

  • You think microservices should be event-driven. You prefer autonomous systems over tightly-coupled, time-bound synchronous ones with long chains of dependencies.


 Problems You Could Help Solve:



  • Help us come up with solutions around speeding up our process

  • Help us come up with ideas around making our indexing better

  • Help us create better ways to track all our data

  • If you like to solve problems and use cutting edge technology – keep reading


 Responsibilities:



  • Implement near real-time ETL-like processes from hundreds of applications and data sources using the Apache Kafka ecosystem of technologies.

  • Designing, developing, testing and tuning a large-scale ‘stream data platform’ for connecting systems across our business in a decoupled manner.

  • Deliver data in near real-time from transactional data stores into analytical data stores.

  • R&D ways to acquire data and suggest new uses for that data.

  • “Stream processing.” Enable applications to react to, process and transform streams of data between business domains.

  • “Data Integration.” Capture application events and data store changes and pipe to other interested systems.
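The decoupled, event-driven style described in the responsibilities above can be sketched in a few lines. This is an in-memory toy with invented event names, not OverDrive's actual stack; Kafka provides the durable, distributed, replayable version of this pattern.

```python
# Toy publish/subscribe sketch: producers publish events to a topic, and
# independent consumers react, none knowing about the others. Event
# payloads here are invented for the demo.

class Topic:
    def __init__(self):
        self._handlers = []

    def subscribe(self, handler):
        self._handlers.append(handler)

    def publish(self, event):
        for handler in self._handlers:   # fan out to every consumer
            handler(event)

checkouts = Topic()

analytics_count = []                     # consumer 1: counts events
titles_seen = []                         # consumer 2: tracks titles

checkouts.subscribe(lambda e: analytics_count.append(1))
checkouts.subscribe(lambda e: titles_seen.append(e["title"]))

for title in ["book-1", "book-2", "book-1"]:
    checkouts.publish({"title": title})
```

Because consumers register independently, new systems can react to the same stream without changing the producer, which is the decoupling the posting emphasizes.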


 Experience/Skills: 



  • Comfortable with functional programming concepts. While we're not writing strictly functional code, experience with languages like Scala, Haskell, or Clojure will make working with streaming data easier.

  • Familiarity with the JVM.  We’re using Scala with a little bit of Java and need to occasionally tweak the performance settings of the JVM itself.

  • Familiarity with C# and the .Net framework is helpful. While we don’t use it day to day, most of our systems run on Windows and .Net.

  • Comfortable working in both Linux and Windows environments. Our systems all run on Linux, but we interact with many systems running on Windows servers.

  • Shell scripting & common Linux tool skills.

  • Experience with build tools such as Maven, sbt, or rake.

  • Knowledge of distributed systems.

  • Knowledge of, or experience with, Kafka a plus.

  • Knowledge of Event-Driven/Reactive systems.

  • Experience with DevOps practices like Continuous Integration, Continuous Deployment, Build Automation, Server automation and Test Driven Development.


 Things You Dig: 



  • Stream processing tools (Kafka Streams, Storm, Spark, Flink, Google Cloud DataFlow etc.)

  • SQL-based technologies (SQL Server, MySQL, PostgreSQL, etc.)

  • NoSQL technologies (Cassandra, MongoDB, Redis, HBase, etc.)

  • Server automation tools (Ansible, Chef, Puppet, Vagrant, etc.)

  • Distributed Source Control (Mercurial, Git)

  • The Cloud (Azure, Amazon AWS)

  • The ELK Stack (Elasticsearch, Logstash, Kibana)


What’s Next:


As you’ve probably guessed, OverDrive is a place that values individuality and variety. We don’t want you to be like everyone else, we don’t even want you to be like us—we want you to be like you! So, if you're interested in joining the OverDrive team, apply below, and tell us what inspires you about OverDrive and why you think you are perfect for our team.



OverDrive values diversity and is proud to be an equal opportunity employer.

Comcast
  • Philadelphia, PA

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.

As a Data Science Engineer in Comcastdx, you will research, model, develop, support data pipelines and deliver insights for key strategic initiatives. You will develop or utilize complex programmatic and quantitative methods to find patterns and relationships in data sets; lead statistical modeling, or other data-driven problem-solving analysis to address novel or abstract business operation questions; and incorporate insights and findings into a range of products.

Assist in design and development of collection and enrichment components focused on quality, timeliness, scale and reliability. Work on real-time data stores and a massive historical data store using best-of-breed and industry leading technology.

Responsibilities:

-Develop and support data pipelines

-Analyze massive amounts of data both in real-time and batch processing utilizing Spark, Kafka, and AWS technologies such as Kinesis, S3, Elasticsearch, and Lambda

-Create detailed write-ups of the processes, logic, and methodologies used for creation, validation, analysis, and visualizations. Write-ups shall be produced within a week of when a process is created and updated when changes occur.

-Prototype ideas for new ML/AI tools, products and services

-Centralize data collection and synthesis, including survey data, enabling strategic and predictive analytics to guide business decisions

-Provide expert and professional data analysis to implement effective and innovative solutions meshing disparate data types to discover insights and trends.

-Employ rigorous continuous delivery practices managed under an agile software development approach

-Support DevOps practices to deploy and operate our systems

-Automate and streamline our operations and processes

-Troubleshoot and resolve issues in our development, test and production environments

Here are some of the specific technologies and concepts we use:

-Spark Core and Spark Streaming

-Machine learning techniques and algorithms

-Java, Scala, Python, R

-Artificial Intelligence

-AWS services including EMR, S3, Lambda, ElasticSearch

-Predictive Analytics

-Tableau, Kibana

-Git, Maven, Jenkins

-Linux

-Kafka

-Hadoop (HDFS, YARN)

Skills & Requirements:

-5-8 years of Java experience; Scala and Python experience a plus

-3+ years of experience as an analyst, data scientist, or related quantitative role.

-3+ years of relevant quantitative and qualitative research and analytics experience. Solid knowledge of statistical techniques.

-Bachelor's in Statistics, Math, Engineering, Computer Science, or a related discipline. Master's degree preferred.

-Experience in software development of large-scale distributed systems including proven track record of delivering backend systems that participate in a complex ecosystem

-Experience with more advanced modeling techniques (e.g. machine learning)

-Distinctive problem solving and analysis skills and impeccable business judgment.

-Experience working with imperfect data sets that, at times, will require improvements to process, definition and collection

-Experience with real-time data pipelines and components including Kafka, Spark Streaming

-Proficient in Unix/Linux environments

-Test-driven development/test automation, continuous integration, and deployment automation

-Excellent communicator, able to analyze and clearly articulate complex issues and technologies understandably and engagingly

-Team player is a must

-Great design and problem-solving skills

-Adaptable, proactive and willing to take ownership

-Attention to detail and high level of commitment

-Thrives in a fast-paced agile environment

About Comcastdx:

Comcastdx is a results-driven big data engineering team responsible for delivery of the multi-tenant data infrastructure and platforms necessary to support our data-driven culture and organization. dx has an overarching objective to gather, organize, and make sense of Comcast data with the intention to reveal business and operational insight, discover actionable intelligence, enable experimentation, empower users, and delight our stakeholders. Members of the dx team define and leverage industry best practices, work on large-scale data problems, design and develop resilient and highly robust distributed data organizing and processing systems and pipelines, as well as research, engineer, and apply data science and machine intelligence disciplines.

Comcast is an EOE/Veterans/Disabled/LGBT employer

Next Level Business Services, Inc.
  • Dallas, TX

Job Description

Position Title: Senior Data Scientist


Location: Dallas, TX


Full Time Position

Essential job functions:  
  • Work with stakeholders throughout the organization to identify opportunities for leveraging data to drive business solutions
  • Mine and analyze data from databases to drive optimization and improvement of product development, marketing techniques and business strategies
  • Assess the effectiveness and accuracy of new data sources and data gathering techniques
  • Develop custom data models and algorithms to apply to data sets
  • Use predictive modeling to increase and optimize customer experiences, revenue generation, ad targeting and other business outcomes
  • Coordinate with different functional teams to implement models and monitor outcomes
  • Develop processes and tools to monitor and analyze model performance and data accuracy
  • Strong Graph Theory including Graph Algorithms
  • Working Knowledge Azure HD Insight
  • Development, research, and exploration in the areas of statistics, machine learning, experimental design, optimization, simulation, and operational research
  • Interprets problems and develops solutions to business problems using data analysis, data mining, optimization tools, and machine learning techniques and statistics
  • Leverage big data to solve strategic, tactical, structured and unstructured business problems
  • Collaborate with client and Enterprise Data Products team to set analytic objectives, approaches and work schedule
  • Research and evaluate new analytical methodologies, approaches and solutions
  • Analyze customer and economic trends that impact business performance and recommend ways to improve outcomes
  • Developing advanced statistical models utilizing typical and atypical methodologies.
  • Developing and updating data models for statistical modeling purposes, tracking results against forecasts, and re-specifying when required.
  • Design and deploy data-science and technology-based algorithmic solutions to address business needs for Mode Transportation. Identify, understand, and evaluate new commerce analytic and data technologies to determine the effectiveness of the solution and its feasibility of integration with Mode Transportation's current platforms
  • Design large scale models using Logistic Regression, Linear Models Family (Poisson models, Survival models, Hierarchical Models, Naïve-Bayesian estimators), Conjoint Analysis, Spatial Models, Time-Series Models
  • Design large scale models using linear and mixed integer optimization; non-linear methods; and, heuristics
  • Design large scale discrete-event and Monte Carlo simulation models
  • Interpret and communicate analytic results to analytical and non-analytical business partners and executive decision makers.
  • Develop, coach and mentor team members within the department they create
  • Displays strong teamwork and interpersonal skills with the ability to communicate to all levels of management.
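The Monte Carlo simulation skill listed above can be illustrated with the standard textbook exercise (estimating pi), not one of the transportation models described in the posting. The sketch only shows the sample-and-average pattern that real discrete-event models build on; the seed is fixed so the run is reproducible.

```python
import random

# Toy Monte Carlo sketch: estimate pi by sampling random points in the
# unit square and counting the fraction that fall inside the quarter
# circle of radius 1.

def estimate_pi(n, seed=42):
    rng = random.Random(seed)            # fixed seed: reproducible runs
    inside = sum(
        1 for _ in range(n)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4 * inside / n

est = estimate_pi(100_000)
```

The estimate's error shrinks roughly as 1/sqrt(n), which is why production simulations trade sample count against runtime.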

Qualifications:

  • Must have 5+ years of actual working experience performing advanced quantitative analyses.
  • Must have actual working knowledge of MATLAB/Python/R/SQL/TensorFlow/Theano.
  • Experience and passion for simulations, optimization, neural networks, artificial intelligence (deep learning and machine learning)
  • Experience with distributed data/computing tools: Map/Reduce, Hadoop, Hive, Spark, GIT, SQL, etc.
  • Working knowledge of big data manipulation tools is required.
  • Working knowledge of IBM SPSS Statistics & Modeler is a plus.
  • Strong knowledge of Long Short-Term Memory (LSTM) recurrent neural networks, including architectures and the mathematical theory
  • Expert knowledge in Multilayer Perceptrons (MLPs) and Convolutional Neural Networks (CNNs)
  • Ability to apply advanced statistical methodologies such as mixed model (random and fixed effects), simultaneous equations, ARIMA, neural networks, and multinomial discrete choice. Ability to apply mathematical operations to such tasks as cluster analytics, sampling theory and design of experiments, analysis of variance, correlation techniques, and factor analysis.
  • Ability to apply advanced optimization methodologies such as linear and mixed integer optimization.
  • Ability to apply advanced simulation modeling methodologies and techniques.
  • Utilize complex computer operations (intermediate programming in 3rd and 4th generation languages, relational databases, and operating systems) and advanced features of software packages (word-processing, spreadsheet, graphics, etc.).
  • Must have relational database experience.
  • A strong passion for empirical research and for answering tough questions with data
  • Demonstrated experience in organizing, prioritizing, and coordinating complex team efforts
  • Experience in working with executives or strategic planning departments to set and/or manage to corporate level strategies a plus
  • Strong communication and organizational skills along with the ability to deal with ambiguity while juggling multiple priorities and projects at the same time
  • Experience with business support software applications such as MS Office (Word, PowerPoint, Excel, Project) required
  • Familiar and comfortable with agile software development processes
  • Azure HD Insight certification is a plus
  • Previous experience in Transportation, Logistics and IT systems preferred


Education requirements:

  • M.S. or Ph.D. in math, statistics, operations research, computer science, engineering, econometrics, or other quantitative field.
PriceSenz
  • Dallas, TX

Contract

Job Description

In this role, the candidate will be responsible for developing technical solutions to business problems using one or more Blockchain technology platforms such as Hyperledger, Ethereum, and/or R3 Corda. This is a hands-on, self-directed position involving the design and development of highly scalable, reliable, and performance-efficient applications that consume and integrate complex data using open-source platforms and tools. The successful candidate should have thorough knowledge of and hands-on experience with all levels of Agile Software Development Life Cycle methodologies, and will serve as the point of contact responsible for the construction of all objects necessary within the development platforms. The candidate must have strong communication skills and be able to work with users as well as lead IT team members in a highly collaborative team environment. The candidate should be disciplined, detail-oriented, self-motivated, and delivery-focused.

Responsibilities & Required Skills

  • Design, develop, and deliver complex Distributed Ledger/Blockchain technology applications

  • Responsible for architectural reviews, design reviews and consultations

  • Develop methods and tools to be used to test, monitor and implement new solutions using Blockchain and Distributed Ledger technologies

  • Hands-on architect & developer who can take part in the entire development life cycle

  • Hands-on experience with at least one of Hyperledger, Multichain, Ethereum, or other similar platforms

  • Follow and standardize industry best practices and apply them in project builds

  • In-depth knowledge and hands-on experience in SDLC (Agile)

  • Strong understanding and demonstrated use of design patterns

  • Experience with unit and functional tests, preferably using test-driven development

  • Working knowledge of financial application systems is a plus

  • Bachelor's degree or equivalent, with 7+ years of experience

Technical Experience

  • Minimum of 3 years of hands-on experience in Java/J2EE/C#/Python/Golang and a minimum of 3 years of experience in application development and systems implementation

  • Minimum of 1 year of experience in Blockchain research, understanding, and solution development

    • Experience in development on one of the Blockchain platforms such as Hyperledger, Ethereum, R3 Corda, or Quorum, and experience in development of applications using distributed application frameworks such as Truffle, Fabric, etc.

    • Experience in Angular & NodeJS, Git repositories, and open-source tools and technologies

    • Extensive experience with microservices and RESTful APIs

    • Experience with NoSQL databases

    • Experience in developing and deploying applications in the cloud

    • Experience with architecting, designing and debugging JavaScript enterprise software

    • Exceptional analytical and problem-solving skills.

    • Team player with the ability to communicate effectively at all levels of the development process.

    • Work experience in the telecom industry preferred

Trilogy Education Services, Inc.
  • Miami, FL
Trilogy Education partners with universities to offer programs in Web Development, Data Analytics, UX/UI Design, and Cybersecurity. Our platform combines a market-driven curriculum, robust career services, and a multinational community of universities, instructors, and employers to prepare adult learners for careers in the digital economy.
Our programs have been in existence since 2015, and since then, we've launched an additional 300 classes across the nation. We have hired more than 1,900 Instructors and Teaching Assistants to support our students.
The Job
Trilogy Education is seeking an experienced Data Analyst who has a passion for teaching individuals how to code. Our students come from diverse backgrounds, from working professionals to recent college graduates, all seeking to transition into data analytics and visualization careers.
Our Data Analytics And Visualization Program
For too many companies, Big Data isn't the solution to a problem; Big Data is the problem. The Data Analytics and Visualization course provides learners with the knowledge, skills, and expertise to turn data into insights, and insights into actionable recommendations that improve processes and drive company growth. Starting with Microsoft Excel and advancing through Python, JavaScript, HTML/CSS, API interactions, social media mining, SQL, Hadoop, Tableau, advanced statistics, machine learning, R, Git/GitHub, and more, students work their way through the industry's most popular programming languages, software, and libraries in data analytics.
Why teach with us?
Are you passionate about education and making an impact? Do you love empowering others to find life-changing opportunities? We'd love to hear from you! If you bring knowledge, strong communication, and positive energy to the classroom, you're going to help our students along their transformative path to a successful and rewarding career. Prior teaching experience isn't a prerequisite for success as an Instructor or Teaching Assistant within Trilogy.
We'll provide the guidance, training, lesson plans, and tools to support you on your journey of impacting lives in the classroom.
What You Will Do
    • Take attendance at the start of class via Bootcampspot
    • Ensure the Instructor is staying on track with regard to time
    • Walk around class during code activities and projects to assist students
    • Research and answer student questions when the instructor is unable to
    • Grade all homework assignments

Technology Language Skills
    • Python Pandas
    • Matplotlib
    • Beautiful Soup
    • JavaScript
    • HTML5
    • CSS3
    • D3
    • Leaflet
    • SQL, noSQL
    • Tableau
    • Machine Learning, Hadoop

What Makes You a Great Fit (Requirements)
    • Minimum of 1 year of work experience
    • A positive attitude
    • Share your own professional experiences and industry insight with the students
    • Support our students individually as they go through an emotional roller coaster
    • Be able to infuse empathy, support, encouragement, and fun into the student experience

Logistics
    • 24-week program
    • Mon/Wed/Sat OR Tue/Thu/Sat Schedule
    • Weekday Classes: 6:15pm - 10:30pm (includes office hours and break)
    • Saturday Classes: 9:30am - 2:30pm (includes office hours and lunch break)
Trilogy Education Services, Inc.
  • Miami, FL
Trilogy Education partners with universities to offer programs in Web Development, Data Analytics, UX/UI Design, and Cybersecurity. Our platform combines a market-driven curriculum, robust career services, and a multinational community of universities, instructors, and employers to prepare adult learners for careers in the digital economy.
Our programs have been in existence since 2015, and since then, we've launched an additional 300 classes across the nation. We have hired more than 1,900 Instructors and Teaching Assistants to support our students.
The Job
We are looking for an experienced Data Analyst to teach our part-time Data Analytics and Visualization class at the University of Miami. Our instructors are an essential part of our students' experience with us. You must bring a positive attitude and be able to infuse empathy, support, encouragement, and fun into the classroom. As an instructor, you will need to deliver the lesson plans that are taught across the country while also sharing your own professional experiences and industry insight with the students.
Our Data Analytics And Visualization Program
For too many companies, Big Data isn't the solution to a problem; Big Data is the problem. The Data Analytics and Visualization course provides learners with the knowledge, skills, and expertise to turn data into insights, and insights into actionable recommendations that improve processes and drive company growth. Starting with Microsoft Excel and advancing through Python, JavaScript, HTML/CSS, API interactions, social media mining, SQL, Hadoop, Tableau, advanced statistics, machine learning, R, Git/GitHub, and more, students work their way through the industry's most popular programming languages, software, and libraries in data analytics.
Why teach with us?
Are you passionate about education and making an impact? Do you love empowering others to find life-changing opportunities? We'd love to hear from you! If you bring knowledge, strong communication, and positive energy to the classroom, you're going to help our students along their transformative path to a successful and rewarding career. Prior teaching experience isn't a prerequisite for success as an Instructor or Teaching Assistant within Trilogy.
We'll provide the guidance, training, lesson plans, and tools to support you on your journey of impacting lives in the classroom.
What You Will Do
    • Lead lectures and educational coding activities
    • Answer questions from the stage
    • Walk around the classroom during coding activities and in-class projects to assist the students, as needed
    • Ensure Panopto is turned on and actively recording the class lecture to help us continuously improve
    • Upload all class content to the students' Git repository
    • Create Homework Assignments in Bootcampspot
    • Ensure that homework is graded on time, and occasionally assist with grading

Technology Language Skills
    • Python Pandas
    • Matplotlib
    • Beautiful Soup
    • JavaScript
    • HTML5
    • CSS3
    • D3
    • Leaflet
    • SQL, noSQL
    • Tableau
    • Machine Learning, Hadoop

What Makes You a Great Fit (Requirements)
    • Bachelor's Degree
    • Minimum of 5 years of work experience
    • A positive attitude
    • Ability to deliver our lesson plans that are taught in classrooms across the country to the student body
    • Share your own professional experiences and industry insight with the students
    • Support our students individually as they go through an emotional roller coaster
    • Be able to infuse empathy, support, encouragement, and fun into the student experience

Logistics
    • 24-week program
    • Mon/Wed/Sat OR Tue/Thu/Sat Schedule
    • Weekday Classes: 5:45pm - 10pm (includes office hours and break)
    • Saturday Classes: 9:30am - 2:30pm (includes office hours and lunch break)
Elan Partners
  • Dallas, TX

Title: Big Data Engineers for Data Platform


Direct Hire

No Sponsorship

Prefer Local Candidates


Are you an experienced Big Data Engineer who enjoys working on highly visible projects that rely heavily on your expertise and insight? Do you enjoy meaningful collaboration with a team and an organization that appreciate your input and ideas? Then this could be the next great role in your career!


Requirements:

  • Minimum 3 years in the big data space
  • 4+ years of experience with several programming languages such as Python, Scala, or Java
  • Experience with DAG workflow scheduler systems such as Oozie
  • Ability to pick up new languages and technologies quickly
  • Experience with large-scale big data technologies such as MapReduce, Hadoop, Spark, Hive, and Impala
  • Experience in data warehousing with Hive and a scalable store such as Snowflake or Redshift
  • Experience with NoSQL data stores such as HBase, Cassandra, MongoDB
  • Preferably - experience with public cloud technologies such as Azure, AWS, or Google Cloud
  • Strong understanding of distributed system principles: load balancing, scaling, and in-memory vs. disk trade-offs
  • Familiar with Unix/Linux environment
  • Ability to work efficiently with source code management systems like Git and SVN
  • Familiar with DevOps - automated deployment tools


Benefits: Fun work environment (foosball and ping-pong tables) and competitive overall company benefits

ConocoPhillips
  • Houston, TX
Our Company
ConocoPhillips is the world's largest independent E&P company based on production and proved reserves. Headquartered in Houston, Texas, ConocoPhillips had operations and activities in 17 countries, $71 billion of total assets, and approximately 11,100 employees as of Sept. 30, 2018. Production excluding Libya averaged 1,221 MBOED for the nine months ended Sept. 30, 2018, and proved reserves were 5.0 billion BOE as of Dec. 31, 2017.
Employees across the globe focus on fulfilling our core SPIRIT Values of safety, people, integrity, responsibility, innovation and teamwork. And we apply the characteristics that define leadership excellence in how we engage each other, collaborate with our teams, and drive the business.
Description
The Analytics Platform Administrator is accountable for managing big data environments on bare metal, container infrastructure, or a cloud platform. This role is responsible for system design, capacity planning, performance tuning, and ongoing monitoring of the data lake environment. As a lead administrator, this role will also manage the day-to-day work of any onshore and offshore contractors on the platforms team. The position reports to the Director of Analytic Platforms and is located in Houston, TX.
Responsibilities May Include
  • Work with IT Operations and Information Security Operations for monitoring, troubleshooting, and support of incidents to maintain service levels
  • Provide 24/7 coverage for analytics platforms
  • Monitor the performance of the systems and ensure high uptime
  • Deploy new and maintain existing data lake environments on Hadoop or AWS/Azure stack
  • Work closely with the various teams to make sure that all the big data applications are highly available and performing as expected. The teams include data science, database, network, BI, application, etc.
  • Work with AICOE and business analysts on designing and running technology proof of concepts on Analytics platforms
  • Capacity planning of the data lake environment
  • Manage and review log files, backup and recovery, upgrades, etc.
  • Responsible for security management of the platforms
  • Support of our on-premise Hortonworks Hadoop environment
Basic/Required
  • Legally authorized to work in the United States
  • 5+ years of related IT experience
  • 3+ years of Structured Query Language (SQL) experience
  • 1+ years of experience with Hadoop technology stack (HDFS, HBase, Spark, Sqoop, Hive, Ranger, NiFi, etc.)
  • Intermediate proficiency analyzing and understanding business/technology system architectures, databases, and client applications
Preferred
  • Bachelor's Degree in Computer Science, MIS, Information Technology or other related technical discipline
  • 1+ years of experience with AWS or Azure analytics stack
  • 1+ years of experience architecting data warehouses and/or data lakes
  • 1+ years of Oil and Gas Industry Experience
  • Delivery experience with enterprise databases and/or data warehouse platforms such as Oracle, SQL Server or Teradata
  • Automation experience with Python, PowerShell or a similar technology
  • Experience with source control and automated deployment. Useful technologies include Git, Jenkins, and Ansible
  • Experience with complex networking infrastructure including firewalls, VLANs, and load balancers
  • Experience as a DBA or Linux Admin
  • Ability to work in a fast-paced environment independently with the customer
  • Ability to learn new technologies quickly and leverage them in data analytics solutions
  • Ability to work with business and technology users to define and gather reporting and analytics requirements
  • Ability to work as a team player
  • Strong analytical, troubleshooting, and problem-solving skills
  • Takes ownership of actions and follows through on commitments by courageously dealing with important problems, holding others accountable, and standing up for what is right
  • Delivers results through realistic planning to accomplish goals
  • Generates effective solutions based on available information and makes timely decisions that are safe and ethical
To be considered for this position you must complete the entire application process, which includes answering all prescreening questions and providing your eSignature on or before the requisition closing date of March 11, 2019.
Candidates for this U.S. position must be a U.S. citizen or national, or an alien admitted as permanent resident, refugee, asylee or temporary resident under 8 U.S.C. 1160(a) or 1255(a) (1). Individuals with temporary visas such as A, B, C, D, E, F, G, H, I, J, L, M, NATO, O, P, Q, R or TN or who need sponsorship for work authorization in the United States now or in the future, are not eligible for hire.
ConocoPhillips is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, national origin, age, disability, veteran status, gender identity or expression, genetic information or any other legally protected status.
Job Function
Information Management-Information Technology
Job Level
Individual Contributor/Staff Level
Primary Location
NORTH AMERICA-USA-TEXAS-HOUSTON
Organization
ANALYTICS INNOVATION
Line of Business
Corporate Staffs
Job Posting
Mar 4, 2019, 1:39:58 PM
SugarCRM
  • Durham, NC

About SugarCRM, Inc.

SugarCRM enables businesses to create extraordinary customer relationships with the most empowering, adaptable and affordable customer relationship management (CRM) solution on the market. We are the industry’s leading company focused exclusively on customer relationship management. Helping our clients build a unique customer experience through great customer relationships is our sole focus.

Recognized by leading market analysts as a CRM visionary and innovator, Sugar is deployed by more than 2 million individuals in over 120 countries and 26 languages. Companies large and small are turning from yesterday’s CRM solutions to rely on Sugar to manage customer relationships.

Where do you fit?


The Sugar AI team's mission is to build products and services that help people make better and faster decisions, automate tasks and processes and extend reach through virtual assistants and agents. Our products and services help users sell more effectively and provide better service by bringing users a wealth of real-time knowledge.


We are looking for an enthusiastic, performance-driven self-starter and team player. You must have stellar software development, project organization, and communication skills, and work well in cross-functional teams. The ideal candidate is an exceptional full stack engineer with a background in system design, microservices architecture and technologies such as Node.js, Backbone, React, Angular2, and jQuery.



Impact you will make in the role:



  • You will create new and exciting products under the umbrella of an already solid and mature CRM leader

  • Work cross-functionally with other teams within SugarCRM to help implement the strategic vision of the product

  • Push the envelope of software design and architecture

  • Challenge, mentor, and learn from your peers


What you need to succeed:



  • 5+ years in software development

  • Experience with Node.js API/backend development in a cloud-based, scalable, fault tolerant architecture

  • Passionate about AI and Machine Learning and stays up-to-date with the latest developments in the field

  • Experience with client-side technologies (HTML5, CSS3) and JavaScript frameworks such as Backbone.js, React, and Angular 2 (and above)

  • Solid Computer Science fundamentals: data structures, algorithms, design patterns, databases, operating systems, and debugging

  • Experience with microservices architecture

  • Experience with REST APIs

  • Experience with source control such as Git/Bitbucket

  • Exposure to container technologies: Kubernetes, Docker, and AWS

  • Exposure to DevOps and CI/CD configuration management

  • BS degree in Computer Science, or equivalent


Location:  Cupertino, CA

We are an Equal Opportunity, Affirmative Action employer. Minorities, women, veterans and individuals with disabilities are encouraged to apply.

Engineering Team Culture:

Our focus is building out teams with smart engineers who are passionate about their craft and excited to build software for our unique solutions in the CRM (customer relationship management) space. At SugarCRM, you'll have the chance to work on various teams and stacks. Here's more:



  • Communication and collaboration is key to our processes, and we don't want to hinder it with walls

  • We are passionate about automated testing to deliver a high level of quality to our customers

  • We use a Scrum-based development methodology that includes daily stand-ups, regular Sprint reviews, and retrospectives to discuss

  • We value unique perspectives brought by diverse backgrounds and experiences. A broad range of ideas and perspectives help us to create the best possible product

  • As a part of our company culture, we encourage everyone to work at a healthy, sustainable pace


Benefits and Perks:

Beyond a stellar work environment, friendly people, and inspiring, innovative work, we have some great benefits and perks:



  • Competitive salaries

  • Excellent medical, dental and vision coverage for you and your family, along with other benefit plans like 401(k) match

  • Unlimited paid time off policy

  • Wellness Reimbursement and Workforce Fitness Program

  • Career & Personal Development Program – multi-platform

  • New Hire Onboarding for all new employees worldwide – at Corporate Headquarters in Cupertino, California

  • Regular social events

  • Ownership is the greatest self-identity at SugarCRM - you are making an impact now

  • We are a merit-based company - many opportunities to learn, excel and grow your career


Note to Recruiters and Placement Agencies:  SugarCRM does not accept unsolicited agency resumes. Please do not forward unsolicited agency resumes to our website or to any SugarCRM employee. SugarCRM will not pay fees to any third-party agency or firm and will not be responsible for any agency fees associated with unsolicited resumes. Unsolicited resumes received will be considered property of SugarCRM and will be processed accordingly.    

Briq
  • Santa Barbara, CA
  • Salary: $70k - 100k

Briq is hiring a Senior Full Stack Software Engineer Big Data/ML Pipelines to scale up its AI and ML dev team. You will need to have strong programming skills, a proven knowledge of traditional Big Data technologies, experience working with heterogeneous data types at scale, experience with Big Data architectures, past experience working with a team to transform proof-of-concept tools to production-ready toolkits, and excellent communication and planning skills. You and other engineers in this team will help advance Briq's capacity to build and deploy leading solutions for AI-based applications in cyber security.


What You'll Be Doing

  • Working with data scientists and data engineers to turn proof-of-concept analytics/workflows into production-ready tools and toolkits
  • Architecting and implementing high-performance data pipelines and integrating them with existing cyber security infrastructure and solutions
  • Deploying and productionalizing solutions focused on threat hunting, anomaly detection, and security analytics
  • Providing input and feedback to teams regarding decisions on topics such as infrastructure, data architectures, and DevOps strategy
  • Building automation and tools that will increase the productivity of teams developing distributed systems


What We Need To See

  • A BS, MS, or PhD in Computer Science, Computer Engineering, or a closely related field, with 4+ years of work or research experience in software development
  • 1+ years working with data scientists and data engineers to take proof-of-concept ideas to production environments, including transitioning research-based tools to deployment-ready ones
  • Strong skills in Python and scripting tasks, as well as comfort using Linux and typical development tools (e.g., Git, Jira, Kanban)
  • Solid knowledge of traditional big data technologies (e.g., Hadoop, Spark, Cassandra) and expertise developing for and targeting at least one of these platforms
  • Experience using automation tools (e.g., Ansible, Puppet, Chef) and DevOps tools (e.g., Jenkins, Travis-CI, GitLab CI)
  • Experience with or exposure to cyber network data (e.g., PCAP, flow, host logs), or a demonstrated ability to work with heterogeneous data types produced at high velocity
  • High motivation and strong communication skills, with the ability to work successfully with multi-functional teams and coordinate effectively across organizational boundaries and geographies


Ways To Stand Out From The Crowd

  • Experience working with AI/machine learning/deep learning

We integrate and optimize every deep learning framework. With deep learning, we can teach AI to do almost anything; we are making sense of the complex world of construction. These are just a few examples. AI will spur a wave of social progress unmatched since the industrial revolution.


Briq is changing the world of construction. Join our development team and help us build the real-time, cost-effective computing platform driving our success in this dynamic and quickly growing field in one of the world's largest industries.


Briq is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression , sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.


Read About Us on TechCrunch : https://techcrunch.com/2019/02/22/briq-the-next-building-block-in-techs-reconstruction-of-the-construction-business-raises-3-million/ 

Brickschain
  • Santa Barbara, CA
  • Salary: $70k - 100k

Briq is hiring a Senior Full Stack Software Engineer Big Data/ML Pipelines to scale up its AI and ML dev team. You will need to have strong programming skills, a proven knowledge of traditional Big Data technologies, experience working with heterogeneous data types at scale, experience with Big Data architectures, past experience working with a team to transform proof-of-concept tools to production-ready toolkits, and excellent communication and planning skills. You and other engineers in this team will help advance Briq's capacity to build and deploy leading solutions for AI-based applications in cyber security.


What You'll Be Doing

  • Working with data scientists and data engineers to turn proof-of-concept analytics/workflows into production-ready tools and toolkits
  • Architecting and implementing high-performance data pipelines and integrating them with existing cyber security infrastructure and solutions
  • Deploying and productionalizing solutions focused on threat hunting, anomaly detection, and security analytics
  • Providing input and feedback to teams regarding decisions on topics such as infrastructure, data architectures, and DevOps strategy
  • Building automation and tools that will increase the productivity of teams developing distributed systems


What We Need To See

  • A BS, MS, or PhD in Computer Science, Computer Engineering, or a closely related field, with 4+ years of work or research experience in software development
  • 1+ years working with data scientists and data engineers to take proof-of-concept ideas to production environments, including transitioning research-based tools to deployment-ready ones
  • Strong skills in Python and scripting tasks, as well as comfort using Linux and typical development tools (e.g., Git, Jira, Kanban)
  • Solid knowledge of traditional big data technologies (e.g., Hadoop, Spark, Cassandra) and expertise developing for and targeting at least one of these platforms
  • Experience using automation tools (e.g., Ansible, Puppet, Chef) and DevOps tools (e.g., Jenkins, Travis-CI, GitLab CI)
  • Experience with or exposure to cyber network data (e.g., PCAP, flow, host logs), or a demonstrated ability to work with heterogeneous data types produced at high velocity
  • High motivation and strong communication skills, with the ability to work successfully with multi-functional teams and coordinate effectively across organizational boundaries and geographies


Ways To Stand Out From The Crowd

  • Experience working with AI/machine learning/deep learning

We integrate and optimize every deep learning framework. With deep learning, we can teach AI to do almost anything; we are making sense of the complex world of construction. These are just a few examples. AI will spur a wave of social progress unmatched since the industrial revolution.


Briq is changing the world of construction. Join our development team and help us build the real-time, cost-effective computing platform driving our success in this dynamic and quickly growing field in one of the world's largest industries.


Briq is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression , sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.


Read About Us on TechCrunch : https://techcrunch.com/2019/02/22/briq-the-next-building-block-in-techs-reconstruction-of-the-construction-business-raises-3-million/ 

Marketcircle
  • No office location
  • Salary: C$65k - 75k

Are you a software developer who loves working from home, collaborating with a fun team, and solving challenging problems?  If so, we're looking for an experienced software developer to join our Backend Team. This team is primarily responsible for developing, maintaining, and supporting the backend services for our apps. You will be a highly motivated team player, as well as creative and passionate about developing new technology that not only improves the way our apps work, but also helps small businesses worldwide. You are a self-starter and aren’t afraid to jump in at the deep end.  Why you’ll love working at Marketcircle:



  • Work remotely! No one likes having to battle traffic during rush-hour on a daily basis, so you don’t have to! 

  • Startup style company. You won’t find any of that corporate BS here!

  • Ownership. We give you the freedom and flexibility to take ownership of your work. In fact, we believe in this so much that it’s one of our core values. 

  • Learn. We invest in our employees both vertically and horizontally. Want to attend a conference? Great! Want to learn the latest language? We have unlimited Udemy courses. 

  • Team. Our team is like our second family. And why shouldn’t they be? We work, learn, eat and in some cases even live with each other! 


You are:



  • an experienced software developer, with some experience building backend services

  • comfortable working remotely

  • comfortable working independently or collaboratively

  • willing to participate in a rotating on-call schedule


You’ll be working on:



  • an HTTP/REST API written in Ruby (you will probably spend most of your time here)

  • an Authentication/Payment backend written in Ruby

  • a PostgreSQL database with a custom C extension to track changes


You have:



  • a solid understanding of modern backend applications

  • experience with modern API design and ideally know your way around in a web framework such as Ruby on Rails, Django, or Sinatra

  • experience with either Ruby, Python, or a similar scripting language

  • an appreciation for well written, tested and documented code

  • experience with Linux or a BSD

  • experience with Git and GitHub


Bonus Points for:



  • experience with infrastructure management tools (like Puppet, Ansible or Chef) (we use Ansible)

  • experience with cloud infrastructure providers (like AWS, Google Cloud, Microsoft Azure or DigitalOcean)

  • knowing your way around the network stack, from HTTP to TCP to IP, and having a solid understanding of security (TLS/IPSec/firewalls)


How to Apply:  Send your resume over to jobs[at]marketcircle[dot]com and be sure to include why you’d be the best fit for this role. 

Marketcircle
  • No office location
  • Salary: C$65k - 75k

Are you a software developer who believes in remote work, collaborating with a fun team, and solving challenging problems?  If so, we're looking for an experienced software developer to join our Backend Team. This team is primarily responsible for developing, maintaining, and supporting the backend services for our apps. You will be a highly motivated team player, as well as creative and passionate about developing new technology that not only improves the way our apps work, but also helps small businesses worldwide. You are a self-starter and aren’t afraid to jump in at the deep end.  Why you’ll love working at Marketcircle:



  • Work remotely! No one likes having to battle traffic during rush-hour on a daily basis, so you don’t have to! 

  • Startup style company. You won’t find any of that corporate BS here!

  • Learn. We invest in our employees both vertically and horizontally. Want to attend a conference? Great! Want to learn the latest language? We have unlimited Udemy courses. 

  • Team. Our team is like our second family. And why shouldn’t they be? We work, learn, eat and in some cases even live with each other! 


You are:



  • an experienced software developer, with some experience building backend services

  • comfortable working remotely

  • comfortable working independently or collaboratively

  • willing to participate in a rotating on-call schedule


You’ll be working on:



  • an HTTP/REST API written in Ruby (you will probably spend most of your time here)

  • an Authentication/Payment backend written in Ruby

  • PostgreSQL database(s) with a custom C extension to track changes

  • Elasticsearch 
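For a flavor of the kind of REST API described above, here is a minimal sketch in Python (the actual service is written in Ruby; the `contacts` resource and payloads are hypothetical). Keeping the dispatch logic socket-free makes it easy to unit-test:

```python
import json

# Hypothetical in-memory store standing in for the real database.
CONTACTS = {1: {"id": 1, "name": "Ada Lovelace"}}

def handle(method, path):
    """Dispatch a request to a minimal REST-style handler.

    Returns (status_code, body) so the sketch stays testable
    without opening a socket.
    """
    parts = path.strip("/").split("/")
    if method == "GET" and parts[0] == "contacts":
        if len(parts) == 1:
            return 200, list(CONTACTS.values())
        contact = CONTACTS.get(int(parts[1]))
        return (200, contact) if contact else (404, {"error": "not found"})
    return 405, {"error": "method not allowed"}

status, body = handle("GET", "/contacts/1")
print(status, json.dumps(body))
```

The same shape (route, dispatch, JSON body) maps directly onto a Sinatra or Rails controller.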


You have:



  • a solid understanding of modern backend applications

  • experience with modern API design, and ideally know your way around a web framework such as Ruby on Rails, Django, or Sinatra

  • experience with either Ruby, Python, or a similar scripting language

  • an appreciation for well-written, tested, and documented code

  • experience with Linux or a BSD

  • experience with Git and GitHub


Bonus Points for:



  • experience with infrastructure management tools like Puppet, Ansible or Chef (we use Ansible)

  • experience with cloud infrastructure providers (like AWS, Google Cloud, Microsoft Azure or DigitalOcean)

  • knowing your way around the network stack, from HTTP to TCP to IP, and having a solid understanding of security (TLS/IPSec/firewalls)
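"Knowing your way around the network stack" amounts to understanding that HTTP is just structured text carried over a TCP byte stream (which TLS would wrap for HTTPS). A minimal sketch, using Python to construct the raw bytes an HTTP/1.1 GET would write to a socket:

```python
def build_http_request(host, path="/"):
    """Build the raw bytes an HTTP/1.1 GET sends over a TCP socket.

    Illustrates the layering: HTTP is plain structured text carried
    by TCP; TLS would wrap this same byte stream when using HTTPS.
    """
    lines = [
        f"GET {path} HTTP/1.1",
        f"Host: {host}",
        "Connection: close",
        "",  # blank line terminates the header block
        "",
    ]
    return "\r\n".join(lines).encode("ascii")

request = build_http_request("example.com", "/index.html")
print(request.decode())
```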


How to Apply:  Send your resume over to jobs[at]marketcircle[dot]com and be sure to include why you’d be the best fit for this role. 

RAND Corporation
  • Santa Monica, CA

The RAND Corporation seeks an experienced Software Engineer with focus on supporting our multidisciplinary software development activities.


The position requires eliciting requirements from users, designing software products with researchers, implementing designs in code, and iterating with users and researchers to ensure functional and quality requirements are met.


The selected candidate will work on project teams of research staff and domain experts and will often be the sole software engineer on the project.  Some projects will involve multiple software developers, so the ability to work with them effectively is key.


Technical needs will vary by project, so the selected candidate must be a well-rounded generalist able to develop solutions in more than one of the following application paradigms: desktop, web, mobile, database, modeling & simulation, big data analytics (including machine learning), statistical analysis, and visualization.



Qualifications 


KEY FUNCTIONS:



  • Developing, in part or in whole, interactive applications, from the graphical user interface to back-end server components and databases

  • Developing applications or scripts to mine, analyze, and visualize data sets

  • Briefing others on software design, software development progress, and software tool results

  • Participating in software development QA activities

  • Writing documentation of software and software tool results for inclusion in customer briefings and RAND publications
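The "analyze and visualize data sets" function above often starts as a short script. A minimal sketch, assuming a hypothetical set of records and column names, showing the group-and-summarize step that typically precedes any visualization:

```python
import statistics
from collections import defaultdict

# Hypothetical records of the kind such a script might analyze.
records = [
    {"region": "west", "cost": 120.0},
    {"region": "west", "cost": 80.0},
    {"region": "east", "cost": 200.0},
]

def summarize(rows, key, value):
    """Group rows by `key` and report the mean of `value` per group."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[key]].append(row[value])
    return {k: statistics.mean(v) for k, v in groups.items()}

print(summarize(records, "region", "cost"))
```

The resulting per-group summary is what would feed a plotting library or a briefing table.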


TECHNICAL REQUIREMENTS:



  • Working knowledge of object-oriented analysis and design

  • Expertise in Python, Java, C#, R, C++, or C

  • Some expertise with at least one web programming language such as JavaScript, Ruby, or Python, and at least one framework such as Node.js, Angular, or React

  • Some expertise with at least one database platform such as MongoDB, MySQL, or Postgres

  • Willingness to pick up new technologies on a frequent basis

  • Ability to prototype rapidly

  • Experience with:


    • Cloud service providers like AWS or Azure and tools in those ecosystems

    • Big data analytic platforms like Hadoop or Spark

    • Simulation/model development

    • Use of version control tools, like Git, in a collaborative environment



TECHNICAL PREFERENCES:



  • Experience working in a research environment



Education Requirements


BA/BS required, MS/MA preferred, preferably in Computer Science or a closely related field such as Information Systems, Computer Engineering, etc.



Experience


A minimum of 2 years of relevant experience; 3 or more years of relevant experience preferred.



Security Clearance 


U.S. Citizenship may be required for positions requiring a U.S. government security clearance.



Location


Santa Monica (CA), Pittsburgh (PA), Washington (DC), and Boston (MA)




RAND is an Equal Opportunity Employer–minorities/females/veterans/individuals with disabilities/sexual orientation/gender identity

Comcast
  • Washington, DC

Comcast brings together the best in media and technology. We drive innovation to create the world’s best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast.

Job Summary:


Responsible for identifying complex problems and developing individual algorithms to solve them. Evaluates the accuracy and functionality of individual algorithms, and understands the overall implications of entire frameworks. Develops and tests prototypes and new products and applications to demonstrate functionality, and transitions algorithms to the Engineering team to put into production. Responsible for the initial design and implementation of future products and applications. Analyzes and evaluates solutions, both internally generated and third-party supplied. Develops novel ways to solve problems and discover new products, and provides guidance and leadership to more junior researchers.
Collaborates with and influences immediate team members, and collaborates cross-functionally. Breaks tasks down into functional components and delegates them to other members of the team. Integrates knowledge of business and functional priorities. Acts as a key contributor in a complex and crucial environment. May lead teams or projects and shares expertise.

As part of the Applied AI NLP Engineering team, you will develop and improve frameworks that scale natural language processing to support the needs of the X1 voice remote, and help the team create high-performance web services that support millions of requests.


Employees at all levels are expected to:

- Understand our Operating Principles; make them the guidelines for how you do your job
- Own the customer experience: think and act in ways that put our customers first, give them seamless digital options at every touchpoint, and make them promoters of our products and services
- Know your stuff: be enthusiastic learners, users and advocates of our game-changing technology, products and services, especially our digital tools and experiences
- Win as a team: make big things happen by working together and being open to new ideas that are better for our customers
- Be an active part of the Net Promoter System (a way of working that brings more employee and customer feedback into the company) by joining huddles, making call backs and helping us elevate opportunities to do better
- Drive results and growth
- Respect and promote inclusion and diversity
- Do what's right for each other, our customers, investors and our communities

Skills required:

  • Hands-on experience with RESTful web services
  • Experience with Agile development methodologies and practices
  • Experience in build lifecycle management
  • Experience in CI/CD pipeline for development

    Technologies:

  • Familiarity with AWS services
  • Docker
  • Java, Spring, Java web containers (Tomcat, Jetty, Netty)
  • Elasticsearch
  • DynamoDB
  • Git
  • Experience with the DevOps model
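Since the Elasticsearch query DSL is just JSON, request bodies like the ones such a search-backed service would issue can be built and unit-tested without a running cluster. A minimal sketch in Python (the `title` field and query text are hypothetical):

```python
import json

def build_search_body(field, text, size=10):
    """Build an Elasticsearch `match` query body as a plain dict.

    The query DSL is JSON, so constructing and testing the body
    needs no running cluster; `operator: "and"` requires all terms
    to match, a common choice for voice-search style lookups.
    """
    return {
        "size": size,
        "query": {"match": {field: {"query": text, "operator": "and"}}},
    }

body = build_search_body("title", "voice remote")
print(json.dumps(body))
```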

    Core Responsibilities:


    - Guides the successful completion of high profile, revenue-impacting programs. May serve as a leader to others on assigned projects.
    - Develops technical solutions, specifications, technical requirements, algorithms and frameworks of custom designs for future products and applications.
    - Determines the technical objectives of an assignment. Leads the design of prototypes, collaborating with the product team and other stakeholders through development. Conducts studies to support product or application development.
    - Organizes and maintains resources. Researches, writes and edits documentation and technical requirements, including evaluation plans, Confluence pages, white papers, presentations, test results, technical manuals, formal recommendations and reports. Creates patents, APIs and other intellectual property. Presents papers and/or attends conferences, as well as displaying leadership in these areas.
    - Tests and evaluates solutions presented to the Company by various internal and external partners and vendors. Completes case studies, testing and reporting.
    - Represents the work team and organization as the prime technical contact on assigned projects.
    - Consistent exercise of independent judgment and discretion in matters of significance.
    - Regular, consistent and punctual attendance. Must be able to work nights and weekends, variable schedule(s) as necessary.
    - Other duties and responsibilities as assigned.

    Job Specification:


    - Generally requires 7-11 years of related experience.
    - Bachelor's degree in Computer Science, Engineering, Applied Mathematics, or Statistics

Comcast is an EOE/Veterans/Disabled/LGBT employer

Comcast
  • Philadelphia, PA

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.

Job Summary:

Billions of requests. Millions of customers. Be part of Comcast's TPX X1 Stream Apps' Platform and APIs team! Our team designs, builds and operates the APIs that power Comcast's X1 web, mobile, Roku and SmartTV properties. Reliability and performance at this scale require complex information systems to be made simple. We are looking for an engineer who is able to listen to stakeholders and clients, understand technical requirements, collaborate on solutions, and deliver technology services in a high-velocity, dynamic, "always on" environment. As a member of our team you will work with other engineers and DevOps practitioners to produce mission-critical applications & infrastructure, tools, and processes that enable our systems to scale at a rapid pace. One day might involve creating an API that returns a customer's channel lineup or performance tuning of a Java web application; the next may be building tools to enable continuous delivery or infrastructure as code.

Technology snapshot: AWS, Apache, EC2, Git/Gerrit, Graphite, Grafana, Java, Lambda, Linux, Memcached, Scala, Akka, Splunk, Spring, Tomcat, VMware, OpenStack, Terraform, Ansible, Concourse

Where are we headed?

Our goal is to build, scale and guard the systems that delight our customers. To do so, you will need strong skills in the following areas:

Responsibilities

As a member of Advanced Application Engineering's Platform and APIs Team, you will provide technical expertise and guidance within our cross-functional project team, and you'll work closely with other software and QA engineers to build quality, scalable products that delight our customers. Responsibilities range from high-level logical architecture through low-level detailed design and implementation, including:

  • Design, build, deliver and scale sophisticated high-volume web properties and agreed upon solutions from the catalog of TVX application services.
  • Collaborate with project stakeholders to identify product and technical requirements. Conducts analysis to determine integration needs.
  • Write code that meets functional requirements and is testable and maintainable. Have a passion for test-driven development.
  • Work with Quality Assurance team to determine if applications fit specification and technical requirements.
  • Produce technical designs and documentation at varying levels of granularity.
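The test-driven style called out in the responsibilities above can be sketched briefly. A minimal example in Python rather than the team's Java stack, with a hypothetical helper of the kind a channel-lineup API might need; the tests come first and pin down the behavior:

```python
def normalize_channel(name: str) -> str:
    """Normalize a channel name for lineup lookups (hypothetical helper).

    Collapses internal whitespace, trims the ends, and case-folds,
    so "  HBO " and "hbo" resolve to the same key.
    """
    return " ".join(name.split()).lower()

# Tests written first (runnable with pytest or plain asserts):
def test_normalize_channel():
    assert normalize_channel("  HBO ") == "hbo"
    assert normalize_channel("Comedy   Central") == "comedy central"

test_normalize_channel()
print("tests passed")
```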

Desired Qualifications

  • 2+ years of software development experience in Java with a solid understanding of the Spring and Hibernate frameworks and REST-based architecture.
  • Experience with Java application servers and J2EE containers (Tomcat).
  • Knowledge of object-oriented design methodology and standard software design patterns.
  • Firm grasp of testing methodologies and frameworks.
  • Experience in caching especially in HTTP compliant caches.
  • Fundamental understanding of the HTTP protocol.
  • Experience developing with major MVC frameworks (Spring MVC).
  • Familiarity with consolidating and normalizing data across many data sources, specifically Internet data aggregation and metadata processing.
  • Familiar with agile development methodologies such as Scrum.
  • Strong technical written and verbal communication skills.
  • A sense of ownership, initiative, and drive and a love of learning!
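The HTTP-caching experience asked for above comes down to details like Cache-Control directive precedence. A minimal sketch in Python (simplified: it ignores `Expires` and heuristic freshness, per the rules in RFC 9111):

```python
def freshness_lifetime(headers):
    """Return an HTTP response's freshness lifetime in seconds.

    Follows Cache-Control precedence for shared caches: s-maxage
    beats max-age; returns None when the response carries neither.
    Simplified sketch that ignores Expires and heuristic freshness.
    """
    directives = {}
    for part in headers.get("Cache-Control", "").split(","):
        part = part.strip()
        if "=" in part:
            k, _, v = part.partition("=")
            directives[k.lower()] = v
        elif part:
            directives[part.lower()] = None
    for key in ("s-maxage", "max-age"):
        if directives.get(key) is not None:
            try:
                return int(directives[key])
            except ValueError:
                pass
    return None

print(freshness_lifetime({"Cache-Control": "public, max-age=300"}))
```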

Additional Qualifications

  • UNIX background (Solaris/Linux)
  • CDN Experience is a plus
  • Familiarity with cloud computing (OpenStack, S3, SQS, Hadoop...).
  • Experience with Scala, Ruby on Rails, Akka

Education

Bachelor's degree in Engineering or Computer Science or a related field, or relevant work experience.

Comcast is an EOE/Veterans/Disabled/LGBT employer