OnlyDataJobs.com

RM Dayton Analytics
  • Dallas, TX
Job Summary & Responsibilities

RM Dayton Analytics' premier retail client has an immediate opening for a Web Analytics & Measurement Analyst.

Overview:

The Analytics & Measurement Specialist position will identify data-driven insights, inform test design, and measure test results for various pilot test initiatives.

Responsibilities:

  • Proposes, executes, and communicates discovery analytics to identify new tests or inform test design/sizing
  • Determines relevant metrics to effectively measure the performance of each test
  • Evaluates test performance against KPIs (including deep dives/segment cuts) according to the measurement playbook, helps apply learnings to future tests, and assists with scaling decisions
  • Assists in sizing the annualized impact of winning tests
  • Receives discovery analysis requests and turns them into concrete analysis plans
  • Performs relevant analytics to support test execution results and deep dives
  • Provides accurate and timely reporting on the entire portfolio of active test results

Requirements:

  • At least 2 years of experience in a business analytics role, preferably in testing or web analytics
  • Experience with A/B, multivariate, and/or incremental test design and implementation (see the sketch after this list)
  • Experience in data manipulation and analysis using SQL, SAS, R, or Python
  • Bachelor's degree in a quantitative discipline (Statistics, Applied Mathematics, Economics, etc.) or sufficient on-the-job use of related skills
  • Exceptional standards for quality and strong attention to detail
  • Experience working in an Agile/Scrum environment
  • Exposure to big data tools such as Hadoop/Hive a plus
  • Experience developing various types of predictive models with a targeted result of increasing revenue is a plus
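
For illustration, here is a minimal sketch of the kind of A/B readout this role involves, assuming a binary conversion KPI; the counts and the use of statsmodels' two-proportion z-test are illustrative assumptions, not part of the posting:

    # Hypothetical A/B readout: two-proportion z-test on a conversion KPI.
    from statsmodels.stats.proportion import proportions_ztest

    conversions = [1850, 2040]     # control, variant (made-up counts)
    visitors = [50000, 50200]

    stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
    lift = conversions[1] / visitors[1] - conversions[0] / visitors[0]
    print(f"absolute lift: {lift:.4%}, z = {stat:.2f}, p = {p_value:.4f}")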

Equal Opportunity Employer. All qualified applicants will receive consideration for employment and will not be discriminated against based on race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability, age, pregnancy, genetic information or any other consideration prohibited by law or contract.

JPMorgan Chase & Co.
  • Houston, TX
Our Corporate Technology team relies on smart, driven people like you to develop applications and provide tech support for all our corporate functions across our network. Your efforts will touch lives all over the financial spectrum and across all our divisions: Global Finance, Corporate Treasury, Risk Management, Human Resources, Compliance, Legal, and within the Corporate Administrative Office. You'll be part of a team specifically built to meet and exceed our evolving technology needs, as well as our technology controls agenda.
As a Machine Learning Engineer, you will provide high quality technology solutions that address business needs by developing applications within mature technology environments. You will utilize mature (3rd or 4th Generation) programming methodologies and languages and adhere to coding standards, procedures and techniques while contributing to the technical code documentation.
You will participate in project planning sessions with project managers, business analysts and team members to analyze business requirements and outline the proposed IT solution. You will participate in design reviews and provide input to the design recommendations; incorporate security requirements into design; and provide input to information/data flow, and understand and comply to Project Life Cycle Methodology in all planning steps. You will also adhere to IT Control Policies throughout design, development and testing and incorporate Corporate Architectural Standards into application design specifications.
Additionally, you will document the detailed application specifications, translate technical requirements into programmed application modules, and develop/enhance software application modules. You will participate in code reviews and ensure that all solutions are aligned to pre-defined architectural specifications; identify and troubleshoot application code-related issues; and review and provide feedback on the final user documentation.
Key Responsibilities
Provide leadership and direction for key machine learning initiatives in the Operational Risk domain
Act as a machine learning evangelist in the Operational Risk domain
Perform research and proofs of concept to determine ML/AI applicability to potential use cases
Mentor junior data scientists and team members new to machine learning
Display an efficient work style with attention to detail, organization, and a strong sense of urgency
Design software and produce scalable and resilient technical designs
Create automated unit tests using flexible/open-source frameworks
Digest and understand business requirements and design new modules/functionality that meet the needs of our business partners
Implement model reviews and a machine learning governance framework
Manage code quality for the total build effort
Participate in design reviews and provide input to the design recommendations
Essentials
  • Advanced degree in Math, Computer Science or another quantitative field
  • Three to five years working experience in machine learning, preferably natural language processing
  • Ability to work in a team as a self-directed contributor with a proven track record of being detail-oriented, innovative, creative, and strategic
  • Strong problem solving and data analytical skills
  • Industry experience building end-to-end Machine Learning systems leveraging Python (Scikit-Learn, Pandas, Numpy, Tensorflow, Keras, NLTK, Gensim et al.) or other similar languages
  • Experience with a variety of machine learning algorithms (classification, clustering, deep learning et al.) and natural language processing applications (topic modeling, sentiment analysis et al.)
  • Experience with NLP techniques to transform unstructured text data into structured data (lemmatization, stemming, bag-of-words, word embeddings et al.); see the sketch after this list
  • Experience visualizing/presenting data for stakeholders using Tableau, or open-source Python packages such as matplotlib, seaborn et al.
  • Familiarity with Hive/Impala to manipulate data and draw insights from Hadoop
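
As a small illustration of the text-to-features bullet above, a sketch using NLTK and scikit-learn; the two example sentences and feature choices are hypothetical, not part of the posting:

    # Toy example: unstructured text -> structured features via
    # lemmatization and a bag-of-words count matrix.
    import re
    import nltk
    from nltk.stem import WordNetLemmatizer
    from sklearn.feature_extraction.text import CountVectorizer

    nltk.download("wordnet", quiet=True)  # lexical database for the lemmatizer

    docs = ["The trades were settled late.",
            "Late settlement of a trade raises operational risk."]

    lemmatizer = WordNetLemmatizer()
    def lemmatize(text):
        # Lowercase, tokenize on alphabetic runs, reduce tokens to lemmas.
        return " ".join(lemmatizer.lemmatize(tok)
                        for tok in re.findall(r"[a-z]+", text.lower()))

    vectorizer = CountVectorizer()  # bag-of-words
    X = vectorizer.fit_transform(lemmatize(d) for d in docs)
    print(vectorizer.get_feature_names_out())  # learned vocabulary
    print(X.toarray())                         # rows: docs, cols: term counts
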
Personal Specification
Demonstrate Continual Improvement in terms of Individual Performance
Strong communication skills
Bright and enthusiastic, self-directed
Excellent analytical and problem solving skills
When you work at JPMorgan Chase & Co., you're not just working at a global financial institution. You're an integral part of one of the world's biggest tech companies. In 14 technology hubs worldwide, our team of 40,000+ technologists design, build and deploy everything from enterprise technology initiatives to big data and mobile solutions, as well as innovations in electronic payments, cybersecurity, machine learning, and cloud development. Our $9.5B+ annual investment in technology enables us to hire people to create innovative solutions that will not only transform the financial services industry, but also change the world.
Varen Technologies
  • Annapolis Junction, MD
  • Salary: $90k - 180k

Varen Technologies is seeking an experienced and flexible Cloud Software Engineer to augment the existing platform team for a large analytic cloud repository. A successful candidate for this position has experience working with large Hadoop and Accumulo based clusters and a familiarity with open-source technologies. Additional knowledge of Linux OS development, Prometheus, Grafana, Kafka and CentOS would benefit the candidate.


The Platform Team developers build/package/patch the components (typically open-source products) and put initial monitoring in place to ensure the component is up and running, if applicable. The platform team builds subject matter expertise and the integration team installs. Ideal candidates would have familiarity with open-source products and be willing/able to learn new technologies.


REQUIRED EXPERIENCE:



  • 5 years of experience in Software Engineering

  • 4 years of experience developing software with high level languages such as Java, C, C++

  • Demonstrated experience working with open-source (NoSQL) products that support highly distributed, massively parallel computation needs such as HBase, CloudBase/Accumulo, Big Table, etc.

  • Demonstrated work experience with the MapReduce programming model and technologies such as Hadoop, Hive, Pig, etc.
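
As an aside, the MapReduce programming model that bullet refers to can be simulated in a few lines; this toy word count (pure Python, made-up input) shows the map, shuffle, and reduce phases:

    # Word count in the MapReduce style: map emits (word, 1) pairs,
    # the shuffle groups pairs by key, and reduce sums each group.
    from itertools import groupby
    from operator import itemgetter

    def map_phase(lines):
        for line in lines:
            for word in line.lower().split():
                yield word, 1

    def reduce_phase(pairs):
        shuffled = sorted(pairs, key=itemgetter(0))  # group identical keys
        for word, group in groupby(shuffled, key=itemgetter(0)):
            yield word, sum(count for _, count in group)

    lines = ["big data big clusters", "data pipelines"]
    print(dict(reduce_phase(map_phase(lines))))
    # {'big': 2, 'clusters': 1, 'data': 2, 'pipelines': 1}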


DESIRED EXPERIENCE:



  • Hadoop/Cloud Developer Certification

  • Experience developing and deploying analytics within a heterogeneous schema environment.

  • Experience designing and developing automated analytic software, techniques, and algorithms.


CLEARANCE REQUIREMENT:



  • TS/SCI w/ Polygraph

Audible
  • Newark, NJ

Provide technical leadership for a team of machine learning engineers and data scientists tasked with empowering Audible's customer experience through model-driven insights

Be the technical expert driving the exploration, selection and implementation of tools and enablers to empower anyone at Audible engaged in data science

Define technical strategy and architecture across Data Engineering, Data Science and Business Intelligence as well as Audible Client Facing platforms

Define iterative and incremental technical strategy to implement a Data Science platform to enable agility, speed and leverage, as well as operational excellence

Define practical and maintainable architectures that can handle large-scale datasets, based on detailed knowledge of relevant technologies in the big data engineering space as it relates to the support of data science

Define technical integration points with Big Data, Business Intelligence and our client facing platforms; resolve complex cross stream/group system architecture and business problems

Hands-on, deep technical expertise: deliver key aspects of the architecture through implementation of proofs of concept, and/or key components of the overall architecture that serve as examples or the foundation for implementation

Lead a team of machine learning engineers as they use tools and develop processes to efficiently train, productionize, monitor and maintain data models at scale

Partner with technical managers whose teams will deliver the strategy and architecture incrementally; hands on mentorship and influence of engineers through design and code reviews

Influence business strategy based on emerging technology capabilities and trends; Provide advice to senior managers across the marketing, product, data science and engineering disciplines




BASIC QUALIFICATIONS


· Bachelor's degree in Computer Science
· 7+ years of experience in software development and deployment of distributed multi-tier applications with high-throughput requirements




PREFERRED QUALIFICATIONS


· Demonstrated expertise in a variety of Big Data and Data Science technologies and platforms such as AWS Redshift, Glue, EMR, SageMaker, as well as Hadoop, Spark, and Jupyter Notebooks
· Demonstrated experience using machine learning engineering tools and practices to productionize, train, deploy, monitor and maintain data models at scale using libraries such as MLlib, scikit-learn and TensorFlow
· Action-oriented strategic thinker
· Be able to thrive in an ambiguous environment - where change is the only constant
· Detail-oriented, ensuring that project success is paramount
· Strong verbal and written communication skills
· Strong analytical skills and an out of the box thinker
· Self-starter with the ability to multi-task and work in a very fast paced environment
· Track record of defining and delivering cross functional solutions that are innovative and extensible
· Be able to disagree, yet align, when dealing with different stakeholders
· Results oriented and with a strong customer focus
· Highly autonomous. Delivers with little guidance
· Strong mentor of peers and subordinates

Audible is an Equal Opportunity Employer Minority / Women / Disability / Veteran / Gender Identity / Sexual Orientation / Age

Audible
  • Newark, NJ

ABOUT THE TEAM:

Our Data Technology team owns and develops the technology platform that offers decision makers both performance metrics and analysis and the self-service capability to perform independent analysis on a wide array of internal and external datasets, in order to identify opportunities, trends, and issues, uncover new insights, and fine-tune operations to meet business goals.

In this role, you will be instrumental in revolutionizing the way Audible captures high-quality, reusable data from across our landscape of applications and services. You will partner with other software engineering teams to improve data availability and data instrumentation, and help positively transform the value of data across Audible by unlocking data so that it is faster, more available, more robust, and more of an actively useful asset for Audible teams to leverage.

KEY RESPONSIBILITIES
· Play a leading role in building and maintaining the infrastructure for Enterprise Data Platforms, using software engineering best practices, data management fundamentals, data storage principles, recent advances in distributed systems and data streaming, and operational excellence best practices.
· Work closely with product owners and engineers across the company to instrument key data elements
· Design, build, and support platforms for monitoring and surfacing data quality issues
· Integrate different technologies to provide data lineage and visibility
· Effectively communicate with various teams and stakeholders, escalate technical and managerial issues at the right time and resolve conflicts.
· Demonstrate passion for quality and productivity by use of efficient development techniques, standards and guidelines
· Peer-review others' work, and actively mentor more junior members of the team, improving their skills, their knowledge of our systems and their ability to get things done
HOW DOES AMAZON FIT IN?

We're a part of Amazon: they are our parent company and it's a great partnership. You'll get to play with Amazon's technologies, but it doesn't stop there. Audible is built on a strong foundation of Amazon technology and you'll have insight into the inner workings of the world's leading ecommerce experience. There's a LOT to learn! Your career will benefit from working with teams like Alexa, Search, Kindle, A9, P13N and many more.

If you want to own and solve problems, work with a creative dynamic team, fail fast in a supportive environment whilst growing your career and working on a platform that powers web applications used by millions of customers worldwide we want to hear from you.




BASIC QUALIFICATIONS


· Bachelor's degree or higher in Computer Science or a related field
· 5+ years of professional software development experience
· Experience with a variety of modern programming languages (Java, JavaScript, C/C++) and open-source technologies (Linux, Spring, SOA)




PREFERRED QUALIFICATIONS


· Strong problem-solving skills with the ability to navigate highly complex and ambiguous situations
· Ability to work independently with little supervision and successfully resolve ambiguity
· Willingness to learn, be open minded to new ideas and different opinions yet knowing when to stop, analyze, and reach a decision
· Well-rounded engineering skills; full-stack development experience - web + services - If you've built something in your spare time send us the link, we'd love to hear about it
· Great communication skills - ability to think creatively and adapt the message to the audience. Can provide information to technical and non-technical stakeholders alike and guide them to confidently informed decisions
· Strong data-oriented skills with knowledge of Core Data and database design
· Prior use of AWS technologies at scale in a production environment
· Familiarity with big data technologies (Hadoop, Hive, Spark, etc.)
· Experience working with Agile methodologies

Audible is an Equal Opportunity Employer Minority / Women / Disability / Veteran / Gender Identity / Sexual Orientation / Age

Indeed
  • Tokyo, Japan

Your job.



The role of Data Science at Indeed is to follow the data. We log, analyze, visualize, and model terabytes of job search data. Our Data Scientists build and implement machine learning models to make timely decisions. Each of us is a mixture of a statistician, scientist, machine learning expert, and engineer. We have a passion for building and improving Internet-scale products. We seek to understand human behavior via careful experimentation and analysis, to “help people get jobs”.

You're someone who wants to see the impact of your work making a difference every day. You understand how to use data to make decisions and how to train others to do so. You find passion in the craft and are constantly seeking improvement and better ways to solve tough problems. You produce the highest quality Data Science solutions and lead others to do the same.


You understand that the best managers serve their teams by removing roadblocks and giving individual contributors autonomy and ownership. You have high standards, take pride in Indeed as we do, and push us to be better. You have delivered challenging technical solutions at scale. You have led Data Science or engineering teams and earned the respect of talented practitioners. You are equally happy talking about deep learning and statistical inference as you are brainstorming about practical experimental design and technology career development. You love being in the mix technically while providing leadership to your teams.


About you.


Requirements   



  • Significant prior success as a Data Scientist working on challenging problems at scale

  • 5+ years of industrial Data Science experience, with expertise in machine learning and statistical modeling

  • The ability to guide a team to achieve important goals together

  • Have full stack experience in data collection, aggregation, analysis, visualization, productionization, and monitoring of Data Science products

  • Strong desire to solve tough problems with scientific rigour at scale

  • An understanding of the value derived from getting results early and iterating

  • Strong ability to coach Data Scientists, helping them improve their skills and grow their careers

  • Ph.D. or M.S. in a quantitative field such as Computer Science, Operations Research, Statistics, Econometrics or Mathematics

  • Passion to answer Product/Engineering questions with data

  • Proficiency with the English language 


We get excited about candidates who



  • Can do small data modeling work: R, Python, Julia, Octave

  • Can do big data modeling work: Hadoop, Pig, Scala, Spark

  • Can fish for data: SQL, Pandas, MongoDB

  • Can deploy Data Science solutions: Java, Python, C++

  • Can communicate concisely and persuasively with engineers and product managers



Indeed provides a variety of benefits that help us focus on our mission of helping people get jobs.


View our bounty of perks: http://indeedhi.re/IndeedBenefits  

IBM
  • Austin, TX

IBM Global Business Services: Join a Leader. Consult with us. IBM Global Business Services helps top-tier clients solve their most complex business and technical issues. As the Advanced Analytics Leader, you will deliver innovative business consulting, business process design, systems integration, and application design and management to leading commercial and public-sector organizations in 17 industries worldwide. With access to resources that only a global leader can provide, as a consultant you will learn valuable skills, gain access to a vast and diverse network of talented professionals, and enjoy unparalleled career, training, and educational opportunities.
 

As the Advanced Analytics Leader, you will have an analytics background with in-depth knowledge of SAP HANA, Big Data/Hadoop, and machine learning. Responsibilities include delivery on consulting engagements, sales activities, and thought leadership. You will also have strong leadership acumen, the ability to operate in positions requiring significant self-direction and motivation, and a proven track record of consultative selling of analytics solutions to senior business and IT leaders. You will be empowered to manage multiple priorities, be capable of developing strong relationships at assigned accounts, and must be able to:
 

  • Lead and manage data science projects. Support machine learning offerings and be a thought leader in machine learning initiatives across analytics
  • Have a proven track record of drawing insights from data and translating those insights into tangible business outcomes
  • Ability to implement new technologies with cutting-edge machine learning and statistical modeling techniques
  • Establish and maintain deal focused trusted relationships with clients and partners to scope, solution, propose, close and deliver complex projects
  • Identify, validate, and qualify opportunities and help close them on an as-needed basis. Maintain a strong pipeline of opportunities and progress them during the quarter
  • Recruit, motivate, mentor and develop team members

Bottom Line? We outthink ordinary. Discover what you can do at IBM.


Required Professional and Technical Expertise :

  • At least 5 years of experience in professional services consulting and sales at a national or global management consulting firm
  • At least 3 years of experience leading and delivering SAP HANA solutions: calculation view modeling, PAL, SQL scripting, and performance tuning
  • At least 3 years of experience working on full life cycle implementation projects with SAP
  • Strong understanding of SAP data (ERP/CRM/APO) across the SD, MM, PP, and FI modules and the O2C, R2R, and P2P SAP processes
  • At least 2 years of experience working on data science projects with a variety of machine learning and data mining techniques (clustering, decision tree learning, GLM, Bayesian modeling, artificial neural networks, etc.)
  • Expertise using statistical computing languages (R, Python, HANA PAL, etc.) to manipulate data and draw insights from large data sets
  • Knowledge of working with HANA Studio/Web IDE with calculation views, stored procedures, flowgraphs, and XS applications

Preferred Professional and Technical Expertise :

  • At least 5 years of experience applying predictive analytics methodologies in a commercial environment
  • At least 5 years of experience in professional SAP services consulting and sales at a national or global management consulting firm
  • At least 5 years of experience working with SAP HANA solutions in areas of calculation view modeling, PAL, SQL scripting, and performance tuning
  • At least 5 years of experience working on full life cycle implementation projects with SAP
  • Strong understanding of SAP data (ERP/CRM/APO) across the SD, MM, PP, and FI modules and the O2C, R2R, and P2P SAP processes
  • At least 5 years of deployment experience on data science projects with a variety of machine learning and data mining techniques (clustering, decision tree learning, artificial neural networks, etc.)
  • Master's degree in Mathematics, Statistics, Computer Science, or a similar field

BENEFITS
Health Insurance. Paid time off. Corporate Holidays. Sick leave. Family planning. Financial Guidance. Competitive 401K. Training and Learning. We continue to expand our benefits and programs, offering some of the best support, guidance and coverage for a diverse employee population.
  • http://www-01.ibm.com/employment/us/benefits/
  • https://www-03.ibm.com/press/us/en/pressrelease/50744.wss
 
CAREER GROWTH
Our goal is to be essential to the world, which starts with our people. Company-wide, we kicked off an internal talent strategy program called Go Organic. At our core, we are committed to believing and investing in our workforce through:
 
  • Skill development: helping our employees grow their foundational skills
  • Finding the dream job at IBM: navigating our company with the potential for many careers by channeling an employee's strengths and career aspirations
  • Diversity of people: Diversity of thought driving collective innovation
 
In 2015, Go Organic filled approximately 50% of our open positions with internal talent promoted into the role.


CORPORATE CITIZENSHIP
With an employee population of 375,000 in over 170 countries, amazingly we connect, collaborate, and care. IBMers drive a corporate culture of shared responsibility. We love grand challenges and everyday improvements for our company and for the world. We care about each other, our clients, and the communities we live, work, and play in!
  • http://www.ibm.com/ibm/responsibility/initiatives.html
  • http://www.ibm.com/ibm/responsibility/corporateservicecorps
State Farm
  • Dallas, TX

WHAT ARE THE DUTIES AND RESPONSIBILITIES OF THIS POSITION?

  • Performs improved visual representation of data to allow clearer communication, viewer engagement, and faster/better decision-making
  • Investigates, recommends, and initiates acquisition of new data resources from internal and external sources
  • Works with IT teams to support data collection, integration, and retention requirements based on business need
  • Identifies critical and emerging technologies that will support and extend quantitative analytic capabilities
  • Manages work efforts which require the use of sophisticated project planning techniques
  • Applies a wide application of complex principles, theories, and concepts in a specific field to provide solutions to a wide range of difficult problems
  • Develops and maintains an effective network of both scientific and business contacts/knowledge, obtaining relevant information and intelligence around the market and emergent opportunities
  • Contributes data to State Farm's internal and external publications, writes articles for leading journals, and participates in academic and industry conferences
  • Collaborates with business subject matter experts to select relevant sources of information
  • Develops breadth of knowledge in programming (R, Python); descriptive, inferential, and experimental design statistics; advanced mathematics; and database functionality (SQL, Hadoop)
  • Develops expertise with multiple machine learning algorithms and data science techniques, such as exploratory data analysis, generative and discriminative predictive modeling, graph theory, recommender systems, text analytics, computer vision, deep learning, optimization, and validation
  • Develops expertise with State Farm datasets, data repositories, and data movement processes
  • Assists on projects/requests and may lead specific tasks within the project scope
  • Prepares and manipulates data for use in development of statistical models
  • Develops fundamental understanding of insurance and financial services operations and uses this knowledge in decision making


Additional Details:


For over 95 years, data has been key to State Farm.  As a member of our data science team with the Enterprise Data & Analytics department under our Chief Data & Analytics Officer, you will work across the organization to solve business problems and help achieve business strategies.  You will employ sophisticated, statistical approaches and state of the art technology.  You will build and refine our tools/techniques and engage w/internal stakeholders across the organization to improve our products & services.


Implementing solutions is critical for success. You will do problem identification, solution proposal & presentation to a wide variety of management & technical audiences. This challenging career requires you to work on multiple concurrent projects in a community setting, developing yourself and others, and advancing data science both at State Farm and externally.


Skills & Professional Experience

·        Develop hypotheses, design experiments, and test feasibility of proposed actions to determine probable outcomes using a variety of tools & technologies

·        Master's or other advanced degree, or five years' experience in an analytical field such as data science, quantitative marketing, statistics, operations research, management science, industrial engineering, economics, etc., or equivalent practical experience preferred.

·        Experience with SQL, Python, R, Java, SAS, MapReduce, or Spark

·        Experience with unstructured data sets: text analytics, image recognition etc.

·        Experience working w/numerous large data sets/data warehouses & ability to pull from such data sets using relevant programs & coding including files, RDBMS & Hadoop based storage systems

·        Knowledge of machine learning methods including at least one of the following: time series analysis, hierarchical Bayes, or learning techniques such as decision trees, boosting, and random forests (a toy comparison follows after this list).

·        Excellent communication skills and the ability to manage multiple diverse stakeholders across businesses & leadership levels.

·        Exercise sound judgment to diagnose & resolve problems within area of expertise

·        Familiarity with CI/CD development methods, Git and Docker a plus
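
To make the learning-techniques bullet concrete, a minimal sketch comparing the named methods on synthetic data; the dataset and parameters are invented for illustration and are not State Farm's:

    # Hypothetical comparison of decision trees, boosting, and random forests.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    for model in (DecisionTreeClassifier(max_depth=5),
                  GradientBoostingClassifier(),          # boosting
                  RandomForestClassifier(n_estimators=200)):
        score = cross_val_score(model, X, y, cv=5).mean()  # 5-fold accuracy
        print(type(model).__name__, round(score, 3))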


Multiple location opportunity. Locations offered are: Atlanta, GA, Bloomington, IL, Dallas, TX and Phoenix, AZ


Remote work option is not available.


There is no sponsorship for an employment visa for the position at this time.


Competencies desired:
Critical Thinking
Leadership
Initiative
Resourcefulness
Relationship Building
Blue Chip Talent
  • Ann Arbor, MI

Summary:
This position is part of the business intelligence team and will be responsible for projects including BI and Analytics, with ample opportunity to innovate by incorporating advanced capabilities for descriptive and predictive insights into BI deliverables. As a dynamic and effective BI team member, you will liaise with the business across all domains to understand their growing analytic data needs and develop and operationalize solutions with business impact. The ideal candidate will leverage their knowledge of business intelligence tools, statistical and data mining techniques, data warehousing, and SQL to find innovative solutions utilizing data from multiple sources. We are looking for a strong team member who can communicate with the business as well as IT.

GENERAL RESPONSIBILITIES

  • Drive the utilization of new data sources for impactful business insights
  • Condense large data sets into clear and concise observations and recommendations
  • Design and develop BI/Analytics solutions to facilitate the end-user experience and impact the business
  • Generate new ideas and execute on solutions
  • Demonstrate expert knowledge of at least one analytics tool
  • Understand and apply advanced techniques for data extraction, cleaning and preparation tasks
  • Understand dimensional modeling and data warehouse concepts
  • Able to clearly articulate findings and answers through effective data visualization approaches
  • Work with stakeholders to determine analytics data requirements and implement solutions to provide actionable business insight
  • Serve as a mentor for other team members
  • Perform BI on call duties
  • Regularly share best practices to help develop others
  • Able to effectively communicate with multiple stakeholders across the organization
  • Able to work in a team-oriented, collaborative, fast-paced and diverse environment
REQUIREMENTS

  • 5+ years of experience implementing and supporting BI/Analytics solutions
  • 5+ years of experience with databases and querying languages, particularly SQL
  • 3+ years of experience with statistical tools such as R and SAS
  • 5+ years of experience delivering BI/DW solutions within an Agile construct
  • Experience working with large data sets; experience with distributed computing tools (Hadoop, MapReduce, Hive, Spark) and other emerging technologies is highly desired
  • Familiarity with statistical methods used in descriptive and inferential statistics
  • Knowledge of statistical modeling and machine learning is preferable
  • Fluency with a programming language such as Python
Perficient, Inc.
  • Phoenix, AZ
At Perficient you'll deliver mission-critical technology and business solutions to Fortune 500 companies and some of the most recognized brands on the planet. And you'll do it with cutting-edge technologies, thanks to our close partnerships with the world's biggest vendors. Our network of offices across North America, as well as locations in India and China, will give you the opportunity to spread your wings, too.
We're proud to be publicly recognized as a Top Workplace year after year. This is due, in no small part, to our entrepreneurial attitude and collaborative spirit that sets us apart and keeps our colleagues impassioned, driven, and fulfilled.
Perficient Data Solutions is looking for an experienced Hadoop Administrator with experience administering Cloudera on AWS. This position is located in Boston; however, the candidate can be located in any well-connected city. Perficient is on a mission to help enterprises take advantage of modern data and analytics architectures, tools, and patterns to improve business operations and better engage customers. This is an excellent opportunity for the right individual to assist Perficient and its customers to grow the capabilities necessary to improve care through better use of data and information, and in the process take their career to the next level.
Job Overview
The Hadoop System Administrator (SA) is responsible for effective provisioning, installation/configuration, operation, and maintenance of systems hardware and software and related infrastructure to enable Hadoop and analytics on Big Data. This individual participates in technical research and development to enable continuing innovation within the infrastructure. This individual ensures that system hardware, operating systems, software systems, and related procedures adhere to organizational values, enabling staff, volunteers, and Partners.
This individual will assist project teams with technical issues in the Initiation and Planning phases of our standard Project Management Methodology. These activities include the definition of needs, benefits, and technical strategy; research & development within the project life-cycle; technical analysis and design; and support of operations staff in executing, testing and rolling-out the solutions. Participation on projects is focused on smoothing the transition of projects from development staff to production staff by performing operations activities within the project life-cycle.
This individual is accountable for the following systems: Linux and Windows systems that support GIS infrastructure; Linux, Windows and Application systems that support Asset Management; Responsibilities on these systems include SA engineering and provisioning, operations and support, maintenance and research and development to ensure continual innovation.
Responsibilities
  • Provide end-to-end vision and hands-on experience with the Cloudera and AWS platforms, especially best practices around Hive and HBase
  • Experience automating common administrative tasks in Cloudera and AWS (see the sketch after this list)
  • Troubleshoot and develop on Hadoop technologies including HDFS, Kafka, Hive, Pig, Flume, HBase, Spark, Impala, and Hadoop ETL development via tools such as ODI for Big Data and APIs to extract data from source systems. Troubleshooting for AWS technologies like EMR, EC2, S3, CloudFormation, etc.
  • Translate, load and present disparate data-sets in multiple formats and from multiple sources including JSON, Avro, text files, Kafka queues, and log data.
  • Administration of Cloudera clusters on AWS services, security, scalability, configuration and availability and access
  • Lead workshops with many teams to define data ingestion, validation, transformation, data engineering, and Data Modeling
  • Performance-tune Hive and HBase jobs
  • Design and develop open source platform components using Spark, Sqoop, Java, Oozie, Kafka, Python, and other components is a plus
  • Lead capacity planning & requirements gathering phases including estimate, develop, test, manage projects, architect and deliver complex projects
  • Participate and lead in design sessions, demos and prototype sessions, testing and training workshops with business users and other IT associates
  • Contribute to the thought capital through the creation of executive presentations, architecture documents and articulate them to executives through presentations
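To make the automation bullet above concrete, a minimal sketch using boto3 against EMR; the region, tagging policy, and alerting step are illustrative assumptions, not Perficient requirements:

    # Hypothetical admin automation: flag running EMR clusters that are
    # missing a required "owner" tag. Assumes AWS credentials are configured.
    import boto3

    emr = boto3.client("emr", region_name="us-east-1")
    resp = emr.list_clusters(ClusterStates=["RUNNING", "WAITING"])
    for summary in resp["Clusters"]:
        cluster = emr.describe_cluster(ClusterId=summary["Id"])["Cluster"]
        tags = {t["Key"]: t["Value"] for t in cluster.get("Tags", [])}
        if "owner" not in tags:  # hypothetical tagging policy
            print(f"cluster {summary['Name']} ({summary['Id']}) lacks an owner tag")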
Qualifications
  • 3 Plus years of Hadoop Administration
  • Cloudera and AWS certifications are strongly desired.
  • Bachelor's degree, with a technical major, such as engineering or computer science.
  • Four to six years of Linux/Unix system administration experience.
  • Ability to travel up to 50 percent, preferred.
Perficient full-time employees receive complete and competitive benefits. We offer a collaborative work environment, competitive compensation, generous work/life opportunities and an outstanding benefits package that includes paid time off plus holidays. In addition, all colleagues are eligible for a number of rewards and recognition programs including billable bonus opportunities. Encouraging a healthy work/life balance and providing our colleagues great benefits are just part of what makes Perficient a great place to work.
More About Perficient
Perficient is the leading digital transformation consulting firm serving Global 2000 and enterprise customers throughout North America. With unparalleled information technology, management consulting and creative capabilities, Perficient and its Perficient Digital agency deliver vision, execution and value with outstanding digital experience, business optimization and industry solutions.
Our work enables clients to improve productivity and competitiveness; grow and strengthen relationships with customers, suppliers and partners; and reduce costs. Perficient's professionals serve clients from a network of offices across North America and offshore locations in India and China. Traded on the Nasdaq Global Select Market, Perficient is a member of the Russell 2000 index and the S&P SmallCap 600 index.
Perficient is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law.
Disclaimer: The above statements are not intended to be a complete statement of job content, rather to act as a guide to the essential functions performed by the employee assigned to this classification. Management retains the discretion to add or change the duties of the position at any time.
Work authorization questions:
  • Are you legally authorized to work in the United States?
  • Will you now, or in the future, require sponsorship for employment visa status (e.g. H-1B visa status)?
Perficient, Inc.
  • Detroit, MI
At Perficient you'll deliver mission-critical technology and business solutions to Fortune 500 companies and some of the most recognized brands on the planet. And you'll do it with cutting-edge technologies, thanks to our close partnerships with the world's biggest vendors. Our network of offices across North America, as well as locations in India and China, will give you the opportunity to spread your wings, too.
We're proud to be publicly recognized as a Top Workplace year after year. This is due, in no small part, to our entrepreneurial attitude and collaborative spirit that sets us apart and keeps our colleagues impassioned, driven, and fulfilled.
Perficient Data Solutions is looking for an experienced Hadoop Administrator with experience administering Cloudera on AWS. This position is located in Boston; however, the candidate can be located in any well-connected city. Perficient is on a mission to help enterprises take advantage of modern data and analytics architectures, tools, and patterns to improve business operations and better engage customers. This is an excellent opportunity for the right individual to assist Perficient and its customers to grow the capabilities necessary to improve care through better use of data and information, and in the process take their career to the next level.
Job Overview
The Hadoop System Administrator (SA) is responsible for effective provisioning, installation/configuration, operation, and maintenance of systems hardware and software and related infrastructure to enable Hadoop and analytics on Big Data. This individual participates in technical research and development to enable continuing innovation within the infrastructure. This individual ensures that system hardware, operating systems, software systems, and related procedures adhere to organizational values, enabling staff, volunteers, and Partners.
This individual will assist project teams with technical issues in the Initiation and Planning phases of our standard Project Management Methodology. These activities include the definition of needs, benefits, and technical strategy; research & development within the project life-cycle; technical analysis and design; and support of operations staff in executing, testing and rolling-out the solutions. Participation on projects is focused on smoothing the transition of projects from development staff to production staff by performing operations activities within the project life-cycle.
This individual is accountable for the following systems: Linux and Windows systems that support GIS infrastructure; Linux, Windows and Application systems that support Asset Management; Responsibilities on these systems include SA engineering and provisioning, operations and support, maintenance and research and development to ensure continual innovation.
Responsibilities
  • Provide end-to-end vision and hands-on experience with the Cloudera and AWS platforms, especially best practices around Hive and HBase
  • Experience automating common administrative tasks in Cloudera and AWS
  • Troubleshoot and develop on Hadoop technologies including HDFS, Kafka, Hive, Pig, Flume, HBase, Spark, Impala, and Hadoop ETL development via tools such as ODI for Big Data and APIs to extract data from source systems. Troubleshooting for AWS technologies like EMR, EC2, S3, CloudFormation, etc.
  • Translate, load and present disparate data-sets in multiple formats and from multiple sources including JSON, Avro, text files, Kafka queues, and log data.
  • Administration of Cloudera clusters on AWS services, security, scalability, configuration and availability and access
  • Lead workshops with many teams to define data ingestion, validation, transformation, data engineering, and Data Modeling
  • Performance-tune Hive and HBase jobs
  • Design and develop open source platform components using Spark, Sqoop, Java, Oozie, Kafka, Python, and other components is a plus
  • Lead capacity planning & requirements gathering phases including estimate, develop, test, manage projects, architect and deliver complex projects
  • Participate and lead in design sessions, demos and prototype sessions, testing and training workshops with business users and other IT associates
  • Contribute to the thought capital through the creation of executive presentations, architecture documents and articulate them to executives through presentations
Qualifications
  • 3 Plus years of Hadoop Administration
  • Cloudera and AWS certifications are strongly desired.
  • Bachelor's degree, with a technical major, such as engineering or computer science.
  • Four to six years of Linux/Unix system administration experience.
  • Ability to travel up to 50 percent, preferred.
Perficient full-time employees receive complete and competitive benefits. We offer a collaborative work environment, competitive compensation, generous work/life opportunities and an outstanding benefits package that includes paid time off plus holidays. In addition, all colleagues are eligible for a number of rewards and recognition programs including billable bonus opportunities. Encouraging a healthy work/life balance and providing our colleagues great benefits are just part of what makes Perficient a great place to work.
More About Perficient
Perficient is the leading digital transformation consulting firm serving Global 2000 and enterprise customers throughout North America. With unparalleled information technology, management consulting and creative capabilities, Perficient and its Perficient Digital agency deliver vision, execution and value with outstanding digital experience, business optimization and industry solutions.
Our work enables clients to improve productivity and competitiveness; grow and strengthen relationships with customers, suppliers and partners; and reduce costs. Perficient's professionals serve clients from a network of offices across North America and offshore locations in India and China. Traded on the Nasdaq Global Select Market, Perficient is a member of the Russell 2000 index and the S&P SmallCap 600 index.
Perficient is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law.
Disclaimer: The above statements are not intended to be a complete statement of job content, rather to act as a guide to the essential functions performed by the employee assigned to this classification. Management retains the discretion to add or change the duties of the position at any time.
Work authorization questions:
  • Are you legally authorized to work in the United States?
  • Will you now, or in the future, require sponsorship for employment visa status (e.g. H-1B visa status)?
Tinamica, S.L.
  • Madrid, Spain

Do you want to work at an innovative, dynamic company specializing in Big Data, Analytics, Artificial Intelligence, and BI?

Don't miss what WE OFFER:

- Innovative projects where you can contribute and develop your skills.

- Professional stability.

- Free training at one of the best institutions in Madrid.

- Working with great professionals who share your passion for technology.

- Working with first-rate clients across different sectors.

- An unbeatable work environment.

- An attractive salary, reviewed as your career develops.

- A mentoring program to optimize the acquisition of skills and knowledge.

- A permanent contract.

- Staying up to date with the latest in Smart Data (event attendance, meetups, etc.).

- Free health insurance.

- 26 vacation days per year.

- A salary package adaptable to your needs (childcare vouchers, restaurant tickets, etc.).

- Modern, fun, and comfortable facilities (foosball/darts, breakfasts...).

WE ARE HIRING DEVELOPERS specializing in BIG DATA (Scala, Spark, Flume, HBase, Kafka...) with at least 2 years of experience on real projects and hands-on use of the following technology stack and tools:

- Hadoop ecosystem platforms (HortonWorks/Cloudera).

- Development/programming in Scala.

- Apache Spark/Storm computing systems.

- In-memory databases: NoSQL databases, streaming programming.

- Distributed real-time data processing.

- Google components.

- A plus: experience with Agile methodologies (Scrum).

If you like what you're reading and you're passionate about this sector, eager to keep growing and developing in the world of Big Data, Analytics, and BI, this is your opportunity. Don't hesitate: join our team!

Coolblue
  • Rotterdam, Netherlands
As an Advanced Data Analyst / Data Scientist you use the data of millions of visitors to help Coolblue act smarter.

Pros and cons

  • You're going to be working as a true Data Scientist: one who understands why you get the results that you do and applies this information to other experiments.
  • Youre able to use the right tools for every job.
  • Your job starts with a problem and ends with you monitoring your own solution.
  • You have to crawl underneath the foosball table when you lose a game.

Description Data Scientist

Your challenge in this sprint is improving the weekly sales forecasting models for the Christmas period. Your cross-validation strategy is ready, but before you can begin, you have to query the data from our systems and process it in a way that allows you to view the situation with clarity.

First, you have a meeting with Matthias, who's worked on this problem before. During your meeting, you conclude that Christmas has a non-linear effect on sales. That's why you decide to experiment with a multiplicative XGBoost model in addition to your regularised-regression model. You make a grid with various features and parameters for both models and analyze the effects of both approaches. You notice your regression is overfitting, which means XGBoost isn't performing and the forecast isn't high enough, so you increase the regularization and assign the Christmas features to XGBoost alone.

Nice! You improved the precision of the Christmas forecast by an average of 2%. This will only yield results once the algorithm has been implemented, so you start thinking about how you want to implement it.
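
As a sketch of the experiment this scenario describes, assuming scikit-learn and xgboost are available; the synthetic data, features, and parameter grids are invented for illustration:

    # Hypothetical grid experiment: regularised regression (Ridge) vs.
    # XGBoost on weekly sales with a Christmas indicator feature.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
    from xgboost import XGBRegressor

    rng = np.random.default_rng(0)
    weeks = np.arange(156)                            # three years of weeks
    is_christmas = (weeks % 52 >= 50).astype(float)   # last two weeks of each year
    X = np.column_stack([weeks, is_christmas])
    y = 100 + 0.5 * weeks + 80 * is_christmas + rng.normal(0, 5, len(weeks))

    cv = TimeSeriesSplit(n_splits=5)                  # respects time order
    models = {
        "ridge": GridSearchCV(Ridge(), {"alpha": [0.1, 1.0, 10.0]},
                              cv=cv, scoring="neg_mean_absolute_error"),
        "xgboost": GridSearchCV(XGBRegressor(n_estimators=200),
                                {"max_depth": [2, 3], "reg_lambda": [1.0, 10.0]},
                                cv=cv, scoring="neg_mean_absolute_error"),
    }
    for name, grid in models.items():
        grid.fit(X, y)
        print(name, grid.best_params_, "MAE:", round(-grid.best_score_, 2))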

Your specifications

  • You have at least 4 years of experience in a similar function.
  • You have a university degree, MSc, or PhD in Mathematics, Computer Science, or Statistics.
  • You have experience with Machine Learning techniques, such as Gradient Boosting, Random Forest, and Neural Networks, and you have proven experience with successfully applying these (or similar) techniques in a business environment.
  • You have some experience with Data mining, SQL, BigQuery, NoSQL, R, and monitoring.
  • You're highly knowledgeable about Python.
  • You have experience with Big Data technologies, such as Spark and Hadoop.

Included by default.

  • Money.
  • Travel allowance and a retirement plan.
  • 25 leave days. As long as you promise to come back.
  • A discount on all our products.
  • A picture-perfect office at a great location. You could crawl to work from Rotterdam Central Station. Though we recommend just walking for 2 minutes.
  • A horizontal organisation in the broadest sense. You could just go and have a beer with the boss.

Review



'I believe I'm working in a great team of enthusiastic and smart people, with a good mix of juniors and seniors. The projects that we work on are very interesting and diverse, think of marketing, pricing and recommender systems. For each project we try to use the latest research and machine learning techniques in order to create the best solutions. I like that we are involved in the projects start to end, from researching the problem to experimenting, to putting it in production, and to creating the monitoring dashboards and delivering the outputs on a daily basis to our stakeholders. The work environment is open, relaxed and especially fun'
- Cheryl Zandvliet, Data Scientist
Railinc
Railinc
  • Cary, NC

Headquarters: Cary, NC
URL: https://www.railinc.com/rportal/web/guest/home

Primary Accountability/Responsibility:

Responsible for modeling complex business problems, discovering business insights and identifying opportunities through the use of statistical, algorithmic, data mining, machine learning, and visualization techniques. In addition to advanced data analytics skills, this role is also proficient at integrating and preparing large, varied datasets, and communicating results and recommendations based on the model/analysis to both technical and non-technical audiences. This role is also able to recommend the most effective data science approaches for various challenges and will be able to implement these approaches independently without guidance as well as guide other data scientists in their efforts. Additionally, the lead data scientist will be able to manage data science teams, help drive the organization’s technology vision, stay up to date with the state of technology, and help ensure the organization makes forward-looking strategic decisions in its data science approach.

Essential Functions:

  • 35%: Conduct analytical research, develop prototypes and contribute to production-ready solutions. Business domains include but are not limited to:
    o Fleet utilization for railroads and equipment owners
    o Miles & time between car repair incidents
    o Railroad yard traffic predictions
    o Predictive modeling of maintenance, ETAs, etc.
    o Internal measurement development
    o Data quality evaluations
    o Data enrichment
    o Prototype reports to be included in production systems
  • 35%: Deliver analytical projects by:
    o Working with project managers to define schedules, deliver results to all customers and internal stakeholders, and manage expectations
    o Collaborating with IT resources to develop and optimize production solutions
    o Creating and documenting repeatable solutions for meeting ongoing customer needs
    o Communicating the results and methodology to internal and external stakeholders
  • 20%: Drive the organization’s data science and technology vision through research on the state of the art in data science technologies to ensure forward-thinking strategic decisions and recommendations
  • 10%: Gather, interpret, and translate customer needs into business problems that can be solved via advanced analytics methodologies by:
    o Working with business stakeholders and/or facilitating data analysis opportunity discussions
    o Leading solution analysis, definition, and requirements gathering for data services
    o Prioritizing data analysis using rail industry priorities & business cases
  • Collaborate with other data scientists to keep the data analysis methodology repeatable and structured
  • Exercise judgment within generally defined practices and policies in selecting methods and techniques for obtaining solutions
  • Take the lead on analytical projects, and may lead the projects of other data scientists and analytical consultants
  • Work with manager to finalize priorities and deadlines, and adjust & communicate as necessary

Key Measures:

  • Customer feedback and delivery against commitments

Non-Essential Functions:

  • Support and improve internal decision making
  • Develop measurement dashboards
  • Conduct and report industry trend and market analysis to meet industry needs
  • Perform other duties as assigned

Success Factors (Knowledge/Skills/Abilities Minimum Requirements):

  • Superior analytical skills with working knowledge of basic statistical, predictive and optimization models
  • Experience in leading and managing data science teams
  • Strong programming proficiency and working experience (10+ years) in a subset of Python, R, Scala, Java (Python preferred)
  • Strong proficiency and experience (10+ years) with data science and machine learning software stacks, e.g. NumPy, Pandas, scikit-learn for Python (an illustrative sketch follows this listing)
  • Programming proficiency in Spark and MLlib (3+ years)
  • Working experience with and understanding of large-scale data analysis systems, e.g. Hadoop or MPP databases (5+ years)
  • Proficiency and experience with SQL (10+ years)
  • Significant experience (10+ years) implementing machine learning and data science models, including through production
  • Strong theoretical and practical knowledge of machine learning models and algorithms (unsupervised and supervised), their use in applications, and their advantages and disadvantages
  • Strong knowledge of code-based data visualization tools (e.g. matplotlib) for data exploration and to present models and results to internal and external stakeholders
  • Experience with cloud-based systems and toolsets (3+ years)
  • Experience with and understanding of experiment design and evaluation
  • Knowledge of big data engineering toolsets a plus
  • Superior data preparation skills; able to access, transform and manipulate Railinc and external customer data in its base form
  • Superior problem-solving skills
  • Entrepreneurial mindset and business understanding
  • Up to date with the state of the art in the data science technology and related infrastructure and services space
  • Strong team management and leadership skills
  • Strong verbal and written communication skills
  • Ability to work effectively with clients, IT management, business management, project managers, and IT staff
  • Ability to manage multiple activities in a deadline-oriented environment; highly organized and flexible
  • Ability to work independently and jointly in unstructured environments in a self-directed way
  • Ability to take a leadership role on engagements and with customers
  • Strong teamwork skills and ability to work effectively with multiple internal customers

Additional Requirements (Education, Experience, Certifications):

  • Advanced degree (PhD or Masters) in an analytical or technical field (e.g. applied mathematics, statistics, physics, computer science, operations research, business, economics)
  • A minimum of 10 years related work experience
  • Strong statistical analysis and methodology experience
  • Experience with analytics tools and big data platforms
  • Experience with business intelligence and analytics

Physical Requirements (including but not limited to):

  • Sedentary work: assignment involves sitting at a workstation (desk) most of the time (up to 8 hours) with only occasional walking and/or standing
  • Keyboarding: primarily using fingers for typing
  • Talking: expressing or communicating verbally through use of spoken words (accurately conveying detailed or important spoken instructions to others)
  • Hearing: ability to receive detailed information through oral communication and to make discriminations in sound
  • Visual: through close visual acuity, required to perform activities such as preparing and analyzing data and figures; transcribing; viewing a computer terminal; extensive reading (with or without correction)
  • Environment: work is performed within an office setting and therefore no substantial exposure to adverse environmental conditions (i.e. extreme heat, cold, noise, etc.). Customer visits may be done at railroad facilities, which would require appropriate safety equipment.
  • Travel: some travel may be required (up to 25%).

To apply: https://recruiting.ultipro.com/RAI1006RLNC/JobBoard/d0e5b848-54f5-44c4-8878-c03cf9954c79/OpportunityDetail?opportunityId=0bf87568-7bb1-4013-97e8-cc3e72cf8194
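
To ground the Python stack this posting names (Pandas and scikit-learn) in one of its listed domains, here is a minimal illustrative sketch of a model for time between car repair incidents. The file name, columns, and model choice are hypothetical stand-ins, not Railinc's data or methodology:

    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import mean_absolute_error
    from sklearn.model_selection import train_test_split

    # Hypothetical extract of car repair history.
    df = pd.read_csv("car_repair_history.csv")
    features = ["miles_since_last_repair", "car_age_years", "load_cycles"]
    X, y = df[features], df["days_to_next_repair"]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)

    model = RandomForestRegressor(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)

    # Report error in the unit stakeholders care about (days).
    print("MAE (days):", mean_absolute_error(y_test, model.predict(X_test)))
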
NESC Staffing Corp.
  • Houston, TX

HOURS OF WORK - 8 hours minimum with flexible start before 9:00 AM

Description


NESC is seeking an experienced Data Modeler to assist in building and supporting RigSystems & Aftermarket's Data Warehouse. This resource will be responsible for separating different types of data into structures that can be easily processed by various systems. This resource will also focus on a variety of issues, such as enhancing data migration from one system to another and eliminating data redundancy. Duties and responsibilities include:

Understand and translate business needs into data models supporting long-term solutions.

Work with the Application Development team to implement data strategies, build data flows and develop conceptual data models.

Create logical and physical data models using best practices to ensure high data quality and reduced redundancy.

Optimize and update logical and physical data models to support new and existing projects.

Maintain conceptual, logical and physical data models along with corresponding metadata.

Develop best practices for standard naming conventions and coding practices to ensure consistency of data models.

Recommend opportunities for reuse of data models in new environments.

Perform reverse engineering of physical data models from databases and SQL scripts (see the sketch at the end of this listing).

Evaluate data models and physical databases for variances and discrepancies.

Validate business data objects for accuracy and completeness.

Analyze data-related system integration challenges and propose appropriate solutions.

Develop data models according to company standards.

Guide System Analysts, Engineers, Programmers and others on project limitations and capabilities, performance requirements and interfaces.

Review modifications to existing software to improve efficiency and performance.

Examine new application design and recommend corrections if required.

Data modeling using ERWin tool (Work Group version)

Enterprise Data Warehouse modeling skill

Business Analysis Skill

Oracle Database Skill


EXPERIENCE/SKILLS: 


3+ years of experience as a Data Modeler/Data Architect

Proficient in the use of data modeling tools; ERWin proficiency is a must.

Experience in metadata management and data integration engines such as BizTalk or Informatica

Experience in supporting as well as implementing Oracle and SQL data infrastructures

Knowledge of the entire process behind software development, including design and deployment (SOA knowledge and experience is a bonus)

Expert analytical and problem-solving skills

Knowledge of the design, development and maintenance of various data models and their components

Understand BI tools and technologies as well as the optimization of underlying databases

Education:


BS in Computer Science or IT
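
To illustrate the "reverse engineering of physical data models" duty above, here is a minimal Python sketch using SQLAlchemy's schema reflection. The connection string is a placeholder, and this is one possible approach under those assumptions, not the team's actual tooling (the posting itself standardizes on ERWin):

    import sqlalchemy as sa

    # Placeholder connection string; point it at the database to reverse-engineer.
    engine = sa.create_engine("oracle+cx_oracle://user:pass@host:1521/?service_name=ORCL")

    # Reflection loads the physical model (tables, columns, keys) from the live database.
    metadata = sa.MetaData()
    metadata.reflect(bind=engine)

    # Dump tables, columns, and foreign keys as raw material for a logical model.
    for table in metadata.sorted_tables:
        print(f"TABLE {table.name}")
        for column in table.columns:
            nullability = "" if column.nullable else " NOT NULL"
            print(f"  {column.name}: {column.type}{nullability}")
        for fk in table.foreign_keys:
            print(f"  FK -> {fk.target_fullname}")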



IT Associates
  • Ann Arbor, MI

6+ Month Contract to Hire Position

Location - Ann Arbor, MI


Our client is looking to add a Lead BI Analyst who will be part of the business intelligence team and will be responsible for projects including BI and Analytics, with significant opportunity to innovate by incorporating advanced descriptive and predictive insights into BI deliverables. As a dynamic and effective BI team member, you will liaise with the business across all domains to understand their growing analytic data needs, and develop and operationalize solutions with business impact. The ideal candidate will leverage their knowledge of business intelligence tools, statistical and data mining techniques, data warehousing and SQL to find innovative solutions using data from multiple sources. We are looking for a strong team member who can communicate with the business as well as IT.

GENERAL RESPONSIBILITIES

  • Drive the utilization of new data sources for impactful business insights
  • Condense large data sets into clear and concise observations and recommendations
  • Design and develop BI/Analytics solutions to facilitate the end-user experience and impact the business
  • Generate new ideas and execute on solutions
  • Demonstrate expert knowledge of at least one analytics tool
  • Understand and apply advanced techniques for data extraction, cleaning and preparation tasks (see the sketch after the qualifications list)
  • Understand dimensional modeling and data warehouse concepts
  • Able to clearly articulate findings and answers through effective data visualization approaches
  • Able to effectively communicate with multiple stakeholders across the organization
  • Able to work in a team-oriented, collaborative, fast-paced and diverse environment

QUALIFICATIONS/REQUIREMENTS

  • BS/MS degree in Computer Science, Applied Statistics or a related field
  • 5+ years of experience implementing and supporting BI/Analytics solutions
  • 5+ years of experience with databases and query languages, particularly SQL
  • Experience with statistical tools such as R and SAS
  • 5+ years of experience delivering BI/DW solutions within an Agile construct
  • Experience working with large data sets; experience with distributed computing tools (Hadoop, Map/Reduce, Hive, Spark) and other emerging technologies is highly desired
  • Familiarity with statistical methods used in descriptive and inferential statistics
  • Knowledge of statistical modeling and machine learning is preferable
  • Fluency with a programming language such as Python
  • Must have the ability to work independently and with minimal supervision
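
As a minimal illustration of the "data extraction, cleaning and preparation" responsibility above, the sketch below pulls from a warehouse with SQL and condenses the result with pandas. The connection string, table, and column names are hypothetical:

    import pandas as pd
    from sqlalchemy import create_engine

    # Hypothetical warehouse connection and fact table.
    engine = create_engine("postgresql://user:pass@warehouse:5432/analytics")
    orders = pd.read_sql("SELECT region, order_date, revenue FROM fact_orders", engine)

    # Basic cleaning: drop incomplete rows and normalize types.
    orders = orders.dropna(subset=["region", "revenue"])
    orders["order_date"] = pd.to_datetime(orders["order_date"])

    # Condense a large data set into a concise observation: revenue by region and month.
    summary = (orders
               .assign(month=orders["order_date"].dt.to_period("M"))
               .groupby(["region", "month"])["revenue"]
               .sum()
               .reset_index())
    print(summary.head())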

Packsize LLC
  • Salt Lake City, UT

Data Scientist


Job Overview:

Packsize has a large software and hardware platform that is used around the world every day, generating large amounts of data. We are looking for a Data Scientist to support our product, engineering, sales, leadership, and marketing teams with insights gained from analyzing company data. The ideal candidate is adept at using large data sets to find opportunities for product and process optimization, and at using models to test the effectiveness of different courses of action. Our customers rely on the data we generate to inform their business decisions.

 
What You'll Do:

Working with product and R&D leadership, you will identify opportunities for leveraging company data to affect important business decisions, objectives, and priorities. You will gather and use complex data sets across domains, applying a range of data science methodologies combined with subject matter expertise. You will deliver artifacts on projects, define methodology, and own the analysis. Solutions must be testable, reproducible, and clearly communicated to technical and non-technical audiences.


Your analysis will drive the optimization and improvement of product development. Additionally, you will assess the effectiveness and accuracy of new data sources and data gathering techniques, and coordinate with cross-functional teams to implement models and monitor outcomes. Your influence with your team and peers will drive the success of Packsize.

 Qualifications:

  • Master's in Statistics, Mathematics, or Computer Science with two years of experience, or a Bachelor's with five years of experience.
  • Multiple years of experience in R, Python, SQL, and NoSQL.
  • Experience with Apache Spark, Hadoop, and AWS.
  • Proficiency in data visualization (matplotlib, ggplot, Tableau, etc.).
  • Experience working in a collaborative environment.
  • Diving into data to discover hidden patterns and conducting error/deviation analysis.
  • Knowledge of various machine learning techniques and the key parameters that affect their performance.
  • Experience developing experiments and analysis plans for data modeling and determining cause-and-effect relationships.
  • Experience working with and creating data architectures.
  • Knowledge of advanced statistical techniques and concepts such as confidence intervals, significance of error measurements, etc. (a worked example follows this list).
  • Excellent written and verbal communication for technical and non-technical audiences.
  • A drive to learn and master new technologies and techniques.
  • You innovate for your customers by being proactive about improvements and enhancements.
  • Excitement about solving hard problems, clearing up ambiguity, and never settling.
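
As a worked example of one of the statistical concepts named in the qualifications (confidence intervals), the following Python sketch computes a 95% confidence interval for a mean; the sample values are hypothetical, not Packsize metrics:

    import numpy as np
    from scipy import stats

    # Hypothetical sample: observed packaging cycle times in seconds.
    sample = np.array([12.1, 11.8, 12.6, 12.0, 11.5, 12.3, 12.2, 11.9])

    mean = sample.mean()
    sem = stats.sem(sample)  # standard error of the mean
    # 95% CI from the t distribution (small sample, unknown population variance).
    low, high = stats.t.interval(0.95, df=len(sample) - 1, loc=mean, scale=sem)
    print(f"mean={mean:.2f}s, 95% CI=({low:.2f}s, {high:.2f}s)")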


 Competencies:

  • Interpersonal Skills
  • Organizing
  • Problem Solving
  • Drive for Results
  • Time Management
  • Analytical Skills


 
Physical Demands and Working Conditions:

  • Typical office working conditions
  • The ability to sit for long periods
  • Travel 10%


 
Packsize is an Equal Opportunity employer and is committed to diversity in its workforce. In compliance with applicable federal and state laws, Packsize's policy of equal employment opportunity prohibits discrimination on the basis of race or ethnicity, religion, color, national origin, sex, age, sexual orientation, gender identity/expression, veteran status, status as a qualified person with a disability, or genetic information. Individuals from historically underrepresented groups, such as minorities, women, qualified persons with disabilities, and protected veterans are strongly encouraged to apply. Reasonable accommodations in the application process will be provided to qualified individuals with disabilities.

 

Lawrence Harvey
  • Austin, TX

Lawrence Harvey is recruiting a Data Engineer for one of the top Artificial Intelligence organizations in Austin.


In this position you will work on developing Machine Learning models and APIs for the company's AI Platform. You will interact with multiple teams across the business to evolve their architecture and to optimize their data acquisition, selection, and pipelines.

Requirements: 


  • 5+ years of experience owning and building data pipelines
  • Extensive knowledge of data engineering tools, technologies and approaches for both batch and streaming environments
  • Proven experience building or extending data platforms from scratch for data consumption across a wide variety of use cases (e.g. data science, ML, etc.)
  • Experience with data transformations, API wrappers and output formats used with Machine Learning
  • Ability to absorb business problems and understand how to service required data needs
  • Demonstrated ability to build complex, scalable systems with high quality
  • Experience with multiple data technologies and concepts such as Airflow, Kafka, Hadoop, Hive, Spark, MapReduce, SQL, NoSQL, and columnar databases (see the pipeline sketch after this list)
  • Experience in one or more of Java, Scala, and Python
  • Experience with serverless technologies a plus (e.g. AWS Lambda)
  • Hands-on, in-depth experience with AWS or other cloud infrastructure technologies
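
To make the Airflow item above concrete, here is a minimal sketch of a batch pipeline DAG, assuming Airflow 2.x; the DAG id, schedule, and task bodies are hypothetical placeholders, not this employer's actual pipeline:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        # Placeholder: pull a day's raw events from a hypothetical source.
        print("extracting raw events")

    def transform():
        # Placeholder: clean and shape the extract for downstream ML consumers.
        print("transforming events")

    with DAG(
        dag_id="daily_events_pipeline",    # hypothetical name
        start_date=datetime(2019, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)
        extract_task >> transform_task     # run extract, then transform
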
Paradigm Technology
  • Phoenix, AZ

Informatica Developer

  • 3+ years work experience in enterprise software development and/or implementation.

  • Experience with design, development, and testing within Informatica PowerCenter, B2B, Big Data, and BDM

  • Communicate with team to fully understand requirements, provide feedback, and request clarification as needed

  • Participate in process and functional design activities, carrying out application design, build, test, and deploy activities.

  • Design, develop, and deploy ETL job workflows with reliable error/exception handling and a rollback framework.

  • Adapt ETL code to accommodate changes in source data and new business requirements.

  • Manage and perform data cleansing, de-duplication, and harmonization of data received from, and potentially used by, multiple systems.

  • Spearhead development of ETL code, metadata definitions and models, queries and reports, schedules, work processes, and maintenance procedures.

  • Manage automation of file processing as well as all ETL processes within a job workflow.

  • Ensure data quality throughout the entire ETL process.

  • Expert ability in SQL and PL/SQL


Big Data Management

  • Experience with Apache Spark, Kafka, Hadoop, Hive, Yarn

  • Cloudera (desired), HDP (Hortonworks Data Platform)

  • Experience with data ingestion to a Kafka queue, reading and processing data from the queue, and applying transformations to enrich the data (see the sketch below).
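
A minimal PySpark Structured Streaming sketch of that Kafka read-process-enrich flow follows; the broker address, topic name, and enrichment step are hypothetical, and running it requires the spark-sql-kafka connector package:

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col, upper

    spark = SparkSession.builder.appName("kafka_enrichment_sketch").getOrCreate()

    # Subscribe to a hypothetical Kafka topic.
    raw = (spark.readStream
           .format("kafka")
           .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
           .option("subscribe", "orders")                     # hypothetical topic
           .load())

    # Kafka delivers bytes; cast the value and apply a trivial enrichment.
    enriched = (raw.selectExpr("CAST(value AS STRING) AS payload")
                .withColumn("payload_upper", upper(col("payload"))))

    # Write the enriched stream out (console sink, for illustration only).
    query = (enriched.writeStream
             .format("console")
             .outputMode("append")
             .start())
    query.awaitTermination()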

B2B

  • Experience with Informatica B2B Data Exchange

  • Familiar with XML, Parsers, Mappers, Streamers, Validators

  • B2B Integration with Power Center

Apps IT America
  • Houston, TX

Data Modeler

EXPERIENCE & SKILL SET:
Data modeling using ERWin tool (Work Group version)
Enterprise Data Warehouse modeling skill
Business Analysis Skill
Oracle Database Skill


Description 


My Client is seeking an experienced Data Modeler to assist in building and supporting RigSystems & Aftermarket's Data Warehouse. This resource will be responsible for separating different types of data into structures that can be easily processed by various systems. This resource will also focus on a variety of issues, such as enhancing data migration from one system to another and eliminating data redundancy. Duties and responsibilities include:

Understand and translate business needs into data models supporting long-term solutions.
Work with the Application Development team to implement data strategies, build data flows and develop conceptual data models.
Create logical and physical data models using best practices to ensure high data quality and reduced redundancy.
Optimize and update logical and physical data models to support new and existing projects.
Maintain conceptual, logical and physical data models along with corresponding metadata.
Develop best practices for standard naming conventions and coding practices to ensure consistency of data models.
Recommend opportunities for reuse of data models in new environments.
Perform reverse engineering of physical data models from databases and SQL scripts.
Evaluate data models and physical databases for variances and discrepancies.
Validate business data objects for accuracy and completeness.
Analyze data-related system integration challenges and propose appropriate solutions.
Develop data models according to company standards.
Guide System Analysts, Engineers, Programmers and others on project limitations and capabilities, performance requirements and interfaces.
Review modifications to existing software to improve efficiency and performance.
Examine new application design and recommend corrections if required.

Required Skills

3+ years of experience as a Data Modeler/Data Architect
Proficient in the use of data modeling tools; ERWin proficiency is a must.
Experience in metadata management and data integration engines such as BizTalk or Informatica
Experience in supporting as well as implementing Oracle and SQL data infrastructures
Knowledge of the entire process behind software development, including design and deployment (SOA knowledge and experience is a bonus)
Expert analytical and problem-solving skills
Knowledge of the design, development and maintenance of various data models and their components
Understand BI tools and technologies as well as the optimization of underlying databases