OnlyDataJobs.com

ConocoPhillips
  • Houston, TX
Our Company
ConocoPhillips is the world's largest independent E&P company based on production and proved reserves. Headquartered in Houston, Texas, ConocoPhillips had operations and activities in 17 countries, $71 billion of total assets, and approximately 11,100 employees as of Sept. 30, 2018. Production excluding Libya averaged 1,221 MBOED for the nine months ended Sept. 30, 2018, and proved reserves were 5.0 billion BOE as of Dec. 31, 2017.
Employees across the globe focus on fulfilling our core SPIRIT Values of safety, people, integrity, responsibility, innovation and teamwork. And we apply the characteristics that define leadership excellence in how we engage each other, collaborate with our teams, and drive the business.
Description
The Sr. Analytics Analyst will be part of the Production, Drilling, and Projects Analytics Services Team within the Analytics Innovation Center of Excellence that enables data analytics across the ConocoPhillips global enterprise. This role works with business units and global functions to help strategically design, implement, and support data analytics solutions. This is a full-time position that provides tremendous career growth potential within ConocoPhillips.
Responsibilities May Include
  • Completing end-to-end delivery of data analytics solutions to the end user
  • Interacting closely with both business and developers while gathering requirements, designing, testing, implementing and supporting solutions
  • Gathering business and technical specifications to support analytic, report and database development
  • Collecting, analyzing and translating user requirements into effective solutions
  • Building report and analytic prototypes based on initial business requirements
  • Providing status on the issues and progress of key business projects
  • Providing regular reporting on the performance of data analytics solutions
  • Delivering regular updates and maintenance on data analytics solutions
  • Championing the data analytics solutions and technologies at ConocoPhillips
  • Integrating data for data models used by the customers
  • Delivering data visualizations used for data-driven decision making
  • Providing strategic technology direction while supporting the needs of the business
Basic/Required
  • Legally authorized to work in the United States
  • 5+ years of related IT experience
  • 5+ years of Structured Query Language (SQL) experience (ANSI SQL, T-SQL, PL/SQL); a brief SQL sketch appears after the Preferred list below
  • 3+ years of hands-on experience delivering solutions with analytics tools (e.g., Spotfire, SSRS, Power BI, Tableau, Business Objects)
Preferred
  • Bachelor's Degree in Information Technology or Computer Science
  • 5+ years of Oil and Gas Industry experience
  • 5+ years hands-on experience delivering solutions with Informatica PowerCenter
  • 5+ years architecting data warehouses and/or data lakes
  • 5+ years with Extract Transform and Load (ETL) tools and best practices
  • 3+ years hands-on experience delivering solutions with Teradata
  • 1+ years developing analytics models with R or Python
  • 1+ years developing visualizations using R or Python
  • Experience with Oracle (11g, 12c) and SQL Server (2008 R2, 2010, 2016) and Teradata 15.x
  • Experience with Hadoop technologies (Hortonworks, Cloudera, SQOOP, Flume, etc.)
  • Experience with AWS technologies (S3, SageMaker, Athena, EMR, Redshift, Glue, etc.)
  • Thorough understanding of BI/DW concepts, proficient in SQL, and data modeling
  • Familiarity with ETL tools (Informatica, etc.) and ETL processes
  • Solutions-oriented individual who learns quickly, understands complex problems, and applies useful solutions
  • Ability to work in a fast-paced environment independently with the customer
  • Ability to work as a team player
  • Ability to work with business and technology users to define and gather reporting and analytics requirements
  • Strong analytical, troubleshooting, and problem-solving skills; experience in analyzing and understanding business/technology system architectures, databases, and client applications to recognize, isolate, and resolve problems
  • Demonstrates the desire and ability to learn and utilize new technologies in data analytics solutions
  • Strong communication and presentation skills
  • Takes ownership of actions and follows through on commitments by courageously dealing with important problems, holding others accountable, and standing up for what is right
  • Delivers results through realistic planning to accomplish goals
  • Generates effective solutions based on available information and makes timely decisions that are safe and ethical
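
As a loose illustration of the SQL reporting work named in the requirements above, here is a minimal sketch; the production table, columns, and figures are hypothetical stand-ins, not ConocoPhillips data:

    # Illustrative only: an ANSI-style aggregate of the kind a reporting
    # analyst might hand to a BI tool. Table and columns are hypothetical.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE production (well TEXT, region TEXT, boed REAL)")
    conn.executemany(
        "INSERT INTO production VALUES (?, ?, ?)",
        [("W-1", "Permian", 410.0), ("W-2", "Permian", 385.5), ("W-3", "Eagle Ford", 290.0)],
    )

    # average daily production per region, the shape of a typical report query
    for region, avg_boed in conn.execute(
        "SELECT region, AVG(boed) AS avg_boed FROM production GROUP BY region ORDER BY region"
    ):
        print(f"{region}: {avg_boed:.1f} BOED")
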
To be considered for this position you must complete the entire application process, which includes answering all prescreening questions and providing your eSignature on or before the requisition closing date of February 20, 2019.
Candidates for this U.S. position must be a U.S. citizen or national, or an alien admitted as permanent resident, refugee, asylee or temporary resident under 8 U.S.C. 1160(a) or 1255(a) (1). Individuals with temporary visas such as A, B, C, D, E, F, G, H, I, J, L, M, NATO, O, P, Q, R or TN or who need sponsorship for work authorization in the United States now or in the future, are not eligible for hire.
ConocoPhillips is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, national origin, age, disability, veteran status, gender identity or expression, genetic information or any other legally protected status.
Job Function
Information Management-Information Technology
Job Level
Individual Contributor/Staff Level
Primary Location
NORTH AMERICA-USA-TEXAS-HOUSTON
Organization
ANALYTICS INNOVATION
Line of Business
Corporate Staffs
Job Posting
Feb 13, 2019, 4:56:49 PM
Burtch Works
  • Atlanta, GA

Our client in the Atlanta area is seeking a Director of Statistical Modeling to lead a team that will be developing scores and analytic products for their clients in the insurance industry. This group will work closely with other internal teams to develop best-in-class products and deliver statistical analytics of the highest caliber. You will also work closely with the sales organization to deliver results and present analytics as necessary. Strong leadership skills and business acumen are crucial to the success of this role.

Requirements:

Graduate degree in a quantitative field.

At least 10 years of experience in analytics.

At least 2 years of people management experience.

Experience with SAS, R, Python or equivalent analytic software.

Previous experience in an analytical leadership role as well as the ability to think strategically.

Thorough understanding of statistical methodologies including linear regression, logistic regression, CHAID/CART and neural networks.
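
As a rough, non-authoritative sketch of the logistic-regression modeling named above, assuming synthetic data rather than any client data:

    # Synthetic-data sketch only; not a production scoring model.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 2))                 # two hypothetical risk attributes
    p = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 1.2 * X[:, 1])))
    y = rng.binomial(1, p)                        # simulated claim / no-claim outcome

    model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
    print(model.params)                           # fitted coefficients
    print(model.predict(sm.add_constant(X))[:5])  # scores in [0, 1]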

Salary range up to the mid-$100Ks base, plus bonus. Great benefits.

KEYWORDS: statistical modeling, SAS, analytics, insurance, predictive modeling, regression, risk, R, Python, logistic regression, linear regression, neural networks, CHAID/CART

State Farm
  • Atlanta, GA

WHAT ARE THE DUTIES AND RESPONSIBILITIES OF THIS POSITION?

    • Performs improved visual representation of data to allow clearer communication, viewer engagement and faster/better decision-making
    • Investigates, recommends, and initiates acquisition of new data resources from internal and external sources
    • Works with IT teams to support data collection, integration, and retention requirements based on business need
    • Identifies critical and emerging technologies that will support and extend quantitative analytic capabilities
    • Manages work efforts which require the use of sophisticated project planning techniques
    • Applies a wide application of complex principles, theories and concepts in a specific field to provide solutions to a wide range of difficult problems
    • Develops and maintains an effective network of both scientific and business contacts/knowledge, obtaining relevant information and intelligence around the market and emergent opportunities
    • Contributes data to State Farm's internal and external publications, writes articles for leading journals and participates in academic and industry conferences
    • Collaborates with business subject matter experts to select relevant sources of information
    • Develop breadth of knowledge in programming (R, Python), Descriptive, Inferential, and Experimental Design statistics, advanced mathematics, and database functionality (SQL, Hadoop)
    • Develop expertise with multiple machine learning algorithms and data science techniques, such as exploratory data analysis, generative and discriminative predictive modeling, graph theory, recommender systems, text analytics, computer vision, deep learning, optimization and validation
    • Develop expertise with State Farm datasets, data repositories, and data movement processes
    • Assists on projects/requests and may lead specific tasks within the project scope
    • Prepares and manipulates data for use in development of statistical models
    • Develops fundamental understanding of insurance and financial services operations and uses this knowledge in decision making


Additional Details:

For over 95 years, data has been key to State Farm. As a member of our data science team with the Enterprise Data & Analytics department under our Chief Data & Analytics Officer, you will work across the organization to solve business problems and help achieve business strategies. You will employ sophisticated statistical approaches and state-of-the-art technology. You will build and refine our tools/techniques and engage with internal stakeholders across the organization to improve our products & services.


Implementing solutions is critical for success. You will do problem identification, solution proposal & presentation to a wide variety of management & technical audiences. This challenging career requires you to work on multiple concurrent projects in a community setting, developing yourself and others, and advancing data science both at State Farm and externally.


Skills & Professional Experience

·        Develop hypotheses, design experiments, and test feasibility of proposed actions to determine probable outcomes using a variety of tools & technologies

·        Master's or other advanced degree, or five years of experience in an analytical field such as data science, quantitative marketing, statistics, operations research, management science, industrial engineering, economics, etc., or equivalent practical experience preferred

·        Experience with SQL, Python, R, Java, SAS, MapReduce, or Spark

·        Experience with unstructured data sets: text analytics, image recognition, etc.

·        Experience working with numerous large data sets/data warehouses and the ability to pull from such data sets using relevant programs and coding, including files, RDBMS, and Hadoop-based storage systems

·        Knowledge of machine learning methods, including at least one of the following: time series analysis, hierarchical Bayes, or techniques such as decision trees, boosting, and random forests (a brief sketch follows this list)

·        Excellent communication skills and the ability to manage multiple diverse stakeholders across businesses & leadership levels

·        Exercise sound judgment to diagnose & resolve problems within area of expertise

·        Familiarity with CI/CD development methods, Git, and Docker a plus
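
A minimal sketch of one of the tree-based methods above (random forests), using synthetic features in place of any State Farm data:

    # Synthetic stand-in data; illustrates the technique, not State Farm's models.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, n_features=8, random_state=42)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)

    clf = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_tr, y_tr)
    print("holdout accuracy:", accuracy_score(y_te, clf.predict(X_te)))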


Multiple location opportunity. Locations offered are: Atlanta, GA; Bloomington, IL; Dallas, TX; and Phoenix, AZ


Remote work option is not available.


There is no sponsorship for an employment visa for the position at this time.


Competencies desired:
Critical Thinking
Leadership
Initiative
Resourcefulness
Relationship Building
State Farm
  • Dallas, TX

WHAT ARE THE DUTIES AND RESPONSIBILITIES OF THIS POSITION?

    • Performs improved visual representation of data to allow clearer communication, viewer engagement and faster/better decision-making
    • Investigates, recommends, and initiates acquisition of new data resources from internal and external sources
    • Works with IT teams to support data collection, integration, and retention requirements based on business need
    • Identifies critical and emerging technologies that will support and extend quantitative analytic capabilities
    • Manages work efforts which require the use of sophisticated project planning techniques
    • Applies a wide application of complex principles, theories and concepts in a specific field to provide solutions to a wide range of difficult problems
    • Develops and maintains an effective network of both scientific and business contacts/knowledge, obtaining relevant information and intelligence around the market and emergent opportunities
    • Contributes data to State Farm's internal and external publications, writes articles for leading journals and participates in academic and industry conferences
    • Collaborates with business subject matter experts to select relevant sources of information
    • Develop breadth of knowledge in programming (R, Python), Descriptive, Inferential, and Experimental Design statistics, advanced mathematics, and database functionality (SQL, Hadoop)
    • Develop expertise with multiple machine learning algorithms and data science techniques, such as exploratory data analysis, generative and discriminative predictive modeling, graph theory, recommender systems, text analytics, computer vision, deep learning, optimization and validation
    • Develop expertise with State Farm datasets, data repositories, and data movement processes
    • Assists on projects/requests and may lead specific tasks within the project scope
    • Prepares and manipulates data for use in development of statistical models
    • Develops fundamental understanding of insurance and financial services operations and uses this knowledge in decision making


Additional Details:


For over 95 years, data has been key to State Farm. As a member of our data science team with the Enterprise Data & Analytics department under our Chief Data & Analytics Officer, you will work across the organization to solve business problems and help achieve business strategies. You will employ sophisticated statistical approaches and state-of-the-art technology. You will build and refine our tools/techniques and engage with internal stakeholders across the organization to improve our products & services.


Implementing solutions is critical for success. You will do problem identification, solution proposal & presentation to a wide variety of management & technical audiences. This challenging career requires you to work on multiple concurrent projects in a community setting, developing yourself and others, and advancing data science both at State Farm and externally.


Skills & Professional Experience

·        Develop hypotheses, design experiments, and test feasibility of proposed actions to determine probable outcomes using a variety of tools & technologies

·        Master's or other advanced degree, or five years of experience in an analytical field such as data science, quantitative marketing, statistics, operations research, management science, industrial engineering, economics, etc., or equivalent practical experience preferred

·        Experience with SQL, Python, R, Java, SAS, MapReduce, or Spark

·        Experience with unstructured data sets: text analytics, image recognition, etc.

·        Experience working with numerous large data sets/data warehouses and the ability to pull from such data sets using relevant programs and coding, including files, RDBMS, and Hadoop-based storage systems

·        Knowledge of machine learning methods, including at least one of the following: time series analysis, hierarchical Bayes, or techniques such as decision trees, boosting, and random forests

·        Excellent communication skills and the ability to manage multiple diverse stakeholders across businesses & leadership levels

·        Exercise sound judgment to diagnose & resolve problems within area of expertise

·        Familiarity with CI/CD development methods, Git, and Docker a plus


Multiple location opportunity. Locations offered are: Atlanta, GA; Bloomington, IL; Dallas, TX; and Phoenix, AZ


Remote work option is not available.


There is no sponsorship for an employment visa for the position at this time.


Competencies desired:
Critical Thinking
Leadership
Initiative
Resourcefulness
Relationship Building
Riccione Resources
  • Dallas, TX

Sr. Data Engineer Hadoop, Spark, Data Pipelines, Growing Company

One of our clients is looking for a Sr. Data Engineer in the Fort Worth, TX area! Build your data expertise with projects centering on large Data Warehouses and new data models! Think outside the box to solve challenging problems! Thrive in the variety of technologies you will use in this role!

Why should I apply here?

    • Culture built on creativity and respect for engineering expertise
    • Nominated as one of the Best Places to Work in DFW
    • Entrepreneurial environment, growing portfolio and revenue stream
    • One of the fastest growing mid-size tech companies in DFW
    • Executive management with past successes in building firms
    • Leader of its technology niche, setting the standards
    • A robust, fast-paced work environment
    • Great technical challenges for top-notch engineers
    • Potential for career growth, emphasis on work/life balance
    • A remodeled office with a bistro, lounge, and foosball

What will I be doing?

    • Building data expertise and owning data quality for the transfer pipelines that you create to transform and move data to the company's large Data Warehouse
    • Architecting, constructing, and launching new data models that provide intuitive analytics to customers
    • Designing and developing new systems and tools to enable clients to optimize and track advertising campaigns
    • Using your expert skills across a number of platforms and tools such as Ruby, SQL, Linux shell scripting, Git, and Chef
    • Working across multiple teams in high visibility roles and owning the solution end-to-end
    • Providing support for existing production systems
    • Broadly influencing the company's clients and internal analysts

What skills/experiences do I need?

    • B.S. or M.S. degree in Computer Science or a related technical field
    • 5+ years of experience working with Hadoop and Spark
    • 5+ years of experience with Python or Ruby development
    • 5+ years of experience with efficient SQL (Postgres, Vertica, Oracle, etc.)
    • 5+ years of experience building and supporting applications on Linux-based systems
    • Background in engineering Spark data pipelines (a rough sketch follows this list)
    • Understanding of distributed systems
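
A hedged sketch of the kind of Spark batch pipeline the role description above mentions; the file paths, columns, and aggregation are hypothetical, not this employer's actual pipeline:

    # Hypothetical paths and columns; sketches the extract-transform-load shape.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("campaign_etl").getOrCreate()

    raw = spark.read.csv("/data/raw/campaign_events.csv", header=True, inferSchema=True)

    daily = (
        raw.withColumn("event_date", F.to_date("event_ts"))   # parse the timestamp column
           .groupBy("campaign_id", "event_date")
           .agg(F.count("*").alias("events"), F.sum("spend").alias("spend"))
    )

    # load: partitioned Parquet, a common warehouse landing format
    daily.write.mode("overwrite").partitionBy("event_date").parquet("/warehouse/campaign_daily")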

What will make my résumé stand out?

    • Ability to customize an ETL or ELT
    • Experience building an actual data warehouse schema

Location: Fort Worth, TX

Citizenship: U.S. citizens and those authorized to work in the U.S. are encouraged to apply. This company is currently unable to provide sponsorship (e.g., H1B).

Salary: $115-130K + 401(k) match

---------------------------------------------------


~SW1317~

Signify Health
  • Dallas, TX

Position Overview:

Signify Health is looking for a savvy Data Engineer to join our growing team of deep learning specialists. This position would be responsible for evolving and optimizing data and data pipeline architectures, as well as optimizing data flow and collection for cross-functional teams. The Data Engineer will support software developers, database architects, data analysts, and data scientists. The ideal candidate would be self-directed, passionate about optimizing data, and comfortable supporting the Data Wrangling needs of multiple teams, systems and products.

If you enjoy providing expert-level IT technical services, including the direction, evaluation, selection, configuration, implementation, and integration of new and existing technologies and tools while working closely with IT team members, data scientists, and data engineers to build our next generation of AI-driven solutions, we will give you the opportunity to grow personally and professionally in a dynamic environment. Our projects are built on cooperation and teamwork, and you will find yourself working together with other talented, passionate and dedicated team members, all working towards a shared goal.

Essential Job Responsibilities:

  • Assemble large, complex data sets that meet functional / non-functional business requirements
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing data models for greater scalability, etc.
  • Leverage Azure for extraction, transformation, and loading of data from a wide variety of data sources in support of AI/ML Initiatives
  • Design and implement high-performance data pipelines for distributed systems and data analytics for deep learning teams (a minimal ETL sketch follows this list)
  • Create tool-chains for analytics and data scientist team members that assist them in building and optimizing AI workflows
  • Work with data and machine learning experts to strive for greater functionality in our data and model life cycle management capabilities
  • Communicate results and ideas to key decision makers in a concise manner
  • Comply with applicable legal requirements, standards, policies and procedures including, but not limited to, Compliance requirements and HIPAA.
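
For illustration only, a minimal extract-transform-load step of the sort listed above; the file, table, and column names are invented, and SQLite stands in for whatever relational target is actually used:

    # File, table, and column names are invented; SQLite is a stand-in target.
    import sqlite3
    import pandas as pd

    def run_etl(src_csv: str, db_path: str) -> int:
        df = pd.read_csv(src_csv)                                   # extract
        df["member_id"] = df["member_id"].astype(str).str.strip()   # standardize key
        df = df.dropna(subset=["member_id", "visit_date"])          # basic cleansing
        df["visit_date"] = pd.to_datetime(df["visit_date"]).dt.date
        with sqlite3.connect(db_path) as conn:                      # load
            df.to_sql("visits_clean", conn, if_exists="replace", index=False)
        return len(df)                                              # rows loaded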


Qualifications:Education/Licensing Requirements:
  • High school diploma or equivalent.
  • Bachelor's degree in Computer Science, Electrical Engineering, Statistics, Informatics, Information Systems, or another quantitative field, or equivalent work experience.


Experience Requirements:
  • 5+ years of experience in a Data Engineer role.
  • Experience using the following software/tools preferred:
    • Experience with big data tools: Hadoop, Spark, Kafka, etc.
    • Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
    • Experience with AWS or Azure cloud services.
    • Experience with stream-processing systems: Storm, Spark-Streaming, etc.
    • Experience with object-oriented/object function scripting languages: Python, Java, C#, etc.
  • Strong work ethic, able to work both collaboratively and independently without a lot of direct supervision, and solid problem-solving skills
  • Must have strong communication skills (written and verbal), and possess good one-on-one interpersonal skills.
  • Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases
  • A successful history of manipulating, processing and extracting value from large disconnected datasets.
  • Working knowledge of message queuing, stream processing, and highly scalable big data data stores.
  • 2 years of experience in data modeling, ETL development, and Data warehousing
 

Essential Skills:

  • Fluently speak, read, and write English
  • Fantastic motivator and leader of teams with a demonstrated track record of mentoring and developing staff members
  • Strong point of view on who to hire and why
  • Passion for solving complex system and data challenges and desire to thrive in a constantly innovating and changing environment
  • Excellent interpersonal skills, including teamwork and negotiation
  • Excellent leadership skills
  • Superior analytical abilities, problem solving skills, technical judgment, risk assessment abilities and negotiation skills
  • Proven ability to prioritize and multi-task
  • Advanced skills in MS Office

Essential Values:

  • In Leadership: Do what's right, even if it's tough
  • In Collaboration: Leverage our collective genius, be a team
  • In Transparency: Be real
  • In Accountability: Recognize that if it is to be, it's up to me
  • In Passion: Show commitment in heart and mind
  • In Advocacy: Earn trust and business
  • In Quality: Ensure what we do, we do well
Working Conditions:
  • Fast-paced environment
  • Requires working at a desk and use of a telephone and computer
  • Normal sight and hearing ability
  • Use office equipment and machinery effectively
  • Ability to ambulate to various parts of the building
  • Ability to bend, stoop
  • Work effectively with frequent interruptions
  • May require occasional overtime to meet project deadlines
  • Lifting requirements of
DISYS
  • Minneapolis, MN
Client: Banking/Financial Services
Location: 100% Remote
Duration: 12 month contract-to-hire
Position Title: NLU/NLP Predictive Modeling Consultant


***Client requirements will not allow OPT/CPT candidates for this position, or any other visa type requiring sponsorship. 

This is a new team within the organization set up specifically to perform analyses and gain insights into the "voice of the customer" through the following activities:
Review inbound customer emails, phone calls, survey results, etc.
Review data that is unstructured "natural language" text and speech data
Maintain focus on customer complaint identification and routing
Build machine learning models to scan customer communication (emails, voice, etc)
Identify complaints from non-complaints.
Classify complaints into categories
Identify escalated/high-risk complaints, e.g. claims of bias, discrimination, bait/switch, lying, etc...
Ensure routed to appropriate EO for special

Responsible for:
Focused on inbound retail (home mortgage/equity) emails
Email cleansing: removal of extraneous information (disclaimers, signatures, headers, PII)
Modeling: training models using state-of-the-art techniques (a rough classifier sketch follows this list)
Scoring: "productionalizing" models to be consumed by the business
Governance: model documentation and Q/A with model risk group.
Implementation of model monitoring processes
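
A rough sketch of the complaint-classification modeling described above, assuming a tiny labeled sample; the emails and pipeline choices are illustrative, not the client's actual approach:

    # Tiny labeled sample for illustration; a real model needs far more data.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    emails = [
        "I was promised a lower rate and charged more",        # complaint
        "Please update my mailing address",                    # non-complaint
        "This is the third time my payment was misapplied",    # complaint
        "What documents do I need for a refinance?",           # non-complaint
    ]
    labels = [1, 0, 1, 0]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(emails, labels)
    print(clf.predict(["you lied about the closing costs"]))   # expect: 1 (complaint)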

Desired Qualifications:
Real-world experience building/deploying predictive models, any industry (must)
SQL background (must)
Self-starter, able to excel in fast-paced environment w/o a ton of direction (must)
Good communication skills (must)
Experience in text/speech analytics (preferred)
Python, SAS background (preferred)
Linux (nice to have)
Spark (Scala or PySpark) (nice to have)

Burger King Corporation
  • Miami, FL

Position Overview:

This person will be key in the structuring of our new Guest Insights and Intelligence area within BK North America. Burger King marketing analytics has evolved substantially over the past several years, to a point where we have a very detailed understanding of our sales on a product or ticket level. Nevertheless, our understanding of who is buying our products and offers is still very limited. With an increasingly competitive market, our objective is to create a new area that will have as its core focus the understanding of our guests, which ultimately will drive our strategy across several different initiatives, including calendar, pricing, innovation, and advertising/communication. With more data available than ever coming from our mobile app, kiosks, POS, credit card, and external data sources, we're looking for a data scientist with strong business judgment who will be able to help us structure and develop this area, effectively transforming how we look at marketing analytics at BK and eventually RBI.


Responsibilities & Qualifications:


    • 3-5 years of professional experience; master's degree a plus
    • Expertise in data modelling, visualization, and databases
    • Proficiency with statistical packages (e.g. SAS and R), database languages (e.g. SQL Server, MySQL, Oracle Express), and media measurement tools (e.g. DoubleClick, Omniture, Google Analytics)
      • Database design and implementation
      • Machine learning
      • Time series and forecasting
      • Data mining
      • Linear and logistic regression
      • Decision trees
      • Segmentation analysis
      • Clustering techniques (a brief clustering sketch follows this list)
      • Marketing mix models
      • Data visualization techniques
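
As a loose example of the clustering/segmentation techniques above, a toy guest segmentation on invented visit and ticket features:

    # Invented visit/spend features; shows the technique, not BK's actual data.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(7)
    guests = np.column_stack([
        rng.poisson(4, 300),           # visits per month (hypothetical)
        rng.gamma(2.0, 5.0, 300),      # average ticket in dollars (hypothetical)
    ])

    segments = KMeans(n_clusters=3, n_init=10, random_state=7).fit_predict(
        StandardScaler().fit_transform(guests)
    )
    print(np.bincount(segments))       # guests per segment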


  • Market research and competitive analysis aimed at driving growth
  • Comprehensive data analysis and manipulation to help identify trends
  • Strong interpersonal and communication skills

Restaurant Brands International US Services LLC (RBI) is an equal opportunity employer and gives consideration for employment to qualified applicants without regard to race, color, religion, sex, national origin, disability, or protected veteran status.

Pythian
  • Dallas, TX

Google Cloud Solutions Architect (Pre Sales)

United States | Canada | Remote | Work from Home

Why You?

Are you a US or Canada based Cloud Solutions Architect who likes to operate with a high degree of autonomy and have diverse responsibilities that require strong leadership, deep technology skills and a dedication to customer service? Do you have big data and data-centric skills? Do you want to take part in the strategic planning of organizations' data estates with a focus on fulfilling business requirements around cost, scalability and flexibility of the platform? Can you draft technology roadmaps and document best-practice gaps with precise steps of how to get there? Can you implement the details of the backlogs you have helped build? Do you demonstrate consistent best practices and deliver strong customer satisfaction? Do you enjoy pre-sales? Can you demonstrate adoption of new technologies and frameworks through the development of proofs of concept?

If you have a passion for solving complex problems and for pre-sales, then this could be the job for you!

What Will You Be Doing?  

  • Collaborating with and supporting Pythian sales teams in the pre-sales & account management process from the technical perspective, remotely and on-site (approx 75%).
  • Defining solutions for current and future customers that efficiently address their needs. Leading through example and influence, as a master of applying technology solutions to solve business problems.
  • Developing Proof of Concepts (PoC) in order to demonstrate feasibility and value to Pythian's customers (approx 25%).
  • Identifying then executing solutions with a commitment to excellent customer service
  • Collaborating with others in refining solutions presented to customers
  • Conducting technical audits of existing architectures (Infrastructure, Performance, Security, Scalability and more); documenting best practices and recommendations
  • Providing component or site-wide performance optimizations and capacity planning
  • Recommending best practices & improvements to current operational processes
  • Communicating status and planning activities to customers and team members
  • Participating in periodic overtime (occasionally on short notice) and travelling up to approx. 50%.

What Do We Need From You?

While we realise you might not have everything on the list to be the successful candidate for the Solutions Architect job, you will likely have at least 10 years of experience in a variety of positions in IT. The position requires specialized knowledge and experience in performing the following:

  • Undergraduate degree in computer science, computer engineering, information technology or related field or relevant experience.
  • Systems design experience
  • Understanding and experience with Cloud architectures specifically: Google Cloud Platform (GCP) or Microsoft Azure
  • In-depth knowledge of popular database and data warehouse technologies from Microsoft, Amazon and/or Google (Big Data & conventional RDBMS): Microsoft Azure SQL Data Warehouse, Teradata, Redshift, BigQuery, Snowflake, etc.
  • Fluency in a few languages, preferably Java and Python; familiarity with Scala and Go would be a plus.
  • Proficient in SQL. (Experience with Hive and Impala would be great)
  • Proven ability to work with software engineering teams and understand complex development systems, environments and patterns.
  • Experience presenting to high level executives (VPs, C Suite)
  • This is a North American based opportunity, and it is preferred that the candidate live on the West Coast, ideally in San Francisco or the Silicon Valley area, but strong candidates may be considered from anywhere in the US or Canada.
  • Ability to travel and work across North America frequently (occasionally on short notice) up to 50% with some international travel also expected.

Nice-to-Haves:

  • Experience Architecting Big Data platforms using Apache Hadoop, Cloudera, Hortonworks and MapR distributions.
  • Knowledge of real-time Hadoop query engines like Dremel, Cloudera Impala, Facebook Presto or Berkley Spark/Shark.
  • Experience with BI platforms, reporting tools, data visualization products, ETL engines.
  • Experience with any MPP (Oracle Exadata/DW, Teradata, Netezza, etc)
  • Understanding of continuous delivery and deployment patterns and tools (Jenkins, Artifactory, Maven, etc)
  • Prior experience working as/with Machine Learning Engineers, Data Engineers, or Data Scientists.
  • A certification such as Google Cloud Professional Cloud Architect, Google Professional Data Engineer or related AWS Certified Solutions Architect / Big Data or Microsoft Azure Architect
  • Experience or strong interest in people management, in a player-coach style of leadership longer term would be great.

What Do You Get in Return?

  • Competitive total rewards package
  • Flexible work environment: Why commute? Work remotely from your home; there's no daily travel requirement to the office!
  • Outstanding people: Collaborate with the industry's top minds.
  • Substantial training allowance: Hone your skills or learn new ones; participate in professional development days, attend conferences, become certified, whatever you like!
  • Amazing time off: Start with a minimum 3 weeks vacation, 7 sick days, and 2 professional development days!
  • Office Allowance: A device of your choice and personalise your work environment!  
  • Fun, fun, fun: Blog during work hours; take a day off and volunteer for your favorite charity.
SoftClouds LLC
  • San Diego, CA

Job Overview: SoftClouds is looking for a Data Engineer to join our analytics platform team in designing and developing the next generation data and analytics solutions. The candidate should have deep technical skills as well as the ability to understand data and analytics, and an openness to working with disparate platforms, data sources and data formats.


Roles and Responsibilities:
  • Experience with MySQL, MS SQL Server, or Hadoop, or MongoDB.
  • Writing SQL queries and table joins.
  • AWS, Python, or Bash shell scripting
  • Have some experience pulling data from Hadoop.
  • Analyze data, system and data flows and develop effective ways to store and present data in BI applications
  • ETL experience a plus.
  • Work with data from disparate environments including Hadoop, MongoDB Talend, and other SQL and NoSQL data stores
  • Help develop the next generation analytics platform
  • Proactively ensure data integrity and focus on continuous performance improvements of existing processes.


Required skills and experience:
  • 5 or more years of experience in software development
  • 3 years of experience in writing data applications using Spark (a small join example follows this list)
  • Experience in Java and Python
  • Familiarity with Agile development methodology
  • Experience with Scala is a plus
  • Experience with NoSQL databases, e.g., Cassandra is a plus
  • Expertise in Apache Spark & Hadoop.
  • Expertise in machine learning algorithms
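
A small Spark SQL join sketch of the kind the requirements above call for; the tables are tiny in-memory stand-ins:

    # Tiny in-memory tables; illustrates the SQL-on-Spark pattern only.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("join_demo").getOrCreate()

    orders = spark.createDataFrame([(1, "A-100"), (2, "A-200")], ["order_id", "account"])
    accounts = spark.createDataFrame([("A-100", "West"), ("A-200", "East")], ["account", "region"])

    orders.createOrReplaceTempView("orders")
    accounts.createOrReplaceTempView("accounts")

    spark.sql("""
        SELECT o.order_id, a.region
        FROM orders o
        JOIN accounts a ON o.account = a.account
    """).show()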


Education / Experience:

  • Bachelor's Degree in Engineering or Computer Science or related field required.
  • U.S. Citizens/GC/GC EAD are encouraged to apply. We are unable to sponsor at this time. NO C2C or third-party agencies.



Applied Resource Group
  • Atlanta, GA

Applied Resource Group is seeking a talented and experienced Data Engineer for our client, an emerging leader in the transit solutions space. As an experienced Data Engineer on the Data Services team, you will lead the design, development and maintenance of comprehensible data pipelines and distributed systems for data extraction, analysis, transformation, modelling and visualization. They're looking for independent thinkers that are passionate about technology and building solutions that continually improve the customer experience. Excellent communication skills and the ability to work collaboratively with teams is critical.
 

Job Duties/Responsibilities:

    • Building a unified data services platform from scratch, leveraging the most suitable Big Data tools following technical requirements and needs
    • Exploring and working with cutting edge data processing technologies
    • Work with distributed, scalable cloud-based technologies
    • Collaborating with a talented team of Software Engineers working on product development
    • Designing and delivering BI solutions to meet a wide range of reporting needs across the organization
    • Providing and maintaining up to date documentation to enable a clear outline of solutions
    • Managing task lists and communicating updates to stakeholders and team members following Agile Scrum methodology
    • Working as a key member of the core team to support the timely and efficient delivery of critical data solutions

 
Experience Needed:
 

    • Experience with AWS technologies is desired, especially those used for Data Analytics, including some of these: EMR, Glue, Data Pipelines, Lambda, Redshift, Athena, Kinesis, Elasticache, Aurora (a small Athena sketch follows this list)
    • Minimum of 5 years working in developing and building data solutions
    • Experience as an ETL/Data warehouse developer with knowledge in design, development and delivery of end-to-end data integration processes
    • Deep understanding of data storage technologies for structured and unstructured data
    • Background in programming and knowledge of programming languages such as Java, Scala, Node.js, Python.
    • Familiarity with cloud services (AWS, Azure, Google Cloud)
    • Experience using Linux as a primary development environment
    • Knowledge of big data systems (Hadoop, Pig, Hive, Shark/Spark, etc.) a big plus
    • Knowledge of BI platforms such as Tableau, Jaspersoft etc.
    • Strong communication and analytical skills
    • Capable of working independently under the direction of the Head of Data Services
    • Excellent communication, analytical and problem-solving skills
    • Ability to initially take direction and then work on own initiative
    • Experience working in AGILE
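
A hedged example of starting an Athena query with boto3 (one of the AWS services listed above); the database, table, and S3 output location are placeholders:

    # Placeholder database/table/bucket; requires AWS credentials to actually run.
    import boto3

    athena = boto3.client("athena", region_name="us-east-1")

    resp = athena.start_query_execution(
        QueryString="SELECT route_id, COUNT(*) AS trip_count FROM trips GROUP BY route_id",
        QueryExecutionContext={"Database": "transit_analytics"},
        ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
    )
    print(resp["QueryExecutionId"])    # poll get_query_execution() with this id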

 
Nice-to-have experience and skills:

    • Master's in Computer Science, Computer Engineering, or equivalent
    • Building data pipelines to perform real-time data processing using Spark Streaming and Kafka, or similar technologies.
Booz Allen Hamilton
  • San Diego, CA
Job Description
Job Number: R0042382
Data Scientist, Mid
The Challenge
Are you excited at the prospect of unlocking the secrets held by a data set? Are you fascinated by the possibilities presented by machine learning, artificial intelligence advances, and IoT? In an increasingly connected world, massive amounts of structured and unstructured data open up new opportunities. As a data scientist, you can turn these complex data sets into useful information to solve global challenges. Across private and public sectors, from fraud detection to cancer research to national intelligence, you know the answers are in the data.
We have an opportunity for you to use your analytical skills to improve the DoD and federal agencies. You'll work closely with your customer to understand their questions and needs, then dig into their data-rich environment to find the pieces of their information puzzle. You'll develop algorithms, write scripts, build predictive analytics, use automation, and apply machine learning to turn disparate data points into objective answers to help our nation's services and leaders make data-driven decisions. You'll provide your customer with a deep understanding of their data, what it all means, and how they can use it. Join us as we use data science for good in the DoD and federal agencies.
Empower change with us.
Build Your Career
At Booz Allen, we know the power of data science and machine intelligence, and we're dedicated to helping you grow as a data scientist. When you join Booz Allen, you can expect:
  • access to online and onsite training in data analysis and presentation methodologies, and tools like Hortonworks, Docker, Tableau, Splunk, and other open source and emerging tools
  • a chance to change the world with the Data Science Bowl, the world's premier data science for social good competition
  • participation in partnerships with data science leaders, like our partnership with NVIDIA to deliver Deep Learning Institute (DLI) training to the federal government
You'll have access to a wealth of training resources through our Analytics University, an online learning portal specifically geared towards data science and analytics skills, where you can access more than 5,000 functional and technical courses, certifications, and books. Build your technical skills through hands-on training on the latest tools and state-of-the-art tech from our in-house experts. Pursuing certifications? Take advantage of our tuition assistance, on-site bootcamps, certification training, academic programs, vendor relationships, and a network of professionals who can give you helpful tips. We'll help you develop the career you want as you chart your own course for success.
You Have
  • Experience with one or more statistical analytical programming languages, including Python or R
  • Experience with source control and dependency management software, including Git or Maven
  • Experience with using relational databases, including MySQL
  • Experience with identifying analytic insight in data, developing visualizations, and presenting findings to stakeholders
  • Knowledge of object-oriented programming, including Java and C++
  • Knowledge of various machine learning algorithms and their designs, capabilities, and limitations
  • Knowledge of statistical analysis techniques
  • Ability to build complex extraction, transformation, and loading (ETL) pipelines to clean and fuse data together (a small sketch follows this list)
  • Ability to obtain a security clearance
  • BA or BS degree
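
A small "clean and fuse" sketch in the spirit of the ETL item above; the sources, keys, and fields are hypothetical:

    # Hypothetical sources and keys; shows a join-then-rollup fuse step.
    import pandas as pd

    sensors = pd.DataFrame({"asset_id": ["A1", "A2", "A2"], "reading": [9.1, 7.4, 8.0]})
    assets = pd.DataFrame({"asset_id": ["A1", "A2"], "site": ["Pax River", "San Diego"]})

    fused = sensors.merge(assets, on="asset_id", how="left")    # fuse on shared key
    print(fused.groupby("site")["reading"].mean())              # rollup to report or visualize
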
Nice If You Have
  • Experience with designing and implementing custom machine learning algorithms
  • Experience with graph algorithms and semantic Web
  • Experience with designing and setting up relational databases
  • Experience with Big Data computing environments, including Hadoop
  • Experience with Navy mission systems
  • MA degree in Mathematics, CS, or a related quantitative field
Clearance
Applicants selected will be subject to a security investigation and may need to meet eligibility requirements for access to classified information.
We're an EOE that empowers our people, no matter their race, color, religion, sex, gender identity, sexual orientation, national origin, disability, or veteran status, to fearlessly drive change.
phData, Inc.
  • Minneapolis, MN

Title: Big Data Solutions Architect (Minneapolis or US Remote)


Join the Game-Changers in Big Data  


Are you inspired by innovation, hard work and a passion for data?    


If so, this may be the ideal opportunity to leverage your background in Big Data and Software Engineering, Data Engineering or Data Analytics experience to design, develop and innovate big data solutions for a diverse set of clients.  


As a Solution Architect on our Big Data Consulting team, your responsibilities include:


    • Design, develop, and innovate Big Data solutions; partner with our internal Managed Services Architects and Data Engineers to build creative solutions to solve tough big data problems.
    • Determine the project road map, select the best tools, assign tasks and priorities, and assume general project management oversight for performance, data integration, ecosystem integration, and security of big data solutions
    • Work across a broad range of technologies from infrastructure to applications to ensure the ideal Big Data solution is implemented and optimized
    • Integrate data from a variety of data sources (data warehouse, data marts) utilizing on-prem or cloud-based data structures (AWS); determine new and existing data sources
    • Design and implement streaming, data lake, and analytics big data solutions

    • Create and direct testing strategies including unit, integration, and full end-to-end tests of data pipelines

    • Select the right storage solution for a project - comparing Kudu, HBase, HDFS, and relational databases based on their strengths

    • Utilize ETL processes to build data repositories; integrate data into the Hadoop data lake using Sqoop (batch ingest), Kafka (streaming), Spark, Hive or Impala (transformation); a minimal Kafka sketch follows this list

    • Partner with our Managed Services team to design and install on prem or cloud based infrastructure including networking, virtual machines, containers, and software

    • Determine and select best tools to ensure optimized data performance; perform Data Analysis utilizing Spark, Hive, and Impala

    • Mentor and coach Developers and Data Engineers. Provide guidance with project creation, application structure, automation, code style, testing, and code reviews
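
As a rough illustration of the Kafka streaming-ingest step mentioned above, a minimal producer using the kafka-python client; the broker address and topic are placeholders:

    # Placeholder broker/topic; kafka-python is one possible client library.
    import json
    from kafka import KafkaProducer

    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    # each event lands on the topic for a downstream Spark/Hive job to transform
    producer.send("clickstream", {"user": 42, "page": "/pricing"})
    producer.flush()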

Qualifications

  • 5+ years of previous experience as a Software Engineer, Data Engineer, or Data Analyst, combined with expertise in Hadoop technologies and Java programming
  • Technical Leadership experience leading/mentoring junior software/data engineers, as well as scoping activities on large scale, complex technology projects
  • Expertise in core Hadoop technologies including HDFS, Hive and YARN.  
  • Deep experience in one or more ecosystem products/languages such as HBase, Spark, Impala, Solr, Kudu, etc
  • Expert programming experience in Java, Scala, or other statically typed programming language
  • Strong working knowledge of SQL and the ability to write, debug, and optimize distributed SQL queries
  • Excellent communication skills including proven experience working with key stakeholders and customers
  • Ability to translate big picture business requirements and use cases into a Hadoop solution, including ingestion of many data sources, ETL processing, data access and consumption, as well as custom analytics
  • Customer relationship management including project escalations, and participating in executive steering meetings
  • Ability to learn new technologies in a quickly changing field
Antuit
  • Dallas, TX

Location: Dallas, TX or Chicago, IL. Open to candidates from other locations.


Antuit seeks a Data Scientist/Senior Data Scientist to develop machine learning algorithms in the Supply Chain and Forecasting domain with data science toolkits that include Python, R or SAS. This role also participates in the design process and is responsible for implementation. The ideal candidate will view the role as an excellent opportunity to master and support solving world-class data science problems.

 Data Scientist responsibilities and duties:

    • Develop machine learning algorithms in the Supply Chain and Forecasting domain with data science toolkits that include Python, R or SAS (a toy forecasting sketch follows this list)
    • Further design processes and implement them
    • Research and develop efficient and robust machine learning algorithms
    • Collaborate and work closely with cross-functional Antuit teams and domain experts to identify gaps and structure problems
    • Create meaningful presentations and analyses that tell a story, focused on insights, to communicate results and ideas to key decision makers at Antuit and client companies
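
A toy forecasting sketch in the supply-chain spirit of the duties above, fitting Holt-Winters exponential smoothing to synthetic weekly demand:

    # Synthetic weekly demand with annual seasonality; illustrative only.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    rng = np.random.default_rng(1)
    weeks = pd.date_range("2016-01-10", periods=156, freq="W")
    demand = pd.Series(
        100 + 10 * np.sin(np.arange(156) * 2 * np.pi / 52) + rng.normal(0, 3, 156),
        index=weeks,
    )

    fit = ExponentialSmoothing(demand, trend="add", seasonal="add", seasonal_periods=52).fit()
    print(fit.forecast(8))             # eight-week-ahead demand forecast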

Data Scientist qualifications and skills:

  • Experience / Education. Master's or PhD in Computer Science, Computer Engineering, Electrical Engineering, Statistics, Applied Math or another related field. 4-10 years of work experience involving quantitative data analyses for problem solving (work experience negotiable for recent PhDs with relevant research experience). Experience working with a cloud Big Data stack to orchestrate data gathering, cleansing, preparation and modelling. Additional experience with forecasting and optimization problems, and implementing data analytics solutions with Python, R or SAS
  • Knowledge. Exceptionally skilled in machine learning, data analytics, pattern recognition and predictive modelling
  • Strong communication and presentation skills. Effective communication and story-telling skills
  • Energy and enthusiasm. Passion for learning and contributing to development
  • A true team player. Collaborative mindset for effective communication across teams

EEOC

Antuit is an at-will, equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, marital status, age, national origin, ancestry, physical or mental disability, medical condition, pregnancy, genetic information, gender, sexual orientation, gender identity or expression, veteran status, or any other status protected under federal, state, or local law.

Perficient, Inc.
  • Dallas, TX

At Perficient, you'll deliver mission-critical technology and business solutions to Fortune 500 companies and some of the most recognized brands on the planet. And you'll do it with cutting-edge technologies, thanks to our close partnerships with the world's biggest vendors. Our network of offices across North America, as well as locations in India and China, will give you the opportunity to spread your wings, too.

We're proud to be publicly recognized as a Top Workplace year after year. This is due, in no small part, to our entrepreneurial attitude and collaborative spirit that sets us apart and keeps our colleagues impassioned, driven, and fulfilled.

About Our Data Governance Practice:


We provide exceptional data integration services in the ETL, Data Catalog, Data Quality, Data Warehouse, Master Data Management (MDM), Metadata Management & Governance space.

Perficient currently has a career opportunity for a Python Developer who resides in the vicinity of Jersey City, NJ or Dallas, TX.

Job Overview:

As a Python developer, you will participate in all aspects of the software development lifecycle, which includes estimating, technical design, implementation, documentation, testing, deployment, and support of applications developed for our clients. As a member working in a team environment, you will take direction from solution architects and leads on development activities.


Required skills:

  • 6+ years of experience in architecting, building and maintaining software platforms and large-scale data infrastructures in a commercial or open source environment
  • Excellent knowledge of Python
  • Good knowledge of and hands-on experience working with quant/data Python libraries (pandas/numpy, etc.)
  • Good knowledge of and hands-on experience designing APIs in Python (using Django/Flask, etc.); a minimal Flask sketch follows this list
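
A minimal Flask API sketch of the kind named above; the endpoint and payload shape are invented for illustration:

    # Invented endpoint and payload; a sketch of the Flask pattern, not client code.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/api/v1/quotes", methods=["POST"])
    def create_quote():
        payload = request.get_json(force=True)
        # echo a derived record back, standing in for real business logic
        return jsonify({"symbol": payload.get("symbol"), "status": "queued"}), 201

    if __name__ == "__main__":
        app.run(port=5000)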

Nice to have skills (in the order of priority):

  • Comfort and hands-on experience with AWS cloud services (S3, EC2, EMR, Lambda, Athena, QuickSight, etc.) and EMR tools (Hive, Zeppelin, etc.)
  • Experience building and optimizing big data pipelines, architectures, and data sets; a PySpark sketch follows this list
  • Hands-on experience with Hadoop MapReduce or other big data technologies and pipelines (Hadoop, Spark/PySpark, MapReduce, etc.)
  • Bash Scripting
  • Understanding of Machine Learning and Data Science processes and techniques
  • Experience in Java / Scala
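
As a rough illustration of the pipeline work listed above, a minimal PySpark sketch; the paths and column names are hypothetical, and on EMR the inputs would typically live on S3:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

    # Hypothetical input; header=True reads the first CSV row as column names.
    orders = spark.read.option("header", True).csv("/data/raw/orders.csv")

    # Typical steps: cast types, drop bad rows, aggregate, write partitioned output.
    clean = (orders
             .withColumn("amount", F.col("amount").cast("double"))
             .filter(F.col("amount").isNotNull()))

    daily = clean.groupBy("order_date").agg(F.sum("amount").alias("total_amount"))

    daily.write.mode("overwrite").partitionBy("order_date").parquet("/data/curated/daily_orders")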


Perficient full-time employees receive complete and competitive benefits. We offer a collaborative work environment, competitive compensation, generous work/life opportunities, and an outstanding benefits package that includes paid time off plus holidays. In addition, all colleagues are eligible for a number of rewards and recognition programs including billable bonus opportunities. Encouraging a healthy work/life balance and providing our colleagues with great benefits are just part of what makes Perficient a great place to work.

Computer Staff
  • Fort Worth, TX

We have been retained by our client, located in Fort Worth, Texas (south Ft Worth area), to deliver a Risk Modeler on a regular full-time basis. We prefer SAS experience but are interviewing candidates with R, SPSS, WPS, MATLAB, or similar statistical package experience if the candidate has a background in the financial loan credit risk analysis industry. Enjoy all the resources of a big company with none of the problems that small companies have; this company has doubled in size in 3 years. We have a keen interest in finding a business-minded statistical modeling candidate with some credit risk experience to build statistical models within the marketing and direct mail areas of financial services, lending, and loans. We are seeking a candidate with statistical modeling and data analysis skills who is interested in creating better ways to solve problems in order to increase loan originations, decrease loan defaults, and more. Our client is in business to find prospective borrowers and to originate, provide, service, and process loans and collect loan payments. The team works with third-party data vendors, credit reporting agencies, and data service providers on data augmentation, address standardization, fraud detection, decision sciences, and analytics, and this position includes the creation of statistical models. They support one of the largest, if not the largest, decision management operations in the US.


We require experience with statistical analysis tools such as SAS, MATLAB, R, WPS, SPSS, or Python if used for statistical analysis. This is a statistical modeling, risk modeling, model building, decision science, data analysis, and statistical analysis role requiring SQL and/or SQL Server experience and critical thinking skills to solve problems. We prefer candidates with experience in data analysis, SQL queries, joins (left, inner, outer, right), and reporting from data warehouses with tools such as Tableau, Cognos, Looker, or Business Objects. We prefer candidates with financial and loan experience, especially knowledge of loan originations, borrower profiles and demographics, modeling loan defaults, and statistical measures such as the Gini coefficient and the Kolmogorov-Smirnov (K-S) test for credit scoring and default prediction; a short sketch of both follows below.
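
For context, a minimal Python sketch of how the Gini coefficient and K-S statistic are commonly computed for a scoring model; the data here is synthetic and purely illustrative:

    import numpy as np
    from scipy.stats import ks_2samp
    from sklearn.metrics import roc_auc_score

    # Synthetic scores: y is 1 for default, 0 for non-default.
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, size=1000)
    scores = np.clip(0.3 * y + rng.normal(0.5, 0.2, size=1000), 0, 1)

    # K-S: maximum separation between the score distributions of the two groups.
    ks_stat = ks_2samp(scores[y == 1], scores[y == 0]).statistic

    # Gini is conventionally derived from AUC: Gini = 2 * AUC - 1.
    gini = 2 * roc_auc_score(y, scores) - 1

    print(f"K-S = {ks_stat:.3f}, Gini = {gini:.3f}")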


Primarily, however, critical thinking, statistical modeling, and math/statistics skills are needed to fulfill the tasks of this very interesting and important role, which includes growing your skills within this small risk/modeling team. Take on challenges in the creation and use of statistical models. There is no use for Hadoop or NoSQL databases here; this is not a "big data" position, and no machine learning or artificial intelligence is needed. Your role is to create and use statistical models: build models for direct mail in the financial lending space to reach the right customers with the right profiles, demographics, and credit ratings; take credit risk, credit analysis, and loan data and build a new model, validate the existing model, or recalibrate or rebuild it completely. The models focus on delivering answers within these areas of financial lending: risk analysis, credit analysis, direct marketing, direct mail, and defaults. Expect logistic regression in SAS or Knowledge Studio (a minimal Python equivalent is sketched below), with some light use of Looker as the BI tool on top of SQL Server data. Deliver solutions that help the business improve in these areas and become more profitable: seek answers to questions, seek solutions to problems, create models, dig into the data, and explore and find opportunities to improve the business. You are expected to work within the boundaries of defaults and loan values and help drive the business with ideas and new data sources that get better models in place. Use critical thinking to solve problems.
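
The posting names logistic regression in SAS or Knowledge Studio; here is a minimal Python equivalent with statsmodels, on synthetic data, for illustration only:

    import numpy as np
    import statsmodels.api as sm

    # Synthetic loan data: predict default (1/0) from two borrower attributes.
    rng = np.random.default_rng(1)
    n = 500
    X = np.column_stack([
        rng.normal(650, 50, n),    # credit score
        rng.normal(0.35, 0.10, n), # debt-to-income ratio
    ])
    log_odds = -0.01 * (X[:, 0] - 650) + 5.0 * (X[:, 1] - 0.35)
    y = (rng.random(n) < 1 / (1 + np.exp(-log_odds))).astype(int)

    # Fit a logistic regression with an intercept and print the coefficient table.
    model = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
    print(model.summary())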


Answer questions or solve problems such as:

What statistical models are needed to produce the answers that solve risk analysis and credit analysis problems?

Which customer profiles have the best demographics or credit risk for loans, to target with direct mail marketing pieces?

Why are loan defaults increasing or decreasing, and what is driving the change?



Required Skills

Bachelor's degree in Statistics, Finance, Economics, Management Information Systems, Math, Quantitative Business Analysis, Analytics, or any other related math, science, or finance degree. Some loan/lending business domain work experience.

Master's degree preferred, but not required.

Critical thinking skills.

Must have SQL skills (any database: SQL Server, MS Access, Oracle, PostgreSQL) and the ability to write queries and joins (inner, left, right, outer); a short sketch of these join types follows below. SQL Server is highly preferred.
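
A small pandas sketch of the four join types named above, with hypothetical tables; the 'how' argument mirrors SQL join semantics:

    import pandas as pd

    loans = pd.DataFrame({"borrower_id": [1, 2, 3], "amount": [5000, 7500, 3000]})
    defaults = pd.DataFrame({"borrower_id": [2, 4], "defaulted": [True, True]})

    # inner keeps matches only; left/right keep one side; outer keeps everything.
    for how in ["inner", "left", "right", "outer"]:
        joined = loans.merge(defaults, on="borrower_id", how=how)
        print(how, "->", len(joined), "rows")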

Any statistical analysis systems/packages experience, including statistical modeling experience and excellent math skills: SAS, MATLAB, R, WPS, SPSS, or Python if used for statistical analysis. Must have significant statistical modeling skills and experience.



Preferred Skills:
Loan credit analysis highly preferred. SAS highly preferred.
Experience with Tableau, Cognos, Business Objects, Looker, or similar data warehouse slicing-and-dicing and reporting tools; creating reports from data warehouse data. SQL Server SSAS, but only to pull reports. Direct marketing, direct mail marketing, and loan/lending to somewhat higher-risk borrowers.



Employment Type:   Regular Full-Time

Salary Range: $85,000–$130,000 / year

Benefits: health, medical, dental, and vision coverage costs the employee only about $100 per month.
401(k) with 4% matching after 1 year, bonus structure, paid vacation, paid holidays, paid sick days.

Relocation assistance can be provided for a very well-qualified candidate. Local candidates are preferred.

Location: Fort Worth, Texas
(area south of downtown Fort Worth, Texas)

Immigration: US citizens and those authorized to work in the US are encouraged to apply. We are unable to sponsor H-1B candidates at this time.

Please apply with your resume (MS Word format preferred) or with your LinkedIn profile via the buttons at the bottom of this job posting page:

http://www.computerstaff.com/?jobIdDescription=314  


Please call 817-424-1411 or send a text to 817-601-7238 to inquire or to follow up on your application. We recommend you call and leave a message, or at least send a text with your name. Thank you for your attention and efforts.

Acxiom
  • Austin, TX
As a Hadoop Administrator, you will assist leadership on projects related to Big Data technologies and provide software development support for client research projects. You will analyze the latest Big Data analytic technologies and their innovative applications in both business intelligence analysis and new service offerings, and bring these insights and best practices to Acxiom's Big Data projects. You must be able to benchmark systems, analyze system bottlenecks, and propose solutions to eliminate them. You will develop a highly scalable and extensible Big Data platform that enables collection, storage, modeling, and analysis of massive data sets from numerous channels. You must be a self-starter who continuously evaluates new technologies, innovates, and delivers solutions for business-critical applications.

What you will do:


  • Responsible for implementation and ongoing administration of Hadoop infrastructure
  • Provide technical leadership and collaborate with the engineering organization; develop key deliverables for the Data Platform Strategy: scalability, optimization, operations, availability, and roadmap
  • Own the platform architecture and drive it to the next level of effectiveness to support current and future requirements
  • Cluster maintenance as well as creation and removal of nodes using tools like Cloudera Manager Enterprise, etc.
  • Performance tuning of Hadoop clusters and Hadoop MapReduce routines
  • Monitor Hadoop cluster job performance and conduct capacity planning; see the sketch after this list
  • Help optimize and integrate new infrastructure via continuous integration methodologies (DevOps, Chef)
  • Manage and review Hadoop log files with the help of  Log management technologies (ELK)
  • Provide top-level technical help desk support for the application developers
  • Diligently team with the infrastructure, network, database, application, and business intelligence teams to guarantee high data quality, availability, and security
  • Collaborate with application teams to perform Hadoop updates, patches, and version upgrades when required
  • Mentor Hadoop engineers and administrators
  • Work with Vendor support teams on support tasks
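
As a flavor of the monitoring work above, a small Python sketch that shells out to the standard hdfs dfsadmin -report command and flags nodes with high disk usage; the threshold is an arbitrary assumption, and the Hadoop client tools are assumed to be on PATH:

    import subprocess

    def hdfs_report():
        # "hdfs dfsadmin -report" prints cluster capacity and per-datanode stats.
        result = subprocess.run(["hdfs", "dfsadmin", "-report"],
                                capture_output=True, text=True, check=True)
        return result.stdout

    def check_capacity(report, threshold=85.0):
        # The report contains "DFS Used%: NN.NN%" lines; warn above the threshold.
        for line in report.splitlines():
            if line.startswith("DFS Used%"):
                pct = float(line.split(":")[1].strip().rstrip("%"))
                if pct > threshold:
                    print(f"WARNING: DFS usage at {pct}%")

    if __name__ == "__main__":
        check_capacity(hdfs_report())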


Do you have?


  • Bachelor's degree in related field of study, or equivalent experience
  • 3+ years of Big Data Administration experience
  • Extensive knowledge of Hadoop-based data manipulation/storage technologies such as HDFS, MapReduce, YARN, HBase, Hive, Pig, Impala, and Sentry
  • Experience in capacity planning, cluster designing and deployment, troubleshooting and performance tuning
  • Strong operational expertise: good troubleshooting skills and an understanding of a system's capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networks
  • Experience in Hadoop cluster migrations or upgrades
  • Strong Linux/SAN administration skills and RDBMS/ETL knowledge
  • Good experience with Cloudera/Hortonworks/MapR distributions along with monitoring/alerting tools (Nagios, Ganglia, Zenoss, Cloudera Manager)
  • Scripting skills in Perl, Python, Shell Scripting, and/or Ruby on Rails
  • Knowledge of Java/J2EE and other web technologies
  • Understanding of On-premise and Cloud network architectures
  • DevOps experience is a great plus (CHEF, Puppet and Ansible)
  • Excellent verbal and written communication skills


 

ITCO Solutions, Inc.
  • Austin, TX
  • Another location: Sunnyvale, CA / Austin, TX

    C2C is ok


    Description: Looking for a candidate with the skills of a hands-on Big Data engineer

    • Programming experience in building high-quality software. Skills with Java, Python or Scala preferred
    • Big Data/Hadoop ecosystem programming experience highly desirable, especially using Java, Spark, Hive, Oozie, Kafka, and MapReduce
    • Experience with databases like Oracle, Teradata, Vertica, Hadoop
    • In-depth understanding of data structures, algorithms and end-to-end solutions design
    • Experience in designing and developing ETL data pipelines; proficient in writing advanced SQL, with expertise in SQL performance tuning (see the sketch after this list)
    • Knowledge of distributed computing, parallel programming, concurrency control, transaction processing.
    • Experience managing and processing large data sets on multi-server, distributed systems, from inception to execution
    • Experience in designing and building dimensional data models to improve accessibility, efficiency, and quality of data
    • Demonstrate strong understanding of development processes and agile methodologies
    • Strong analytical and communication skills
    • Self-driven, highly motivated, and able to learn quickly
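
To illustrate the "advanced SQL on Spark" combination above, a minimal PySpark window-function sketch; the table and columns are hypothetical:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sql-sketch").getOrCreate()

    spark.createDataFrame(
        [("2024-01-01", "A", 10.0), ("2024-01-02", "A", 12.0), ("2024-01-01", "B", 7.0)],
        ["ds", "product", "revenue"],
    ).createOrReplaceTempView("sales")

    # Rank each product's days by revenue with a window function.
    spark.sql("""
        SELECT ds, product, revenue,
               RANK() OVER (PARTITION BY product ORDER BY revenue DESC) AS rnk
        FROM sales
    """).show()
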
Experian
  • Austin, TX

Experian Consumer Services – Careers That Define “What’s the Next Big (Data) Thing” for Consumers?

What could be more exciting – personally and professionally – than being part of a “disruptive” business? Consider taking your career to the next level by joining the Leader that continues to disrupt the competition. As the “disruptor” and market leader we pride ourselves on building new markets, leading the pack through continuous evolution and innovation. It’s a position Experian Consumer Services has enjoyed for more than a decade and we’re always looking for the talent that can help expand that lead.

When you’re the leader, it’s always urgent, important and market-changing. We think that defines the true “disruptive” business. Join us and create some chaos for the competition.

Key Responsibilities:



  • Drive enterprise-wide technology integration strategy-setting and implementation based on leading-edge industry standards, best practices and comprehensive understanding of business operations.

  • Responsible for design and implementation of integration strategy, architecture and platforms

  • Accountable for adhering to enterprise architecture standards, ensuring integration technology standards and best practices are maintained across the organization and leading enterprise architecture strategy-setting

  • Leads governance processes of new technologies and solutions to ensure consistent technology life cycle management

  • Lead integration efforts across all business areas and client groups including key data and infrastructure components across the enterprise.


Knowledge, Skills, & Experience



  • 7+ years of experience in MySQL

  • Deep knowledge in MySQL databases – physical / logical database design, security model, HA/DR, performance optimization, DBA and application design/development

  • Ability to create clear, detailed, concise documentation — architecture diagrams, presentations, and design documents

  • Ability to identify issues with existing schemas, suggest improvements

  • 2+ years of hands-on experience with Apache Hadoop, Spark and Python.

  • Excellent understanding of Entity Relationship Diagrams (ERDs)

  • Experience working in cloud computing and distributed data environments is a plus

  • Strong data modeling and design skills.

  • Experience with data migration and ETL tools.

  • Strong experience in translating business requirements to data solutions.

  • Experience in large data environments with highly performant, highly scalable solutions.

  • Experience with Unix/Linux operating environments as well as shell scripts.

  • Strong SQL and PL/SQL development skills.

  • Strong at diagnosing queries and suggesting improvements to SQL queries for efficiency, latency, etc.; see the sketch at the end of this list

  • Ability to work in a team with highly motivated people.

  • Data analysis, modeling, and integration.

  • Database management, design, and development.

  • Strong debugging and technical troubleshooting skills.

  • Ability to prioritize and work on multiple projects concurrently.

  • A proactive approach to problem solving and decision-making.

  • Strong attention to detail.

  • Excellent written and oral communication skills.

  • Collaborate effectively with the database and other technology teams.

  • Knowledge of release management and version control practices and procedures.
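
As a small illustration of the query-diagnosis skill above, a Python sketch using the mysql-connector-python driver to inspect a query plan; the connection details and schema are hypothetical:

    import mysql.connector

    # Hypothetical connection parameters.
    conn = mysql.connector.connect(
        host="localhost", user="analyst", password="secret", database="crm",
    )
    cur = conn.cursor()

    # EXPLAIN shows the optimizer's plan; a missing index typically shows up as
    # type=ALL (a full table scan) with a large "rows" estimate.
    cur.execute("EXPLAIN SELECT * FROM customers WHERE email = %s", ("a@example.com",))
    for row in cur.fetchall():
        print(row)

    cur.close()
    conn.close()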