OnlyDataJobs.com

WorldLink US
  • Dallas, TX

Business Analyst

Dallas, TX

Full time, direct hire position

Seeking a bright, motivated individual with a wide range of skills and the ability to process large data sets while communicating findings clearly and concisely.

Responsibilities

  • Analyze data from a myriad of sources and generate valuable insights
  • Interface with our sales team and clients to discuss issues related to data availability and customer targeting
  • Execute marketing list processing for mail, email and integrated multi-channel campaigns
  • Assist in development of tools to optimize and automate internal systems and processes
  • Assist in conceptualization and maintenance of business intelligence tools

Requirements

  • Bachelor's degree in math, economics, statistics or a related quantitative field
  • An ability to deal and thrive with imperfect, mixed, varied and inconsistent data from multiple sources
  • Must possess a rigorous, disciplined analytical approach, as well as dynamic, abstract problem-solving skills (get to the answer via both inspiration and perspiration)
  • Proven ability to work in a fast-paced environment and to meet changing deadlines / priorities on multiple simultaneous projects
  • Extensive experience writing queries for large, complex data sets in SQL (MySQL, PostgreSQL, Oracle, other SQL/RDBMS)
  • Highly proficient with Excel (or an alternate spreadsheet application like OpenOffice Calc) including macros, pivot tables, vlookups, charts and graphs
  • Solid knowledge of statistics and the ability to perform analysis proficiently in R, SAS or SPSS
  • Strong interpersonal skills as a team leader and team player
  • Self-learning attitude, constantly pushing towards new opportunities, approaches, ideas and perspectives
  • Bonus points for experience with high-level, dynamic programming languages: Python, Ruby, Perl, Lisp or PHP
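To make the SQL requirement above concrete, here is a minimal sketch of the kind of grouped aggregation query such a role involves. The schema and data are invented for illustration (an in-memory SQLite database standing in for whatever RDBMS is actually used):

```python
# Hypothetical campaign-response table; compute response rate per
# campaign/channel -- a typical customer-targeting aggregation.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE responses (campaign TEXT, channel TEXT, responded INTEGER);
INSERT INTO responses VALUES
  ('spring', 'email', 1), ('spring', 'mail', 0),
  ('spring', 'email', 0), ('fall',   'email', 1),
  ('fall',   'mail',  1), ('fall',   'mail',  0);
""")

rows = conn.execute("""
    SELECT campaign, channel,
           ROUND(AVG(responded), 2) AS response_rate,
           COUNT(*)                 AS n
    FROM responses
    GROUP BY campaign, channel
    ORDER BY response_rate DESC
""").fetchall()

for row in rows:
    print(row)
```

The same GROUP BY / ORDER BY pattern scales to the "large, complex data sets" the posting mentions; only the table sizes change.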

  **No visa sponsorship available

Acxiom
  • Austin, TX
As a Senior Hadoop Administrator, you will assist leadership on projects related to Big Data technologies and provide software development support for client research projects. You will analyze the latest Big Data analytic technologies and their innovative applications in both business intelligence analysis and new service offerings, and bring these insights and best practices to Acxiom's Big Data projects. You are able to benchmark systems, analyze system bottlenecks, and propose solutions to eliminate them. You will develop a highly scalable and extensible Big Data platform that enables the collection, storage, modeling, and analysis of massive data sets from numerous channels. You are also a self-starter, able to continuously evaluate new technologies, innovate, and deliver solutions for business-critical applications.


What you will do:


  • Responsible for implementation and ongoing administration of Hadoop infrastructure
  • Provide technical leadership and collaborate with the engineering organization; develop key deliverables for the Data Platform Strategy: scalability, optimization, operations, availability, and roadmap.
  • Lead the platform architecture and drive it to the next level of effectiveness to support current and future requirements
  • Cluster maintenance as well as creation and removal of nodes using tools like Cloudera Manager Enterprise, etc.
  • Performance tuning of Hadoop clusters and Hadoop MapReduce routines
  • Screen Hadoop cluster job performances and capacity planning
  • Help optimize and integrate new infrastructure via continuous integration methodologies (DevOps tooling such as Chef)
  • Manage and review Hadoop log files with the help of log management technologies (e.g., the ELK stack)
  • Provide top-level technical help desk support for the application developers
  • Diligently team with the infrastructure, network, database, application and business intelligence teams to guarantee high data quality, availability and security
  • Collaborate with application teams to perform Hadoop updates, patches and version upgrades when required
  • Mentor Hadoop engineers and administrators
  • Work with vendor support teams on support tasks


Do you have?


  • Bachelor's degree in related field of study, or equivalent experience
  • 6+ years of Big Data Administration Experience
  • Extensive knowledge of and hands-on experience with Hadoop-based data manipulation/storage technologies such as HDFS, MapReduce, YARN, Spark/Kafka, HBase, Hive, Pig, Impala, R and Sentry/Ranger/Knox
  • Experience in capacity planning, cluster designing and deployment, troubleshooting and performance tuning
  • Experience supporting Data Science teams and Analytics teams on complex code deployment, debugging and performance optimization problems
  • Strong operational expertise: excellent troubleshooting skills and an understanding of system capacity, bottlenecks, and core resource utilization (CPU, OS, storage, and network)
  • Experience in Hadoop cluster migrations or upgrades
  • Strong scripting skills in Perl, Python, shell scripting, and/or Ruby
  • Linux/SAN administration skills and RDBMS/ETL knowledge
  • Good experience with Cloudera, Hortonworks, and/or MapR distributions, along with monitoring/alerting tools (Nagios, Ganglia, Zenoss, Cloudera Manager)
  • Strong problem solving and critical thinking skills
  • Excellent verbal and written communication skills
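The capacity-planning skill listed above often starts with simple arithmetic. This back-of-the-envelope sketch (all numbers are illustrative, not from the posting) estimates how many data nodes an HDFS cluster needs once replication and free-space headroom are accounted for:

```python
# Toy HDFS sizing: nodes = ceil(logical data x replication / usable disk
# per node), where "usable" leaves some headroom free for growth and temp.
import math

def hdfs_nodes_needed(data_tb, replication=3, headroom=0.25,
                      disk_per_node_tb=48):
    """Node count needed to hold `data_tb` of logical data in HDFS."""
    raw_tb = data_tb * replication                       # HDFS stores copies
    usable_per_node = disk_per_node_tb * (1 - headroom)  # keep headroom free
    return math.ceil(raw_tb / usable_per_node)

print(hdfs_nodes_needed(500))  # 500 TB logical data -> 42 nodes
```

Real sizing also weighs YARN memory, intermediate shuffle space, and growth rate, but the replication-times-headroom calculation is the usual starting point.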


What will set you apart:


  • Solid understanding and hands-on experience of Big Data on private/public cloud technologies (AWS/GCP/Azure)
  • DevOps experience (Chef, Puppet and Ansible)
  • Strong knowledge of Java/J2EE and other web technologies

 
Aspirent
  • Atlanta, GA

MUST BE A US CITIZEN


Responsibilities

·       Help to create and deliver project proposals and guide our strategic thinking on offering analytics consulting services to our clients

·       Consult with clients to gain an understanding of the current state of the business area(s), their immediate and long term needs, any KPI goals, and to determine what success looks like

·       Consult heavily with business users and stakeholders to 1) identify, capture and leverage appropriate data sources across areas of the business, 2) ensure that analytical solutions are tailored to business needs and will support or result in actionable customer strategies, and 3) determine delivery & implementation options

·       Develop & help implement strategic analytical and data mining solutions to understand key business behaviors such as customer acquisition, product up-sell, customer retention, lifetime value, channel preferences, customer satisfaction, and loyalty drivers, etc.

·       Lead analytics and strategy engagements; Provide project-specific guidance to our team members in performing analyses and delivering strategic recommendations; Create and maintain project plans, project schedules, and other project documentation

·       Provide statistical methodology and project management support for commercial deliverables as well as custom studies

Qualifications:  

·       5+ years' experience in any of the following areas:

·       Statistical/Data Model development leveraging SAS, Python, and R.

·       Data Mining/Financial Analysis

·       Programming strength in a variety of languages: SQL, C/C++, Java, Python, Perl

·       Optional programming strength in the following Hadoop tools: MapReduce, Pig, Hive, HBase

Knowledge, Skills, & Abilities:  

·       Strong written/verbal communication and presentation skills

·       The ability to work with all levels of staff & leadership

·       Ability to self-motivate, adapt, and multi-task in a fast-paced environment

·       Regression (linear, multiplicative, logistic, censored, Cox, etc.)

·       Test Design/Design of Experiments

·       Segmentation and clustering

·       Decision tree analysis

·       Neural networks, genetic algorithms and other computational methods

·       Mathematical programming and optimization

·       Structural equations modeling

·       Conjoint analysis

·       Time series analysis and forecasting, smoothing techniques

·       Information design, info-graphics, scorecard/dashboard/presentation development
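As a tiny, self-contained instance of the first skill in the list above (linear regression), here is ordinary least squares for one predictor in plain Python. It is a teaching sketch only; in practice this work would be done in SAS, Python (statsmodels/scikit-learn) or R as the posting says:

```python
# Closed-form simple linear regression: slope = cov(x, y) / var(x),
# intercept chosen so the line passes through the means.
def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sxy / sxx
    return slope, my - slope * mx

slope, intercept = fit_line([1, 2, 3, 4], [3, 5, 7, 9])  # data on y = 2x + 1
print(slope, intercept)  # 2.0 1.0
```

The other listed methods (logistic, Cox, segmentation, time series) follow the same shape: choose a loss, fit parameters, validate against held-out data.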

EDUCATION REQUIREMENTS:

·       BA or BS required; MS, MBA or PhD preferred

·       Formal training in Economics/Econometrics, Statistics, Operations Research, Finance or Mathematics is a plus; familiarity is necessary.

WalletHub
  • No office location

Company details


WalletHub is one of the leading personal finance destinations in the US and rapidly growing. We’re looking for a highly skilled and motivated Senior Systems Administrator for a full-time, permanent position.


Requirements


You are the ideal candidate for this job if you have:



  • At least 5 years of experience in supporting AWS based production infrastructure.

  • Bachelor's or Master’s degree in Computer Science or equivalent work experience.

  • 3+ years of experience administering UNIX/Linux servers, or equivalent work experience.

  • 3+ years of experience with Apache, Tomcat or other Java application servers, and relational database servers like MySQL (LAMP experience is highly preferred).

  • Experience with monitoring tools like Nagios, Tripwire, AIDE, etc., and other custom monitoring tools.

  • Experience with configuring and securing mission critical production servers.

  • Experience with configuring load balancers and data.

  • Experience in Shell Scripting or Perl, with experience implementing automation and monitoring using shell scripting.

  • Experience in analysis and system performance tuning.

  • Critical thinking skills in a complex IT environment to analyze, troubleshoot, and resolve problems without direction.

  • Outstanding organizational skills and the ability to handle multiple projects simultaneously while meeting deadlines.

  • Excellent verbal and written communication skills.

  • Willingness to work hard (50 hrs per week).
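The scripting-for-automation-and-monitoring requirement above can be illustrated with a few lines. This sketch uses only the standard library; the path and threshold are made-up examples, and a real version would feed an alerting system like the Nagios setup the posting mentions rather than return a string:

```python
# Minimal disk-usage monitor: report when a filesystem crosses a
# utilization threshold, the building block of many cron-driven checks.
import shutil

def disk_alert(path="/", threshold=0.90):
    """Return an alert string when usage exceeds `threshold`, else None."""
    usage = shutil.disk_usage(path)
    used_frac = usage.used / usage.total
    if used_frac > threshold:
        return f"ALERT: {path} at {used_frac:.0%} capacity"
    return None

print(disk_alert("/", threshold=0.0))  # threshold forced low to demo an alert
```

The same pattern (measure, compare to threshold, emit alert) extends to load averages, open connections, or log-scan counts.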


Responsibilities



  • Ensure proper security, monitoring, alerting and reporting for the infrastructure and be the on-call for production servers.

  • Develop security monitoring and other tools to ensure the integrity and availability of our applications, server resources, reviewing system and application logs.

  • Work with the incident team to diagnose and recover from hardware or software failures working with or as the Incident Commander to coordinate and communicate with our internal customers.

  • Assist project teams with technical issues during development efforts.

  • Gather system requirements and support several project teams in evolving, testing and rolling out new products and services, then transition the site or product to post-launch operations activities throughout the life of the product or service.

  • Work with the application development team and other systems engineers to make improvements to current infrastructure.

  • Document processes and procedures and follow a formal change management procedure.


Our Offer



  • Very competitive salary based on prior experience and qualifications

  • Potential for stock options after the first year

  • Raise and advancement opportunities based on periodic evaluations

  • Health benefits (in case you will be working from our office in Washington DC)

  • Visa sponsorship



Note: This position requires candidates to be living in the US. The position can be performed remotely if you don't live in the Washington DC area.



More about WalletHub


WalletHub is a high-growth fintech company based in Washington, DC that is looking for talented, hard-working individuals to help us reshape personal finance. More specifically, we are harnessing the power of data analytics and artificial intelligence to build the brain of a smart financial advisor, whose services we’re offering to everyone for free. The WalletHub brain enables users to make better financial decisions in a fraction of the time with three unique features:


1) Customized Credit-Improvement Tips: WalletHub identifies improvement opportunities and guides you through the necessary corrections.


2) Personalized Money-Saving Advice: WalletHub’s savings brain constantly scours the market for load-lightening opportunities, bringing you only the best deals.


3) Wallet Surveillance: Personal finance isn’t as scary with 24/7 credit monitoring providing backup, notifying you of important credit-report changes.


In addition to the valuable intelligence the brain provides, WalletHub is the first and only service to offer free credit scores and full credit reports that are updated on a daily basis without user interaction, rather than weekly or monthly and only when a user logs in. Some other services hang their hats on free credit scores and reports, yet those remain, in WalletHub's view, minor pieces of a much larger puzzle.



How to Apply


To get our attention, all you need to do is send us a resume. If we believe that you will be a good match, we'll contact you to arrange the next steps. You can apply directly on Stack Overflow or email your application to jobs.dev@wallethub.com


Limelight Networks
  • Phoenix, AZ

Job Purpose:

The Sr. Data Services Engineer assists in maintaining the operational aspects of Limelight Networks platforms, provides guidance to the Operations group and acts as an escalation point for advanced troubleshooting of systems issues. The Sr. Data Services Engineer assists in the execution of tactical and strategic operational infrastructure initiatives by building and managing complex computing systems and processes that facilitate the introduction of new products and services while allowing existing services to scale.


Qualifications: Experience and Education (minimums)

  • Bachelor's degree or equivalent experience.
  • 2+ years' experience working with MySQL (or other database technologies such as MongoDB, Cassandra, Hadoop, etc.) in a large-scale enterprise environment.
  • 2+ years Linux Systems Administration experience.
  • 2+ years with version control and shell scripting, plus one or more scripting languages including Python, Perl, Ruby and PHP.
  • 2+ years with configuration management systems, using Puppet, Chef or Salt.
  • Experience with MySQL HA/clustering solutions; Corosync, Pacemaker and DRBD preferred.
  • Experience supporting open-source messaging solutions such as RabbitMQ or ActiveMQ preferred.

Knowledge, Skills & Abilities

  • Collaborative in a fast-paced environment while providing exceptional visibility to management and end-to-end ownership of incidents, projects and tasks.
  • Ability to implement and maintain complex datastores.
  • Knowledge of configuration management and release engineering processes and methodologies.
  • Excellent coordination, planning and written and verbal communication skills.
  • Knowledge of the Agile project management methodologies preferred.
  • Knowledge of a NoSQL/Big Data platform; Hadoop, MongoDB or Cassandra preferred.
  • Ability to participate in a 24/7 on call rotation.
  • Ability to travel when necessary.

Essential Functions:

  • Develop and maintain core competencies of the team in accordance with applicable architectures and standards.
  • Participate in capacity management of services and systems.
  • Maintain plans, processes and procedures necessary for the proper deployment and operation of systems and services.
  • Identify gaps in the operation of products and services and drive enhancements.
  • Evaluate release processes and tools to find areas for improvement.
  • Contribute to the release and change management process by collaborating with the developers and other Engineering groups.
  • Participate in development meetings and implement required changes to the operational architecture, standards, processes or procedures and ensure they are in place prior to release (e.g., monitoring, documentation and metrics).
  • Maintain a positive demeanor and a high level of professionalism at all times.
  • Implement proactive monitoring capabilities that ensure minimal disruption to the user community including: early failure detection mechanisms, log monitoring, session tracing and data capture to aid in the troubleshooting process.
  • Implement HA and DR capabilities to support business requirements.
  • Troubleshoot and investigate database related issues.
  • Maintain migration plans and data refresh mechanisms to keep environments current and in sync with production.
  • Implement backup and recovery procedures utilizing various methods to provide flexible data recovery capabilities.
  • Work with management and security team to assist in implementing and enforcing security policies.
  • Create and manage user and security profiles ensuring application security policies and procedures are followed.
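The backup-and-recovery function above usually includes a retention policy. This sketch (filenames and the keep-7 policy are hypothetical) shows the pruning half of such a policy: keep the N newest dated dumps and list the rest for deletion:

```python
# Retention pruning for dated database dumps. ISO-8601 dates sort
# lexicographically, so a plain reverse sort yields newest-first order.
def backups_to_delete(filenames, keep=7):
    """Return the dump names older than the `keep` most recent ones."""
    dated = sorted(filenames, reverse=True)  # newest first
    return dated[keep:]

names = [f"db-2024-01-{d:02d}.sql.gz" for d in range(1, 11)]
print(backups_to_delete(names, keep=7))  # the three oldest dumps
```

A production procedure would pair this with restore testing, since a backup that has never been restored provides no real recovery flexibility.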

ettain group
  • Raleigh, NC

Role: Network Engineer R/S

Location: RTP, primarily onsite but some flexibility for remote after initial ramp-up

Pay Rate: 35-60/hr depending on experience.

Interview Process:
Video WebEx (30-minute screen)
Panel interview with 3-4 CPOC engineers - in-depth technical screen

Personality:

·         Customer facing

·         Experience dealing with high pressure situations

·         Be able to handle technology at the level the customer will throw at them

·         Customers test the engineers to see if tech truly is working

·         Have to be able to figure out how to make it work

Must have Tech:

·         Core R/S

·         VMware


Who You'll Work With:

The POV Services Team (dCloud, CPOC, CXC, etc) provides services, tools and content for Cisco field sales and channel partners, enabling them to highlight Cisco solutions and technologies to customers.

What You'll Do

As a Senior Engineer, you are responsible for the development, delivery, and support of a wide range of Enterprise Networking content and services for Cisco Internal, Partner and Customer audiences.

Content Roadmap, Design and Project Management 25%

    • You will document and scope all projects prior to entering project build phase.
    • You'll work alongside our platform/automation teams to review applicable content to be hosted on Cisco dCloud.
    • You specify and document virtual and hardware components, resources, etc. required for content delivery.
    • You can identify and prioritize all project-related tasks while working with the Project Manager to develop a timeline with high expectations to meet project deadlines.
    • You will successfully collaborate and work with a globally-dispersed team using collaboration tools, such as email, instant messaging (Cisco Jabber/Spark), and teleconferencing (WebEx and/or TelePresence).

Content Engineering and Documentation 30%

    • Document device connectivity requirements of all components (virtual and HW) and build as part of pre-work.
    • Work with the Netops team on the racking, cabling, imaging, and access required for the content project.
    • As part of the development cycle, the developer will work collaboratively with the business unit technical marketing engineers (TME) and WW EN Sales engineers to configure solution components, including Cisco routers, switches, wireless LAN controllers (WLC), SD-Access, DNA Center, Meraki, SD-WAN (Viptela), etc.
    • Work with BU, WW EN Sales and marketing resources to draft, test and troubleshoot compelling demo/lab/story guides that contribute to the field sales teams and generate high interest and utilization.
    • Work with POV Services Technical Writer to format/edit/publish content and related documents per POV Services standards.
    • Work as the liaison to the operations and support teams to resolve issues identified during the development and testing process, providing technical support and making design recommendations for fixes.
    • Perform resource studies using VMware vCenter to ensure an optimal balance of content performance, efficiency and stability before promoting/publishing production content.

Content Delivery 25%

    • SD-Access POV, SD-WAN POV Presentations, Webex and Video recordings, TOI, SE Certification Proctor, etc.
    • Customer engagement at the customer location, at a Cisco office, or remote, delivering proof of value; and at the Cisco office, delivering Test Drive and/or Technical Solutions Workshop content.
    • Deliver training, TOI, and presentations at events (Cisco Live, GSX, SEVT, Partner VT, etc).
    • Work with the POV Services owners, architects, and business development team to market, train, and increase global awareness of new/revised content releases.

Support and Other 20%

    • You provide transfer of information and technical support to Level 1 & 2 support engineers, program managers and others ensuring that content is understood and in working order.
    • You will test and replicate issues, isolate the root cause, and provide timely workarounds and/or short/long term fixes.
    • You will be monitoring any support trends for assigned content. Track and log critical issues effectively using Jira.
    • You provide Level 3 user support directly/indirectly to Cisco and Partner sales engineers while supporting and mentoring peer/junior engineers as required.

Who You Are

    • You are well versed in the use of standard design templates and tools (Microsoft Office including Visio, Word, Excel, PowerPoint, and Project).
    • You bring an uncanny ability to multitask between multiple projects, user support, training, events, etc. and shifting priorities.
    • Demonstrated, in-depth working knowledge/certification of routing, switching and WLAN design, configuration and deployment. Cisco certifications including CCNA, CCNP and/or CCIE (CCIE preferred) in R&S.
    • You possess professional or expert knowledge/experience with Cisco Service Provider solutions.
    • You are an Associate or have professional knowledge with Cisco Security including Cisco ISE, Stealthwatch, ASA, Firepower, AMP, etc.
    • You have the ability to travel to Cisco internal, partner and customer events, roadshows, etc. to train and raise awareness to drive POV Services adoption and sales. Up to 40% travel.
    • You bring VMWare/ESXi experience building servers, install VMware, deploying virtual appliances, etc.
    • You have Linux experience or certifications including CompTIA Linux+, Red Hat, etc.
    • You're experienced using Tool Command Language (Tcl), Perl, Python, etc., as well as Cisco and 3rd-party traffic, event and device generation applications/tools/hardware (IXIA, Sapro, Pagent, etc.).
    • You've used Cisco and 3rd-party management/monitoring/troubleshooting solutions; Cisco: DNA Center, Cisco Prime, Meraki, Viptela, CMX.
    • 3rd party solutions: Solarwinds, Zenoss, Splunk, LiveAction or other to monitor and/or manage an enterprise network.
    • Experience using Wireshark and PCAP files.
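The Wireshark/PCAP bullet above has a small scriptable core: classic PCAP files begin with a fixed 24-byte global header. This stdlib-only sketch builds a fake header and parses it back; field layout follows the classic pcap format, and the sample values (v2.4, snaplen 65535, linktype 1 for Ethernet) are illustrative:

```python
# Parse the 24-byte classic PCAP global header: magic, version major/minor,
# timezone offset, sigfigs, snaplen, link type.
import struct

PCAP_MAGIC = 0xA1B2C3D4  # classic pcap magic number (native byte order)

def parse_global_header(data):
    """Return ((vmajor, vminor), snaplen, linktype) from a pcap header."""
    magic = struct.unpack("<I", data[:4])[0]
    endian = "<" if magic == PCAP_MAGIC else ">"  # infer writer's byte order
    _, vmaj, vmin, _, _, snaplen, linktype = struct.unpack(
        endian + "IHHiIII", data[:24])
    return (vmaj, vmin), snaplen, linktype

# Build a fake little-endian header and round-trip it through the parser.
hdr = struct.pack("<IHHiIII", PCAP_MAGIC, 2, 4, 0, 0, 65535, 1)
print(parse_global_header(hdr))  # ((2, 4), 65535, 1)
```

Per-packet records follow the same idea (a 16-byte record header, then payload), which is what tools like Wireshark iterate over.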

Why Cisco

At Cisco, each person brings their unique talents to work as a team and make a difference.

Yes, our technology changes the way the world works, lives, plays and learns, but our edge comes from our people.

    • We connect everything: people, process, data and things, and we use those connections to change our world for the better.
    • We innovate everywhere - From launching a new era of networking that adapts, learns and protects, to building Cisco Services that accelerate businesses and business results. Our technology powers entertainment, retail, healthcare, education and more from Smart Cities to your everyday devices.
    • We benefit everyone - We do all of this while striving for a culture that empowers every person to be the difference, at work and in our communities.
ettain group
  • Raleigh, NC

Role: R/S Network Engineer

Pay: 50-60/hr

Location: Raleigh, NC (some flexibility with remote after initial ramp-up)

18 month contract


Who You'll Work With:

The POV Services Team (dCloud, CPOC, CXC, etc) provides services, tools and content for Cisco field sales and channel partners, enabling them to highlight Cisco solutions and technologies to customers.

What You'll Do

As a Senior Engineer, you are responsible for the development, delivery, and support of a wide range of Enterprise Networking content and services for Cisco Internal, Partner and Customer audiences.

Content Roadmap, Design and Project Management 25%

  • You will document and scope all projects prior to entering project build phase.
  • Youll work alongside our platform/automation teams to review applicable content to be hosted on Cisco dCloud.
  • You specify and document virtual and hardware components, resources, etc. required for content delivery.
  • You can identify and prioritize all project-related tasks while working with Project Manager to develop a timeline with high expectations to meet project deadlines.\
  • You will successfully collaborate and work with a globally-dispersed team using collaboration tools, such as email, instant messaging (Cisco Jabber/Spark), and teleconferencing (WebEx and/or TelePresence).

Content Engineering and Documentation 30%

  • Document device connectivity requirements of all components (virtual and HW) and build as part of pre-work.
  • Work with the Netops team to rack, cabling, imaging, and access required for the content project.
  • As part of the development cycle, the developer will work collaboratively with the business unit technical marketing engineers (TME) and WW EN Sales engineers to configure solution components, including Cisco routers, switches, wireless LAN controllers (WLC), SD-Access, DNA Center, Meraki, SD-WAN (Viptela), etc.
  • Work with BU, WW EN Sales and marketing resources to draft, test and troubleshoot compelling demo/lab/story guides that contribute to the field sales teams and generate high interest and utilization.
  • Work with POV Services Technical Writer to format/edit/publish content and related documents per POV Services standards.
  • Work as the liaison to the operations and support teams to resolve issues identified during the development and testing process, providing technical support and making design recommendations for fixes.
  • Perform resource studies using VMware vCenter to ensure an optimal balance of content performance, efficiency and stability before promoting/publishing production content.

Content Delivery 25%

  • SD-Access POV, SD-WAN POV Presentations, Webex and Video recordings, TOI, SE Certification Proctor, etc.
  • Customer engagement at customer location, Cisco office, remote delivering proof of value and at Cisco office delivering Test Drive and or Technical Solutions Workshop content.
  • Deliver training, TOI, and presentations at events (Cisco Live, GSX, SEVT, Partner VT, etc).
  • Work with the POV Services owners, architects, and business development team to market, train, and increase global awareness of new/revised content releases.

Support and Other 20%

  • You provide transfer of information and technical support to Level 1 & 2 support engineers, program managers and others ensuring that content is understood and in working order.
  • You will test and replicate issues, isolate the root cause, and provide timely workarounds and/or short/long term fixes.
  • You will be monitoring any support trends for assigned content. Track and log critical issues effectively using Jira.
  • You provide Level 3 user support directly/indirectly to Cisco and Partner sales engineers while supporting and mentoring peer/junior engineers as required.

Who You Are

  • You are well versed in the use of standard design templates and tools (Microsoft Office including Visio, Word, Excel, PowerPoint, and Project).
  • You bring an uncanny ability to multitask between multiple projects, user support, training, events, etc. and shifting priorities.
  • Demonstrated, in-depth working knowledge/certification of routing, switching and WLAN design, configuration and deployment. Cisco certifications in R&S including CCNA, CCNP and/or CCIE (CCIE preferred).
  • You possess professional or expert knowledge/experience with Cisco Service Provider solutions.
  • You have associate- or professional-level knowledge of Cisco Security, including Cisco ISE, Stealthwatch, ASA, Firepower, AMP, etc.
  • You have the ability to travel to Cisco internal, partner and customer events, roadshows, etc. to train and raise awareness to drive POV Services adoption and sales. Up to 40% travel.
  • You bring VMware/ESXi experience: building servers, installing VMware, deploying virtual appliances, etc.
  • You have Linux experience or certifications including CompTIA Linux+, Red Hat, etc.
  • You have experience using Tool Command Language (Tcl), Perl, Python, etc., as well as Cisco and 3rd-party traffic, event and device generation applications/tools/hardware (IXIA, Sapro, Pagent, etc.).
  • You've used Cisco and 3rd-party management/monitoring/troubleshooting solutions; Cisco: DNA Center, Cisco Prime, Meraki, Viptela, CMX.
  • 3rd party solutions: Solarwinds, Zenoss, Splunk, LiveAction or other to monitor and/or manage an enterprise network.
  • Experience using Wireshark and PCAP files.
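The Wireshark/PCAP experience above starts with understanding the capture file layout. As a small illustration, here is a minimal Python sketch that reads the 24-byte global header of a classic libpcap file (an assumption: classic PCAP format, not pcapng; `read_pcap_global_header` is a hypothetical helper, not a Wireshark API):

```python
import struct

def read_pcap_global_header(path):
    """Parse the 24-byte global header of a classic PCAP capture file."""
    with open(path, "rb") as f:
        header = f.read(24)
    magic = struct.unpack("<I", header[:4])[0]
    if magic == 0xA1B2C3D4:
        endian = "<"  # file written little-endian, microsecond timestamps
    elif magic == 0xD4C3B2A1:
        endian = ">"  # file written big-endian
    else:
        raise ValueError("not a classic PCAP file")
    # version major/minor, thiszone, sigfigs, snaplen, network (link type)
    fields = struct.unpack(endian + "HHiIII", header[4:24])
    return {
        "version": (fields[0], fields[1]),
        "snaplen": fields[4],
        "linktype": fields[5],  # e.g. 1 = Ethernet
    }
```

Per-packet records follow the global header; real analysis work would hand those to Wireshark/tshark or a library rather than raw `struct` calls.
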

Why Cisco

At Cisco, each person brings their unique talents to work as a team and make a difference.

Yes, our technology changes the way the world works, lives, plays and learns, but our edge comes from our people.

  • We connect everything: people, process, data and things, and we use those connections to change our world for the better.
  • We innovate everywhere - From launching a new era of networking that adapts, learns and protects, to building Cisco Services that accelerate businesses and business results. Our technology powers entertainment, retail, healthcare, education and more, from Smart Cities to your everyday devices.
  • We benefit everyone - We do all of this while striving for a culture that empowers every person to be the difference, at work and in our communities.
Acxiom
  • Austin, TX
As a Hadoop Administrator, you will assist leadership on projects related to Big Data technologies and provide software development support for client research projects. You will analyze the latest Big Data analytic technologies and their innovative applications in both business intelligence analysis and new service offerings, and bring these insights and best practices to Acxiom's Big Data projects. You must be able to benchmark systems, analyze system bottlenecks and propose solutions to eliminate them. You will develop a highly scalable and extensible Big Data platform that enables collection, storage, modeling, and analysis of massive data sets from numerous channels. You must be a self-starter who continuously evaluates new technologies, innovates and delivers solutions for business-critical applications.


 

What you will do:


  • Responsible for implementation and ongoing administration of Hadoop infrastructure
  • Provide technical leadership and collaborate with the engineering organization; develop key deliverables for the Data Platform Strategy: scalability, optimization, operations, availability, and roadmap
  • Own the platform architecture and drive it to the next level of effectiveness to support current and future requirements
  • Cluster maintenance as well as creation and removal of nodes using tools like Cloudera Manager Enterprise, etc.
  • Performance tuning of Hadoop clusters and Hadoop MapReduce routines
  • Screen Hadoop cluster job performances and capacity planning
  • Help optimize and integrate new infrastructure via continuous integration methodologies (DevOps tooling such as Chef)
  • Manage and review Hadoop log files with the help of log management technologies (e.g., the ELK stack)
  • Provide top-level technical help desk support for the application developers
  • Diligently team with the infrastructure, network, database, application and business intelligence teams to guarantee high data quality, availability and security
  • Collaborate with application teams to perform Hadoop updates, patches, and version upgrades when required
  • Mentor Hadoop engineers and administrators
  • Work with Vendor support teams on support tasks
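The log review duties above usually begin with a quick scan of log levels before anything is shipped to a central store such as ELK. A minimal Python sketch (assumptions: the common log4j line layout `date time LEVEL class: message`, which varies by deployment, and `summarize_log_levels` is a hypothetical helper):

```python
import re
from collections import Counter

# Typical Hadoop log4j line:
# "2019-03-01 12:00:01,123 WARN org.apache.hadoop.hdfs.DataNode: slow BlockReceiver"
LOG_LEVEL = re.compile(r"^\S+ \S+ (TRACE|DEBUG|INFO|WARN|ERROR|FATAL)\b")

def summarize_log_levels(lines):
    """Count occurrences of each log level across an iterable of log lines.

    Lines that do not match the expected layout are silently skipped,
    which makes the scan safe to run over mixed or truncated files.
    """
    counts = Counter()
    for line in lines:
        m = LOG_LEVEL.match(line)
        if m:
            counts[m.group(1)] += 1
    return counts
```

A spike in WARN/ERROR counts is a cheap first signal to dig into a node before escalating to full log aggregation.
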


Do you have?


  • Bachelor's degree in related field of study, or equivalent experience
  • 3+ years of Big Data Administration experience
  • Extensive knowledge of Hadoop-based data manipulation/storage technologies such as HDFS, MapReduce, YARN, HBase, Hive, Pig, Impala and Sentry
  • Experience in capacity planning, cluster designing and deployment, troubleshooting and performance tuning
  • Great operational expertise: good troubleshooting skills and an understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networks
  • Experience in Hadoop cluster migrations or upgrades
  • Strong Linux/SAN administration skills and RDBMS/ETL knowledge
  • Good experience with Cloudera/Hortonworks/MapR distributions, along with monitoring/alerting tools (Nagios, Ganglia, Zenoss, Cloudera Manager)
  • Scripting skills in Perl, Python, shell scripting, and/or Ruby
  • Knowledge of Java/J2EE and other web technologies
  • Understanding of On-premise and Cloud network architectures
  • DevOps experience is a great plus (CHEF, Puppet and Ansible)
  • Excellent verbal and written communication skills


 

Samsung SARC & ACL
  • Austin, TX
Responsibilities:

· Hands-on responsibility from netlist to GDS delivery

· Floorplan, Place & Route in chip-level and hierarchical physical implementation environment

· Support sign-off tasks, including block and full chip level timing/power/area closure, power integrity, signal integrity and physical sign-off flow

· Develop physical implementations flows and recipes for high performance and ultra-low-power GPU development

· Communicate with other engineering teams to discuss the process technology and design methodology

· Develop repeatable and predictable flow to reduce the P&R, ECO and sign-off schedule

· Assist in improving P&R timing/power correlation with sign-off tools and silicon

· Assist in exploring different floorplans and PG grid topologies and the impact to power, frequency, and reliability



Background/Experience:

· BSEE or MSEE and 5+ years relevant experience preferred (or equivalent education and experience)

· Solid understanding of the GPU design/integration flow with extensive experience in taping out designs

· Experience with 16nm FinFET or smaller process nodes is strongly preferred; knowledge of design implementation margining

· Hands-on experience with block and full chip integration with the latest industry practiced P&R/STA flows and tools

· Hands-on experience with different clocking techniques including CTS, multi-source CTS, and clock mesh is preferred

· Experience in full chip and block level floorplanning, power planning and area/congestion optimization

· Sign-off experience with reliability, signal integrity, noise, timing, power, physical and DFM closure

· Experience in structured datapath development is preferred

· Strong scripting/programming skills in Tcl, Perl, Shell, and/or Python is strongly preferred

· Solid understanding of Electrical Engineering fundamentals, analytical aptitude and excellent attention to detail

· Strong communication skills; a team player in a collaborative work environment; discipline and planning; ability to balance innovation with crisp execution


Samsung provides Equal Employment Opportunity for all individuals regardless of race, color, religion, gender, age, national origin, marital status, sexual orientation, status as a protected veteran, genetic information, status as a qualified individual with a disability or any other characteristic protected by law.
Accenture
  • Atlanta, GA
Come join Accenture and innovate in a company with AI and data analytics in its DNA. Leverage our unparalleled scale, scope, investment and global footprint to solve clients' business needs. You'll drive results accessing 750+ industrialized apps and solutions, 800+ analytics and 300+ AI patents/patents-pending, and the Applied Intelligence Platform (API) that combines cutting-edge and advanced analytics and automated AI with an integrated suite of leading tools and technologies.
Accenture Applied Intelligence is the world's largest team applying data science, machine learning, and AI with deep industry experience to solve clients' most sophisticated and difficult challenges. We are a team of experts in data science, data engineering, artificial intelligence and human ingenuity with industry knowledge that spans every industrialized area -- energy, health care, transportation, retail, social media, and more. By deploying AI responsibly and combining it with our deep industry and analytics expertise, we enable the digital transformation of organizations, extend human capabilities, and make intelligent products and services a reality. Follow @AccentureAI and visit accenture.com/appliedintelligence .
Role Description: Data Scientist
As a consultant at Accenture, you will work on a team with diverse clients and industries, delivering analytics solutions and helping clients turn data into actionable insights that drive tangible outcomes, improve performance and help them lead the market. This position requires an in-depth understanding and use of statistical and data analysis tools.
Key Responsibilities
    • Effectively utilize statistical, data mining, machine learning, and/or deep learning techniques in delivering data science insights
    • Work closely with internal Accenture teams and clients to understand challenges and create solutions
    • Provide thought leadership within projects and leading technologies
    • Stay abreast of technology trends in artificial intelligence
Basic Qualifications
These are the minimum requirements for a candidate to be considered for this position:
    • Minimum of 2 years in Healthcare and/or Life Science industry
    • Minimum of 3 years of experience in an advanced modeling environment, with a strong understanding of statistical concepts and predictive modeling, e.g., AI/neural networks, multi-scalar dimensional models, logistic regression techniques, machine-based learning, etc.
    • Minimum of 3 years leveraging and synthesizing large volumes and varieties of data to enhance the business's understanding of individual population segments, propensities, outcomes, and decision points
    • Minimum of 3 years designing, implementing, and evaluating advanced statistical models and approaches for application to the business's most complex issues
    • Minimum of 3 years building econometric and statistical models for various problems inclusive of projections, classification, clustering, pattern analysis, sampling, simulations
    • Minimum of 2 years working and conceptual knowledge in data structures, algorithms, statistics, machine learning, natural language processing and programming in Python, R, Scala, Julia, SAS, or other equivalent languages/tools
    • Minimum of 2 years in data mining and predictive modeling inclusive of linear and non-linear regression, logistic regression, and time series analysis models
    • Ability to travel up to 100%
    • Bachelor's degree in a related field of study: data science, mathematics, economics, statistics, engineering or information management
Preferred Qualifications
    • Demonstrated ability to design and implement successful data analysis solutions within a business
    • Payer industry experience
    • Strong knowledge of data mining techniques and an ability to apply these techniques in practical real-world business issues
    • Experience with SQL and scripting languages such as Python and Perl, as well as familiarity with statistical analysis, data visualization, and data cleansing tools and techniques
    • Proven ability to work independently as well as with a team.
    • Good communication skills, both written and oral.
    • Proven ability to build, manage and foster a team-oriented environment
    • Proven ability to work creatively and analytically in a problem-solving environment
    • Excellent communication (written and oral) and interpersonal skills
    • Excellent leadership and management skills
Applicants for employment in the US must have work authorization that does not now or in the future require sponsorship of a visa for employment authorization in the United States and with Accenture (i.e., H1-B visa, F-1 visa (OPT), TN visa or any other non-immigrant status).
Candidates who are currently employed by a client of Accenture or an affiliated Accenture business may not be eligible for consideration.
Accenture is an EEO and Affirmative Action Employer of Females/Minorities/Veterans/Individuals with Disabilities.
Equal Employment Opportunity
All employment decisions shall be made without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by federal, state, or local law.
Accenture is committed to providing veteran employment opportunities to our service men and women.
Planet Pharma
  • San Diego, CA

SUMMARY

The Bioinformatics Scientist will be based at our client's R&D laboratory. This resource will work collaboratively with bench scientists to build tools, analyze sequence results, and integrate data. We are seeking a Bioinformatics Scientist who is passionate about making real differences in the field of molecular diagnostics and is enthusiastic about working in an exciting startup environment. This position requires a person with a strong commitment to science and technology and with demonstrated knowledge in genomics, bioinformatics, statistics, and programming. The candidate should be energetic, open-minded, detail-oriented and results driven.


Primary Job Responsibilities:

  • Implement analysis workflows for the identification and interpretation of actionable cancer somatic mutations from different types of tumor and normal tissue samples of cancer patients.
  • Data analysis projects involving the results from large cohorts of patients involved in clinical trials.
  • Select, test, and implement bioinformatics pipelines for the analysis, annotation, and interpretation of cancer genomic data
  • Perform analysis of aggregated results from patient cohorts, applying the appropriate multivariate statistical analysis, machine learning, and visualization methods
  • Mine sequencing data and provide feedback to R&D team.
  • Present results of analyses to internal stakeholders and customers/collaborators
  • Implement quality control metrics and procedures to prevent and detect errors
  • Ensure that all work is properly documented, code is under change control, and provenance of data is maintained
  • Work closely with IT specialists to build and maintain robust infrastructure


REQUIREMENTS

  • PhD/MSc in bioinformatics, computer science, applied mathematics/physics, genetics, and/or quantitative biological sciences.
  • 3-5 years of expertise in computational biology and bioinformatics focused on NGS applications, including genome variation, cancer genomics, transcriptomics, etc.
  • In-depth knowledge of NGS genomic data analysis and bioinformatics tools (BWA, SamTools, GATK, FreeBayes, MuTect, VarScan etc.) and current data formats (e.g. VCF, BAM/SAM).
  • Scripting or programming expertise for bioinformatics (Perl/Python, Java, C/C++) is a must.
  • Versed in the inner workings and limitations of modern high-throughput sequencing platforms (Illumina, Ion Torrent).
  • Proven ability to develop data analysis methods & algorithms, use of common machine learning tools (e.g. SciKit, Weka), and proficiency in the use of common statistical analysis tools (e.g. R, MatLab) is highly desirable.
  • Ability to quantify accuracy and performance of algorithms/tools/pipelines with respect to metrics, datasets, and the literature.
  • UNIX environment expertise including clustering and parallelization of analysis jobs is necessary.
  • Experience with cloud computing environments (e.g. AWS, Google), distributed computing tools (StartCluster, Hadoop, Spark), and containerization (e.g. Docker) is highly desirable.
  • Exposure to public data sources, such as TCGA, CCLE, Ensembl, GTEx, Achilles, etc.
  • Prior industry experience as well as record of developing clinical pipelines is desirable.
  • Track record of successful interactions with bench biologists, a scientific publication record, and excellent communication skills are pluses.
  • Ability to prioritize and deliver research in a fast-paced, milestone-driven environment.
  • Strong work ethic, emphasizing both efficiency and quality of work.
  • Publication of original scientific work in relevant journals.
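As a small illustration of the VCF format named in the requirements above, here is a minimal parser for a single VCF data line. This is a sketch only: production pipelines would use a library such as pysam or cyvcf2, `parse_vcf_record` is a hypothetical helper name, and the example variant is illustrative.

```python
def parse_vcf_record(line):
    """Parse one tab-delimited VCF data line into a dict.

    Covers the eight mandatory columns (CHROM POS ID REF ALT QUAL FILTER INFO);
    genotype columns, if present, are ignored in this sketch.
    """
    fields = line.rstrip("\n").split("\t")
    chrom, pos, vid, ref, alt, qual, flt, info = fields[:8]
    info_map = {}
    for entry in info.split(";"):
        key, _, value = entry.partition("=")
        info_map[key] = value if value else True  # flag fields carry no value
    return {
        "chrom": chrom,
        "pos": int(pos),
        "id": vid,
        "ref": ref,
        "alt": alt.split(","),  # multiple ALT alleles are comma-separated
        "qual": None if qual == "." else float(qual),
        "filter": flt,
        "info": info_map,
    }
```

Even a toy parser like this makes the format's conventions concrete: 1-based positions, `.` for missing values, and semicolon-delimited INFO keys.
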
Tekberry
  • Atlanta, GA

Title: R&D DevOps Specialist
City: Atlanta
State: GA
ZIP: 30308
Job Type: Contract
Hours: 40
Job Code: EB-1473668477

Tekberry is looking for a highly qualified and motivated R&D DevOps Specialist to work on-site with our client, a Fortune-1000 electronics company in Atlanta, GA.


This is a contract position that will see the ideal candidate working alongside industry-leading talent in a world-class environment.

Job Description:

    • Join a fun, hardworking team that is dedicated to building a system that is always reliable and available to our client's customers.
    • Mature and refactor a software release infrastructure to support an Agile development environment.
    • Strong technical consultation depth. Able to evaluate & influence out-of-the-box vs customized solutions as applicable.
    • Eloquent articulator of industry trends, thoughts & own ideas.
    • Constantly on the lookout for cost-effective solutions.


Qualifications:

    • Bachelor's degree or higher in Computer Science, Computer Engineering or Electrical Engineering. A Master's degree is preferred.
    • Scripting (Bash/Python/Ruby/Perl/PHP).
    • Continuous integration/deployment in AWS and cloud environments using Jenkins.
    • Familiarity with Atlassian tool chain, Jira, Bitbucket, git, etc.
    • C/C++ development using Visual Studio and gcc and CMake.
    • Experience in Systems Administration & understanding of various Operating Systems platforms & technologies:
      o Windows
      o Linux
      o Web Services (IIS, Apache, tomcat) (optional)
      o Application monitoring & performance tuning


The work must be done on-site, so telecommuting will not be possible. Please submit your resume with salary requirements. Principals only; no third parties or off-shore companies. No phone calls please.

As a W2 employee you will have access to health and 401k benefits.

Tekberry Inc. is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability or any other protected categories under all applicable laws.

Quest Groups LLC
  • Austin, TX

The Senior Data Engineer is responsible for overseeing junior data engineering activities and aiding in building the business's data collection systems and processing pipelines. The Senior Data Engineer is also responsible for building and maintaining optimized and highly available data pipelines that facilitate deeper analysis and reporting by the Data and Analytics department.

The Senior Data Engineer builds data processing frameworks that handle the business's growing database. He works with senior data science leadership as well as other Data and Analytics teams in leveraging data with reporting and scientific tools, for example, Tableau, R, and Spark. The Senior Data Engineer strives to continuously develop new and improved data engineering capabilities.


Objectives and Responsibilities of the Senior Data Engineer

Management and Strategy: The managerial role of the Senior Data Engineer is primarily overseeing the activities of the junior data engineering teams, ensuring proper execution of their duties and alignment with business vision and objectives. He provides senior-level contribution to a team that is responsible for the design, deployment, and maintenance of the business's data platforms.

However, the Senior Data Engineer will also implement strategies directed at acquiring data and promoting the development of new insights across the business. The Senior Data Engineer owns and extends the business's data pipeline through the collection, storage, processing, and transformation of large data sets.

It is his duty to monitor the existing metrics, analyze data, and lead partnership with other Data and Analytics teams in an effort to identify and implement system and process improvements.

The Senior Data Engineer will additionally develop queries for ad hoc business projects, as well as ongoing reporting. In this capacity, the Senior Data Engineer builds a metadata system where all available data is maintained and cataloged. The Senior Data Engineer also plays a major role in the development of reliable data pipelines that translate raw data into powerful features and signals.

He designs, architects, implements, and supports key datasets that provide structured and timely access to actionable business insights. The Senior Data Engineer is additionally tasked with developing ETL processes that convert raw data into formats readily usable by data analysts and dashboard charts.


Collaboration and Support: The Senior Data Engineer plays a collaborative role where he works closely with the business's Data and Analytics teams, gathering technical requirements for exceptional data governance across the department and the business at large.

In this collaboration, the Senior Data Engineer works with the data analysts, data warehousing engineers, and data scientists in finding and applying best practices within the Data and Analytics department, as well as defining the business's data requirements, which will ensure that the collected data is of high quality and optimal for use across the department and the business at large.

The Senior Data Engineer will also work with senior data science management and departments beyond the Data and Analytics department in analyzing and understanding data sources, participating in design, and providing insights and guidance on database technology and data modeling best practices.

In this capacity, the Senior Data Engineer will further be required to draw up performance reports and strategic proposals from his gathered knowledge and analysis results for senior data science leadership.

Analytics: The Senior Data Engineer plays an analytical role where he develops and manages scalable data processing platforms that he uses for exploratory data analysis and real-time analytics. It is also the role of the Senior Data Engineer to oversee, design, and develop algorithms for real-time data processing within the business and to create the frameworks that enable quick and efficient data acquisition.

In this capacity, the Senior Data Engineer retrieves and analyzes data through the use of SQL and Excel, among other data management tools. He also builds data loading services for the purpose of importing data from numerous disparate data sources, inclusive of APIs, logs, and relational and non-relational databases.
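The data loading services described above can be sketched minimally with Python's standard library. This is a toy stand-in, not the employer's stack: `load_events`, the `events` schema, and the CSV source are all hypothetical, and a real pipeline would add validation, batching, and idempotency.

```python
import csv
import io
import sqlite3

def load_events(conn, csv_text):
    """Load rows from a CSV source into an 'events' table.

    A toy stand-in for importing from APIs, logs, or other databases:
    parse the source, normalize types, and bulk-insert into the target store.
    """
    conn.execute(
        "CREATE TABLE IF NOT EXISTS events (user_id TEXT, action TEXT, amount REAL)"
    )
    reader = csv.DictReader(io.StringIO(csv_text))
    rows = [(r["user_id"], r["action"], float(r["amount"])) for r in reader]
    conn.executemany("INSERT INTO events VALUES (?, ?, ?)", rows)
    conn.commit()
    return len(rows)  # row count loaded, a cheap sanity metric for the pipeline
```

The same shape (extract, coerce types, bulk load, report a count) scales up to the Redshift/Spark tooling the posting mentions, with the storage layer swapped out.
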

Knowledge and Opportunity: The Senior Data Engineer is tasked with contributing to the continual improvement of the business's data platforms through his observations and well-researched knowledge. He keeps track of industry best practices and trends and, through his acquired knowledge, takes advantage of process and system improvement opportunities.


The Senior Data Engineer performs similar duties as he deems fit for the proper execution of his duties and duties as delegated by the Head of Data Science, Director Data Science, Chief Data Officer, or the Employer.


Required Qualifications of the Senior Data Engineer

Education: The Senior Data Engineer must have a bachelor's degree (master's preferred) in Computer Science, Applied Mathematics, Engineering, or another technology-related field. An equivalent of this educational requirement in working experience is also acceptable.


Experience: A candidate for this position must have at least 5 years of experience working in a data engineering department, preferably as a Data Engineer in a fast-paced environment and complex business setting. The candidate must have demonstrated experience in building and maintaining reliable and scalable ETL on big data platforms, as well as experience working with varied forms of data infrastructure, inclusive of relational databases such as MySQL, distributed platforms such as Hadoop and Spark, and column-oriented databases such as Redshift and Vertica.

The candidate must also have had experience in data warehousing inclusive of dimensional modeling concepts and demonstrate proficiency in scripting languages, for example, Python, Perl, and so forth. A suitable candidate will also demonstrate machine learning experience and experience with big data infrastructure inclusive of MapReduce, Hive, HDFS, YARN, HBase, Oozie, etc. The candidate will additionally demonstrate substantial experience and a deep knowledge of data mining techniques, relational, and non-relational databases.


Communication Skills: Communication Skills for the Senior Data Engineer are just as important as they are for the Data Engineer, both in verbal and written form. The Senior Data Engineer oversees and manages junior data engineering teams and to ensure effective management, he must be capable of conveying information and instructions clearly down the line to the junior team.

Communication skills are also imperative for the Senior Data Engineer in his collaborative role where he will have to interact cross-functionally with non-technical departments. To enable effective collaborations, the Senior Data Engineer will have an exceptional ability to convey complex messages in a clear, simplified, and understandable manner.

He will also be required to draft reports and prepare presentations for senior data science leadership. These reports and presentations must be clear, concise, unambiguous, engaging and convincing, which will demand exceptional communication skills on the Senior Data Engineer's part.


Skills: A candidate for this position will also demonstrate strong computer skills and a deep passion for analytics. The candidate for this position must possess an ability to perform complex data analyses with large data volumes. He will be an expert in SQL, Java, and have a keen understanding of data models and data warehouse concepts.


The candidate will demonstrate an ability to translate algorithms provided by senior data science management and implement them, as well as strong knowledge of Linux, OS tools, and file-system-level troubleshooting. The candidate must have substantial experience working with big data infrastructure tools such as Python, SQS, and Redshift. A suitable candidate will also be proficient in Scala, Spark, Spark Streaming, AWS, and EMR.


The Senior Data Engineer must have certain preferable personal attributes that will make him that much more suited for the position. The Senior Data Engineer will be a results-driven individual, be passionate and a self-starter, be proactive requiring minimal supervision, be highly organized, have an ability to handle multiple tasks and meet tight deadlines, be a creative and strategic thinker, work comfortably in a collaborative setting, work comfortably with senior departmental leadership, and demonstrate an ability to remain calm during times of uncertainty and stress, inspiring the same in his team.


The candidate must be a people person who is able to form strong, meaningful, and lasting connections with others, enabling smooth and continued collaborative relationships, earning him the trust of his juniors who will readily follow in his directives, and gaining the confidence of senior data science leadership.

Expedia, Inc.
  • Bellevue, WA

Do you love building creative, highly-scalable and expansible AI platforms and solutions? Is revolutionizing the travel experience for a multi-million dollar business using AI/ML challenging and attractive to you? Expedia Group AI Labs is a newly created AI Center of Excellence for Expedia Group. We focus on AI for travel data intelligence R&D, build high performance AI platforms, and work closely together with our brands to deliver high impact AI solutions. We work in start-up fashion, with a flat org structure and refreshing culture – we value integrity, creativity, dedication, and positive energy within our team and across teams. We favor the best ideas/working solutions regardless of ranking/seniority. We create a unique, fast paced environment for you to learn, grow, and reach your full potential. Together, we bring AI fueled business value to our company, to our travelers, and partners. We believe in the power of Data Intelligence and AI innovation.


We are seeking a Machine Learning Engineer I to build platforms and AI solutions that support all of Expedia Group. We’re on the leading edge combining the advances in machine learning and cloud infrastructure to deliver AI capabilities at cloud scale. If you have a strong ML/statistics background and can roll up your sleeves to do the engineering work required to ship the high quality platform to our customers, this is a unique opportunity to tackle the challenging problems in both AI and cloud space.


WHO YOU ARE:
Recent or expected MS/PhD degree in Electrical Engineering, Computer Science, Mathematics, or related technical field
Experience with programming languages such as C/C++, Java, Perl or Python and open-source technologies (Apache, Hadoop)
Experience with machine learning and/or deep learning methods, like Sklearn/Tensorflow/PyTorch
Experience with OO design and common design pattern
Experience with data structures, algorithm design, problem solving, and complexity analysis
Academic and/or industry experience with advanced AI and ML techniques preferred
Experience in developing cloud software services and an understanding of design for scalability, performance and reliability (K8s, YARN) preferred
Experience in defining system architectures and exploring technical feasibility trade-offs preferred


WHAT YOU WILL DO/ROLE & RESPONSIBILITY:
Translate business and functional requirements into concrete deliverables with the design, development, testing, and deployment of highly scalable distributed services
Partner with scientists and other engineers to help invent, implement, and connect sophisticated algorithms to our cloud-based engines
Design, develop and maintain core platform features, services and engines
Help define product features, and system architecture
Work with other engineers and research teams to investigate design approaches, and evaluate technical feasibility
Develop new technology prototypes to help with future exploration
Collaborate with scientists to drive cutting-edge research into product realization


WHY JOIN US: 


Expedia Group recognizes our success is dependent on the success of our people. We are the world's travel platform, made up of the most knowledgeable, passionate, and creative people in our business. Our brands recognize the power of travel to break down barriers and make people's lives better – that responsibility inspires us to be the place where exceptional people want to do their best work, and to provide them the tools to do so. 


Whether you're applying to work in engineering or customer support, marketing or lodging supply, at Expedia Group we act as one team, working towards a common goal; to bring the world within reach. We relentlessly strive for better, but not at the cost of the customer. We act with humility and optimism, respecting ideas big and small. We value diversity and voices of all volumes. We are a global organization but keep our feet on the ground, so we can act fast and stay simple. Our teams also have the chance to give back on a local level and make a difference through our corporate social responsibility program, Expedia Cares.


If you have a hunger to make a difference with one of the most loved consumer brands in the world and to work in the dynamic travel industry, this is the job for you.


Our family of travel brands includes: Brand Expedia®, Hotels.com®, Expedia® Partner Solutions, Egencia®, trivago®, HomeAway®, Orbitz®, Travelocity®, Wotif®, lastminute.com.au®, ebookers®, CheapTickets®, Hotwire®, Classic Vacations®, Expedia® Media Solutions, CarRentals.com™, Expedia Local Expert®, Expedia® CruiseShipCenters®, SilverRail Technologies, Inc., ALICE and Traveldoo®.




Expedia is committed to creating an inclusive work environment with a diverse workforce.   All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.  This employer participates in E-Verify. The employer will provide the Social Security Administration (SSA) and, if necessary, the Department of Homeland Security (DHS) with information from each new employee's I-9 to confirm work authorization.

Arm
  • Austin, TX
Job Description
Arm is looking for a remarkable system performance modeling and analysis architect to join the Enterprise Performance team. You will work closely with IP and systems teams across Arm to help define high-performance systems incorporating current and next-generation Arm processors, scalable coherent interconnects, and high-bandwidth memory controllers. This is a unique opportunity to work alongside the brilliant members of our Arm team.
What will you be accountable for?
    • Modeling, analysis, and projections. You will identify new performance features and system performance bottlenecks using performance and RTL models, from the product definition phase through release. You will generate and correlate projections and scaling factors for appropriate workloads to help both external and internal customers identify the optimal design points of a system.
    • Partner engagement and mentorship. You will engage with internal and external partners through all stages of the product to establish high confidence in Arm IP and system-level performance. You will collaborate closely with other teams at Arm, including mentoring and encouraging junior engineers, to deliver performance collateral through analysis and correlation for evolving new usages and workloads.
    • Subsystem model configurations. You will build and maintain consistent system model configurations, for use by our partners, that deliver the best performance.
Job Requirements
What skills, experience, and qualifications do I need?
  • Bachelor's, Master's, or Ph.D. degree in Electrical Engineering, Computer Engineering, or Computer Science with a strong computer architecture, microarchitecture, performance modeling, and analysis background
  • Minimum 8 years of experience in performance modeling and analysis of processors, interconnects, caches and memory controllers at component and system level
  • Strong C++ programming skills for large-scale software development, and experience with RTL/emulation performance analysis and correlation
  • Proficiency in Perl or Python scripting
  • Knowledge of Verilog, SystemVerilog, and testbenches; experience interacting with RTL/emulation design, verification, and architecture teams across multiple products
  • Knowledge of SystemC Transaction Level Modeling would be a plus
  • Excellent interpersonal skills, strong initiative, and openness to engaging with and learning new concepts and sharing knowledge.
Desired behaviors for this role
At Arm, we are guided by our core beliefs that reflect our rare culture and guide our decisions, defining how we work together to defy ordinary and craft extraordinary
We not I
  • Take daily responsibility to make the Global Arm community thrive
  • No individual is responsible for the right answer. Brilliance is collective
  • Information is important, share it
  • Realize that we win when we collaborate and that everyone misses out when we don't
Passion for progress
  • Our differences are our strength. Widen and mix up your network of connections
  • Difficult things can take unexpected directions. Stick with it
  • Make feedback positive and expansive, not negative and narrow
  • The essence of progress is that it can't stop. Grow with it and be responsible for your own progress
Be your brilliant self
  • Be quirky not egocentric
  • Recognize the power in saying I don't know
  • Make trust our default position
  • Hold strong opinions lightly
Arm
  • Austin, TX
Job Description
Working side by side with leading technology companies across the globe, the CPU group specifies, designs and validates all of Arm's processor IP. As an engineer on the Austin-based CPU Performance team, you join a team responsible for optimizing the performance of next-generation ARM Cortex-A class CPUs, utilizing modern techniques for microarchitecture modeling, full-system simulation tools, and workload bring-up and analysis.
What will I be accountable for?
  • Workload analysis and bring up on full simulation tools (both Linux and Android)
  • Analysis and profiling tools development
  • Engaging with Arm's design teams to shape next-generation CPU microarchitecture through a workload-driven methodology
  • Travel occasionally for training or customer meetings
Job Requirements
What skills, experience, and qualifications do I need?
  • MS or BS in Computer Science, Electrical Engineering or Computer Engineering
  • Master's preferred
  • 6+ years of work experience in full-system workload performance analysis and bring-up. Familiarity with microarchitecture at the system level is preferred.
  • Arm experience is not required, but it is a plus.
  • Experience building, configuring, and bringing up of Linux and Android on virtual environments
  • Demonstrated experience working with performance models and server or mobile workload performance analysis
  • Good understanding of Linux OS
  • C/C++ and assembly level programming
  • Experience with Python or Perl for scripting
  • Understanding of microarchitecture at the system and SoC levels is desirable
  • Prior knowledge of SystemC / TLM2 is desirable
Desired behaviors for this role
At Arm, we are guided by our core beliefs that reflect our rare culture and guide our decisions, defining how we work together to defy ordinary and shape extraordinary
We not I
  • Take daily responsibility to make the Global Arm community thrive
  • No individual owns the right answer. Brilliance is collective
  • Information is crucial, share it
  • Realize that we win when we collaborate and that everyone misses out when we don't
Passion for progress
  • Our differences are our strength. Widen and mix up your network of connections
  • Difficult things can take unexpected directions. Stick with it
  • Make feedback positive and expansive, not negative and narrow
  • The essence of progress is that it can't stop. Grow with it and own your own progress
Be your brilliant self
  • Be quirky not egocentric
  • Recognize the power in saying I don't know
  • Make trust our default position
  • Hold strong opinions lightly
Acxiom
  • Austin, TX
As an Enterprise Big Data Administrator, you will assist leadership on projects related to Big Data technologies and provide software development support for client research projects. You will analyze the latest Big Data analytic technologies and their innovative applications in both business intelligence analysis and new service offerings, and bring these insights and best practices to Acxiom's Big Data projects. You must be able to benchmark systems, analyze system bottlenecks, and propose solutions to eliminate them. You will develop a highly scalable and extensible Big Data platform that enables the collection, storage, modeling, and analysis of massive data sets from numerous channels. You must be a self-starter, able to continuously evaluate new technologies, innovate, and deliver solutions for business-critical applications.


What you will do:


  • Responsible for implementation and ongoing administration of Hadoop infrastructure
  • Provide technical leadership and collaboration with engineering organization, develop key deliverables for Data Platform Strategy - Scalability, optimization, operations, availability, roadmap.
  • Own the platform architecture and drive it to the next level of effectiveness to support current and future requirements
  • Cluster maintenance as well as creation and removal of nodes using tools like Cloudera Manager Enterprise, etc.
  • Performance tuning of Hadoop clusters and Hadoop MapReduce routines
  • Screen Hadoop cluster job performances and capacity planning
  • Help optimize and integrate new infrastructure via continuous-integration methodologies (DevOps tooling such as Chef)
  • Manage and review Hadoop log files with the help of log-management technologies (e.g. the ELK stack)
  • Provide top-level technical help desk support for the application developers
  • Diligently team with the infrastructure, network, database, application and business intelligence teams to guarantee high data quality, availability and security
  • Collaborate with application teams to perform Hadoop updates, patches, and version upgrades when required
  • Mentor Hadoop engineers and administrators
  • Work with Vendor support teams on support tasks


What you will need:


  • Extensive knowledge of Hadoop-based data manipulation/storage technologies: HDFS, MapReduce, YARN, HBase, Hive, Pig, Impala and Sentry
  • 3+ years of Big Data Administration experience
  • Experience in Capacity Planning, Cluster Designing and Deployment, Troubleshooting and Performance Tuning
  • Great operational expertise, including good troubleshooting skills and an understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networks
  • Experience in Hadoop Cluster migrations or Upgrades
  • Strong Linux/SAN Administration skills and RDBMS/ETL knowledge
  • DevOps experience is a great plus (CHEF, Puppet and Ansible)
  • Good experience with Cloudera/Hortonworks/MapR distributions, along with monitoring/alerting tools (Nagios, Ganglia, Zenoss, Cloudera Manager)
  • Strong scripting skills in Perl, Python, shell scripting, or Ruby
  • Strong knowledge of Java/J2EE and other web technologies
  • Solid understanding of on-premise and cloud network architectures
  • Excellent verbal and written communication skills
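The capacity planning and cluster design work listed above boils down to simple sizing arithmetic: replicated data volume divided by usable per-node storage. A minimal sketch, with assumed (hypothetical) defaults for replication factor, per-node disk, and reserved overhead:

```python
import math

def nodes_needed(raw_tb: float, replication: int = 3,
                 node_capacity_tb: float = 48.0,
                 overhead: float = 0.25) -> int:
    """Estimate the datanode count for an HDFS cluster.

    raw_tb           -- un-replicated data volume in TB
    replication      -- HDFS replication factor (commonly 3)
    node_capacity_tb -- raw disk per datanode in TB (assumed value)
    overhead         -- fraction reserved for temp space, OS, and headroom
    """
    usable_per_node = node_capacity_tb * (1 - overhead)
    total_tb = raw_tb * replication
    return math.ceil(total_tb / usable_per_node)

# 240 TB raw * 3 replicas = 720 TB; 48 TB * 0.75 = 36 TB usable/node
print(nodes_needed(240))  # 20
```

Real plans would also account for compression ratios, growth rate, and compute (CPU/memory) requirements per workload, but this is the core of the storage-sizing estimate.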

Visa
  • Austin, TX
Company Description
Common Purpose
Everyone at Visa works with one goal in mind: making sure that Visa is the best way to pay and be paid, for everyone everywhere. This is our global vision and the common purpose that unites the entire Visa team. As a global payments technology company, tech is at the heart of what we do: our VisaNet network processes over 13,000 transactions per second for people and businesses around the world, enabling them to use digital currency instead of cash and checks.
We are also global advocates for financial inclusion, working with partners around the world to help those who lack access to financial services join the global economy. Visa's sponsorships, including the Olympics and FIFA World Cup, celebrate teamwork, diversity, and excellence throughout the world. If you have a passion to make a difference in the lives of people around the world, Visa offers an uncommon opportunity to build a strong, thriving career. Visa is fueled by our team of talented employees who continuously raise the bar on delivering the convenience and security of digital currency to people all over the world. Join our team and find out how Visa is everywhere you want to be.
Visa customers trust us with the richest data on earth about global commerce. Working on data at Visa is a unique opportunity at a time when the payments industry is undergoing a digital transformation with data as a critical differentiator.
We offer you the opportunity to be at the center of innovation in the payments industry and unleash the power of data through applying data sciences to business problems.
Job Description
The Position
As a data scientist based in Austin, you will be responsible for developing and delivering predictive analytic capabilities that are incorporated into Visa products in a variety of domains, such as the Risk & Fraud, Commercial, and Merchant areas.
This role will be part of a group that works in tight collaboration with product engineering, product management, and operations to ensure business effectiveness of products.
We desire candidates with deep expertise in machine learning or statistics and experience in delivering predictive systems on big data.
Responsibilities
The Following Are The Group Responsibilities
  • Formulate business problems as technical data problems while ensuring key business drivers are captured in collaboration with Risk, Commercial and Merchant product management.
  • Work with product development (engineering) to ensure implementability of solutions. Deliver prototypes and production code based on need.
  • Work with Data platform to drive availability of relevant data, tools, and infrastructure for group for experimental and development purposes.
  • Experiment with in-house and third party data sets to test hypotheses on relevance and value of data to business problems.
  • Build needed data transformations on structured and un-structured data.
  • Build and experiment with modeling and scoring algorithms. This includes development of custom algorithms as well as use of packaged tools based on machine learning, data mining and statistical techniques.
  • Devise and implement methods for adaptive learning with controls on effectiveness, methods for explaining model decisions where necessary, model validation, A/B testing of models.
  • Devise and implement methods for efficiently monitoring model effectiveness and performance in production.
  • Devise and implement methods for automation of all parts of the predictive pipeline to minimize labor in development and production.
  • Contribute to development and adoption of shared predictive analytics infrastructure
The responsibilities above are group responsibilities, and specific individuals will be assigned responsibilities based on the group's needs and individual skills and preferences.
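The modeling-and-scoring work described above, at its simplest, is a weighted feature sum passed through a sigmoid and compared against a decision threshold. A minimal pure-Python sketch with invented toy weights and transactions (nothing here reflects Visa's actual models):

```python
import math

def score(features, weights, bias=0.0):
    """Logistic score in [0, 1]: sigmoid of a weighted feature sum."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def flag_fraud(transactions, weights, bias=0.0, threshold=0.5):
    """Return indices of transactions whose score exceeds the threshold."""
    return [i for i, t in enumerate(transactions)
            if score(t, weights, bias) > threshold]

# Two toy features per transaction: normalized amount and velocity.
txns = [[0.1, 0.2],   # low-risk
        [3.0, 2.5]]   # high-risk
weights = [1.0, 1.0]
print(flag_fraud(txns, weights, bias=-2.0))  # [1]
```

In production, the weights would come from a trained model, and the "adaptive learning", A/B testing, and monitoring responsibilities above wrap this scoring step with retraining loops and effectiveness metrics.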
Qualifications
    • Recent PhD graduate in Computer Science, Operations Research, Statistics, or another highly quantitative field (or equivalent experience) with strength in Deep Learning, Machine Learning, Data Mining, and Statistical or other mathematical analysis.
    • Relevant coursework in modeling techniques such as logistic regression, Naïve Bayes, SVM, decision trees, or neural networks.
    • Deep learning experience with TensorFlow is a plus.
    • Strong understanding of algorithms and data structures.
    • Strong analytic and problem solving capability combined with ambition to solve real-world problems.
    • Results orientation with ability to plan work and work in a team
    • Strong verbal and written communication skills.
    • Experience working with large datasets using tools like Hadoop, MapReduce, Pig, or Hive is a plus.
    • Ability to program in one or more scripting languages such as Perl or Python and one or more programming languages such as Java, C++ or C#.
    • Experience with one or more common statistical tools such as SAS, R, KNIME, or Matlab.
    • Publications or presentations in recognized Machine Learning and Data Mining journals/conferences are a plus.
Additional Information
All your information will be kept confidential according to EEO guidelines.
Job Number: REF15519A
Russell Tobin
  • Salt Lake City, UT

Our client, a Fortune 100 financial services company, is looking for a Database Administrator.


Responsibilities:
  • Perform Database Administration activities as part of the Firm's Technical Infrastructure team.
  • Troubleshoot disk backup and tape backup failures to determine the root cause of failure
  • Provide production support for Sybase ASE, Sybase IQ, UDB DB2, Sybase Replication, UDB DB2 SQL Rep and Q Replication, Paraccel, Postgres, Hadoop, MSSQL, MongoDB, and Oracle database infrastructure globally
  • Support the following hardware: Sun v240/440/4800/420r, Dell r610/710, HP 385/585/480 Blades
  • Continually evaluate the operations of the environment and assist in the optimization and delivery of server infrastructure
  • Schedule maintenance jobs using Autosys
  • Communicate and coordinate with application support and other IT support teams to provide timely responses for critical requests
  • Participate in business continuity plan tests across all regions related to database infrastructure
  • Adhere to company change management requirements and procedures.


Skills and Qualifications:
  • Supporting the following platforms: Red Hat Enterprise Linux v3/4/5/6 and Sun Solaris 5.8/10 Database Platform: Sybase ASE, Sybase IQ, Sybase Replication, UDB/DB2.
  • Familiarity with disk backup/tape backup audit controls.
  • Extensive exposure to Sybase ASE 15.x, ASE 12.x database administration under Linux platform.
  • Strong Knowledge in Performance Tuning for Databases.
  • Strong knowledge in Database Maintenance.
  • Flexibility to work weekend shifts.
  • Experience in the investment banking sector is a plus.
  • Good interpersonal skills for interacting with global teams.
  • Good experience with Incident management.
  • Automation skills utilizing scripting languages, including ksh or bash and either Python or Perl.
  • Knowledge of job schedulers such as Autosys and cron.
  • Certifications such as CCNA and RHCSA or equivalent are preferred.
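The backup-failure troubleshooting and scripting-automation skills above often combine in practice: a small script that scans job logs for failed backups before a human ever looks. A minimal Python sketch; the log format and job names are invented for illustration:

```python
import re

# Hypothetical log format: "<timestamp> <job_name> <STATUS>"
LOG_LINE = re.compile(r"^\S+\s+(\S+)\s+(SUCCESS|FAILED)\s*$")

def failed_jobs(log_lines):
    """Return names of backup jobs whose most recent status was FAILED."""
    last_status = {}
    for line in log_lines:
        m = LOG_LINE.match(line)
        if m:
            last_status[m.group(1)] = m.group(2)
    return sorted(j for j, s in last_status.items() if s == "FAILED")

log = [
    "2019-01-01T02:00 nightly_sybase_dump SUCCESS",
    "2019-01-01T02:10 tape_offsite_copy FAILED",
    "2019-01-01T03:10 tape_offsite_copy FAILED",
]
print(failed_jobs(log))  # ['tape_offsite_copy']
```

A script like this would typically be scheduled via Autosys or cron and feed the incident-management process mentioned in the qualifications.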