OnlyDataJobs.com

Apple Inc.
  • Seattle, WA
Job Summary:
The iTunes Store is seeking a Software Engineer to build new tools that support its dynamic growth. Join an engineering team that has led the digital distribution industry since its launch in April 2003 by constantly developing creative features. The position requires experience with all aspects of the software design cycle, with a focus on data modeling and handling large data sets in an inspiring environment.

Key Qualifications:
* 5+ years of hands-on software engineering experience in a dynamic environment
* Strong development skills in Java and Scala, with a history of architect-level experience.
* Experience crafting and building multi-datacenter distributed systems.
* Experience working with Big Data solutions (e.g. Spark, MapReduce, Hive, etc.)
* Deep understanding of storage solutions and when to use them (e.g. graph databases, Cassandra, Solr, relational databases, etc.)
* Deep understanding of different data formats (e.g. Avro, XML, JSON, Parquet, etc.) and ETL processes; see the sketch after this list.
* You understand graph computation, data search, and record mapping/matching algorithms.
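A minimal sketch of the kind of format handling the qualifications describe: normalizing records from JSON Lines, Avro, and Parquet files into plain Python dicts. File names are placeholders, and it assumes the fastavro and pandas (with a Parquet engine) libraries are available; the posting itself does not prescribe a language or library.

```python
# Minimal sketch: normalizing records from a few common formats into Python dicts.
# File paths are placeholders; assumes pandas (with pyarrow/fastparquet) and fastavro.
import json

import pandas as pd          # Parquet support via pyarrow or fastparquet
from fastavro import reader  # Avro container files

def load_json_lines(path):
    # Assumes one JSON object per line (JSON Lines).
    with open(path) as f:
        return [json.loads(line) for line in f]

def load_avro(path):
    with open(path, "rb") as f:
        return [record for record in reader(f)]

def load_parquet(path):
    return pd.read_parquet(path).to_dict(orient="records")
```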
 

Description:
Our exciting and growing team is looking for a self-starting, ambitious individual who is not afraid to question assumptions. You will have excellent written and oral communication skills. You should have several years of experience developing server-side software using Java and be familiar with Big Data patterns and solutions. You will have the ability to work effectively and communicate technical concepts with all levels of an organization, including corporate CTOs, CIOs, and developers.
 

Education:
BSCS or equivalent.

Apple is an Equal Opportunity Employer that is committed to inclusion and diversity. We also take affirmative action to offer employment and advancement opportunities to all applicants, including minorities, women, protected veterans, and individuals with disabilities. Apple will not discriminate or retaliate against applicants who inquire about, disclose, or discuss their compensation or that of other applicants.

Perficient, Inc.
  • Phoenix, AZ
At Perficient you'll deliver mission-critical technology and business solutions to Fortune 500 companies and some of the most recognized brands on the planet. And you'll do it with cutting-edge technologies, thanks to our close partnerships with the world's biggest vendors. Our network of offices across North America, as well as locations in India and China, will give you the opportunity to spread your wings, too.
We're proud to be publicly recognized as a Top Workplace year after year. This is due, in no small part, to our entrepreneurial attitude and collaborative spirit that sets us apart and keeps our colleagues impassioned, driven, and fulfilled.
Perficient Data Solutions is looking for an experienced Hadoop Administrator with experience administering Cloudera on AWS. This position is located in Boston; however, the candidate can be located in any well-connected city. Perficient is on a mission to help enterprises take advantage of modern data and analytics architectures, tools, and patterns to improve business operations and better engage customers. This is an excellent opportunity for the right individual to help Perficient and its customers grow the capabilities necessary to improve care through better use of data and information, and in the process take their career to the next level.
Job Overview
The Hadoop System Administrator (SA) is responsible for effective provisioning, installation/configuration, operation, and maintenance of systems hardware and software and related infrastructure to enable Hadoop and analytics on Big Data. This individual participates in technical research and development to enable continuing innovation within the infrastructure. This individual ensures that system hardware, operating systems, software systems, and related procedures adhere to organizational values, enabling staff, volunteers, and Partners.
This individual will assist project teams with technical issues in the Initiation and Planning phases of our standard Project Management Methodology. These activities include the definition of needs, benefits, and technical strategy; research & development within the project life-cycle; technical analysis and design; and support of operations staff in executing, testing and rolling-out the solutions. Participation on projects is focused on smoothing the transition of projects from development staff to production staff by performing operations activities within the project life-cycle.
This individual is accountable for the following systems: Linux and Windows systems that support GIS infrastructure, and Linux, Windows, and application systems that support Asset Management. Responsibilities on these systems include SA engineering and provisioning, operations and support, and maintenance and research and development to ensure continual innovation.
Responsibilities
  • Provide end-to-end vision and hands-on experience with the Cloudera and AWS platforms, especially best practices around Hive and HBase
  • Automate common administrative tasks in Cloudera and AWS (see the sketch after this list)
  • Troubleshoot and develop on Hadoop technologies including HDFS, Kafka, Hive, Pig, Flume, HBase, Spark, Impala, and Hadoop ETL development via tools such as ODI for Big Data and APIs to extract data from source systems; troubleshoot AWS technologies such as EMR, EC2, S3, and CloudFormation
  • Translate, load, and present disparate data sets in multiple formats and from multiple sources, including JSON, Avro, text files, Kafka queues, and log data
  • Administer Cloudera clusters on AWS, covering services, security, scalability, configuration, availability, and access
  • Lead workshops with multiple teams to define data ingestion, validation, transformation, data engineering, and data modeling
  • Performance-tune Hive and HBase jobs
  • Design and develop open-source platform components using Spark, Sqoop, Java, Oozie, Kafka, and Python; experience with other components is a plus
  • Lead capacity-planning and requirements-gathering phases, including estimating, developing, testing, and managing projects, and architecting and delivering complex projects
  • Participate in and lead design sessions, demos, prototype sessions, testing, and training workshops with business users and other IT associates
  • Contribute to thought capital by creating executive presentations and architecture documents, and articulate them to executives through presentations
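A hedged sketch of the kind of administrative automation listed above: polling the Cloudera Manager REST API to report service health for a cluster. The host, credentials, API version, and cluster name are placeholders; adapt them to the actual Cloudera Manager deployment.

```python
# Hedged sketch: report Cloudera service health via the Cloudera Manager REST API,
# the kind of routine administrative check this role would script.
# Host, credentials, API version, and cluster name are placeholders.
import requests

CM_BASE = "http://cm-host.example.com:7180/api/v19"   # placeholder host/version
AUTH = ("admin", "admin-password")                     # placeholder credentials

def service_health(cluster_name):
    """Return {service_name: health_summary} for one cluster."""
    url = f"{CM_BASE}/clusters/{cluster_name}/services"
    resp = requests.get(url, auth=AUTH, timeout=30)
    resp.raise_for_status()
    return {svc["name"]: svc.get("healthSummary", "UNKNOWN")
            for svc in resp.json()["items"]}

if __name__ == "__main__":
    for name, health in service_health("cluster1").items():
        print(f"{name}: {health}")
```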
Qualifications
  • 3+ years of Hadoop administration
  • Cloudera and AWS certifications are strongly desired.
  • Bachelor's degree, with a technical major, such as engineering or computer science.
  • Four to six years of Linux/Unix system administration experience.
  • Ability to travel up to 50 percent, preferred.
Perficient full-time employees receive complete and competitive benefits. We offer a collaborative work environment, competitive compensation, generous work/life opportunities and an outstanding benefits package that includes paid time off plus holidays. In addition, all colleagues are eligible for a number of rewards and recognition programs including billable bonus opportunities. Encouraging a healthy work/life balance and providing our colleagues great benefits are just part of what makes Perficient a great place to work.
More About Perficient
Perficient is the leading digital transformation consulting firm serving Global 2000 and enterprise customers throughout North America. With unparalleled information technology, management consulting and creative capabilities, Perficient and its Perficient Digital agency deliver vision, execution and value with outstanding digital experience, business optimization and industry solutions.
Our work enables clients to improve productivity and competitiveness; grow and strengthen relationships with customers, suppliers and partners; and reduce costs. Perficient's professionals serve clients from a network of offices across North America and offshore locations in India and China. Traded on the Nasdaq Global Select Market, Perficient is a member of the Russell 2000 index and the S&P SmallCap 600 index.
Perficient is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law.
Disclaimer: The above statements are not intended to be a complete statement of job content, rather to act as a guide to the essential functions performed by the employee assigned to this classification. Management retains the discretion to add or change the duties of the position at any time.
Select work authorization questions to ask when applicants apply
  • Are you legally authorized to work in the United States?
  • Will you now, or in the future, require sponsorship for employment visa status (e.g. H-1B visa status)?
Perficient, Inc.
  • Detroit, MI
At Perficient you'll deliver mission-critical technology and business solutions to Fortune 500 companies and some of the most recognized brands on the planet. And you'll do it with cutting-edge technologies, thanks to our close partnerships with the world's biggest vendors. Our network of offices across North America, as well as locations in India and China, will give you the opportunity to spread your wings, too.
We're proud to be publicly recognized as a Top Workplace year after year. This is due, in no small part, to our entrepreneurial attitude and collaborative spirit that sets us apart and keeps our colleagues impassioned, driven, and fulfilled.
Perficient Data Solutions is looking for an experienced Hadoop Administrator with experience administering Cloudera on AWS. This position is located in Boston; however, the candidate can be located in any well-connected city. Perficient is on a mission to help enterprises take advantage of modern data and analytics architectures, tools, and patterns to improve business operations and better engage customers. This is an excellent opportunity for the right individual to help Perficient and its customers grow the capabilities necessary to improve care through better use of data and information, and in the process take their career to the next level.
Job Overview
The Hadoop System Administrator (SA) is responsible for effective provisioning, installation/configuration, operation, and maintenance of systems hardware and software and related infrastructure to enable Hadoop and analytics on Big Data. This individual participates in technical research and development to enable continuing innovation within the infrastructure. This individual ensures that system hardware, operating systems, software systems, and related procedures adhere to organizational values, enabling staff, volunteers, and Partners.
This individual will assist project teams with technical issues in the Initiation and Planning phases of our standard Project Management Methodology. These activities include the definition of needs, benefits, and technical strategy; research & development within the project life-cycle; technical analysis and design; and support of operations staff in executing, testing and rolling-out the solutions. Participation on projects is focused on smoothing the transition of projects from development staff to production staff by performing operations activities within the project life-cycle.
This individual is accountable for the following systems: Linux and Windows systems that support GIS infrastructure, and Linux, Windows, and application systems that support Asset Management. Responsibilities on these systems include SA engineering and provisioning, operations and support, and maintenance and research and development to ensure continual innovation.
Responsibilities
  • Provide end-to-end vision and hands-on experience with the Cloudera and AWS platforms, especially best practices around Hive and HBase
  • Automate common administrative tasks in Cloudera and AWS
  • Troubleshoot and develop on Hadoop technologies including HDFS, Kafka, Hive, Pig, Flume, HBase, Spark, Impala, and Hadoop ETL development via tools such as ODI for Big Data and APIs to extract data from source systems; troubleshoot AWS technologies such as EMR, EC2, S3, and CloudFormation
  • Translate, load, and present disparate data sets in multiple formats and from multiple sources, including JSON, Avro, text files, Kafka queues, and log data
  • Administer Cloudera clusters on AWS, covering services, security, scalability, configuration, availability, and access
  • Lead workshops with multiple teams to define data ingestion, validation, transformation, data engineering, and data modeling
  • Performance-tune Hive and HBase jobs
  • Design and develop open-source platform components using Spark, Sqoop, Java, Oozie, Kafka, and Python; experience with other components is a plus
  • Lead capacity-planning and requirements-gathering phases, including estimating, developing, testing, and managing projects, and architecting and delivering complex projects
  • Participate in and lead design sessions, demos, prototype sessions, testing, and training workshops with business users and other IT associates
  • Contribute to thought capital by creating executive presentations and architecture documents, and articulate them to executives through presentations
Qualifications
  • 3+ years of Hadoop administration
  • Cloudera and AWS certifications are strongly desired.
  • Bachelor's degree, with a technical major, such as engineering or computer science.
  • Four to six years of Linux/Unix system administration experience.
  • Ability to travel up to 50 percent, preferred.
Perficient full-time employees receive complete and competitive benefits. We offer a collaborative work environment, competitive compensation, generous work/life opportunities and an outstanding benefits package that includes paid time off plus holidays. In addition, all colleagues are eligible for a number of rewards and recognition programs including billable bonus opportunities. Encouraging a healthy work/life balance and providing our colleagues great benefits are just part of what makes Perficient a great place to work.
More About Perficient
Perficient is the leading digital transformation consulting firm serving Global 2000 and enterprise customers throughout North America. With unparalleled information technology, management consulting and creative capabilities, Perficient and its Perficient Digital agency deliver vision, execution and value with outstanding digital experience, business optimization and industry solutions.
Our work enables clients to improve productivity and competitiveness; grow and strengthen relationships with customers, suppliers and partners; and reduce costs. Perficient's professionals serve clients from a network of offices across North America and offshore locations in India and China. Traded on the Nasdaq Global Select Market, Perficient is a member of the Russell 2000 index and the S&P SmallCap 600 index.
Perficient is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law.
Disclaimer: The above statements are not intended to be a complete statement of job content, rather to act as a guide to the essential functions performed by the employee assigned to this classification. Management retains the discretion to add or change the duties of the position at any time.
Select work authorization questions to ask when applicants apply
  • Are you legally authorized to work in the United States?
  • Will you now, or in the future, require sponsorship for employment visa status (e.g. H-1B visa status)?
Perficient, Inc.
  • Dallas, TX
At Perficient you'll deliver mission-critical technology and business solutions to Fortune 500 companies and some of the most recognized brands on the planet. And you'll do it with cutting-edge technologies, thanks to our close partnerships with the world's biggest vendors. Our network of offices across North America, as well as locations in India and China, will give you the opportunity to spread your wings, too.
We're proud to be publicly recognized as a Top Workplace year after year. This is due, in no small part, to our entrepreneurial attitude and collaborative spirit that sets us apart and keeps our colleagues impassioned, driven, and fulfilled.
Perficient Data Solutions is looking for an experienced Hadoop Administrator with experience administering Cloudera on AWS. This position is located in Boston; however, the candidate can be located in any well-connected city. Perficient is on a mission to help enterprises take advantage of modern data and analytics architectures, tools, and patterns to improve business operations and better engage customers. This is an excellent opportunity for the right individual to help Perficient and its customers grow the capabilities necessary to improve care through better use of data and information, and in the process take their career to the next level.
Job Overview
The Hadoop System Administrator (SA) is responsible for effective provisioning, installation/configuration, operation, and maintenance of systems hardware and software and related infrastructure to enable Hadoop and analytics on Big Data. This individual participates in technical research and development to enable continuing innovation within the infrastructure. This individual ensures that system hardware, operating systems, software systems, and related procedures adhere to organizational values, enabling staff, volunteers, and Partners.
This individual will assist project teams with technical issues in the Initiation and Planning phases of our standard Project Management Methodology. These activities include the definition of needs, benefits, and technical strategy; research & development within the project life-cycle; technical analysis and design; and support of operations staff in executing, testing and rolling-out the solutions. Participation on projects is focused on smoothing the transition of projects from development staff to production staff by performing operations activities within the project life-cycle.
This individual is accountable for the following systems: Linux and Windows systems that support GIS infrastructure, and Linux, Windows, and application systems that support Asset Management. Responsibilities on these systems include SA engineering and provisioning, operations and support, and maintenance and research and development to ensure continual innovation.
Responsibilities
  • Provide end-to-end vision and hands-on experience with the Cloudera and AWS platforms, especially best practices around Hive and HBase
  • Automate common administrative tasks in Cloudera and AWS
  • Troubleshoot and develop on Hadoop technologies including HDFS, Kafka, Hive, Pig, Flume, HBase, Spark, Impala, and Hadoop ETL development via tools such as ODI for Big Data and APIs to extract data from source systems; troubleshoot AWS technologies such as EMR, EC2, S3, and CloudFormation
  • Translate, load, and present disparate data sets in multiple formats and from multiple sources, including JSON, Avro, text files, Kafka queues, and log data
  • Administer Cloudera clusters on AWS, covering services, security, scalability, configuration, availability, and access
  • Lead workshops with multiple teams to define data ingestion, validation, transformation, data engineering, and data modeling
  • Performance-tune Hive and HBase jobs
  • Design and develop open-source platform components using Spark, Sqoop, Java, Oozie, Kafka, and Python; experience with other components is a plus
  • Lead capacity-planning and requirements-gathering phases, including estimating, developing, testing, and managing projects, and architecting and delivering complex projects
  • Participate in and lead design sessions, demos, prototype sessions, testing, and training workshops with business users and other IT associates
  • Contribute to thought capital by creating executive presentations and architecture documents, and articulate them to executives through presentations
Qualifications
  • 3+ years of Hadoop administration
  • Cloudera and AWS certifications are strongly desired.
  • Bachelor's degree, with a technical major, such as engineering or computer science.
  • Four to six years of Linux/Unix system administration experience.
  • Ability to travel up to 50 percent, preferred.
Perficient full-time employees receive complete and competitive benefits. We offer a collaborative work environment, competitive compensation, generous work/life opportunities and an outstanding benefits package that includes paid time off plus holidays. In addition, all colleagues are eligible for a number of rewards and recognition programs including billable bonus opportunities. Encouraging a healthy work/life balance and providing our colleagues great benefits are just part of what makes Perficient a great place to work.
More About Perficient
Perficient is the leading digital transformation consulting firm serving Global 2000 and enterprise customers throughout North America. With unparalleled information technology, management consulting and creative capabilities, Perficient and its Perficient Digital agency deliver vision, execution and value with outstanding digital experience, business optimization and industry solutions.
Our work enables clients to improve productivity and competitiveness; grow and strengthen relationships with customers, suppliers and partners; and reduce costs. Perficient's professionals serve clients from a network of offices across North America and offshore locations in India and China. Traded on the Nasdaq Global Select Market, Perficient is a member of the Russell 2000 index and the S&P SmallCap 600 index.
Perficient is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law.
Disclaimer: The above statements are not intended to be a complete statement of job content, rather to act as a guide to the essential functions performed by the employee assigned to this classification. Management retains the discretion to add or change the duties of the position at any time.
Select work authorization questions to ask when applicants apply
  • Are you legally authorized to work in the United States?
  • Will you now, or in the future, require sponsorship for employment visa status (e.g. H-1B visa status)?
Genoa Employment Solutions
  • Detroit, MI

Solution Architect

If you are someone who:

  • Is a creative thinker and great teammate who can come up with innovative approaches to help resolve complex issues.
  • Has good analytical and problem solving skills and is able to break down a solution into smaller units of work and produce a solution roadmap.
  • Has written high quality, well-tested shared components that can be leveraged by multiple systems.
  • Takes pride in software craftsmanship, diving deep into code and constantly innovating.
  • Has extensive experience in back-end development, service design, data modeling and web development.
  • Takes requirements (business features, technical debts and internal enhancements) and designs resilient solutions.
  • Can support and collaborate with multiple development teams and provide technical guidance.
  • Can step into specific projects to supply additional management, coding and engineering capacity as needed to make projects successful.
  • Has expert knowledge in distributed systems with a heavy focus in conversational semantics for large scale distributed systems.
  • Is passionate about webscale technologies as applied to large scale growing businesses.


Must Have Skills:

  • Excellent verbal and written communication skills, with the ability to explain a complex technical solution to business stakeholders.
  • Demonstrated ability to translate customer needs into well-documented requirements and architectural plans, and to produce near-production-ready prototypes.
  • Expert at producing sequence flow diagrams, solution diagrams, and architectural component diagrams.
  • Two years of experience of mentoring team leads and engineers.
  • Demonstrated willingness to learn from peers and coworkers junior to them.
  • Ability to enforce responsible engineering practices (including automated unit and stress testing, engineering for data security, resiliency, scalability, etc.)
  • Proficient in multiple programming languages such as Java, Python, Ruby, Scala, Groovy, Go, and Bash.
  • Expert knowledge of Java, Scala, or Erlang, with 7+ years of experience.
  • In-depth experience developing high-volume transaction and distributed applications, both real-time and batch.
  • A deep understanding of performance tuning and scalability.
  • Development experience with REST web services and various data interchange and representation formats such as JSON, XML, and HTML (see the sketch after this list).
  • Development experience with RDBMS, distributed caches (Memcached, Redis), and NoSQL databases.
  • Deep end to end architectural understanding of distributed applications.
  • Experience with containerization technologies (such as Docker) and familiarity with micro-service architecture and development patterns.
  • A deep and demonstrable understanding of design patterns.
  • Knowledge and understanding of application servers such as JBoss, Tomcat and Weblogic.
  • Development experience with security such as securing the users and their data.
  • Development experience writing batch jobs that perform high-volume transactions.
  • Knowledge and understanding of work in modern CI environments: version control, build tools, CI servers.
  • Knowledge of Open Source libraries, tools and frameworks. Experience with any modern open source libraries would be an added advantage.
  • Experience with Agile development methodology.
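An illustrative sketch, not taken from the posting, that combines two of the skills above: a REST endpoint returning JSON with Redis used as a look-aside cache. It is written in Python for brevity (the role centers on Java/Scala/Erlang) and assumes Flask, redis-py, and a local Redis server; the database lookup is a stub.

```python
# Illustrative sketch: a REST endpoint that returns JSON and uses Redis as a
# look-aside cache. Assumes Flask, redis-py, and a Redis server on localhost.
import json

import redis
from flask import Flask, jsonify

app = Flask(__name__)
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def load_profile_from_db(user_id: str) -> dict:
    # Placeholder for a real RDBMS lookup.
    return {"id": user_id, "name": "example user"}

@app.route("/users/<user_id>")
def get_user(user_id: str):
    cached = cache.get(f"user:{user_id}")
    if cached:
        return jsonify(json.loads(cached))
    profile = load_profile_from_db(user_id)
    cache.setex(f"user:{user_id}", 60, json.dumps(profile))  # cache for 60 seconds
    return jsonify(profile)

if __name__ == "__main__":
    app.run(port=8080)
```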

Highly desirable Skills:

  • Experience in HIPAA and PCI security Domain.
  • Development experience with modern technologies such as Elasticsearch, Kafka, Kibana, Logstash, Hibernate/JPA, Spring, and Angular. Experience with any modern technologies would be an added advantage.
  • Experience building and deploying software onto AWS or Openstack using Chef, or similar technologies.
  • Experience with big data and data analytics applications, or similar systems programming experience.
  • Strong expertise in text parsing, analytics and machine learning.
  • Has worked extensively on parsing and generating EDI formats.
  • Experience with the SAFe framework.
  • Experience with Java Message Service (JMS) and Message Driven Bean (MDB) development is preferred.
  • Expert knowledge of JDBC and managing transactions.
  • Understanding of Service Oriented Architecture.
  • US Citizenship is preferred.
  • Experience in the insurance industry, specifically with the health care industry.
  • Bachelor of Science in Computer Science, Information Systems, Engineering or a related field or comparable work experience.
Code North America
  • Dallas, TX

Strong leader. Cloud PaaS thought leader. Key influencer. Great collaborator. Solid technologist.

We are looking for a Technology R&D Director who can lead a Cloud services R&D team through the architecture, prototyping, technical coding specification, and Product Development oversight aspects of our Product delivery lifecycle. This position works closely with our clients, agency teams, and Product Management to understand needs and deliver solid Product features in alignment with the Code Product Roadmap.

The Technology R&D Director works with Product Management, Client Management, and Technology stakeholders to build a holistic technology view of Code NA's Product strategy, processes, information, and technical assets to ensure Client, Product, and Technology alignment.

The position is not hands-on and is a critical upper-level management role, providing leadership and guidance to the Technology R&D Team. The candidate needs to build strong relationships and must have solid business-facing skills to influence and communicate critical details to solution architects and the teams the candidate will be working with.

Code is innovating rapidly in this space to grow its share of this market by providing Omnicom agencies and their clients with a state-of-the-art marketing platform. Code is at the core of this effort, responsible for research and development of all the Product components in our marketing technology stack.

Reports To: Chief Technology Officer

What You'll Do:

  • Lead the R&D team in recommending new technologies for Product domains based upon business value drivers and return on investment; drives new technologies toward implementation and exploitation
  • Establish overall Cloud systems architecture vision and ensure specific components are appropriately designed and leveraged
  • Take responsibility for health of overall Cloud architecture aligned to the Product Roadmap
  • Maintain components of Product Cloud architecture strategy and vision
  • Coordinate all Product-level conceptual Cloud Architecture components (e.g., data architecture, services architecture, technical architecture, service management architecture)
  • Monitor usage of Cloud architectural components and assume responsibility for reuse
  • Lead Cloud architecture strategy and vision to be aligned to execution of the Product Roadmap
  • Direct and manage the R&D team for all PaaS activities (Cloud architecture, technical story development, spec coding oversight, quality assurance and automated deployment into an established DevOps service delivery capability) in Azure
  • Lead architecture and drive engineering for Cloud, PaaS, and serverless functions on Azure (see the sketch after this list)
  • Lead the development of Cloud platform architecture that includes areas such as micro services, containers, triggers, data ingestion, analytics, security and privacy, etc.
  • Lead the design of Product features that the DevOps team can manage, monitor, secure and service
  • Lead the Technology R&D Team planning strategy for Cloud Product initiatives. Must have done enterprise architecture and supported R&D application and Cloud development projects
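A minimal sketch of an Azure serverless function of the kind referenced above: an HTTP-triggered Azure Function written with the Python programming model. The posting does not specify a language, and the accompanying function.json binding configuration is omitted.

```python
# Minimal sketch: an HTTP-triggered Azure Function (Python programming model).
# The function.json binding configuration that pairs with this file is omitted.
import logging

import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    # Read a query parameter with a default, log it, and return a plain response.
    name = req.params.get("name", "world")
    logging.info("Processing request for %s", name)
    return func.HttpResponse(f"hello {name}", status_code=200)
```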

What You've Got:

Bachelor's degree in Computer Science, Information Technology, Computer Applications, or a related field; eight years of experience in architecture/design in relevant technology disciplines; 10+ years in information technology with a significant background in Cloud services, preferably with Azure. MS/MA degree preferred.

    • Strong leadership skills
    • Deep understanding of Cloud computing technologies, business drivers and emerging computing trends
    • Working knowledge of AGILE development, SCRUM and Application Lifecycle Management (ALM) utilizing at least one of the following programming languages: .NET, C++, Java, JSON, PHP, Perl, Python, Ruby on Rails and/or Pig/Hive
    • Proven track record of driving decisions collaboratively, resolving conflicts, and ensuring follow through, with exceptional verbal and written communication
    • Oversight experience on major transformation projects and successful transitions to implementation support teams
    • Life-long learner who is comfortable with uncertainty and taking calculated risks
    • Ability to engage in senior-level business and technology discussions regarding digital transformation, solution business value and end-to-end architecture
    • Passion for enabling DevOps concepts of continuous integration, continuous delivery and configuration management
    • Strong presenter with the ability to concisely articulate ideas confidently to both large and small audiences
    • Drive innovation through the selection and implementation of new technologies  
    • Guide teams through the architecture governance process, delivering artifacts that meet standards and highlight dependencies, risks and risk-mitigation options
    • Experience working in a global environment preferred
Cheetah Digital
  • Cyberjaya, Malaysia

Cheetah Digital is hiring Full Stack C#/.NET Engineers to join its fast growing innovation engineering team in Kuala Lumpur. The C#/.NET Engineer is responsible for designing, developing, deploying, and supporting Cheetah Digital’s cloud-based platform and solutions used by leading brands in North America, Europe, and Asia. Cheetah’s Marketing Suite Platform processes and analyzes billions of transactions per day on an Apache Hadoop and .NET platform hosted on AWS and Azure. The .NET Engineer will work closely with Cheetah’s product management, quality assurance, operations, and customer success teams on a daily basis.


The ideal candidate will possess a strong technical foundation in C#/.NET, Microsoft SQL Server, APIs, and performance tuning in cloud environments. The ideal candidate should also have an aptitude for quality and a collaborative mindset, learning and contributing while working closely with team members.

RESPONSIBILITIES:



  • Translate business requirements into specifications and detailed designs

  • Develop and support Cheetah Digital’s .NET applications and RESTful web services by writing efficient, maintainable code to meet requirements and adhere to security standards

  • Work through all phases of the software development life cycle, including analysis, design, implementation, testing, deployment, and maintenance

  • Conduct large-scale performance benchmarks and tune the system for high throughput

  • Investigate, analyze and address reported defects in a timely manner


QUALIFICATIONS:



  • Bachelor’s Degree in Computer Science or related field, with a minimum A- GPA, from a top technical university

  • 7+ years programming experience in C#/.NET, or other enterprise, high-scale framework and fundamental understanding of the core server-side development concepts

  • Proficient in writing and performance tuning complex T-SQL

  • Advanced relational DB experience with Microsoft SQL Server, Oracle, or Postgres

  • Experience building and integrating with web services (REST, SOAP), APIs, JSON, XML

  • Strong knowledge of multi-tier web application design

  • Experience with Hadoop components, such as Hbase, Spark, Kafka, Hive, Storm is a huge plus

  • Pass a strict criminal background check and provide strong references


COMMUNICATION SKILLS:



  • Excellent communication skills, both verbal and written

  • Demonstrated ability to collaborate with local and remote teams in different time zones

  • Demonstrated ability to compose clear and concise technical documentation


TECHNICAL QUALIFICATIONS:



  • Languages: C#, C++, Javascript, PowerShell

  • Frameworks: .NET, MVC, WebApi, git

  • UI: HTML, JS, CSS, JQuery, Angular, React


  • Databases: SQL Server, Oracle, Postgres

Staff Smart Inc.
  • San Diego, CA

Our client is recognized as a global leader in interactive and digital entertainment, with a commitment to delivering superior gaming experiences. Their business division has locations in San Diego, San Francisco, London, and Tokyo. Everyone is committed to delivering an industry-leading, enhanced gaming experience built on imagination, creativity, and the team's profound passion for gaming. Be a part of a company that thrives on the cutting edge of technology, and join them in shaping the future of interactive entertainment.
 

About the role and how you will spend your time:

  1. Lead a team of highly engaged software engineers to develop a new high volume e-commerce global payment processing system
  2. Be the key contributor to the architectural direction for large-scale commerce systems.
  3. Become a component owner who works both hands-on and provides technical oversight to others in order to implement interactive web-based services and commerce capabilities using sound technology choices.
  4. Maintain operational ownership of mission-critical components running in the production environment.
  5. Become a subject matter expert and provide strong technical leadership and mentoring.
 

Essentials:

  • BS degree in Computer Science or equivalent
  • 10+ years of large-scale programming and systems experience using Java
  • Experience building scalable systems with low latency and high throughput, including operationalization and monitoring.
  • Experience developing applications on Unix/Linux platforms
  • Experience in full life-cycle agile software development.
  • Experience in object-oriented analysis and design.
  • Strong knowledge of algorithms, data structures, design patterns, and implementation approaches
  • Hands-on development experience in architecting and building a data pipeline
  • Experience developing web services (e.g. REST, SOAP, JSON).

Additional Desired Attributes:

  • Experience with eCommerce, especially with the design and development of global payment processing systems, and integrating 3rd party acquiring platforms.
  • Understanding of best practices used in building software within PCI requirements.
  • Hands-on experience with different types of NoSQL data stores, messaging or pub-sub queuing systems.
  • Experience leading engineering teams and mentoring others.
  • Experience with AWS services (such as Kinesis, Elasticsearch, DynamoDB, HBase, Aurora); see the sketch after this list.
  • Experience with caching solutions.
  • Experience with Java Application Servers/Containers.
  • Experience using source control and bug tracking systems in a team environment
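An illustrative sketch of the pub-sub/streaming style mentioned above: publishing a payment event to an AWS Kinesis stream with boto3. The stream name, region, and event fields are placeholders, and AWS credentials are assumed to be configured in the environment.

```python
# Illustrative sketch: publish a payment event to a Kinesis stream with boto3.
# Stream name, region, and event fields are placeholders; AWS credentials are
# assumed to be configured in the environment.
import json

import boto3

kinesis = boto3.client("kinesis", region_name="us-west-2")  # placeholder region

def publish_payment_event(order_id: str, amount_cents: int) -> None:
    """Publish one payment event; PartitionKey groups records per order."""
    event = {"order_id": order_id, "amount_cents": amount_cents, "status": "AUTHORIZED"}
    kinesis.put_record(
        StreamName="orders",                      # placeholder stream name
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=order_id,
    )

publish_payment_event("order-123", 4999)
```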

About you:

  • You possess a drive and passion for quality with the ability to inspire, excite and motivate other team members.
  • You have outstanding verbal and written communication skills and are able to work with others at all levels.
  • You're effective at working with geographically remote and culturally diverse teams.
  • Your curiosity drives you to go beyond your immediate assignments and look for ways to make things better. You're not afraid to ask questions.
  • You have natural leadership skills and the ability to motivate yourself and others to drive toward excellence


Applicants must be authorized to work for any U.S. employer. Sponsorship/Relocation assistance is not available for this position.

Staff Smart, Inc. is an Equal Opportunity/Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, pregnancy, sexual orientation, gender identity, national origin, age, protected veteran status, or disability status.
No 3rd Party Vendors, please.

Wipro Limited
  • Dallas, TX
  • 7+ years of design/architecture/implementation/consulting experience on enterprise data warehouses, on-premise and on cloud platforms
  • Experience designing the best architecture for an EDW/Data Lake on public/private cloud, and selecting the most appropriate tools and techniques to implement cost-effective, optimized solutions
  • Formulate conceptual architectures and communicate architectural vision, goals, and design objectives to multiple audiences
  • Good expertise in non-functional aspects (performance, HA, scalability, volume, security)
  • Work on architectural review discussions and prepare design solutions that can be submitted for stakeholder approval and subsequently taken up with scrum teams for implementation
  • Hands-on experience with cloud service providers' database migration tools
  • Hands-on experience with data orchestration using cloud service provider tools
  • Solid hands-on experience implementing EDW/Data Lake applications on cloud platforms (AWS S3/EC2/Redshift/EMR/Glue, Snowflake, Azure, Google Cloud); see the sketch after this list
  • Create architectural principles/blueprints to support business goals, and develop IT frameworks that support EDW applications
  • Solid hands-on experience with the Big Data technology stack (Hadoop, Spark, Kafka)
  • Relational and NoSQL databases (MongoDB, HBase, Cassandra)
  • Stream-processing systems such as Storm and Spark Streaming
  • One or more programming languages such as Python, Java, Perl
  • Good knowledge and understanding of JSON and RESTful web API services
  • Deep understanding of EDW data modeling using star and snowflake schemas, data architecture, capacity planning, and sizing
  • Effectively evaluate the various tools available in the marketplace (open source and commercial) and suggest the right tools to accomplish project objectives and document project requirements
  • Provide periodic feedback to the Competency / Center of Excellence groups on patterns of requirements, use cases, and other insights collected through various forums and pre-sales activities
  • Work with the tech team to code and implement solutions in production and QA environments for permanent resolutions
  • Excellent presentation skills, with a high degree of comfort speaking with internal and external executives, IT management, and technical teams including software development groups
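A hedged sketch of a simple data-lake ingestion step on AWS, in the spirit of the EDW/Data Lake work listed above: reading raw JSON from S3 with PySpark and writing partitioned Parquet back. Bucket names, paths, and column names (event_id, event_ts) are placeholders, and the Spark session is assumed to be configured with S3 access.

```python
# Hedged sketch: read raw JSON from S3, deduplicate, and write partitioned Parquet.
# Bucket paths and column names are placeholders; assumes S3 access is configured.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("raw-to-curated").getOrCreate()

raw = spark.read.json("s3a://example-raw-bucket/events/")        # placeholder path

curated = (raw
           .withColumn("event_date", F.to_date("event_ts"))      # placeholder column
           .dropDuplicates(["event_id"]))                        # placeholder key

(curated.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3a://example-curated-bucket/events/"))        # placeholder path
```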

Cottonwood Financial
  • Dallas, TX

Reporting to our Business Intelligence Manager, the BI Analyst (a.k.a. BI Developer I - Analyst) is part of the team maintaining and operating the Data Warehouse and BI solutions. The BI Analyst is responsible for delivering timely and high-quality data, reports, and/or preliminary data analysis. This person will be seen as a subject-matter expert in the location, accuracy, and meaning of all data produced and consumed by Cottonwood Financial. The BI Analyst, along with the BI Manager, will be a conduit between the assigned business departments and the BI development team. The role entails a high level of engagement with the business and therefore requires exceptional communication skills and business acumen. The successful candidate will employ an analytical approach to requests and proactively seek resolutions in a timely manner. This is a great opportunity for a talented person to take his/her next career step in BI and take on greater responsibility in learning to deliver value to business units through data, reports, and preliminary data analysis. This position is based at our Administrative Office (HQ) in Irving (Las Colinas), Texas.

KEY RESPONSIBILITIES

Data Management

  • Ensure stewardship and quality of data available to the appropriate departments at the time of need. Maintain standards for accuracy while continually improving data collection, processing and reporting
  • Create data QA processes, drive QA testing, ensure data quality levels, and monitor the quality of data; validating that sources are reliable and appropriate
  • Develop and own a working data dictionary and map of data warehouse and external sources of data such as Google Analytics and CRM. Ensure data definitions are correct and effectively communicate and document updates to business groups
  • Help to create and maintain data warehouse(s) encompassing financial data, prospect marketing data, customer profile data, transactional data, and performance data

Data Extraction and Wrangling
  • Work closely with the highly analytical departments to support both their planning needs for an effective and efficient data warehouse and their ad-hoc requests for parsing and loading new data sources into the data warehouse (see the sketch after this list)
  • Translate mission needs into an end-to-end data wrangling approach to achieve results
  • Perform the data collection and understanding, data cleansing and integration, and data storage and retrieval to support the business teams
  • Act as an internal resource for all data within Cottonwood Financial, be able to provide answers to ad-hoc questions about the data, and to proactively recommend appropriate uses of data not already used by the other departments
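A minimal sketch, assuming pandas, SQLAlchemy, and an ODBC driver are installed, of the ad-hoc parse-and-load work described above; the source file, connection string, and staging table are placeholders, and the posting itself centers on T-SQL/SSIS rather than Python.

```python
# Minimal sketch: parse a new source file and load it into a staging table.
# File name, connection string, and table name are placeholders.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine(
    "mssql+pyodbc://user:password@dw-server/DataWarehouse?driver=ODBC+Driver+17+for+SQL+Server"
)

frame = pd.read_csv("new_source.csv")          # pd.read_excel / pd.read_json also apply
frame.columns = [c.strip().lower().replace(" ", "_") for c in frame.columns]
frame["load_ts"] = pd.Timestamp.utcnow()       # simple audit column

frame.to_sql("stg_new_source", engine, schema="staging", if_exists="append", index=False)
```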

Reporting and Real-Time Alerts
  • Work closely with business teams to devise thoughtful and actionable reports and real-time alerts that will drive decisions
  • Under the guidance and suggestions of business leaders, develop actionable reports for use by the senior leadership team, where senior leadership is reliant upon you to deeply understand the data and business perspective well enough to cleanse and structure the data aggregation with minimal direction from the business
  • Deploy reports and maintain their timeliness and integrity so that they can be relied upon by store employees, district managers, regional managers, Area Operations, and Accounting to maximize operational effectiveness and facilitate their processes
  • Interpret existing reports and dashboards by understanding and explaining to others their sources, calculations, and parameters; automate and streamline processes and procedures for efficient data and reporting production

REQUIREMENTS
  • Bachelor's degree in Computer Science, Computer Information Systems, Management Information Systems, or Decision Science; an analytical field such as Statistics, Econometrics, or Applied Mathematics; or business experience with a concentration in one of the aforementioned areas
  • 2+ years in the use of T-SQL for developing stored procedures, triggers, tables, functions, indexes, and SSRS reports etc.
  • Experience working with structured and semi-structured data formats such as fixed width, CSV, Excel, JSON, XML, and MSSQL and transforming from one format to another
  • Proficient in MS Excel, Visio, PowerPoint, and VBA code
  • Experience in developing and administering dashboards in Power BI and/or Tableau 
  • Experience in relational data modeling
  • Comprehension of scenarios and requirements within business processes for building scorecards and dashboards
  • Proven track record of delivering reports that can be used to make critical business decisions
  • Possess service-first mentality with a strong desire to meet and exceed expectations 
  • Ability to quickly grasp both technology and business concepts in an ever-changing environment 
  • Good understanding of basic consumer credit and financial concepts (e.g. simple interest, return on investment, etc.)
  • Local (Dallas/Fort Worth area) candidates only; no relocation
  • Must be currently authorized to work in the United States without sponsorship and not require sponsorship in the future


PREFERRED QUALIFICATIONS
  • Knowledge of developing SSIS and dimensional data modeling
  • Knowledge of Kimball Data Warehouse design and build
  • Follow the defined System Development Lifecycle process which includes creation of required documentation and required approvals for each phase
  • Previous analysis and/or development experience using an Agile methodology
  • Experience managing work in a project tracking tool such as Team Foundation Server or Jira

COMPENSATION
  • Annual salary $73,400

BENEFITS
  • Medical, dental, and vision
  • Voluntary life/ AD&D
  • Short-term & long-term disability
  • 401K with company match
  • Paid vacation, holidays, and sick time
  • Paid maternity, paternity, extended medical leave, and jury duty
  • Corporate discount program on personal cell phone accounts with select providers
  • Business casual work environment

ABOUT COTTONWOOD
Founded in 1996, Cottonwood Financial is one of the largest privately held retail consumer finance companies in the United States. We have zero debt, have been profitable every year since inception, and our growth is funded entirely through internally generated capital. Headquartered in Irving (Las Colinas), Texas, we have company-owned locations, under our Cash Store brand, across the country. Through this national brick-and-mortar footprint, we provide best-in-class customer service and offer an innovative mix of financial products and services to our customers.

We have been named multiple times to the Inc. 5000 list of America's fastest-growing private companies, as well as to the Dallas 100 list of the fastest-growing private companies in North Texas.
R1 RCM
  • Salt Lake City, UT

Healthcare is at an inflection point. Businesses are quickly evolving, and new technologies are reshaping the healthcare experience. We are R1 - a revenue cycle management company that is passionate about simplifying the patient experience, removing the paperwork hassle and demystifying financial obligations. Our success enables our healthcare clients to focus on what matters most - providing excellent clinical care.

Great people make great companies and we are looking for a great Lead Software Engineer to join our team in Murray, UT. Our approach to building software is disciplined and quality-focused with an emphasis on creativity, craftsmanship and commitment. We are looking for smart, quality-minded individuals who want to be a part of a high functioning, dynamic team. We believe in treating people fairly and your compensation should reflect that. Bring your passion for software engineering and help us disrupt ourselves as we build the next generation healthcare revenue cycle management products and platforms. Now is the right time to join R1!

We are seeking a highly experienced Lead Platform Engineer to join our team. The lead platform engineer will be responsible for building and maintaining a real-time, scalable, and resilient platform for product teams and developers. This role will be responsible for performing and supervising the design, development, and implementation of platform services, tools, and frameworks. You will work with other software architects, software engineers, quality engineers, and other team members to design and build platform services. You will also provide technical mentorship to software engineers/developers and related groups.


Responsibilities:


  • Be responsible for designing and developing software solutions with an engineering mindset
  • Ensures SOLID principles and standard design patterns are applied across the organization to system architectures and implementations
  • Acts as a technical subject matter expert: helping fellow engineers, demonstrating technical expertise, and engaging in solving problems
  • Collaborate with stakeholders to help set and document technical standards
  • Evaluates, understands, and recommends new technologies, languages, or development practices that would be beneficial to implement.
  • Participate in and/or lead technical development design sessions to formulate technical designs that minimize maintenance, maximize code reuse and minimize testing time

Required Qualifications:


  • 8+ years of experience in building scalable, highly available, distributed solutions and services
  • 4+ years of experience in middleware technologies: Enterprise Service Bus (ESB), Message Queuing (MQ), Routing, Service Orchestration, Integration, Security, API Management, Gateways
  • Significant experience in RESTful API architectures, specifications and implementations
  • Working knowledge of progressive development processes like scrum, XP, Kanban, TDD, BDD and continuous delivery using Jenkins
  • Significant experience working with most of the following technologies/languages: Java, C#, SQL, Python, Ruby, PowerShell, .NET/Core, WebAPI, Web Sockets, Swagger, JSON, REST, GIT
  • Hands-on experience in microservices architecture, Kubernetes, Docker
  • Familiarity with Middleware platform Software AG WebMethods is a plus
  • Conceptual understanding of cloud platforms, Big Data, and Machine Learning is a major plus
  • Knowledge of the healthcare revenue cycle, EMRs, practice management systems, FHIR, HL7 and HIPAA is a major plus


Desired Qualifications:


  • Strong sense of ownership and accountability for delivering well designed, high quality enterprise software on schedule
  • Prolific learner, willing to refactor your understanding of emerging patterns, practices and processes as much as you refactor your code
  • Ability to articulate and illustrate software complexities to others (both technical and non-technical audiences)
  • Friendly attitude and available to mentor others, communicating what you know in an encouraging and humble way
  • Continuous Learner


Working in an evolving healthcare setting, we use our shared expertise to deliver innovative solutions.  Our fast-growing team has opportunities to learn and grow through rewarding interactions, collaboration and the freedom to explore professional interests.


Our associates are given valuable opportunities to contribute, to innovate, and to create meaningful work that makes an impact in the communities we serve around the world. We also offer a culture of excellence that drives customer success and improves patient care. We believe in giving back to the community and offer a competitive benefits package. To learn more, visit: r1rcm.com

Visa
  • Austin, TX
Company Description
Visa operates the world's largest retail electronic payments network and is one of the most recognized global financial services brands. Visa facilitates global commerce through the transfer of value and information among financial institutions, merchants, consumers, businesses and government entities. We offer a range of branded payment product platforms, which our financial institution clients use to develop and offer credit, charge, deferred debit, prepaid and cash access programs to cardholders. Visa's card platforms provide consumers, businesses, merchants and government entities with a secure, convenient and reliable way to pay and be paid in 170 countries and territories.
Job Description
At Visa University, our mission is to turn our learning data into insights and get a deep understanding of how people use our resources to impact the product, strategy and direction of Visa University. In order to help us achieve this we are looking for someone who can build and scale an efficient analytics data suite and also deliver impactful dashboards and visualizations to track strategic initiatives and enable self-service insight delivery. The Staff Software Engineer, Learning & Development Technology is an individual contributor role within Corporate IT in our Austin-based Technology Hub. In this role you will participate in design, development, and technology delivery projects with many leadership opportunities. Additionally, this position provides application administration and end-user support services. There will be significant collaboration with business partners, multiple Visa IT teams and third-party vendors. The portfolio includes SaaS and hosted packaged applications as well as multiple content providers such as Pathgather (Degreed), Cornerstone, Watershed, Pluralsight, Lynda, Safari, and many others.
The ideal candidate will bring energy and enthusiasm to evolve our learning platforms, be able to easily understand business goals/requirements and be forward thinking to identify opportunities that may be effectively resolved with technology solutions. We believe in leading by example, ownership with high standards and being curious and creative. We are looking for an expert in business intelligence, data visualization and analytics to join the Visa University family and help drive a data-first culture across learning.
Responsibilities
  • Engage with product managers, design team and student experience team in Visa University to ensure that the right information is available and accessible to study user behavior, to build and track key metrics, to understand product performance and to fuel the analysis of experiments
  • Build lasting solutions and datasets to surface critical data and performance metrics and optimize products
  • Build and own the analytics layer of our data environment to make data standardized and easily accessible
  • Design, build, maintain and iterate a suite of visual dashboards to track key metrics and enable self-service data discovery
  • Participate in technology project delivery activities such as business requirement collaboration, estimation, conceptual approach, design, development, test case preparation, unit/integration test execution, support process documentation, and status updates
  • Participate in vendor demo and technical deep dive sessions for upcoming projects
  • Collaborate with, and mentor, data engineers to build efficient data pipelines and impactful visualizations
Qualifications
  • Minimum 8 years of experience in a business intelligence, data analysis or data visualization role and a degree in science, computer science, statistics, economics, mathematics, or similar
  • Significant experience in designing analytical data layers and in conducting ETL with very large and complex data sets
  • Expertise with Tableau desktop software (techniques such as LOD calculations, calculated fields, table calculations, and dashboard actions)
  • Expert in data visualization
  • Strong proficiency with SQL and JSON
  • Experience with Python is a must, and experience with data science libraries is a plus (NumPy, pandas, SciPy, scikit-learn, NLTK, and deep learning with Keras); a minimal pandas sketch follows this list
  • Experience with Machine Learning algorithms (Linear Regression, Multiple Regression, Decision Trees, Random Forest, Logistic Regression, Naive Bayes, SVM, K-means, K-nearest neighbor, Hierarchical Clustering)
  • Experience with HTML and JavaScript
  • Basic SFTP and encryption knowledge
  • Experience with Excel (Vlookups, pivots, macros, etc.)
  • Experience with xAPI is a plus
  • Ability to leverage HR systems such as Workday, Salesforce etc., to execute the above responsibilities
  • Understanding of statistical analysis, quantitative aptitude and the ability to gather and interpret data and information
  • You have a strong business sense and are able to translate business problems into data-driven solutions with minimal oversight
  • You are a communicative person who values building strong relationships with colleagues and stakeholders, enjoys mentoring and teaching others, and is able to explain complex topics in simple terms
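To give a concrete flavor of the analytics-layer and Python work described above, here is a minimal sketch. It assumes a hypothetical learning-activity extract; the file name and columns (learner_id, course_id, provider, minutes_spent, completed) are illustrative assumptions, not Visa University's actual data model:

    # Minimal sketch: aggregate a hypothetical learning-activity extract into a
    # dashboard-ready dataset. File and column names are illustrative assumptions.
    import pandas as pd

    activity = pd.read_csv("learning_activity.csv")  # learner_id, course_id, provider, minutes_spent, completed (0/1)

    summary = (
        activity
        .groupby(["provider", "course_id"], as_index=False)
        .agg(learners=("learner_id", "nunique"),
             total_minutes=("minutes_spent", "sum"),
             completion_rate=("completed", "mean"))
        .sort_values("learners", ascending=False)
    )

    # Write a standardized table that Tableau (or any BI tool) can consume directly.
    summary.to_csv("course_engagement.csv", index=False)

A standardized output like this is the kind of "analytics layer" dataset on which self-service dashboards can be built.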
Additional Information
All your information will be kept confidential according to EEO guidelines.
Job Number: REF15081Q
Pyramid Consulting, Inc
  • Atlanta, GA

Job Title: Tableau Engineer

Duration: 6-12 Months+ (potential to go perm)

Location: Atlanta, GA (30328) - Onsite

Notes from Manager:

We need a data analyst who knows Tableau, scripting (Python, JSON), the Alteryx API, AWS, and analytics.

Description

The Tableau Software Engineer will be a key resource working across our Software Engineering BI/Analytics stack to ensure stability, scalability, and the delivery of valuable BI & Analytics solutions for our leadership teams and business partners. Key to this position is the ability to identify problems or analytic gaps and to map and implement pragmatic solutions. An excellent blend of analytical, technical, and communication skills in a team-based environment is essential for this role.

Tools we use: Tableau, Business Objects, AngularJS, OBIEE, Cognos, AWS, Opinion Lab, JavaScript, Python, Jaspersoft, Alteryx and R packages, Spark, Kafka, Scala, Oracle

Your Role:

·         Able to design, build, maintain & deploy complex reports in Tableau

·         Experience integrating Tableau into another application or native platforms is a plus

·         Expertise in Data Visualization including effective communication, appropriate chart types, and best practices.

·         Knowledge of best practices and experience optimizing Tableau for performance.

·         Experience reverse engineering and revising Tableau Workbooks created by other developers.

·         Understand basic statistical routines (mean, percentiles, significance, correlations) and be able to apply them in data analysis; a short Python sketch follows this list

·         Able to turn ideas into creative & statistically sound decision support solutions
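
As a rough illustration of those basic statistical routines, the following sketch computes a mean, percentiles, a correlation, and a simple significance test in Python. The input file and its columns (group, score, engagement) are assumptions made purely for illustration:

    # Minimal sketch of basic statistical routines. The input file and column
    # names are illustrative assumptions.
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("sample_scores.csv")  # assumed columns: group, score, engagement

    print(df["score"].mean())                            # mean
    print(df["score"].quantile([0.25, 0.5, 0.75, 0.9]))  # percentiles
    print(df["score"].corr(df["engagement"]))            # Pearson correlation

    # Welch's two-sample t-test for a difference in means between two groups.
    a = df.loc[df["group"] == "A", "score"]
    b = df.loc[df["group"] == "B", "score"]
    t_stat, p_value = stats.ttest_ind(a, b, equal_var=False)
    print(t_stat, p_value)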

Education and Experience:

·         Bachelor's degree in Computer Science or equivalent work experience

·         3-5 years of hands-on experience in data warehousing & BI technologies (Tableau/OBIEE/Business Objects/Cognos)

·         Three or more years of experience in developing reports in Tableau

·         Good understanding of Tableau architecture, design, development, and the end-user experience.

What We Look For:

·         Proficiency working with large databases in Oracle; experience with Big Data technologies is a plus.

·         Deep understanding of, and working experience with, data warehouse and data mart concepts.

·         Understanding of Alteryx and R packages is a plus

·         Experience designing and implementing high volume data processing pipelines, using tools such as Spark and Kafka.

·         Experience with Scala, Java or Python and a working knowledge of AWS technologies such as Glue, EMR, Kinesis and Redshift preferred.

·         Excellent knowledge of Amazon AWS technologies, with a focus on highly scalable cloud-native architectural patterns, especially EMR, Kinesis, and Redshift

·         Experience with software development tools and build systems such as Jenkins

The HT Group
  • Austin, TX

Full Stack Engineer, Java/Scala Direct Hire Austin

Do you have a track record of building both internal- and external-facing software services in a dynamic environment? Are you passionate about introducing disruptive and innovative software solutions for the shipping and logistics industry? Are you ready to deliver immediate impact with the software you create?

We are looking for Full Stack Engineers to craft, implement and deploy new features, services, platforms, and products. If you are curious, driven, and naturally explore how to build elegant and creative solutions to complex technical challenges, this may be the right fit for you. If you value a sense of community and shared commitment, you'll collaborate closely with others in a full-stack role to ship software that delivers immediate and continuous business value. Are you up for the challenge?

Tech Tools:

  • Application stack runs entirely on Docker, frontend and backend
  • Infrastructure is 100% Amazon Web Services and we use AWS services whenever possible. Current examples: EC2 Elastic Container Service (Docker), Kinesis, SQS, Lambda and Redshift
  • Java and Scala are the languages of choice for long-lived backend services
  • Python for tooling and data science
  • Postgres is the SQL database of choice
  • Actively migrating to a modern JavaScript-centric frontend built on Node, React/Relay, and GraphQL as some of our core UI technologies

Responsibilities:

  • Build both internal and external REST/JSON services running on our 100% Docker-based application stack or within AWS Lambda
  • Build data pipelines around event-based and streaming-based AWS services and application features (a minimal sketch follows this list)
  • Write deployment, monitoring, and internal tooling to operate our software with as much efficiency as we build it
  • Share ownership of all facets of software delivery, including development, operations, and test
  • Mentor junior members of the team and coach them to be even better at what they do
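
As a rough sketch of this kind of event-driven pipeline work, here is a minimal AWS Lambda handler in Python that decodes records from a Kinesis stream and forwards a derived message to SQS. The queue URL, event type, and payload fields are illustrative assumptions, not details of the actual system:

    # Minimal sketch: Lambda handler consuming Kinesis records and forwarding a
    # derived message to SQS. Queue URL and payload fields are assumptions.
    import base64
    import json
    import os

    import boto3

    sqs = boto3.client("sqs")
    QUEUE_URL = os.environ.get("SHIPMENT_EVENTS_QUEUE_URL", "")  # hypothetical queue

    def handler(event, context):
        processed = 0
        for record in event.get("Records", []):
            payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
            if payload.get("type") == "shipment_updated":  # assumed event type
                sqs.send_message(
                    QueueUrl=QUEUE_URL,
                    MessageBody=json.dumps({"shipment_id": payload.get("id"),
                                            "status": payload.get("status")}),
                )
            processed += 1
        return {"processed": processed}

The same handler body could run inside a Docker-based service instead of Lambda; only the event plumbing changes.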

Requirements:

  • Embrace the AWS + DevOps philosophy and believe this is an innovative approach to creating and deploying products and technical solutions that require software engineers to be truly full-stack
  • Have high-quality standards, pay attention to details, and love writing beautiful, well-designed and tested code that can stand the test of time
  • Have built high-quality software, solved technical problems at scale and believe in shipping software iteratively and often
  • Proficient in and have delivered software in Java, Scala, and possibly other JVM languages
  • Developed a strong command of Computer Science fundamentals
Atlassian
  • Bengaluru, India

Atlassian is searching for a talented full-stack senior software engineer to join a new Marketplace team in Bengaluru, India responsible for one of the largest enterprise ecosystems in the world. The Marketplace is crucial not only to Atlassian's own success, but also to the success of our customers who find value across the 3500+ apps built by 1000+ partner developers.

This is a highly technical engineering position where you will have autonomy to dream up and implement great features and services. This role is based in Bengaluru, and reports to the Engineering Manager who is also located in Bengaluru. You will work closely with the Ecosystem teams based in Sydney, Australia and San Francisco, USA.

Want to unleash the full potential of Atlassian's customers and our partner developer community? Join our team to expand the footprint of your impact!

Where you'll make an impact:

You and your team will be responsible for the Atlassian Marketplace, which has over $350M in lifetime sales. You'll implement, operate, and optimize the code that powers the Atlassian Marketplace's storefront, user experience, data model, analytics, service APIs, and more.

You'll champion new features and improvements while also reducing technical debt through all phases of the software development lifecycle.

More about you?

- 5+ years of experience designing and building a production-level web application, including:

- Mastery of standard front-end technologies like modern HTML, CSS, JavaScript (we use React, Redux, Webpack and more), REST, and JSON

- Experience with Node.js

- Deep architectural understanding of web applications

- Great creative and innovative problem-solving skills

- Initiative and the ability to work independently and in a team

- Interest in learning more about new technologies (such as languages and frameworks)

If you've got some of these skills, even better:

- Hands-on experience working with or building e-commerce products or platforms

- Experience applying static typing in JavaScript (especially TypeScript or Flow)

- An understanding of functional programming, in particular Scala

- Experience with NoSQL databases (especially MongoDB)

- Experience monitoring and operating a production-level service

- Excitement about the latest trends in application design

- Experience with agile software development methodologies like Kanban or Scrum

More about our benefits

Whether you work in an office or a distributed team, Atlassian is highly collaborative and yes, fun! To support you at work (and play) we offer some fantastic perks: ample time off to relax and recharge, flexible working options, five paid volunteer days a year for your favourite cause, an annual allowance to support your learning & growth, unique ShipIt days, a company paid trip after five years and lots more.

More about Atlassian

Software is changing the world, and we're at the center of it all. With a customer list that reads like a who's who in tech, and a highly disruptive business model, we're advancing the art of team collaboration with products like Jira Software, Confluence, Bitbucket, and Trello. Driven by honest values, an amazing culture, and consistent revenue growth, we're out to unleash the potential of every team. From Amsterdam and Austin to Sydney and San Francisco, we're looking for people who are powered by passion and eager to do the best work of their lives in a highly autonomous yet collaborative, no B.S. environment.

Additional Information

We believe that the unique contributions of all Atlassians are the driver of our success. To make sure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate on the basis of race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status.

All your information will be kept confidential according to EEO guidelines.

Comcast
  • Englewood, CO

Comcast's Technology & Product organization works at the intersection of media and technology. Our innovative teams are continually developing and delivering products that transform the customer experience. From creating apps like TVGo to new features such as the Talking Guide on the X1 platform, we work every day to make a positive impact through innovation in the pursuit of building amazing products that are enjoyable, easy to use and accessible across all platforms. The team also develops and supports our evolving network architecture, including next-generation consumer systems and technologies, infrastructure and engineering, network integration and management tools, and technical standards.

Job Summary:

Software engineering skills combined with the demands of a high volume, highly-visible analytics platform make this an exciting challenge for the right candidate.

Are you passionate about digital media, entertainment, and software services? Do you like big challenges and working within a highly motivated team environment?

As a software engineer in the Data Experience (DX) team, you will research, develop, support, and deploy solutions in real-time distributed computing architectures. The DX big data team is a fast-moving team of world-class experts who are innovating in providing user-driven, self-service tools for making sense of, and making decisions with, high volumes of data. We are a team that thrives on big challenges, results, quality, and agility.

Who does the data engineer work with?

Big Data software engineering is a diverse collection of professionals who work with a variety of teams: other software engineering teams whose software integrates with analytics services, service delivery engineers who provide support for our product, testers, operational stakeholders with all manner of information needs, and executives who rely on big data for data-backed decisions.

What are some interesting problems you'll be working on?

Develop systems capable of processing millions of events per second and multi-billions of events per day, providing both a real-time and historical view into the operation of our wide array of systems. Design collection and enrichment system components for quality, timeliness, scale and reliability. Work on high-performance real-time data stores and a massive historical data store using best-of-breed and industry-leading technology.
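
A minimal sketch of one such pipeline, assuming PySpark Structured Streaming over a hypothetical Kafka topic named "events" (the topic, schema, brokers, and checkpoint path are illustrative assumptions, not the team's actual configuration):

    # Minimal sketch: windowed event counts over a Kafka stream with PySpark
    # Structured Streaming. Topic, schema, and paths are illustrative assumptions.
    from pyspark.sql import SparkSession, functions as F
    from pyspark.sql.types import StructType, StructField, StringType, TimestampType

    spark = SparkSession.builder.appName("event-counts").getOrCreate()

    schema = StructType([
        StructField("event_type", StringType()),
        StructField("event_time", TimestampType()),
    ])

    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "events")
        .load()
        .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
        .select("e.*")
    )

    counts = (
        events.withWatermark("event_time", "10 minutes")
        .groupBy(F.window("event_time", "1 minute"), "event_type")
        .count()
    )

    query = (
        counts.writeStream.outputMode("update")
        .format("console")  # a real sink (e.g. a real-time store) would replace this
        .option("checkpointLocation", "/tmp/checkpoints/event-counts")
        .start()
    )
    query.awaitTermination()

The same pattern scales from a laptop to a cluster; only the sink and the cluster resources change.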

Where can you make an impact?

Comcast DX is building the core components needed to drive the next generation of data platforms and data processing capability. Running this infrastructure, identifying trouble spots, and optimizing the overall user experience is a challenge that can only be met with a robust big data architecture capable of providing insights that would otherwise be drowned in an ocean of data.

Success in this role is best enabled by a broad mix of skills and interests ranging from traditional distributed systems software engineering prowess to the multidisciplinary field of data science.

Responsibilities:

  • Develop solutions to big data problems utilizing common tools found in the ecosystem.
  • Develop solutions to real-time and offline event collecting from various systems.
  • Develop, maintain, and perform analysis within a real-time architecture supporting large amounts of data from various sources.
  • Analyze massive amounts of data and help drive prototype ideas for new tools and products.
  • Design, build and support APIs and services that are exposed to other internal teams
  • Employ rigorous continuous delivery practices managed under an agile software development approach
  • Ensure a quality transition to production and solid production operation of the software

Skills & Requirements:

  • 5+ years programming experience
  • Bachelors or Masters in Computer Science, Statistics or related discipline
  • Experience in software development of large-scale distributed systems including proven track record of delivering backend systems that participate in a complex ecosystem.
  • Experience working on big data platforms in the cloud or on traditional Hadoop platforms
  • AWS Core
    • Kinesis
    • IAM
    • S3/Glacier
    • Glue
    • DynamoDB
    • SQS
    • Step Functions
    • Lambda
    • API Gateway
    • Cognito
    • EMR
    • RDS/Aurora
    • CloudFormation
    • CloudWatch
  • Languages
    • Python
    • Scala/Java
  • Spark
    • Batch, Streaming, ML
    • Performance tuning at scale
  • Hadoop
    • Hive
    • HiveQL
    • YARN
    • Pig
    • Sqoop
    • Ranger
  • Real-time Streaming
    • Kafka
    • Kinesis
  • Data File Formats (a brief PySpark sketch follows this requirements list):
    • Avro, Parquet, JSON, ORC, CSV, XML
  • NoSQL / SQL
  • Microservice development
  • RESTful API development
  • CI/CD pipelines
    • Jenkins / GoCD
    • AWS
      • CodeCommit
      • CodeBuild
      • CodeDeploy
      • CodePipeline
  • Containers
    • Docker / Kubernetes
    • AWS
      • Lambda
      • Fargate
      • EKS
  • Analytics
    • Presto / Athena
    • QuickSight
    • Tableau
  • Test-driven development/test automation, continuous integration, and deployment automation
  • Enjoy working with data: data analysis, data quality, reporting, and visualization
  • Good communicator, able to analyze complex issues and technologies and articulate them clearly and engagingly.
  • Great design and problem-solving skills, with a strong bias for architecting at scale.
  • Adaptable, proactive and willing to take ownership.
  • Keen attention to detail and high level of commitment.
  • Good understanding of any of the following: advanced mathematics, statistics, and probability.
  • Experience and comfort working in agile/iterative development and delivery environments, where requirements change quickly and our team constantly adapts to moving targets.
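
As a brief illustration of working across the data file formats listed above, here is a short PySpark sketch that reads JSON and writes Parquet and Avro. The paths and the event_date partition column are illustrative assumptions, and the Avro writer assumes the spark-avro package is available:

    # Minimal sketch: convert a JSON dataset to Parquet and Avro with Spark.
    # Paths and the partition column are assumptions; Avro output needs spark-avro.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("format-conversion").getOrCreate()

    df = spark.read.json("s3://example-bucket/raw/events/")  # hypothetical path

    (df.write.mode("overwrite")
       .partitionBy("event_date")  # assumed column in the source data
       .parquet("s3://example-bucket/curated/events_parquet/"))

    (df.write.mode("overwrite")
       .format("avro")
       .save("s3://example-bucket/curated/events_avro/"))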

About Comcast DX (Data Experience):

Data Experience (DX) is a results-driven data platform research and engineering team responsible for the delivery of the multi-tenant data infrastructure and platforms necessary to support our data-driven culture and organization. The mission of DX is to gather, organize, and make sense of Comcast data, and to make it universally accessible in order to empower, enable, and transform Comcast into an insight-driven organization. Members of the DX team define and leverage industry best practices, work on extremely large-scale data problems, design and develop resilient and highly robust distributed data organizing and processing systems and pipelines, and research, engineer, and apply data science and machine intelligence disciplines.

Comcast is an EOE/Veterans/Disabled/LGBT employer

Perficient, Inc.
  • Dallas, TX
At Perficient you'll deliver mission-critical technology and business solutions to Fortune 500 companies and some of the most recognized brands on the planet. And you'll do it with cutting-edge technologies, thanks to our close partnerships with the world's biggest vendors. Our network of offices across North America, as well as locations in India and China, will give you the opportunity to spread your wings, too.
We're proud to be publicly recognized as a Top Workplace year after year. This is due, in no small part, to our entrepreneurial attitude and collaborative spirit that sets us apart and keeps our colleagues impassioned, driven, and fulfilled.
Perficient currently has a career opportunity for a Senior MapR Solutions Architect.
Job Overview
One of our large clients has made a strategic decision to move all order management and sales data from their existing EDW into the MapR platform. The focus is fast ingestion and streaming analytics. This is a multiyear roadmap with many components that will piece into a larger Data Management Platform. A Perficient subject matter expert will work with the client team to move this data into the new environment in a fashion that meets the requirements for applications and analytics.
A Senior Solutions Architect is expected to be knowledgeable in two or more technologies within (a given Solutions/Practice area). The Solutions Architect may or may not have a programming background, but will have expert infrastructure architecture, client presales / presentation, team management and thought leadership skills.
You will provide best-fit architectural solutions for one or more projects; you will assist in defining scope and sizing of work; and anchor Proof of Concept developments. You will provide solution architecture for the business problem, platform integration with third party services, designing and developing complex features for clients' business needs. You will collaborate with some of the best talent in the industry to create and implement innovative high quality solutions, participate in Sales and various pursuits focused on our clients' business needs.
You will also contribute in a variety of roles in thought leadership, mentorship, systems analysis, architecture, design, configuration, testing, debugging, and documentation. You will challenge your leading edge solutions, consultative and business skills through the diversity of work in multiple industry domains. This role is considered part of the Business Unit Senior Leadership team and may mentor junior architects and other delivery team members.
Responsibilities
  • Provide vision and leadership to define the core technologies necessary to meet client needs including: development tools and methodologies, package solutions, systems architecture, security techniques, and emerging technologies
  • Hands-on architect with very strong MapR, HBase, and Hive skills
  • Ability to architect and design end-to-end data architecture (ingestion to semantic layer) and identify the best ways to export the data to the reporting/analytic layer
  • Recommend best practices and approach for distributed architecture (does not have to be MapR-specific)
  • Most recent project/job should be as the architect of an end-to-end Big Data implementation that is deployed
  • Able to articulate best practices for building a framework across the Data layer (ingesting, curating), Aggregation layer, and Reporting layer
  • Understand and articulate DW principles on the Hadoop landscape (not just the data lake)
  • Performed data model design based on HBase and Hive (a brief sketch follows this list)
  • Background in database design for DW on RDBMS is preferred
  • Ability to look at the end-to-end picture and suggest physical design remediation on Hadoop
  • Ability to design solutions for different use cases
  • Worked with different data formats (Parquet, Avro, JSON, XML, etc.)
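
As a minimal sketch of the ingestion-to-Hive layer described above (the database, table, partition column, and landing path are assumptions for illustration, not the client's actual model), assuming PySpark with Hive support enabled:

    # Minimal sketch: land raw Parquet data into a partitioned Hive table.
    # Database, table, partition column, and paths are illustrative assumptions.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder.appName("orders-ingest")
        .enableHiveSupport()
        .getOrCreate()
    )

    orders = spark.read.parquet("/data/raw/orders/")  # hypothetical landing path

    (orders.write.mode("append")
       .partitionBy("order_date")             # assumed partition column
       .format("parquet")
       .saveAsTable("sales_curated.orders"))  # hypothetical Hive database.table

Partitioning the curated table by a date column keeps downstream Hive queries from scanning the full history on every read.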
Qualifications
  • Apache framework (Kafka, Spark, Hive, HBase)
  • MapR or similar distribution (optional)
  • Java
  • Data formats (Parquet, Avro, JSON, XML, etc.)
  • Microservices
Responsibilities
  • At least 10+ years of experience in designing, architecting and implementing large scale data processing/data storage/data distribution systems
  • At least 3+ years of experience on working with large projects including the most recent project in the MapR platform
  • At least 5+ years of Hands-on administration, configuration management, monitoring, performance tuning of Hadoop/Distributed platforms
  • Should have experience designing service management, orchestration, monitoring and management requirements of cloud platform.
  • Hands-on experience with Hadoop, Teradata (or other MPP RDBMS), MapReduce, Hive, Sqoop, Splunk, STORM, SPARK, Kafka and HBASE (At least 2 years)
  • Experience with end-to-end solution architecture for data capabilities including:
  • Experience with ELT/ETL development, patterns and tooling (Informatica, Talend)
  • Ability to produce high quality work products under pressure and within deadlines with specific references
  • Very strong communication, solutioning, and client-facing skills, especially with non-technical business users
  • At least 5+ years of working with large multi-vendor environment with multiple teams and people as a part of the project
  • At least 5+ years of working with a complex Big Data environment
  • 5+ years of experience with Team Foundation Server/JIRA/GitHub and other code management toolsets
Preferred Skills And Education
Master's degree in Computer Science or related field
Certification in Azure platform
Perficient full-time employees receive complete and competitive benefits. We offer a collaborative work environment, competitive compensation, generous work/life opportunities and an outstanding benefits package that includes paid time off plus holidays. In addition, all colleagues are eligible for a number of rewards and recognition programs including billable bonus opportunities. Encouraging a healthy work/life balance and providing our colleagues great benefits are just part of what makes Perficient a great place to work.
More About Perficient
Perficient is the leading digital transformation consulting firm serving Global 2000 and enterprise customers throughout North America. With unparalleled information technology, management consulting and creative capabilities, Perficient and its Perficient Digital agency deliver vision, execution and value with outstanding digital experience, business optimization and industry solutions.
Our work enables clients to improve productivity and competitiveness; grow and strengthen relationships with customers, suppliers and partners; and reduce costs. Perficient's professionals serve clients from a network of offices across North America and offshore locations in India and China. Traded on the Nasdaq Global Select Market, Perficient is a member of the Russell 2000 index and the S&P SmallCap 600 index.
Perficient is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law.
Disclaimer: The above statements are not intended to be a complete statement of job content, rather to act as a guide to the essential functions performed by the employee assigned to this classification. Management retains the discretion to add or change the duties of the position at any time.
Select work authorization questions to ask when applicants apply
  • Are you legally authorized to work in the United States?
  • Will you now, or in the future, require sponsorship for employment visa status (e.g. H-1B visa status)?
Perficient, Inc.
  • Houston, TX
At Perficient you'll deliver mission-critical technology and business solutions to Fortune 500 companies and some of the most recognized brands on the planet. And you'll do it with cutting-edge technologies, thanks to our close partnerships with the world's biggest vendors. Our network of offices across North America, as well as locations in India and China, will give you the opportunity to spread your wings, too.
We're proud to be publicly recognized as a Top Workplace year after year. This is due, in no small part, to our entrepreneurial attitude and collaborative spirit that sets us apart and keeps our colleagues impassioned, driven, and fulfilled.
Perficient currently has a career opportunity for a Senior MapR Solutions Architect.
Job Overview
One of our large clients has made a strategic decision to move all order management and sales data from their existing EDW into the MapR platform. The focus is fast ingestion and streaming analytics. This is a multiyear roadmap with many components that will piece into a larger Data Management Platform. A Perficient subject matter expert will work with the client team to move this data into the new environment in a fashion that meets the requirements for applications and analytics.
A Senior Solutions Architect is expected to be knowledgeable in two or more technologies within (a given Solutions/Practice area). The Solutions Architect may or may not have a programming background, but will have expert infrastructure architecture, client presales / presentation, team management and thought leadership skills.
You will provide best-fit architectural solutions for one or more projects; you will assist in defining scope and sizing of work; and anchor Proof of Concept developments. You will provide solution architecture for the business problem, platform integration with third party services, designing and developing complex features for clients' business needs. You will collaborate with some of the best talent in the industry to create and implement innovative high quality solutions, participate in Sales and various pursuits focused on our clients' business needs.
You will also contribute in a variety of roles in thought leadership, mentorship, systems analysis, architecture, design, configuration, testing, debugging, and documentation. You will challenge your leading edge solutions, consultative and business skills through the diversity of work in multiple industry domains. This role is considered part of the Business Unit Senior Leadership team and may mentor junior architects and other delivery team members.
Responsibilities
  • Provide vision and leadership to define the core technologies necessary to meet client needs including: development tools and methodologies, package solutions, systems architecture, security techniques, and emerging technologies
  • Hands-on architect with very strong MapR, HBase, and Hive skills
  • Ability to architect and design end-to-end data architecture (ingestion to semantic layer) and identify the best ways to export the data to the reporting/analytic layer
  • Recommend best practices and approach for distributed architecture (does not have to be MapR-specific)
  • Most recent project/job should be as the architect of an end-to-end Big Data implementation that is deployed
  • Able to articulate best practices for building a framework across the Data layer (ingesting, curating), Aggregation layer, and Reporting layer
  • Understand and articulate DW principles on the Hadoop landscape (not just the data lake)
  • Performed data model design based on HBase and Hive
  • Background in database design for DW on RDBMS is preferred
  • Ability to look at the end-to-end picture and suggest physical design remediation on Hadoop
  • Ability to design solutions for different use cases
  • Worked with different data formats (Parquet, Avro, JSON, XML, etc.)
Qualifications
  • Apache framework (Kafka, Spark, Hive, HBase)
  • MapR or similar distribution (optional)
  • Java
  • Data formats (Parquet, Avro, JSON, XML, etc.)
  • Microservices
Responsibilities
  • At least 10+ years of experience in designing, architecting and implementing large scale data processing/data storage/data distribution systems
  • At least 3+ years of experience on working with large projects including the most recent project in the MapR platform
  • At least 5+ years of Hands-on administration, configuration management, monitoring, performance tuning of Hadoop/Distributed platforms
  • Should have experience designing service management, orchestration, monitoring and management requirements of cloud platform.
  • Hands-on experience with Hadoop, Teradata (or other MPP RDBMS), MapReduce, Hive, Sqoop, Splunk, STORM, SPARK, Kafka and HBASE (At least 2 years)
  • Experience with end-to-end solution architecture for data capabilities including:
  • Experience with ELT/ETL development, patterns and tooling (Informatica, Talend)
  • Ability to produce high quality work products under pressure and within deadlines with specific references
  • Very strong communication, solutioning, and client-facing skills, especially with non-technical business users
  • At least 5+ years of working with large multi-vendor environment with multiple teams and people as a part of the project
  • At least 5+ years of working with a complex Big Data environment
  • 5+ years of experience with Team Foundation Server/JIRA/GitHub and other code management toolsets
Preferred Skills And Education
Master's degree in Computer Science or related field
Certification in Azure platform
Perficient full-time employees receive complete and competitive benefits. We offer a collaborative work environment, competitive compensation, generous work/life opportunities and an outstanding benefits package that includes paid time off plus holidays. In addition, all colleagues are eligible for a number of rewards and recognition programs including billable bonus opportunities. Encouraging a healthy work/life balance and providing our colleagues great benefits are just part of what makes Perficient a great place to work.
More About Perficient
Perficient is the leading digital transformation consulting firm serving Global 2000 and enterprise customers throughout North America. With unparalleled information technology, management consulting and creative capabilities, Perficient and its Perficient Digital agency deliver vision, execution and value with outstanding digital experience, business optimization and industry solutions.
Our work enables clients to improve productivity and competitiveness; grow and strengthen relationships with customers, suppliers and partners; and reduce costs. Perficient's professionals serve clients from a network of offices across North America and offshore locations in India and China. Traded on the Nasdaq Global Select Market, Perficient is a member of the Russell 2000 index and the S&P SmallCap 600 index.
Perficient is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law.
Disclaimer: The above statements are not intended to be a complete statement of job content, rather to act as a guide to the essential functions performed by the employee assigned to this classification. Management retains the discretion to add or change the duties of the position at any time.
Select work authorization questions to ask when applicants apply
  • Are you legally authorized to work in the United States?
  • Will you now, or in the future, require sponsorship for employment visa status (e.g. H-1B visa status)?
Perficient, Inc.
  • Dallas, TX
At Perficient you'll deliver mission-critical technology and business solutions to Fortune 500 companies and some of the most recognized brands on the planet. And you'll do it with cutting-edge technologies, thanks to our close partnerships with the world's biggest vendors. Our network of offices across North America, as well as locations in India and China, will give you the opportunity to spread your wings, too.
We're proud to be publicly recognized as a Top Workplace year after year. This is due, in no small part, to our entrepreneurial attitude and collaborative spirit that sets us apart and keeps our colleagues impassioned, driven, and fulfilled.
Perficient currently has a career opportunity for a Big Data Engineer(Microservices Developer),
Job Overview
One of our large clients has made a strategic decision to move all order management and sales data from their existing EDW into the MapR platform. The focus is fast ingestion and streaming analytics. This is a multiyear roadmap with many components that will piece into a larger Data Management Platform. A Perficient subject matter expert will work with the client team to move this data into the new environment in a fashion that meets the requirements for applications and analytics. As a lead developer, you will be responsible for microservices development.
Responsibilities
  • Ability to focus on frameworks for DevOps, ingestion, and reading/writing into HDFS
  • Worked with different data formats (Parquet, Avro, JSON, XML, etc.)
  • Worked on containerized solutions (Spring Boot, Docker, Kubernetes)
  • Provide end-to-end vision and hands-on experience with the MapR platform, especially best practices around Hive and HBase
  • Should be a rock star in HBase and Hive best practices
  • Translate, load and present disparate data sets in multiple formats and from multiple sources, including JSON, Avro, text files, Kafka queues, and log data (a minimal sketch follows this list)
  • Lead workshops with many teams to define data ingestion, validation, transformation, data engineering, and data modeling
  • Performance-tune Hive and HBase jobs with a focus on ingestion
  • Design and develop open source platform components using Spark, Sqoop, Java, Oozie, Kafka, Python, and other components
  • Lead the technical planning & requirements gathering phases including estimate, develop, test, manage projects, architect and deliver complex projects
  • Participate and lead in design sessions, demos and prototype sessions, testing and training workshops with business users and other IT associates
  • Contribute to the thought capital through the creation of executive presentations, architecture documents and articulate them to executives through presentations
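
As a rough sketch of ingesting one of those sources, here is a minimal Python consumer (using the kafka-python package) that reads JSON events from a Kafka topic and appends normalized records to a landing file. The topic, brokers, field names, and paths are illustrative assumptions, not the client's actual configuration:

    # Minimal sketch: consume JSON events from Kafka and land them as JSON lines.
    # Topic, brokers, fields, and paths are illustrative assumptions.
    import json

    from kafka import KafkaConsumer  # kafka-python package

    consumer = KafkaConsumer(
        "orders",                                   # hypothetical topic
        bootstrap_servers="broker:9092",
        value_deserializer=lambda m: json.loads(m.decode("utf-8")),
        auto_offset_reset="earliest",
    )

    with open("/data/landing/orders.jsonl", "a") as out:
        for message in consumer:
            record = message.value
            # Light validation/normalization before landing the record.
            if "order_id" in record:
                out.write(json.dumps({"order_id": record["order_id"],
                                      "status": record.get("status", "unknown")}) + "\n")

In practice the landing target would be HDFS or a Hive staging table rather than a local file, but the validate-then-land shape is the same.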
Qualifications
    • Spring, Spring Boot, Hibernate/JPA, Pivotal, Kafka, NoSQL, Hadoop, containers (Docker)
    • At least 3+ years of experience on working with large projects including the most recent project in the MapR platform
    • At least 5+ years of Hands-on administration, configuration management, monitoring, performance tuning of Hadoop/Distributed platforms
    • Should have experience designing service management, orchestration, monitoring and management requirements of cloud platform.
    • Hands-on experience with Hadoop, Teradata (or other MPP RDBMS), MapReduce, Hive, Sqoop, Splunk, STORM, SPARK, Kafka and HBASE (At least 2 years)
    • Experience with end-to-end solution architecture for data capabilities including:
    • Experience with ELT/ETL development, patterns and tooling (Informatica, Talend)
    • Ability to produce high quality work products under pressure and within deadlines with specific references
    • Very strong communication, solutioning, and client-facing skills, especially with non-technical business users
    • At least 5+ years of working with large multi-vendor environment with multiple teams and people as a part of the project
    • At least 5+ years of working with a complex Big Data environment
    • 5+ years of experience with Team Foundation Server/JIRA/GitHub and other code management toolsets
Preferred Skills And Education
Master's degree in Computer Science or related field
Certification in Azure platform
Perficient full-time employees receive complete and competitive benefits. We offer a collaborative work environment, competitive compensation, generous work/life opportunities and an outstanding benefits package that includes paid time off plus holidays. In addition, all colleagues are eligible for a number of rewards and recognition programs including billable bonus opportunities. Encouraging a healthy work/life balance and providing our colleagues great benefits are just part of what makes Perficient a great place to work.
More About Perficient
Perficient is the leading digital transformation consulting firm serving Global 2000 and enterprise customers throughout North America. With unparalleled information technology, management consulting and creative capabilities, Perficient and its Perficient Digital agency deliver vision, execution and value with outstanding digital experience, business optimization and industry solutions.
Our work enables clients to improve productivity and competitiveness; grow and strengthen relationships with customers, suppliers and partners; and reduce costs. Perficient's professionals serve clients from a network of offices across North America and offshore locations in India and China. Traded on the Nasdaq Global Select Market, Perficient is a member of the Russell 2000 index and the S&P SmallCap 600 index.
Perficient is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law.
Disclaimer: The above statements are not intended to be a complete statement of job content, rather to act as a guide to the essential functions performed by the employee assigned to this classification. Management retains the discretion to add or change the duties of the position at any time.
Select work authorization questions to ask when applicants apply
  • Are you legally authorized to work in the United States?
  • Will you now, or in the future, require sponsorship for employment visa status (e.g. H-1B visa status)?
Perficient, Inc.
  • San Diego, CA
At Perficient you'll deliver mission-critical technology and business solutions to Fortune 500 companies and some of the most recognized brands on the planet. And you'll do it with cutting-edge technologies, thanks to our close partnerships with the world's biggest vendors. Our network of offices across North America, as well as locations in India and China, will give you the opportunity to spread your wings, too.
We're proud to be publicly recognized as a Top Workplace year after year. This is due, in no small part, to our entrepreneurial attitude and collaborative spirit that sets us apart and keeps our colleagues impassioned, driven, and fulfilled.
Perficient currently has a career opportunity for a Senior MapR Solutions Architect.
Job Overview
One of our large clients has made a strategic decision to move all order management and sales data from their existing EDW into the MapR platform. The focus is fast ingestion and streaming analytics. This is a multiyear roadmap with many components that will piece into a larger Data Management Platform. A Perficient subject matter expert will work with the client team to move this data into the new environment in a fashion that meets the requirements for applications and analytics.
A Senior Solutions Architect is expected to be knowledgeable in two or more technologies within (a given Solutions/Practice area). The Solutions Architect may or may not have a programming background, but will have expert infrastructure architecture, client presales / presentation, team management and thought leadership skills.
You will provide best-fit architectural solutions for one or more projects; you will assist in defining scope and sizing of work; and anchor Proof of Concept developments. You will provide solution architecture for the business problem, platform integration with third party services, designing and developing complex features for clients' business needs. You will collaborate with some of the best talent in the industry to create and implement innovative high quality solutions, participate in Sales and various pursuits focused on our clients' business needs.
You will also contribute in a variety of roles in thought leadership, mentorship, systems analysis, architecture, design, configuration, testing, debugging, and documentation. You will challenge your leading edge solutions, consultative and business skills through the diversity of work in multiple industry domains. This role is considered part of the Business Unit Senior Leadership team and may mentor junior architects and other delivery team members.
Responsibilities
  • Provide vision and leadership to define the core technologies necessary to meet client needs including: development tools and methodologies, package solutions, systems architecture, security techniques, and emerging technologies
  • Hands-on architect with very strong MapR, HBase, and Hive skills
  • Ability to architect and design end-to-end data architecture (ingestion to semantic layer) and identify the best ways to export the data to the reporting/analytic layer
  • Recommend best practices and approach for distributed architecture (does not have to be MapR-specific)
  • Most recent project/job should be as the architect of an end-to-end Big Data implementation that is deployed
  • Able to articulate best practices for building a framework across the Data layer (ingesting, curating), Aggregation layer, and Reporting layer
  • Understand and articulate DW principles on the Hadoop landscape (not just the data lake)
  • Performed data model design based on HBase and Hive
  • Background in database design for DW on RDBMS is preferred
  • Ability to look at the end-to-end picture and suggest physical design remediation on Hadoop
  • Ability to design solutions for different use cases
  • Worked with different data formats (Parquet, Avro, JSON, XML, etc.)
Qualifications
  • Apache framework (Kafka, Spark, Hive, HBase)
  • MapR or similar distribution (optional)
  • Java
  • Data formats (Parquet, Avro, JSON, XML, etc.)
  • Microservices
Responsibilities
  • At least 10+ years of experience in designing, architecting and implementing large scale data processing/data storage/data distribution systems
  • At least 3+ years of experience on working with large projects including the most recent project in the MapR platform
  • At least 5+ years of Hands-on administration, configuration management, monitoring, performance tuning of Hadoop/Distributed platforms
  • Should have experience designing service management, orchestration, monitoring and management requirements of cloud platform.
  • Hands-on experience with Hadoop, Teradata (or other MPP RDBMS), MapReduce, Hive, Sqoop, Splunk, STORM, SPARK, Kafka and HBASE (At least 2 years)
  • Experience with end-to-end solution architecture for data capabilities including:
  • Experience with ELT/ETL development, patterns and tooling (Informatica, Talend)
  • Ability to produce high quality work products under pressure and within deadlines with specific references
  • Very strong communication, solutioning, and client-facing skills, especially with non-technical business users
  • At least 5+ years of working with large multi-vendor environment with multiple teams and people as a part of the project
  • At least 5+ years of working with a complex Big Data environment
  • 5+ years of experience with Team Foundation Server/JIRA/GitHub and other code management toolsets
Preferred Skills And Education
Master's degree in Computer Science or related field
Certification in Azure platform
Perficient full-time employees receive complete and competitive benefits. We offer a collaborative work environment, competitive compensation, generous work/life opportunities and an outstanding benefits package that includes paid time off plus holidays. In addition, all colleagues are eligible for a number of rewards and recognition programs including billable bonus opportunities. Encouraging a healthy work/life balance and providing our colleagues great benefits are just part of what makes Perficient a great place to work.
More About Perficient
Perficient is the leading digital transformation consulting firm serving Global 2000 and enterprise customers throughout North America. With unparalleled information technology, management consulting and creative capabilities, Perficient and its Perficient Digital agency deliver vision, execution and value with outstanding digital experience, business optimization and industry solutions.
Our work enables clients to improve productivity and competitiveness; grow and strengthen relationships with customers, suppliers and partners; and reduce costs. Perficient's professionals serve clients from a network of offices across North America and offshore locations in India and China. Traded on the Nasdaq Global Select Market, Perficient is a member of the Russell 2000 index and the S&P SmallCap 600 index.
Perficient is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by law.
Disclaimer: The above statements are not intended to be a complete statement of job content, rather to act as a guide to the essential functions performed by the employee assigned to this classification. Management retains the discretion to add or change the duties of the position at any time.
Select work authorization questions to ask when applicants apply
  • Are you legally authorized to work in the United States?
  • Will you now, or in the future, require sponsorship for employment visa status (e.g. H-1B visa status)?