About Us

Parallon Technology Solutions, LLC (PTS) provides EHR clinical, IT help desk, application support, IT managed services, hosting, technical staffing, and strategic IT consulting services to hospitals, outpatient facilities, and large physician groups nationwide. With a team of over 400 clinical, financial, and technical professionals, PTS has implemented EHR systems in more than 300 facilities. PTS offers staffing and remote support services for all major acute and ambulatory EHR platforms as well as their ancillary applications.

Lead Big Data Engineer

Location: Nashville, TN
Date Posted: 08-31-2018
Classification: Permanent 
Level: Consultant
Job ID: 11317601
 
At Parallon Technology Solutions (PTS), we serve and enable those who care for and improve human life in their communities. Visit our website to learn more about us!

Parallon Technology Solutions is seeking a Lead Big Data Engineer to join our team in Nashville, TN.

Responsibilities:
This role provides leadership and deep technical expertise in all aspects of solution design and application development for specific business environments. It focuses on setting technical direction for groups of applications and related technologies, and takes responsibility for technically robust solutions that satisfy all business, architecture, and technology constraints.
  • Build and support a Hadoop-based ecosystem designed for enterprise-wide analysis of structured, semi-structured, and unstructured data.
  • Manage and optimize Hadoop/Spark clusters, which may include many large HBase instances.
  • Support regular requests to move data from one cluster to another.
  • Manage production support teams to ensure service levels are maintained and any interruption is resolved in a timely fashion.
  • Bring new data sources into HDFS, then transform and load them into downstream databases (see the sketch after this list).
  • Work collaboratively with Data Scientists and business and IT leaders throughout the company to understand Big Data needs and use cases.
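
For illustration only, a minimal PySpark sketch of the kind of ingest-and-transform task described above. Everything here is hypothetical: the paths, column names, and table names are invented, and a real pipeline would depend on the source systems involved.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Hypothetical example: pick up a raw CSV feed that has landed in HDFS,
    # apply a light transform, and load the result into a Hive-backed table.
    spark = (SparkSession.builder
             .appName("ingest-sketch")
             .enableHiveSupport()
             .getOrCreate())

    # Read the raw drop from HDFS (path and schema are assumptions).
    raw = spark.read.csv("hdfs:///landing/feed/2018-08-31/",
                         header=True, inferSchema=True)

    # Standardize a timestamp column and drop rows missing a key field.
    clean = (raw
             .withColumn("event_ts", F.to_timestamp("event_ts"))
             .dropna(subset=["record_id"]))

    # Append the result to a downstream analytics table.
    clean.write.mode("append").saveAsTable("analytics.feed_records")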

Requirements:
  • Bachelor’s degree in Computer Science with at least 7 years of IT work experience.
  • Strong understanding of best practices and standards for Hadoop application design and implementation.
  • 2 years of hands-on experience with the Cloudera Distribution of Hadoop (CDH), including experience with many of the following components:
    • Hadoop, MapReduce, Spark, Impala, Hive, Solr, YARN
    • HBase or Cassandra
    • Kafka, Flume, Storm, ZooKeeper
    • Java, Python, or Scala
    • SQL, JSON, XML
    • RegEx
    • Sqoop
  • Experience with unstructured data.
  • Data modeling experience using Big Data technologies.
  • Experience developing MapReduce programs with Apache Hadoop.
  • Experience deploying Big Data technologies to production.
  • Understanding of the Lambda architecture and real-time streaming.
  • Ability to multitask and to balance competing priorities.
  • Strong practical experience in agile application development, file systems management, and DevOps discipline and practice, using short-cycle iterations to deliver continuous business value.
  • Expertise in planning, implementing, supporting, and tuning Hadoop ecosystem environments using a variety of tools and techniques.
  • Knowledge of all facets of Hadoop ecosystem development including ideation, design, implementation, tuning, and operational support.
  • Ability to define and apply best-practice techniques and to impose order in a fast-changing environment; strong problem-solving skills.
  • Strong verbal, written, and interpersonal skills, including a desire to work within a highly matrixed, team-oriented environment.
A successful candidate may have:
  • Experience in the healthcare domain
  • Experience with patient data
  • Experience with predictive models
  • Experience with natural language processing (NLP)
  • Experience with social media data
Hardware/Operating Systems:
  • Linux
  • UNIX
  • Distributed, highly-scalable processing environments
  • Networking – a basic understanding of networking as it relates to distributed server and file system connectivity, and the ability to troubleshoot connectivity errors
Databases:
  • RDBMS – Teradata
  • NoSQL – HBase, Cassandra, MongoDB, in-memory, columnar, and other emerging technologies
Tools:
  • Other Languages – Java, Python, Scala, R
  • Build Systems – Maven, Ant
  • Source Control Systems – Git, Mercurial
  • Continuous Integration Systems – Jenkins or Bamboo
  • Config/Orchestration – ZooKeeper, Puppet, Salt, Ansible, Chef, Oozie, Pig
  • Ability to integrate tools outside of the core Hadoop ecosystem
Connect with Parallon Technology Solutions on LinkedIn, Facebook, and Twitter!