The Senior Software Engineer codes software applications based on business requirements. The Senior Software Engineer's work assignments involve moderately complex to complex issues where the analysis of situations or data requires an in-depth evaluation of variable factors.
Summary of Duties & Job Description
Our Platform Engineering team is looking for a Data Engineer who will work on collecting, storing, processing, and analyzing very large data sets. The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them. You will also be responsible for integrating them with the architecture used across teams in our group.
Selecting and integrating any Big Data tools and frameworks required to provide requested capabilities.
Implementing ETL processes
Monitoring performance and advising on any necessary infrastructure changes
What It's Like To Work Here
We take immense pride in cultivating a strong, person-first culture, always looking for ways to be intentionally uncommon. Our team is made up of talented, creative, kind, funny, and energetic folks wired for continuous improvement.
To be great at Humana, we find our team members have these things in common:
Energy gained from working in a fast-paced, creative environment
Decision making that blends data-driven insights with intuition
Ability to multitask and handle multiple projects concurrently
Resilience and positivity, able to address setbacks and bounce back quickly
Resourcefulness, discovering creative ways to get things done
Joy in making an immediate and positive impact
Skills and Qualifications
Knowledge of various persistence technologies (RDBMS, NoSQL, HDFS, Cassandra, Redis)
Understanding of data catalogs, data governance, and data lineage
Experience with security and authentication in data platforms
Bachelor's Degree or above in Computer Science, Bioengineering, Electronics and Electrical Engineering, or a related field
5+ years of experience in data engineering
Experience building stream-processing systems using solutions such as Spark Streaming or Flink
Experience with integration of data from multiple data sources
Experience building data lakes and data warehouses on any of the major cloud providers (GCP, AWS, or Azure) is highly desirable
Familiarity with the Hadoop ecosystem (HDFS, HBase, etc.), especially Spark
Good knowledge of Big Data querying tools
Knowledge of various ETL techniques
Knowledge of messaging systems, such as Kafka or RabbitMQ
Scheduled Weekly Hours