Description: Software Engineer / Big Data Hadoop Engineer
Location: Charlotte, NC
Duration: 12 months (hybrid)
Position Summary: As a Software Engineer / Big Data Hadoop Engineer, you will consult on complex initiatives with broad impact, focusing on large-scale planning for Software Engineering. Your role involves reviewing and analyzing multi-faceted, larger-scale, or longer-term Software Engineering challenges. You'll contribute to resolving complex situations, drawing on your solid understanding of function, policies, procedures, and compliance requirements. Collaboration with client personnel is essential.
Job Expectations:
Design and implement an automated Spark-based framework for data ingestion, transformation, and consumption.
Implement security protocols, including Kerberos authentication, data encryption at rest, and role-based access control via Apache Ranger.
Develop an automated testing framework for data validation.
Enhance existing Spark-based frameworks to address tool limitations and meet consumer expectations.
Build a high-performing, scalable data pipeline platform using Hadoop, Apache Spark, MongoDB, Kafka, and object storage architecture.
Collaborate with Infrastructure Engineers, System Administrators, application partners, Architects, Data Analysts, and Modelers.
Work effectively in a hybrid environment with legacy ETL and Data Warehouse applications alongside new big-data applications.
Support ongoing data management efforts across Development, QA, and Production environments.
Provide tool support and assist consumers in troubleshooting pipeline issues.
Leverage industry trends to create best-in-class technology for competitive advantage.
Required Qualifications:
5+ years of software engineering experience
5+ years of experience delivering complex enterprise-wide IT solutions
5+ years of experience with ETL, data warehouse, and data analytics on big-data architecture (e.g., Hadoop)
5+ years of Apache Spark design and development experience (Scala, Java, Python, or the DataFrames API)
6+ years of ETL (Extract, Transform, Load) programming experience
2+ years of Kafka or equivalent experience
2+ years of NoSQL DB experience (e.g., Couchbase, MongoDB)
Strong SQL skills and experience with performance tuning
Desired Qualifications:
3+ years of Agile experience
2+ years of reporting or analytics experience
Familiarity with operational risk, credit risk, or compliance domains
Experience integrating with RESTful APIs
Exposure to CI/CD tools
Contact: rwest@judge.com
This job and many more are available through The Judge Group. Find us on the web at www.judge.com