We are looking for self-motivated, responsive individuals who are passionate about data. You will build data solutions to address complex business questions, taking data through its full lifecycle, from processing pipelines and data infrastructure to the creation of datasets and data products.
Core Responsibilities:
Design and build Extract-Transform-Load (ETL) jobs, using standard tooling, to support the Enterprise Data Warehouse.
Partner with business teams to understand business requirements, assess the impact on existing systems, and design and implement new data provisioning pipelines for the Finance/External Reporting domains.
Monitor and troubleshoot operational or data issues in the data pipelines.
Drive architectural plans and implementations for future data storage, reporting, and analytic solutions.
Qualifications:
6-8 years of experience in data engineering.
3+ years of experience implementing big data processing technologies such as AWS/Azure/GCP, Apache Spark, and Python.
Experience writing and optimizing SQL queries in a business environment with large-scale, complex datasets.
Working knowledge of higher-abstraction ETL tooling (e.g., AWS Glue Studio, Talend, Informatica).
Detailed knowledge of databases such as Oracle, DB2, and SQL Server, along with data warehouse concepts, technical architecture, infrastructure components, ETL, and reporting/analytic tools and environments.
Hands-on experience with cloud technologies (AWS, Google Cloud, Azure), including real-time and batch data ingestion tools, CI/CD processes, cloud architecture, and big data implementations.
Must have: Databricks certification and Azure certification.
Nice to Have:
AWS certification, or working knowledge of Glue, Lambda, S3, Athena, Redshift, and Snowflake.
Strong verbal and written communication skills, and excellent organizational and prioritization skills.