Immediate Start
As a Senior Data Engineer, you will lead the design and development of robust data pipelines, integrating and transforming data from diverse sources such as APIs, relational databases, and files. Collaborating closely with business and analytics teams, you will ensure high-quality deliverables that meet the strategic needs of our organization. Your expertise will be pivotal in maintaining the quality, reliability, security, and governance of the ingested data, thereby driving our mission of Collaboration, Innovation, & Transformation.
Key Responsibilities:
Develop and maintain data pipelines.
Integrate data from various sources (APIs, relational databases, files, etc.).
Collaborate with business and analytics teams to understand data requirements.
Ensure quality, reliability, security and governance of the ingested data.
Follow modern DataOps practices such as code versioning, data tests, and CI/CD.
Document processes and best practices in data engineering.
Required Skills and Qualifications:
Must-have Skills:
Proven experience building and managing large-scale data pipelines in Databricks (PySpark, Delta Lake, SQL); an illustrative sketch of this kind of work appears after this list.
Strong programming skills in Python and SQL for data processing and transformation.
Deep understanding of ETL/ELT frameworks, data warehousing, and distributed data processing.
Hands-on experience with modern DataOps practices: version control (Git), CI/CD pipelines, automated testing, infrastructure-as-code.
Familiarity with cloud platforms (AWS, Azure, or GCP) and related data services.
Strong problem-solving skills with the ability to troubleshoot performance, scalability, and reliability issues.
Proficiency in Git.
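For candidates who want a concrete picture of the day-to-day work, the sketch below shows the kind of pipeline this role involves: ingesting a file-based source with PySpark, applying a light transformation, and writing the result to Delta Lake. It is a minimal illustration only; the paths, column names, and job name are hypothetical, and it assumes a Databricks or otherwise Delta-enabled Spark environment.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical job name; on Databricks an active SparkSession already exists.
spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

# Ingest: read a raw file-based source (one of the source types listed above).
raw = (
    spark.read
    .option("header", "true")
    .option("inferSchema", "true")
    .csv("/mnt/raw/orders/")            # hypothetical landing path
)

# Transform: basic cleansing plus an ingestion timestamp for lineage/governance.
clean = (
    raw.dropDuplicates(["order_id"])    # hypothetical key column
       .filter(F.col("order_id").isNotNull())
       .withColumn("ingested_at", F.current_timestamp())
)

# Load: persist as Delta Lake so analytics teams can consume a curated table.
(
    clean.write
    .format("delta")
    .mode("overwrite")
    .save("/mnt/curated/orders/")       # hypothetical curated path
)
```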