Job Title: Data Engineer
Location: London, UK
Job Type: Contract
Work Mode: Hybrid (2-3 days a week in the London office)
Key Skills: Databricks (Spark), SQL, Python & AWS services
We at Coforge are hiring a Data Engineer with the following skill set:
Key Responsibilities
- Develop, maintain, and optimize data pipelines and ETL workflows using Databricks (Spark).
- 3+ years of hands-on experience on Databricks projects.
- Strong problem-solving skills.
- Write efficient, optimized SQL queries for data extraction, transformation, and validation.
- Develop Python‑based scripts for automation, data processing, and integration tasks.
- Work with AWS Cloud services (S3, Glue, EC2, Lambda, IAM, Athena, EMR) to build scalable data solutions.
- Collaborate with business and technical teams to understand data requirements and translate them into technical specifications.
- Perform data validation, quality checks, and root‑cause analysis for data‑related issues.
- Ensure performance tuning, reliability, and scalability of data pipelines.
- Contribute to design reviews, architecture discussions, and best‑practice implementations.
- Prepare and maintain technical documentation.
Technical Skills
- Strong SQL: Complex queries, performance tuning, stored procedures, indexing.
- Python: Experience with data processing (Pandas, PySpark), scripting, and automation.
- Databricks:
  - Spark/Delta Lake
  - Notebooks
  - Job orchestration
  - ETL/ELT pipeline development
- AWS Cloud (working knowledge of at least three of the following services):
  - S3, Glue, Lambda, EC2, EMR, Athena, Redshift, IAM
Additional Skills (Good to Have)
- Experience in Git, CI/CD pipelines.
- Knowledge of data modeling concepts.
- Exposure to Agile methodologies.
- Experience with job scheduling tools (Airflow, Cron, etc.).