Senior Data Engineer – HCLTech, Københavns Kommune
Join HCLTech as a Senior Data Engineer. Contribute relevant skills and enjoy a flexible work culture.
Job Information
Title

Senior Data Engineer

Company
HCLTech
Location

Københavns Kommune, Region Hovedstaden, Danmark

Posted Date

Aug 2, 2025

Required Skills

No specific skills are listed for this position.

Check the job description for skill requirements.

Company Information
Job Assessment
Job Position Closed

This job posting is no longer accepting applications. The position has been closed and cannot be assessed at this time.

Closed on Sep 22, 2025 at 11:11 AM


Job Description

Job Summary: We are seeking a highly skilled and experienced Senior Data Engineer with deep expertise in Databricks and Azure Data Factory to join our data team. The ideal candidate will be responsible for designing and implementing robust data pipelines that ingest, transform, and model data from our SAP systems into scalable and reusable data products. This role requires strong proficiency in PySpark and SQL, a solid understanding of data modelling principles, and hands-on experience with Databricks.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Databricks and Azure Data Factory.
- Ingest and process data from SAP systems, ensuring data quality, consistency, and reliability.
- Model raw data into structured, reusable data products that support business intelligence and analytics use cases.
- Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and deliver solutions.
- Optimize data workflows for performance and cost-efficiency in a cloud-native environment.
- Implement best practices for data engineering, including version control, testing, and documentation.
- Monitor and troubleshoot data pipelines to ensure high availability and performance.
- Communicate proactively and clearly with stakeholders, demonstrating a "can-do" attitude and a collaborative mindset.

Primary Skills (Must Have):
- Databricks (5/5) – Extensive experience building and managing data pipelines and notebooks.
- Azure Data Factory (5/5) – Strong expertise in orchestrating data workflows and integrating with various data sources.
- Data Modelling (4/5) – Proficient in designing logical and physical data models for analytics and reporting.
- PySpark (5/5) – Advanced programming skills for large-scale data processing.
- SQL (5/5) – Expert-level skills in writing complex queries and optimizing performance.
- Azure Synapse Analytics (4/5) – Experience with data warehousing and analytics solutions.

Secondary Skills (Nice to Have):
- CI/CD for Data Pipelines (3/5) – Familiarity with DevOps practices in data engineering.
- Delta Lake (3/5) – Knowledge of ACID transactions and time travel in Databricks.

Experience:
- Total Experience: 5+ years in data engineering and analytics.
- Relevant Experience: 3+ years working with Databricks and Azure Data Factory.

Source History
Assessed on: N/A