Microsoft Fabric Data Engineer - Roles & Responsibilities
Job Description:
- Minimum of 3–5 years of experience designing, implementing, and supporting Data Warehousing and Business Intelligence solutions, including Microsoft Fabric data pipelines
- Design and implement scalable, efficient data pipelines using Azure Data Factory, PySpark notebooks, Spark SQL, and Python, covering data ingestion, transformation, and loading processes.
- Implement ETL processes to extract data from diverse sources, transform it into suitable formats, and load it into the data warehouse or analytical systems.
- Hands-on experience in the design, development, and implementation of solutions on Microsoft Fabric and Azure data analytics services: Azure Data Factory (ADF), Azure Data Lake, Azure Synapse, Azure SQL, and Databricks
- Experience writing optimized SQL queries on Azure Synapse Analytics (dedicated SQL pools and serverless resources)
- Troubleshoot and resolve complex customer issues through deep, code-level analysis of Spark, covering Spark core internals, Spark SQL, Structured Streaming, and Delta Lake.
- Continuously monitor and fine-tune data pipelines and processing workflows to enhance overall performance and efficiency, particularly for large-scale data sets.
- Experience with hybrid cloud deployments and integration between on-premises and cloud environments.
- Ensure data security and compliance with data privacy regulations throughout the data engineering process.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
- Conceptual knowledge of data and analytics topics such as dimensional modeling, ETL, reporting tools, data governance, data warehousing, and structured and unstructured data.
- Understanding of data engineering best practices like code modularity, documentation, and version control.
- Collaborate with business stakeholders to gather requirements and create comprehensive technical solutions and documentation.
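As an illustration of the ETL pattern the responsibilities above refer to (extract data from a source, transform it into a suitable format, load it into an analytical store), here is a minimal sketch in plain Python using only the standard library. In a Fabric or Azure setting the same pattern would typically be implemented with PySpark notebooks, Spark SQL, or ADF pipelines; the CSV content, column names, and `orders` table here are purely hypothetical.

```python
import csv
import io
import sqlite3

# Hypothetical raw source: a CSV extract from an upstream system.
RAW_CSV = """order_id,amount,currency
1001,250.00,USD
1002,99.50,usd
1003,,USD
"""

def extract(text):
    """Extract: parse rows out of the raw CSV source."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: normalize currency codes and drop rows missing an amount."""
    cleaned = []
    for row in rows:
        if not row["amount"]:
            continue  # skip incomplete records
        cleaned.append({
            "order_id": int(row["order_id"]),
            "amount": float(row["amount"]),
            "currency": row["currency"].upper(),
        })
    return cleaned

def load(rows, conn):
    """Load: write cleaned rows into a warehouse-style table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders "
        "(order_id INTEGER PRIMARY KEY, amount REAL, currency TEXT)"
    )
    conn.executemany(
        "INSERT INTO orders (order_id, amount, currency) "
        "VALUES (:order_id, :amount, :currency)",
        rows,
    )
    conn.commit()

# Run the pipeline end to end against an in-memory database.
conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
print(conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone())  # → (2, 349.5)
```

The structure mirrors what a production pipeline does at scale: each stage is a separate, testable function, and invalid records are filtered during the transform step rather than allowed to fail the load.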