Jobgether

We use an AI-powered matching process to ensure your application is reviewed quickly, objectively, and fairly against the role's core requirements. Our system identifies the top-fitting candidates, and this shortlist is then shared directly with the hiring company. The final decision and next steps (interviews, assessments) are managed by their internal team. We appreciate your interest and wish you the best!

Data Privacy Notice: By submitting your application, you acknowledge that Jobgether will process your personal data to evaluate your candidacy and share relevant information with the hiring employer. This processing is based on legitimate interest and pre-contractual measures under applicable data protection laws (including the GDPR). You may exercise your rights (access, rectification, erasure, objection) at any time.

We may use artificial intelligence (AI) tools to support parts of the hiring process, such as reviewing applications, analyzing resumes, or assessing responses. These tools assist our recruitment team but do not replace human judgment; final hiring decisions are made by humans. If you would like more information about how your data is processed, please contact us.

Microsoft Fabric Data Engineer

Data Engineer · Full Time · Remote

Location

United States

Posted

1 day ago

Salary

Not specified

Job Description

This description is a summary of our understanding of the job description. Click the 'Apply' button to find out more.

Role Description

We are looking for a skilled Microsoft Fabric Data Engineer to design, build, and optimize enterprise-scale data solutions that enable data-driven decision-making. In this role, you will:

  • Develop scalable data pipelines.
  • Implement Lakehouse architectures.
  • Integrate diverse data sources across cloud platforms.
  • Work closely with IT, analytics, and business teams to translate complex requirements into robust data solutions.
  • Mentor junior engineers while delivering high-quality, reliable data services.

The environment is collaborative, fast-paced, and focused on modern data engineering practices, including automation, real-time processing, and advanced analytics. This is a remote role with milestone-based travel requirements.

Accountabilities:

  • Design, build, and maintain distributed, scalable data pipelines using Microsoft Fabric and Apache Spark to process structured and unstructured data.
  • Integrate data from multiple internal and external systems, ensuring consistency, reliability, and proper lineage.
  • Optimize ETL/ELT workloads to improve throughput, cost efficiency, and performance of large-scale analytics environments.
  • Implement and enforce data quality, metadata management, governance, and compliance standards.
  • Collaborate with data scientists, analysts, architects, and business stakeholders to deliver insights and integrate analytical models.
  • Document pipeline architectures, workflows, schemas, and operational processes, while troubleshooting and ensuring enterprise-grade reliability.
  • Explore emerging technologies to enhance data engineering practices, including Lakehouse architecture, real-time processing, and automation.

Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or a related field.
  • 7+ years of experience in data engineering, data architecture, or large-scale data platform development.
  • Strong expertise in Apache Spark for batch and streaming data processing.
  • Hands-on experience with Microsoft Fabric, Data Factory, pipelines, and Lakehouse implementations.
  • Advanced proficiency in SQL, Python, and/or Scala.
  • Experience with cloud platforms such as Azure, AWS, or GCP.
  • Solid understanding of distributed systems, lakehouse architecture, and data modeling.
  • Proven ability to design and optimize complex ETL/ELT pipelines.
  • Strong communication, leadership, and mentoring skills.

Preferred Qualifications

  • Certifications in Azure Data Engineering, Apache Spark, or Microsoft Fabric.
  • Experience with real-time streaming technologies (Kafka, Azure Event Hubs).
  • DevOps practices including CI/CD and Infrastructure as Code.
  • Knowledge of Power BI or Tableau.

Benefits

  • Competitive salary and performance-based incentives.
  • Flexible remote work with milestone-based travel opportunities.
  • Comprehensive healthcare and retirement plans.
  • Opportunities for professional development and skill growth.
  • Collaborative and innovative technology environment.
  • Access to cutting-edge data engineering tools and cloud technologies.
