Databricks Data Engineer - 3+ Years - Mumbai

Location: Mumbai
Discipline: Analytics
Job type: Permanent
Contact name: Aayushi Goyal

Contact email: aayushi.goyal@crescendogroup.in
Job ref: 87751


An exciting opportunity for an experienced Databricks Data Engineer to design, build, and optimize scalable data pipelines and analytics solutions on the Databricks Lakehouse Platform. This role is ideal for professionals with strong expertise in Spark, PySpark, Delta Lake, and modern data engineering practices. You will collaborate with cross-functional teams to deliver robust, high-quality data solutions that power advanced analytics and business intelligence.

Your Future Employer - You will be working with a prestigious organization known for its commitment to diversity, equality, and inclusion. It offers a dynamic work environment, opportunities for career growth, and a supportive team culture.

Responsibilities:

  1. Develop and maintain scalable data pipelines using Databricks and Apache Spark (PySpark).
  2. Build and optimize Delta Lake tables for analytics and ML workloads.
  3. Integrate data from APIs, cloud storage, databases, and streaming platforms such as Kafka.
  4. Transform raw data into structured formats following best practices in ETL/ELT.
  5. Implement and maintain data quality checks to ensure accuracy and consistency.
  6. Work closely with data scientists, analysts, and business stakeholders to support analytics needs.
  7. Automate data workflows using Databricks Workflows, job clusters, and schedulers.
  8. Monitor and optimize Spark jobs for performance and cost efficiency.
  9. Document data models, pipeline designs, and technical workflows.

Requirements:

  1. Bachelor’s degree in Computer Science, Engineering, or a related field.
  2. 3+ years of experience in data engineering/software development.
  3. 1+ years of hands-on production experience with Databricks and Apache Spark.
  4. Strong proficiency in PySpark, SQL, ETL/ELT, and data warehousing concepts.
  5. Experience with cloud platforms (Azure/AWS/GCP), Delta Lake, Databricks Workflows, MLflow, Kafka, Git, and CI/CD.
  6. Exposure to data governance tools (Unity Catalog, Purview) and BI tools (Power BI, Tableau).
  7. Databricks Certified Data Engineer Associate/Professional is a plus.


What is in it for you:

  • Opportunity to work on modern data engineering using Databricks Lakehouse.
  • Exposure to large-scale, high-impact data ecosystems.
  • Collaborative work environment with cross-functional teams.
  • Strong learning and career-growth potential.

Reach us: If this role aligns with your career goals, please send your updated CV to aayushi.goyal@crescendogroup.in for a confidential discussion.


Disclaimer: Crescendo Global specializes in senior to C-level niche recruitment. We are passionate about empowering job seekers and employers with an engaging and memorable hiring experience. We do not discriminate based on race, religion, gender, age, marital status, sexual orientation, veteran status, or disability.

Note: We receive a high volume of applications daily. If you do not hear back within a week, please assume your profile has not been shortlisted.


Scammers may misuse Crescendo Global's name for fake job offers. We never ask for money, purchases, or system upgrades. Verify all opportunities at www.crescendo-global.com and report fraud immediately. Stay alert!


Profile Keywords: Crescendo Global, Databricks Jobs, Data Engineer Jobs Mumbai, Spark Developer, PySpark Engineer, Delta Lake, ETL/ELT Jobs, Big Data Engineer, Cloud Data Engineering, Kafka Streaming, Azure Data Engineer, Data Lakehouse Jobs.