Senior Data Engineer - Databricks

Job Title: Senior Data Engineer - Databricks
Location: Chennai, Bangalore, Hyderabad
Experience: 5 - 8 Years



Job Summary

We are looking for a highly skilled and experienced Senior Data Engineer to join our growing data engineering team. The ideal candidate will have a strong background in building and optimizing data pipelines and data architecture, as well as experience with Azure cloud services. You will work closely with cross-functional teams to ensure data is accessible, reliable, and ready for analytics and business insights.

 

Mandatory Skills

·       Advanced SQL, Python, and PySpark for data engineering

·       Azure first-party services (ADF, Azure Databricks, Synapse, etc.)

·       Data warehousing (Redshift, Snowflake, BigQuery)

·       Workflow orchestration tools (Airflow, Prefect, or similar)

·       Experience with dbt (data build tool) for transforming data in the warehouse

·       Hands-on experience with real-time data processing frameworks such as Apache Kafka, Apache Flink, or Azure Event Hubs

 

Key Responsibilities

·       Design, develop, and maintain scalable and reliable data pipelines

·       Demonstrate experience and leadership across two full project cycles using Azure Data Factory, Azure Databricks, and PySpark

·       Collaborate with data analysts, scientists, and software engineers to understand data needs

·       Design and build scalable data pipelines using batch and real-time streaming architectures

·       Implement dbt models to transform, test, and document data pipelines

·       Implement data quality checks and monitoring systems

·       Optimize data delivery and processing across a wide range of sources and formats

·       Ensure security and governance policies are followed in all data handling processes

·       Evaluate and recommend tools and technologies to improve data engineering capabilities

·       Lead and mentor junior data engineers as needed

·       Work with cross-functional teams in a dynamic and fast-paced environment

 

Qualifications

·       Bachelor’s or Master’s degree in Computer Science, Engineering, or related field

·       Databricks Professional certification is preferred

 

Technical Skills

·       Programming: Python, PySpark, SQL

·       ETL and orchestration tools (e.g., Airflow, dbt); cloud platforms (Azure)

·       Real-time streaming tools: Kafka, Flink, Spark Streaming, Azure Event Hubs

·       Data Warehousing: Snowflake, BigQuery, Redshift

·       Cloud: Azure (ADF, Azure Databricks)

·       Orchestration: Apache Airflow, Prefect, Luigi

·       Databases: PostgreSQL, MySQL, NoSQL (MongoDB, Cassandra)

·       Tools: Git, Docker, Kubernetes (basic), CI/CD

 

Soft Skills

·       Strong problem-solving and analytical thinking

·       Excellent verbal and written communication

·       Ability to manage multiple tasks and deadlines

·       Collaborative mindset with a proactive attitude

·       Strong analytical skills related to working with unstructured datasets

 

Good to Have

·       Experience with real-time data processing (Kafka, Flink)

·       Knowledge of data governance and privacy regulations (GDPR, HIPAA)

·       Familiarity with ML model data pipeline integration

 

Work Experience

·       Minimum 5 years of relevant experience in data engineering roles

·       Experience with Azure first-party services across at least two full project lifecycles

 

Compensation & Benefits

·       Competitive salary and annual performance-based bonuses

·       Comprehensive health insurance and optional parental coverage

·       Optional retirement savings and tax savings plans

 

Key Result Areas (KRAs)

·       Timely development and delivery of high-quality data pipelines

·       Implementation of scalable data architectures

·       Collaboration with cross-functional teams for data initiatives

·       Compliance with data security and governance standards

 

Key Performance Indicators (KPIs)

·       Uptime and performance of data pipelines

·       Reduction in data processing time

·       Number of critical bugs post-deployment

·       Stakeholder satisfaction scores

·       Successful data integrations and migrations

 

Contact:    hr@bigtappanalytics.com
