Senior Data Engineer

Job Description

We are urgently looking for someone with exposure to ELT processes in the analytics space, including data ingestion, transformation, and curation on platforms such as Azure Synapse Analytics. Developing PySpark notebooks on Databricks using Python is essential. The ideal candidate needs hands-on Snowflake experience; Snowflake will be the target database, serving as a single source of truth for all downstream applications.

Requirements

  • Advanced SQL, data engineering, and data modeling skills
  • Ability to function as a technical lead, working closely with developers and data analysts, as well as hands-on implementation
  • 7+ years of experience developing data integration solutions using tools such as Informatica, Talend, MuleSoft, Qlik, etc.
  • Strong experience building out data warehouses and/or data lakes
  • 3+ years of experience leading engineering resources.
  • 2+ years of experience working with cloud-native data solutions on Microsoft Azure, AWS, or Google Cloud Platform.
  • Strong experience leading full lifecycle, large, complex reporting or data engineering efforts.
  • Strong experience in working with heterogeneous datasets in building and optimizing data pipelines, pipeline architectures and integrated datasets using various data integration technologies (ETL/ELT, data replication/CDC, message-oriented data movement, API design, etc.)
  • Experience with DevOps, CI/CD pipelines, and automated testing required.


Competencies/Skills

  • Implement data structures using standards and best practices in data modeling, ETL/ELT processes, SQL, database, and other technologies
  • Advanced SQL knowledge and experience with relational databases and query authoring, as well as working familiarity with a variety of database systems.
  • Solid experience building and optimizing data pipelines and data sets.
  • Deep knowledge of the Data Vault methodology, model, and architecture
  • Ability to manage the overall data landscape: metadata, processing patterns, and data quality
  • A successful history of manipulating, processing and extracting value from large datasets.
  • Strong working knowledge of message queuing, stream processing, and highly scalable data stores
  • Ability to obtain a clear understanding of business needs and value, developing a detailed vision for the initiative, mapping out the solution, and guiding its implementation
  • Develop test-driven solutions that can be deployed quickly and in an automated fashion
  • Demonstrated ability to collaborate across all levels (engineers, management, architects, etc.) and across all skill sets (data scientists, data visualization developers, Salesforce developers, etc.), particularly in a product-oriented culture
  • Capable of using agile methodology and implementing Continuous Integration/Continuous Delivery (CI/CD) pipelines
  • Experience working with Databricks (or Spark), or with Qlik technologies such as Replicate and Compose
  • Experience with a scripting language, preferably Python
  • Experience working with BI tools (Power BI, Tableau, etc.) to create dashboards and reports
  • Experience with Snowflake
  • Experience in the manufacturing or agriculture industry preferred
  • Troubleshooting skills, ability to determine impacts, ability to resolve complex issues, and ability to exercise sound judgment and initiative in challenging situations.

Job Summary

  • Published on: 2024-02-10, 6:38 am
  • Vacancy: 1
  • Employment Status: Full Time
  • Experience: 3 Years
  • Job Location: Lahore
  • Gender: No Preference
  • Application Deadline: 2024-12-25