Position: Data Engineer
Company:
Job Description: Job Details
Education: Not informed
Segment: Not informed
Salary: Not informed
Field: Miscellaneous / Other
What you will do
- This is a contract position.
- Job Description: We are seeking a skilled Data Engineer to design, develop, and maintain scalable data pipelines and workflows.
- The ideal candidate will have strong expertise in Python, SQL, Snowflake, and Airflow, with experience in building ETL/ELT solutions and optimizing data infrastructure.
- This role involves collaborating with data analysts, scientists, and business stakeholders to ensure data availability, reliability, and efficiency.
- Roles & Responsibilities: Design, build, and maintain scalable ETL/ELT pipelines to process large volumes of structured and unstructured data.
- Develop and optimize SQL queries within Snowflake for efficient data storage and retrieval.
- Implement workflow orchestration using Apache Airflow to automate data processing tasks.
- Write efficient, reusable, and scalable Python scripts for data extraction, transformation, and loading (ETL).
- Monitor and troubleshoot data pipelines to ensure high availability and performance.
- Collaborate with data teams to define best practices for data modeling and maintain a structured data warehouse.
- Work with cloud platforms (AWS, GCP, or Azure) to integrate data sources and manage cloud-based data infrastructure.
- Ensure data security, governance, and compliance with industry best practices.
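To illustrate the kind of ETL work the responsibilities above describe, here is a minimal extract–transform–load sketch in plain Python. All names (`orders`, the column set, the sample records) are hypothetical, and SQLite stands in for the warehouse; in this role the target would be Snowflake, orchestrated by Airflow.

```python
import sqlite3

# Hypothetical raw records, standing in for rows extracted from a source system.
RAW_ORDERS = [
    {"id": 1, "amount": "19.50", "country": "br"},
    {"id": 2, "amount": "5.25", "country": "US"},
    {"id": 3, "amount": "bad", "country": "BR"},  # malformed row, dropped in transform
]

def extract():
    """Extract: return raw records (here, from an in-memory list)."""
    return RAW_ORDERS

def transform(rows):
    """Transform: cast amounts to float, normalize country codes, drop bad rows."""
    clean = []
    for row in rows:
        try:
            clean.append((row["id"], float(row["amount"]), row["country"].upper()))
        except ValueError:
            continue  # skip rows whose amount cannot be parsed
    return clean

def load(rows, conn):
    """Load: write cleaned rows into the warehouse table (SQLite stands in)."""
    conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER, amount REAL, country TEXT)")
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

def run_pipeline(conn):
    load(transform(extract()), conn)

conn = sqlite3.connect(":memory:")
run_pipeline(conn)
print(conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone())  # → (2, 24.75)
```

In a production pipeline, `run_pipeline` would be the body of an Airflow task and the monitoring/troubleshooting duties above would hang off its logs and retries.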
- Required Skills & Qualifications: Strong programming skills in Python.
- Expertise in SQL for querying, transformation, and performance tuning.
- Hands-on experience with Snowflake (schema design, performance optimization, Snowpipe, Streams, and Tasks).
- Experience with Apache Airflow for scheduling and orchestrating data pipelines.
- Knowledge of ETL/ELT processes and best practices in data engineering.
- Experience with cloud platforms (AWS, GCP, or Azure) and their data services.
- Familiarity with data modeling (Star Schema, Snowflake Schema) and data warehouse concepts.
- Experience with Git and CI/CD pipelines.
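The star schema named in the data-modeling requirement pairs a central fact table (measures plus foreign keys) with denormalized dimension tables. A minimal sketch, again with SQLite standing in for the warehouse and every table and column name hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Dimension tables hold descriptive attributes; the fact table holds measures
# plus a foreign key into each dimension (hypothetical schema for illustration).
conn.executescript("""
CREATE TABLE dim_date     (date_key INTEGER PRIMARY KEY, day TEXT, month TEXT, year INTEGER);
CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, name TEXT, city TEXT);
CREATE TABLE fact_sales (
    date_key     INTEGER REFERENCES dim_date (date_key),
    customer_key INTEGER REFERENCES dim_customer (customer_key),
    amount       REAL
);
""")

conn.execute("INSERT INTO dim_date VALUES (20250420, '20', 'April', 2025)")
conn.execute("INSERT INTO dim_customer VALUES (1, 'Acme', 'Recife')")
conn.execute("INSERT INTO fact_sales VALUES (20250420, 1, 99.5)")

# A typical star-schema query: join the fact table to its dimensions, then aggregate.
row = conn.execute("""
    SELECT d.year, c.city, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_date d     ON d.date_key = f.date_key
    JOIN dim_customer c ON c.customer_key = f.customer_key
    GROUP BY d.year, c.city
""").fetchone()
print(row)  # → (2025, 'Recife', 99.5)
```

The Snowflake Schema variant mentioned alongside it differs only in normalizing the dimension tables further (e.g. splitting `city` out into its own table).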
- Preferred Skills: Experience with big data processing frameworks (Spark, Databricks).
- Knowledge of Kafka, Kinesis, or other real-time data streaming tools.
- Familiarity with containerization (Docker, Kubernetes) for deploying data pipelines.
- Understanding of Data Governance, Data Quality, and Data Security principles.
Additional Information
Number of Openings: 1
Working Hours: Not informed
Location: Recife – PE
Job Posting Date: Sun, 20 Apr 2025 22:32:48 GMT