Job Summary

We are looking for a highly skilled Data Engineer to design, develop, and maintain scalable, high-performance data pipelines and infrastructure. In this role, you will collaborate closely with Data Analysts, Quantitative Researchers, and cross-functional teams to ensure seamless data availability, reliability, and performance. Your responsibilities will include building and optimizing ETL and streaming dataflows, improving data lakehouse storage efficiency, and integrating diverse data sources to support advanced analytics, quantitative trading, machine learning, and business intelligence initiatives.

This position reports to the Data Engineer Manager / Head of Business Technology Solution.

Key Responsibilities

• Engage in the full lifecycle of data platform development, including building and maintaining data lakes, warehouses, and streaming pipelines to support analytics, quantitative trading, and AI-driven decision systems.
• Design, implement, and optimize ETL pipelines using Apache Airflow and Spark, ensuring reliability and scalability for large datasets.
• Build and manage real-time data ingestion and processing pipelines using Apache Kafka to support low-latency trading and analytical applications.
• Develop and maintain data serving layers with Trino for interactive analytics and cross-source query federation.
• Integrate and optimize data flows from PostgreSQL, Oracle, and MongoDB databases into unified data pipelines.
• Develop FastAPI-based microservices to expose data and insights to internal and external consumers.
• Conduct proofs of concept (POCs) and adopt new data technologies to improve scalability, throughput, and system efficiency.
• Ensure data quality, observability, and governance by implementing validation, lineage, and monitoring best practices.
• Maintain compliance with data security, privacy, and regulatory requirements.