Job Description
Design, develop, and maintain scalable data pipelines and ETL/ELT processes to collect, clean, and process large-scale datasets.
Collaborate with Data Scientists, Data Analysts, and other stakeholders to understand data requirements and develop Data Lake and Data Warehouse solutions for analytics, reporting, and AI/ML. Utilize BI tools like Grafana, AWS QuickSight, and Google Sheets for data visualization.
Build and optimize data architecture, including Lakehouse (AWS S3, Delta Lake), Data Warehouse (Redshift), and technologies like Kafka for both streaming and batch data processing.
Monitor and troubleshoot data pipeline issues to ensure data integrity and timely delivery. Utilize tools such as Sentry, Grafana, Prometheus, and AWS CloudWatch to track logs and send alerts via Slack.
Document technical processes, maintain data catalogs, and ensure compliance with data governance policies. Use the Atlassian toolset (Jira, Confluence) or Slack for work management and team collaboration.
Optimize the performance, reliability, and cost-efficiency of data systems, improving the scalability of all components: Kafka, dbt, Airflow, Airbyte, storage solutions such as PostgreSQL, Elasticsearch, Redis, and AWS S3, and the data processing and orchestration frameworks built on them.
Continuously research and implement new technologies to optimize the current system, applying best practices in Data Engineering and DataOps.
Job Requirements
Bachelor's degree in Computer Science or a related field.
4+ years of experience in data engineering, ETL development, or a similar role.
Proficiency in using advanced SQL queries to extract valuable insights from datasets.
Proficiency in one of the following programming languages: Python, C#, or Java for data processing and scripting.
Experience with relational databases, such as PostgreSQL, MySQL, Oracle, or MS SQL Server.
Experience with cloud computing platforms (e.g., AWS, GCP, Azure).
Knowledge of data architecture concepts and cloud-based storage solutions.
Knowledge of non-relational data stores, such as MongoDB, Elasticsearch, Redis, or Apache Solr.
Strong problem-solving skills, attention to detail, and the ability to work independently or as part of a team.
Excellent communication and collaboration skills, particularly in an Agile environment.
Proficient in English, with the ability to communicate, read, and write effectively in a professional work environment.
Preferred (but not required)
Knowledge of container orchestration platforms, such as Kubernetes or AWS EKS.
Experience with microservice architecture.
Experience with data visualization tools, such as AWS QuickSight or Grafana.
Experience with data streaming platforms, such as Apache Kafka or Apache NiFi.
Familiarity with data integration from various sources, such as web services, APIs, and file systems.
Familiarity with the Agile toolset, such as GitLab, Jira, Confluence, and Slack.
Benefits
We’re not just about building platforms; we’re about creating a workplace where talent thrives. Here's what you’ll gain:
Exceptional Compensation
Up to 30 months of salary per year through competitive salary packages and performance-based bonuses.
A total annual income of $40,000, reflecting your expertise and contribution.
Growth Opportunities
Hands-on exposure to cutting-edge technologies and complex system architecture in a global-scale project.
Clear career advancement pathways and access to continuous professional development programs.
Global Vision
Contribute to projects that redefine how brands connect with consumers globally.
Opportunity to work on challenging problems in data analytics, customer engagement, and operational efficiency.
In addition to development tasks, you’ll collaborate closely with other Centers of Excellence (CoEs) to understand their unique challenges. With your technical expertise, you’ll continuously propose innovative solutions to elevate their performance and contribute to the overall success of the organization.