Specifications
- Apply from: Overseas
- Salary: Not Specified
- Location: Not Specified
- Conditions: Full-time
Requirements
- Japanese: Not required
- English: Business
- Minimum Experience: Mid-level or above
- Status: NEW
Job Description
Responsibilities
- Data Pipeline Development
- Design, develop, and maintain scalable data ingestion pipelines using Databricks, Airflow, AWS Lambda, and Terraform
- Optimize large-scale data pipelines for performance and reliability
- Implement data workflows with Spark, Delta Lake, PySpark, and Scala (a minimal ingestion sketch follows this list)
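For illustration, a minimal sketch of the kind of Spark-to-Delta ingestion step this role covers; the source path and target table are hypothetical placeholders, not details from any actual pipeline:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Minimal ingestion sketch: read raw JSON events and append them to a
// Delta table. The S3 path and table name are hypothetical placeholders.
object IngestEvents {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("ingest-events")
      .getOrCreate()

    spark.read
      .format("json")
      .load("s3://example-raw-bucket/events/")   // hypothetical source path
      .withColumn("ingested_at", current_timestamp())
      .write
      .format("delta")
      .mode("append")
      .saveAsTable("analytics.events")           // hypothetical Delta table
  }
}
```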
- Data Management & Automation
- Maintain and enhance the Data Lakehouse with Unity Catalog
- Build automation and tooling for end-to-end data workflows
- Ensure data governance, security, and compliance (see the access-control sketch below)
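As one concrete flavor of that governance work, a sketch of table-level access control in Unity Catalog via Spark SQL; the catalog, schema, table, and group names are made up for illustration:

```scala
import org.apache.spark.sql.SparkSession

// Governance sketch: grant read-only access on a Unity Catalog table.
// Catalog, schema, table, and group names are hypothetical placeholders.
object GrantReadAccess {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("grant-access").getOrCreate()

    // Allow the analyst group to query the table, but not modify it.
    spark.sql("GRANT SELECT ON TABLE main.analytics.events TO `data-analysts`")
    spark.sql("REVOKE MODIFY ON TABLE main.analytics.events FROM `data-analysts`")
  }
}
```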
- Collaboration & Best Practices
- Work with cross-functional teams to ensure seamless data integration
- Implement observability and monitoring best practices (a monitoring sketch follows this list)
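A small sketch of what pipeline observability can look like at the Spark level, using a listener to surface job outcomes; the class and app names are hypothetical, and in production the log lines would feed a metrics or alerting system:

```scala
import org.apache.spark.scheduler.{JobSucceeded, SparkListener, SparkListenerJobEnd}
import org.apache.spark.sql.SparkSession

// Observability sketch: report every Spark job's outcome so pipeline
// failures surface in monitoring rather than passing silently.
class JobOutcomeListener extends SparkListener {
  override def onJobEnd(jobEnd: SparkListenerJobEnd): Unit =
    jobEnd.jobResult match {
      case JobSucceeded => println(s"job ${jobEnd.jobId} succeeded")
      case other        => println(s"job ${jobEnd.jobId} failed: $other")
    }
}

object MonitoredApp {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("monitored-app").getOrCreate()
    spark.sparkContext.addSparkListener(new JobOutcomeListener)
    // ... pipeline logic runs here, with every job's outcome logged ...
  }
}
```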
Qualifications
- Experience
- 5+ years as a Data Engineer or in a similar role
- Hands-on experience with Databricks, Delta Lake, Spark, and Scala
- Experience with Data Lakes/Warehouses and data orchestration tools (Airflow, Dagster, Prefect)
- Knowledge of change data capture (CDC) tools such as Canal, Debezium, or Maxwell (a streaming CDC sketch follows this list)
- Proficiency in Scala, Python, SQL, and data cataloging (AWS Glue, AWS Lake Formation, Unity Catalog)
- Experience with Terraform for infrastructure as code (IaC)
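To make the CDC expectation concrete, a sketch of consuming Debezium change events from Kafka with Spark Structured Streaming and landing them in Delta; the broker address, topic, envelope schema, checkpoint path, and table names are all hypothetical:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._

// CDC sketch: stream Debezium change envelopes from Kafka and land the
// "after" image in a Delta table. All names and paths are hypothetical.
object DebeziumToDelta {
  // Simplified Debezium envelope: only the fields this sketch uses.
  val envelope = new StructType()
    .add("op", StringType)                       // c/u/d/r operation code
    .add("after", new StructType()
      .add("id", LongType)
      .add("name", StringType))

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("cdc-ingest").getOrCreate()

    spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")  // hypothetical broker
      .option("subscribe", "dbserver.inventory.customers")
      .load()
      .select(from_json(col("value").cast("string"), envelope).as("e"))
      .where(col("e.op").isin("c", "u", "r"))    // skip deletes in this sketch
      .select("e.after.*")
      .writeStream
      .format("delta")
      .option("checkpointLocation", "/tmp/checkpoints/customers")
      .toTable("raw.customers")                  // hypothetical target table
      .awaitTermination()
  }
}
```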
- Skills
- Strong problem-solving, debugging, and communication skills
- Ability to make sound decisions and learn quickly in complex technical environments