Rakuten is one of the leading e-commerce companies in the world. We provide several Internet services, including an EC marketplace, travel booking, digital content, and FinTech.
As a Global Innovation Company, our mission is to empower people and society through the internet. The Ecosystem Services Department is part of Rakuten's Technology Division. We build scalable platforms that empower the Rakuten Ecosystem globally, with the goal of creating a common platform for all Rakuten services worldwide.
The Incubation Projects Section is part of the Ecosystem Services Department. We find and implement business ideas that help expand Rakuten's ecosystem by analyzing platform service data in membership, points, and payments.
Our work scope includes data extraction & analysis, providing user insights to inform product strategy, implementing data-related tools, and tracking & monitoring cross-use across other Rakuten services in collaboration with internal and external stakeholders.
Our mission is to enable stakeholders to make well-informed business and technology decisions by providing data-backed insights.
We are looking for an ambitious data platform analyst and engineer who is eager to take on new challenges while learning new technologies and technical methodologies.
Please see the following link to learn more about our department: https://www.youtube.com/watch?v=NDlBjgERHDk

Responsibilities:
(a) Development and operation of the data platform
(b) Data analysis and finding business insights
- Data extraction & analytics to provide user insights for campaign strategy and product improvement
- Presentations to stakeholders based on created proposals
- Building and managing KPI dashboards using Domo and Tableau
- Simulating business KPI forecasts based on data
- Cross-use analytics and tracking across other Rakuten services

Minimum Qualifications:
- B.S. in Computer Science or a related field, or equivalent education and experience
- 3+ years of development or operational experience with any of the following:
  - Linux systems
  - Batch-processing data pipelines (so-called "ETL") composed of Airflow, Digdag, Argo Workflows, Informatica, etc.
  - Streaming data pipelines composed of subsystems such as Apache NiFi, Apache Kafka, the so-called "ELK stack," Splunk, etc.
  - Applications leveraging so-called "Big Data" systems such as Hadoop, Hive, Spark, etc.
  - Web application backend systems composed of load balancers, Apache HTTP Server / Nginx, MySQL / PostgreSQL, Tomcat / Ruby on Rails, etc.
  - Deployment pipelines composed of systems such as Jenkins, Concourse, etc.
- 3+ years of experience developing in any of the following programming languages and their build ecosystems (if applicable):
  - Java and its build ecosystems such as Ant, Maven, or Gradle
  - Python and its build ecosystems such as Pipenv
  - Shell scripts

Preferred Qualifications:
- Experience working with data or statistical analytics
- Experience working on data visualization and analysis systems such as Domo, Tableau, Power BI, MicroStrategy, Superset, etc.
- Experience working on applications integrating container hypervisors or container orchestrators such as Docker, LXC, Kubernetes, Apache Mesos, etc.
- Experience managing systems with high service level objectives / agreements (SLOs / SLAs)
- Experience working with Scala and its build ecosystems such as SBT

Soft skills:
- Question definition, problem solving, solution creation
- Visualization (PowerPoint, Tableau, Domo)
- Experience participating in open-source projects, data analysis competitions, and hackathons

Languages:
- English (Overall - 4 - Fluent)