Develop and maintain scalable, reliable data pipelines that ingest data from a variety of sources, enforce correct data formats and data quality standards, and deliver data to downstream users quickly
Develop and maintain a highly scalable, extensible Big Data platform that enables the collection, storage, modeling, and analysis of massive data sets from numerous channels. Define and maintain data pipelines, data structures, and data formats to support business solutions
Develop and enable big data and batch/real-time analytical solutions that leverage emerging technologies
Evaluate new technologies and products, and conduct research to identify opportunities that affect business strategy, requirements, and performance and can accelerate access to data
Ensure proper configuration management and change controls are implemented during code migration
Candidate requirements
Degree in Computer Science or a related field
At least 3 years of experience in Data Engineering roles
Knowledgeable in programming languages and tools: SQL, Python, Scala (Spark), Databricks
Experience with multinational projects and cross-functional engineering teams
Able to read and write English documents
Strong thinking, observation, and analytical skills
Ability to present ideas to non-technical audiences
Flexibility and adaptability in a fast-growing environment
Nice to have:
Good English communication skills
Management experience
Knowledge of Apache NiFi and ReportServer
Knowledge of banking, finance, accounting, business, e-commerce, insurance, or securities is preferred
Benefits
Attractive salary, commensurate with ability
Performance reviews twice a year (reviews may also be held ahead of schedule when work is completed successfully)