SVP, Big Data (Development Manager)

Contract Type: Permanent
Location: Singapore
Salary: Negotiable
Contact Name: Shashi Natrajan
Contact Email: shashi.natrajan@ellwoodconsulting.com.sg
Job Published: January 18, 2017 17:28

Job Description

About our client
A Leading Bank


About the role

  • 12+ years of relevant technology architecture consulting or industry experience, including information delivery, analytics and business intelligence built on data from a hybrid of the Hadoop Distributed File System (HDFS), non-relational stores (NoSQL databases such as MongoDB and Cassandra) and relational data warehouses (Oracle, MySQL, MariaDB).
  • Design and develop patterns & frameworks for the Data Lake including data architecture, data ingestion, data preparation, data marts, and visualization analytics.
  • At least 5 years of hands-on experience with Big Data technologies such as Hadoop, Mahout, Pig, Hive, HBase, Sqoop, ZooKeeper, Ambari, MapReduce and R.
  • Expert-level skills in distributed computing.
  • 5+ years of experience delivering Java-based solutions.
  • 3+ years of experience delivering solutions on the Hadoop stack.
  • Hands-on experience with Kafka/Flume.
  • Strong knowledge of and experience developing in Spark.
  • Experience with and deep knowledge of multiprocessing architectures.
  • Experience with code optimization and performance tuning.
  • Experience with Agile and DevOps methodologies.
  • Bachelor’s Degree in Computer Science or Electrical Engineering


Your responsibilities

  • Big Data Managers will be responsible for designing and implementing strategies, architectures, and the ingestion, storage, consumption and delivery processes for complex, large-volume, multivariate, batch and real-time data sets used for modelling and data mining.
  • Design and implement data ingestion techniques for real-time and batch processes, covering different data types (transaction, video, voice, weblog, sensor, machine and social media data), into Hadoop ecosystems and HDFS clusters.
  • Perform data studies and data discovery routines for new and existing data sources.
  • Visualize and report data findings creatively in a variety of visual formats that provide appropriate insights to the organization.
  • Responsibilities also include managing development and data modelling teams through the identification of business requirements, functional design, process design, prototyping, development, testing, training, and operational support.


You will have

  • Experience working as a Data Scientist
  • Experience designing and implementing reporting and visualization for unstructured and structured data sets
  • Experience designing and developing data cleansing routines utilizing typical data quality functions involving standardization, transformation, rationalization, linking and matching
  • Knowledge of data, master data and metadata related standards, processes and technology
  • Experience working with commercial Hadoop distributions (Hortonworks, Cloudera, Pivotal HD, MapR)
  • Experience with Hadoop Cluster Administration
  • Experience with Data Integration on traditional and Hadoop environments