• Delivered trainings and provided support for Hadoop development.
• Built a proof of concept (POC) for designing and implementing a Big Data analytics solution.
• Delivered impromptu big data projects for clients on Hadoop.
• Delivered a high-availability Hadoop cluster to the client, covering installation and deployment of Hadoop and other ecosystem components across different locations.
• Provided trainings on the Hadoop ecosystem, including installation and configuration of a Hadoop cluster:
• HDFS: core components (NameNode, Standby NameNode, Secondary NameNode, DataNode, other daemons); HDFS storage, metadata, file and block sizes, replication factor, CLI (see the HDFS CLI sketch below).
• YARN: components (ResourceManager, NodeManager, Job History Server, Scheduler, ApplicationMaster); MapReduce.
• Data ingestion tools: SQOOP: importing and exporting data (see the Sqoop sketch below); FLUME: components (source, sink, channel, agent, event), streaming real-time log data; KAFKA: installing Kafka, producer (writing messages to Kafka), consumer (reading data from Kafka), administering Kafka (broker, topic, ZooKeeper, partitioning, offset), Kafka stream processing (see the Kafka sketch below).
• SPARK: basic understanding of core components: RDDs, spark-submit (see the Spark sketch below).
• Data analytics: PIG: installation and executing queries; HIVE: components (Hive Server, Hive Metastore), table types, partitioning, Hive queries (see the Hive sketch below); IMPALA: components (Impala catalog, Impala server, Impala metastore, Impala shell).
• Web user interface: HUE: installation and working with the Hadoop User Environment (HUE).
• LINUX: basic understanding of Linux commands, setting up Linux users.
• Oozie workflows; NoSQL: HBase (see the HBase sketch below).
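HDFS CLI sketch. A minimal Python sketch, via subprocess, of the kind of HDFS CLI operations covered in the training (creating directories, uploading files, setting the replication factor, listing contents). The paths and the replication factor of 3 are illustrative assumptions, not values from any specific cluster.

```python
# Minimal sketch of driving the `hdfs dfs` CLI from Python via subprocess.
# Paths and the replication factor are illustrative assumptions.
import subprocess

def hdfs(*args):
    """Run an `hdfs dfs` subcommand and return its stdout."""
    result = subprocess.run(["hdfs", "dfs", *args],
                            capture_output=True, text=True, check=True)
    return result.stdout

hdfs("-mkdir", "-p", "/data/raw")                    # create a directory in HDFS
hdfs("-put", "events.log", "/data/raw/")             # upload a local file
hdfs("-setrep", "-w", "3", "/data/raw/events.log")   # set the replication factor
print(hdfs("-ls", "/data/raw"))                      # list files with replication info
```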
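Sqoop sketch. A minimal Python sketch of invoking Sqoop import and export from a script. The JDBC URL, credentials, table names, and HDFS paths are hypothetical placeholders.

```python
# Minimal sketch of Sqoop import/export invoked from Python via subprocess.
# JDBC URL, credentials, table names, and HDFS paths are assumptions.
import subprocess

jdbc_url = "jdbc:mysql://dbhost:3306/sales"

# Import an RDBMS table into HDFS with 4 parallel mappers.
subprocess.run([
    "sqoop", "import",
    "--connect", jdbc_url,
    "--username", "etl", "--password", "secret",
    "--table", "orders",
    "--target-dir", "/data/sales/orders",
    "--num-mappers", "4",
], check=True)

# Export processed HDFS data back to an RDBMS table.
subprocess.run([
    "sqoop", "export",
    "--connect", jdbc_url,
    "--username", "etl", "--password", "secret",
    "--table", "orders_summary",
    "--export-dir", "/data/sales/orders_summary",
], check=True)
```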
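Kafka sketch. A minimal sketch of writing messages to and reading them from a Kafka topic with the kafka-python client, printing partition and offset to show how Kafka tracks consumer position. The broker address, topic name, and sample messages are assumptions.

```python
# Minimal sketch of a Kafka producer and consumer using kafka-python.
# Broker address and topic name are assumptions.
from kafka import KafkaProducer, KafkaConsumer

BROKER = "localhost:9092"
TOPIC = "web-logs"

# Producer: write a few messages to the topic.
producer = KafkaProducer(bootstrap_servers=BROKER)
for line in ["GET /index 200", "POST /login 302"]:
    producer.send(TOPIC, line.encode("utf-8"))
producer.flush()

# Consumer: read from the beginning of the topic, showing partition and offset.
consumer = KafkaConsumer(TOPIC,
                         bootstrap_servers=BROKER,
                         auto_offset_reset="earliest",
                         consumer_timeout_ms=5000)
for msg in consumer:
    print(msg.partition, msg.offset, msg.value.decode("utf-8"))
```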
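Spark sketch. A minimal PySpark RDD word count, intended to be launched with spark-submit (for example, `spark-submit wordcount.py`). The HDFS input path is an assumption.

```python
# Minimal sketch of an RDD word count for spark-submit.
# The input path is an assumption.
from pyspark import SparkContext

sc = SparkContext(appName="wordcount-sketch")

counts = (sc.textFile("hdfs:///data/raw/events.log")   # RDD of lines
            .flatMap(lambda line: line.split())        # RDD of words
            .map(lambda word: (word, 1))               # (word, 1) pairs
            .reduceByKey(lambda a, b: a + b))          # sum counts per word

for word, count in counts.take(10):
    print(word, count)

sc.stop()
```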
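Hive sketch. A minimal sketch of creating and querying a partitioned Hive table through HiveServer2 with PyHive. The host, username, database objects, and partition value are assumptions; the point of the query is partition pruning, where only the selected `dt` partition is scanned.

```python
# Minimal sketch of a partitioned Hive table via HiveServer2 and PyHive.
# Host, username, table, and partition values are assumptions.
from pyhive import hive

conn = hive.Connection(host="hiveserver", port=10000, username="etl")
cur = conn.cursor()

# External table partitioned by ingestion date.
cur.execute("""
    CREATE EXTERNAL TABLE IF NOT EXISTS web_logs (
        ip STRING, url STRING, status INT
    )
    PARTITIONED BY (dt STRING)
    STORED AS PARQUET
    LOCATION '/data/warehouse/web_logs'
""")

# Partition pruning: only the dt='2024-01-01' partition is scanned.
cur.execute(
    "SELECT status, COUNT(*) FROM web_logs WHERE dt = '2024-01-01' GROUP BY status"
)
for row in cur.fetchall():
    print(row)
```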
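HBase sketch. A minimal sketch of basic HBase writes, point reads, and scans through the Thrift gateway using the happybase client. The host, table, row keys, and column-family names are assumptions.

```python
# Minimal sketch of HBase put/get/scan via the Thrift gateway with happybase.
# Host, table, and column-family names are assumptions.
import happybase

conn = happybase.Connection("hbase-thrift-host")   # Thrift server, default port 9090
table = conn.table("user_profiles")

# Write one row keyed by user id, columns grouped under the 'info' family.
table.put(b"user:1001", {b"info:name": b"asha", b"info:city": b"pune"})

# Point read by row key.
row = table.row(b"user:1001")
print(row[b"info:name"].decode())

# Scan a key range by prefix.
for key, data in table.scan(row_prefix=b"user:"):
    print(key, data)
```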