Big Data and Hadoop Companies in Bangalore by trainedat.in



Here are some questions and answers I prepared for myself:

In a 15-node cluster, how many DataNodes are there?
How much data do you process on a 15-node cluster every day?
    I process a very large amount of data every day, but that is a separate matter; a typical organisation processes somewhere between a few GB and 1 TB per day.
1) What kind of issues do you face while using the cluster?
    On a day-to-day basis you run into configuration issues with specific jobs, user account issues, and so on.
2) What hard-disk and RAM sizes would you recommend?
    Around 2 TB HDD and 130 GB RAM in a normal cluster.
3) Are you using Hadoop 1 or Hadoop 2?
    Nowadays most organisations use Hadoop 2.
4) Are you using any distribution?
    Many big organisations use a distribution (Cloudera or Hortonworks, depending on the organisation's needs).
5) Have you used Oozie and ZooKeeper in your cluster?
    Everyone uses Oozie and ZooKeeper for day-to-day jobs.
6) If you have used them, what kind of jobs? Can you explain?
    To schedule MapReduce, Sqoop, or Falcon jobs; see the sketch below.
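A minimal sketch of driving such a job through the Oozie Java client; the Oozie URL, HDFS paths, and property values below are illustrative assumptions, not values from a real cluster.

import java.util.Properties;
import org.apache.oozie.client.OozieClient;
import org.apache.oozie.client.WorkflowJob;

public class SubmitWorkflow {
    public static void main(String[] args) throws Exception {
        // Hypothetical Oozie server URL -- adjust for your cluster.
        OozieClient oozie = new OozieClient("http://oozie-host:11000/oozie");

        Properties conf = oozie.createConfiguration();
        // Hypothetical workflow location in HDFS (e.g. a Sqoop import followed by a MapReduce step).
        conf.setProperty(OozieClient.APP_PATH, "hdfs://namenode:8020/user/hadoop/workflows/etl");
        conf.setProperty("nameNode", "hdfs://namenode:8020");
        conf.setProperty("jobTracker", "resourcemanager:8032");

        String jobId = oozie.run(conf);    // submit and start the workflow
        System.out.println("Submitted workflow " + jobId);

        // Poll until the workflow finishes.
        while (oozie.getJobInfo(jobId).getStatus() == WorkflowJob.Status.RUNNING) {
            Thread.sleep(10_000);
        }
        System.out.println("Final status: " + oozie.getJobInfo(jobId).getStatus());
    }
}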
7) Troubleshooting issues?
    This is a generic question, and troubleshooting totally depends on the issue at hand.
8) Cluster maintenance and backup?
    It is mandatory to keep backups and maintain your cluster; one common approach is sketched below.
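A common backup approach is copying critical HDFS directories to a second cluster with DistCp. A minimal sketch that simply drives the standard hadoop distcp command, with hypothetical source and target cluster addresses:

import java.io.IOException;

public class BackupWithDistCp {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Hypothetical production and backup cluster paths -- replace with your own.
        String source = "hdfs://prod-namenode:8020/data/warehouse";
        String target = "hdfs://backup-namenode:8020/backups/warehouse";

        // -update copies only changed files; -p preserves file attributes such as permissions.
        Process p = new ProcessBuilder("hadoop", "distcp", "-update", "-p", source, target)
                .inheritIO()
                .start();
        System.out.println("distcp exited with code " + p.waitFor());
    }
}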
9) Have you used any monitoring tools, such as Ganglia?
    Earlier people used Ganglia, but nowadays they use Cloudera Manager or Ambari Metrics.
10) What are the roles and responsibilities in your project?
    I have this answer on my website: http://www.hadoopadmin.co.in/faq/
11) Performance tuning?
    Depending on the jobs, we sometimes have to tune cluster parameters and sometimes the job itself; a few common job-level settings are shown below.
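As an illustration, a few commonly tuned job-level parameters can be set through the MapReduce Configuration API; the memory figures below are assumptions, not recommendations.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class TunedJob {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();

        // Container memory and JVM heap for map and reduce tasks (assumed sizes).
        conf.set("mapreduce.map.memory.mb", "2048");
        conf.set("mapreduce.map.java.opts", "-Xmx1638m");
        conf.set("mapreduce.reduce.memory.mb", "4096");
        conf.set("mapreduce.reduce.java.opts", "-Xmx3276m");

        // A larger sort buffer means fewer spills to disk during the shuffle.
        conf.set("mapreduce.task.io.sort.mb", "256");

        // Compress intermediate map output to cut shuffle traffic.
        conf.setBoolean("mapreduce.map.output.compress", true);

        Job job = Job.getInstance(conf, "tuned-example-job");
        // ... set mapper, reducer, input/output paths and submit as usual ...
    }
}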
12) Planning of a Hadoop cluster?
    This is also covered on my website: http://www.hadoopadmin.co.in/planning-of-hadoop-cluster/
13) What is Ranger?
    Ranger provides centralized security administration for the Hadoop ecosystem: fine-grained authorization policies and auditing across components such as HDFS, Hive, and HBase.
14) Have you used UDFs for Pig or Hive?
    Everyone ends up writing UDFs, because without them we cannot do custom cleaning of our data; a minimal example follows below.
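A minimal Hive UDF sketch for that kind of cleanup; the class name CleanString is hypothetical. It would be registered in Hive with ADD JAR and CREATE TEMPORARY FUNCTION before being used like a built-in function.

import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

// Trims and lower-cases a string column -- the kind of cleaning step mentioned above.
public class CleanString extends UDF {
    public Text evaluate(Text input) {
        if (input == null) {
            return null;
        }
        return new Text(input.toString().trim().toLowerCase());
    }
}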
15) Have you written any automation scripts for the cluster?
    Yes, scripts are needed for many job-scheduling and space-cleaning tasks; a small HDFS clean-up sketch follows.
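A sketch of a space-cleaning task using the HDFS FileSystem API; the /tmp/staging directory and the 7-day retention period are assumptions.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsSpaceCleaner {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());

        Path dir = new Path("/tmp/staging");    // assumed scratch directory
        long cutoff = System.currentTimeMillis() - 7L * 24 * 60 * 60 * 1000;

        // Delete anything that has not been modified in the last 7 days.
        for (FileStatus status : fs.listStatus(dir)) {
            if (status.getModificationTime() < cutoff) {
                fs.delete(status.getPath(), true);    // true = recursive
                System.out.println("Deleted " + status.getPath());
            }
        }
        fs.close();
    }
}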
16) Kerberos installation and configuration?
    Yes, to secure your data you need Kerberos on your cluster; a client login sketch is shown below.
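Once the cluster is kerberized, clients authenticate before touching HDFS; a minimal sketch, assuming a hypothetical principal and keytab path.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberizedClient {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // These are the properties a kerberized core-site.xml sets cluster-wide.
        conf.set("hadoop.security.authentication", "kerberos");
        conf.set("hadoop.security.authorization", "true");

        UserGroupInformation.setConfiguration(conf);
        // Hypothetical principal and keytab -- replace with your own.
        UserGroupInformation.loginUserFromKeytab(
                "hdfs-user@EXAMPLE.COM", "/etc/security/keytabs/hdfs-user.keytab");

        // Any HDFS access after the login is authenticated via the Kerberos ticket.
        FileSystem fs = FileSystem.get(conf);
        System.out.println("Home directory: " + fs.getHomeDirectory());
        fs.close();
    }
}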
17) How well does Hadoop scale?
    It scales well, but you should be aware of which tool to use and where.
18) Upgrades and changing the cluster size, i.e. commissioning and decommissioning?
    After a certain period of time an upgrade is required, and commissioning/decommissioning is also not an everyday or even monthly task; the decommissioning steps are sketched below.
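Decommissioning a DataNode usually means adding its hostname to the excludes file referenced by dfs.hosts.exclude and telling the NameNode to re-read it; a sketch, with the excludes file location assumed.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class DecommissionNode {
    public static void main(String[] args) throws IOException, InterruptedException {
        String host = args[0];    // hostname of the DataNode to decommission

        // Append the host to the excludes file named by dfs.hosts.exclude (path assumed).
        Files.write(Paths.get("/etc/hadoop/conf/dfs.exclude"),
                (host + System.lineSeparator()).getBytes(),
                StandardOpenOption.CREATE, StandardOpenOption.APPEND);

        // Ask the NameNode to re-read the include/exclude lists.
        new ProcessBuilder("hdfs", "dfsadmin", "-refreshNodes")
                .inheritIO().start().waitFor();

        // The node stays in "Decommission in progress" until its blocks are re-replicated.
    }
}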
19) Have you used metrics?
    Yes, metrics are required to monitor resources.
20) How do you decide the cluster size based on data size? Which formula do you use?
    This repeats question 12; the answer is here: http://www.hadoopadmin.co.in/planning-of-hadoop-cluster/ and a rough calculation is shown below.
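A back-of-the-envelope sizing calculation; every figure here is an illustrative assumption, not a recommendation.

public class ClusterSizing {
    public static void main(String[] args) {
        double dailyIngestTb   = 0.5;   // raw data landing per day, in TB (assumed)
        int    retentionDays   = 365;   // how long the data is kept (assumed)
        int    replication     = 3;     // HDFS replication factor
        double overhead        = 0.25;  // headroom for temp/intermediate data (assumed 25%)
        double usablePerNodeTb = 8.0;   // disk per DataNode usable for HDFS, in TB (assumed)

        double rawStorageTb = dailyIngestTb * retentionDays * replication * (1 + overhead);
        int dataNodes = (int) Math.ceil(rawStorageTb / usablePerNodeTb);

        // 0.5 * 365 * 3 * 1.25 = ~684 TB of raw storage -> about 86 DataNodes of 8 TB each.
        System.out.printf("Storage needed: %.1f TB -> about %d DataNodes%n", rawStorageTb, dataNodes);
    }
}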
21) Can you explain the complete Hadoop ecosystem and how it works?