Below is the list of questions I faced during a Hadoop developer interview, having mentioned good knowledge of Spark and Scala in my profile. Happy learning Spark!
- Lazy evaluation in Spark
- Why are partitions immutable in Spark?
- Difference between Spark SQL and HiveQL
- Difference between Map and ArrayBuffer in Scala
- Performance tuning of Spark: on what basis do we decide? We need to consider all the factors: cluster size, input data, available memory, and cores.
- Which SQL standard does Spark SQL support: SQL-91 or SQL-92?
- Spark DAG generation
- How does parallel execution happen in Spark?
- Why do we need to go for Spark… what is the use of it?
- How to iterate over objects in a collection in Spark (I didn't understand this question; I answered map)
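As a refresher on the lazy evaluation question: Spark records transformations such as map without running them, and only an action such as collect() triggers execution. The same pattern can be sketched in plain Scala with a lazy view — this is an analogy only, not Spark itself:

```scala
// Plain-Scala analogy for Spark's lazy evaluation: the view stands in for an RDD,
// map is the "transformation", and toList plays the role of an action like collect().
var calls = 0
val doubled = (1 to 5).view.map { x => calls += 1; x * 2 } // recorded, not executed
println(calls)              // still 0: no element has been processed yet
val result = doubled.toList // forcing the view runs the whole pipeline
println(calls)              // 5
println(result)             // List(2, 4, 6, 8, 10)
```

The same structure explains why chaining many transformations in Spark is cheap: nothing computes until an action forces the DAG to run.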
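For the Map vs ArrayBuffer question, the core difference is the access pattern: ArrayBuffer is a mutable, growable indexed sequence, while Map stores key-value pairs and is looked up by key. A minimal sketch:

```scala
import scala.collection.mutable.ArrayBuffer

// ArrayBuffer: a resizable array, accessed by integer index, with cheap appends
val buf = ArrayBuffer(10, 20)
buf += 30                          // amortized O(1) append
println(buf(2))                    // 30

// Map: key-value lookup; the default scala.collection.immutable.Map is immutable
val ages = Map("alice" -> 30, "bob" -> 25)
println(ages("alice"))             // 30
println(ages.get("carol"))         // None — get returns an Option for safe lookup
```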
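On the performance-tuning question, those factors usually surface as spark-submit settings. The numbers below are placeholders, not recommendations; in practice they are chosen from the cluster size, input data volume, available memory, and cores mentioned above:

```shell
# Illustrative only — executor count, cores, and memory here are assumptions to tune per cluster
spark-submit \
  --master yarn \
  --num-executors 10 \
  --executor-cores 4 \
  --executor-memory 8g \
  --conf spark.sql.shuffle.partitions=200 \
  myapp.jar
```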
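On the last question (which I answered with map): in both Scala collections and Spark RDDs, map transforms each element and returns a new collection, while foreach walks the elements purely for side effects. A plain-Scala sketch:

```scala
val nums = List(1, 2, 3)

// map returns a new collection of transformed elements
val squares = nums.map(n => n * n)
println(squares)          // List(1, 4, 9)

// foreach returns Unit; use it only for side effects such as printing
nums.foreach(n => println(n))
```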