What are some real-time use cases of companion objects and traits in Scala?
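A minimal Scala sketch of two common uses, with made-up names: a trait as a reusable mixin, and a companion object acting as a factory that can call the class's private constructor.

```scala
// Trait: reusable behaviour mixed into otherwise unrelated classes.
trait Auditable {
  def audit(msg: String): Unit = println(s"[AUDIT] $msg")
}

// Companion object: same name as the class, can access its private members,
// and commonly holds factory methods and constants.
class Order private (val id: Int, val amount: Double) extends Auditable

object Order {
  val MaxAmount = 100000.0
  def apply(id: Int, amount: Double): Order = {
    require(amount <= MaxAmount, "amount too large")
    val o = new Order(id, amount)   // legal: a companion can see the private constructor
    o.audit(s"created order $id")
    o
  }
}

object CompanionDemo extends App {
  val order = Order(1, 250.0)       // calls Order.apply, no 'new' needed
}
```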
How do you process the elements of a Scala list in parallel?
Not exactly the answer, but helpful: https://alvinalexander.com/scala/how-to-use-parallel-collections-in-scala-performance
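A minimal Scala sketch using parallel collections, assuming Scala 2.12 (on 2.13+ the same code needs the separate scala-parallel-collections module and its CollectionConverters import):

```scala
object ParListExample extends App {
  val nums = (1 to 1000000).toList

  // .par converts the List into a parallel collection whose map/sum
  // run across multiple threads instead of sequentially.
  val sumOfSquares = nums.par.map(n => n.toLong * n).sum

  println(s"Sum of squares: $sumOfSquares")
}
```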
How do you increase Spark executor memory and the memory available to Hive?
https://stackoverflow.com/questions/26562033/how-to-set-apache-spark-executor-memory (a sketch follows below)

23. Different types of NoSQL databases. What is the difference between HBase and Cassandra?
https://www.3pillarglobal.com/insights/exploring-the-different-types-of-nosql-databases
https://data-flair.training/blogs/hbase-vs-cassandra/
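For the executor-memory question above, a minimal sketch of the usual Spark settings in Scala; the property keys are standard Spark configuration, while the values and app name are placeholders:

```scala
import org.apache.spark.sql.SparkSession

// Sketch: request larger executors when the session is created.
// Values are illustrative; size them to your cluster.
val spark = SparkSession.builder()
  .appName("ExecutorMemoryExample")
  .config("spark.executor.memory", "8g")          // heap per executor
  .config("spark.executor.memoryOverhead", "1g")  // off-heap/overhead per executor
  .config("spark.executor.cores", "4")
  .getOrCreate()

// The same settings can be passed on the command line, e.g.
//   spark-submit --executor-memory 8g --conf spark.executor.memoryOverhead=1g ...
// Hive-side memory is governed separately, e.g. mapreduce.map.memory.mb or
// hive.tez.container.size in hive-site.xml.
```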
What are the different Hadoop file formats, and when should each one be used?
Useful blog: https://community.hitachivantara.com/community/products-and-solutions/pentaho/blog/2017/11/07/hadoop-file-formats-its-not-just-csv-anymore
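For the file-format question above, a small Scala/Spark sketch that writes the same DataFrame in several common formats; the paths and data are placeholders:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("FileFormats").master("local[*]").getOrCreate()
import spark.implicits._

val df = Seq((1, "alice"), (2, "bob")).toDF("id", "name")

// Columnar formats: best for analytical scans, good compression, predicate pushdown.
df.write.mode("overwrite").parquet("/tmp/demo/parquet")
df.write.mode("overwrite").orc("/tmp/demo/orc")

// Row/text formats: easy interchange, but larger and slower to scan.
df.write.mode("overwrite").json("/tmp/demo/json")
df.write.mode("overwrite").option("header", "true").csv("/tmp/demo/csv")
```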
What is the difference between a Hive map join and a Hive bucket map join?
https://data-flair.training/blogs/map-join-in-hive/
https://data-flair.training/blogs/bucket-map-join/
(a Spark analogue is sketched below)

26. Performance optimization techniques in Sqoop, Hive and Spark.
Hive - https://hortonworks.com/blog/5-ways-make-hive-queries-run-faster/

27. End-to-end project flow and the usage of all the Hadoop ecosystem components.
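The map join question above has a close analogue in Spark: a broadcast join, where the small table is shipped to every executor. A minimal Scala sketch, assuming an existing SparkSession `spark`; the data is a toy stand-in for a large fact table and a small dimension table:

```scala
import org.apache.spark.sql.functions.broadcast
import spark.implicits._

val orders  = Seq((1, "IN", 100.0), (2, "US", 250.0)).toDF("order_id", "region_id", "amount")
val regions = Seq(("IN", "India"), ("US", "United States")).toDF("region_id", "region_name")

// broadcast() ships the small side to every executor so the join happens map-side,
// with no shuffle of the large table -- the same idea as Hive's map join.
val joined = orders.join(broadcast(regions), Seq("region_id"))
joined.show()
```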
Why does Apache Spark exist, and how does PySpark fit into the picture?
What file size are you working with in your development and production environments?
What are the use cases of accumulators and broadcast variables?
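A minimal Scala sketch of the two classic use cases, assuming an existing SparkSession `spark`: an accumulator as a cluster-wide counter for malformed records, and a broadcast variable as a read-only lookup table shared with every executor (data and names are made up):

```scala
val sc = spark.sparkContext

val badRecords    = sc.longAccumulator("badRecords")   // counts malformed rows across executors
val countryLookup = sc.broadcast(Map("IN" -> "India", "US" -> "United States"))

val lines = sc.parallelize(Seq("IN,100", "US,200", "garbage"))

val parsed = lines.flatMap { line =>
  line.split(",") match {
    case Array(code, amt) =>
      Some(countryLookup.value.getOrElse(code, "Unknown") -> amt.toInt)
    case _ =>
      badRecords.add(1)   // side-effect counter, safe to update from executors
      None
  }
}

parsed.collect().foreach(println)
println(s"Malformed rows: ${badRecords.value}")
```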
Explain the difference between internal (managed) and external tables in Hive.
When should each be used?
1. Use an internal (managed) table if its data will not be used by other big data ecosystem components.
2. Use an external table if its data will be used by other big data ecosystem components, since dropping the table then has no impact on the underlying data.
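A minimal sketch of the two table types in Spark SQL (Scala) with Hive support enabled; the table names and HDFS path are placeholders:

```scala
// Managed (internal) table: Hive/Spark owns both metadata and data;
// DROP TABLE removes the underlying files as well.
spark.sql("""
  CREATE TABLE orders_managed (id INT, amount DOUBLE)
  STORED AS PARQUET
""")

// External table: only the metadata is managed; DROP TABLE leaves the files
// at the LOCATION untouched, so other tools can keep reading them.
spark.sql("""
  CREATE EXTERNAL TABLE orders_external (id INT, amount DOUBLE)
  STORED AS PARQUET
  LOCATION '/data/shared/orders'
""")
```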
How did you run Hive load scripts in production?
Ans: All the Hive commands were kept in .sql files (for example, load ordersdata.sql), and these files were invoked from a Unix shell script with the command: hive -f ordersdata.sql. The Unix scripts also ran a few other HDFS commands, for example to load data into HDFS, make a backup on the local file system, and send an email once the load was done. These Unix scripts were called through an enterprise scheduler (Control-M, Autosys or Zookeeper).
Why doesn't Hive store its metadata in HDFS?
Ans: Storing metadata in HDFS would result in high latency, because HDFS reads and writes are sequential and optimized for large files. Keeping the metadata in the metastore (a MySQL or similar relational database) gives the low-latency random access that metadata operations need.
Which file format works best with Hive tables, and why?
How do you append files in various distributed file systems (DFS)?
How would you solve this problem? List the steps you would take to do so.
Why does a MapReduce job not run when you execute a simple SELECT from a table in Hive?
How do you import the first 10 records from an RDBMS table into HDFS using Sqoop? How do you import all the records except the first 20 rows, and also the last 50 records, using Sqoop import?
What is the difference between Kafka and Flume?
40. How do you change the replication factor, and how do you change the number of mappers and reducers?
How are the number of partitions and stages decided in Spark?
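A small Scala sketch of the idea, assuming an existing SparkSession `spark`: the initial partition count comes from the data source or the requested parallelism, and Spark cuts a new stage at each shuffle boundary.

```scala
val rdd = spark.sparkContext.parallelize(1 to 1000, numSlices = 8)
println(rdd.getNumPartitions)        // 8: set explicitly here

// map() is a narrow transformation, so it stays in the same stage.
val pairs = rdd.map(n => (n % 10, 1))

// reduceByKey() needs a shuffle, so the job is split into a second stage here.
val counts = pairs.reduceByKey(_ + _)
println(counts.toDebugString)        // shows the shuffle / stage boundary
counts.collect()
```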
What is the default number of mappers and reducers in a MapReduce job?
How do you change the block size while importing data into HDFS?
44. What settings need to be set when doing dynamic partitioning and bucketing?
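For question 44, a sketch of the usual Hive session properties, set here through Spark SQL in Scala (the same SET statements work in the Hive CLI or Beeline); the values are illustrative:

```scala
// Enable dynamic partitioning and bucketing for the current session.
spark.sql("SET hive.exec.dynamic.partition = true")
spark.sql("SET hive.exec.dynamic.partition.mode = nonstrict")
spark.sql("SET hive.exec.max.dynamic.partitions = 1000")
spark.sql("SET hive.enforce.bucketing = true")  // needed on Hive 1.x; Hive 2.x always enforces bucketing
```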
How do you run a MapReduce job and a Spark job?
What are Datasets in Spark, and how do you create and use them?
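A minimal Scala sketch, assuming Spark 2.x or later; the Employee case class and data are made up:

```scala
import org.apache.spark.sql.{Dataset, SparkSession}

// A Dataset is a typed, distributed collection of objects; a DataFrame is just Dataset[Row].
case class Employee(name: String, salary: Double)

object DatasetExample extends App {
  val spark = SparkSession.builder().appName("DatasetExample").master("local[*]").getOrCreate()
  import spark.implicits._            // brings in the encoders needed for case classes

  val ds: Dataset[Employee] =
    Seq(Employee("asha", 50000), Employee("ravi", 65000)).toDS()

  // Transformations are checked against the Employee type at compile time.
  ds.filter(_.salary > 60000).map(_.name).show()

  spark.stop()
}
```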
What are the differences between Hive and HBase, Hive and RDBMS, and NoSQL and RDBMS?
What are the differences between Hadoop and an RDBMS?
What are the differences between Hadoop and Spark?
50. What are the differences between Scala and Java?
What are the advantages and disadvantages of functional programming?
What are the advantages of Hadoop over traditional distributed file systems?
53. Core concepts of MapReduce: internal architecture and job flow.
54. Architecture of Hadoop, YARN and Spark.
What are the advantages of using YARN as the cluster manager over Mesos and the Spark standalone cluster manager?
Company-Specific Questions

Company: Fidelity    Date: 07-Aug-2018
What security authentication are you using, and how do you manage it?
What about Sentry security authentication?
3. How do you schedule jobs in the Fair Scheduler?
4. How do you prioritize jobs?
How are you doing access control for HDFS?
6. Disaster recovery activities.
7. What issues have you faced so far?
8. Do you know about Puppet?
9. Hadoop development activities.

Company: Accenture    Date: 06-July-2018
What are your daily activities, and what are your roles and responsibilities in your current project? What services are implemented in your current project?
What have you done for performance tuning?
What is the block size in your project?
4) Explain your current project process
Have you used Storm, Kafka or Solr services in your project?
6) Have you used the Puppet tool?
Have you used security in your project? Why do you use security in your cluster?
Explain how Kerberos authentication happens.
What is your cluster size, and what are the services you are using? 10) Do you have good hands-on experience in Linux?
Have you used Flume or Storm in your project?
Company: ZNA    Date: 04-July-2018
1) Roles and responsibilities in your current project.
2) What do you monitor in the cluster, i.e., what do you monitor to ensure that the cluster is in a healthy state?
What is the JVM?
What is rack awareness?
What is high availability? How do you implement high availability on a pre-existing cluster with a single node, and what are the requirements to implement HA?
What is Hive? How do you install and configure it from the CLI? 11) What are disk space and disk quota?
12) How do you add datanodes to your cluster without using Cloudera Manager?
13) How do you add disk space to a datanode that is already part of the cluster, and how do you format the disk before adding it to the cluster?
How good are you at shell scripting? Have you used shell scripting to automate any of your activities?
What activities are automated using shell scripting in your current project? 15) What are the benefits of YARN compared to Hadoop 1?
What is the difference between MR1 and MR2?
18) What were the biggest challenges you went through in your project?
19) What activities are performed on Cloudera Manager?
20) How will you know about the threshold? Do you check it manually every time? Do you know about Puppet, etc.?
21) How many clusters and nodes are present in your project?
22) You got a call when you were out of the office saying there is not enough space, i.e., the HDFS threshold has been reached. What is your approach to resolving this issue?
23) Heartbeat messages: are they processed sequentially or in parallel?
24) What is the volume of data you receive into your cluster every day?
What is HDFS?
Are you doing any node upgrades? How?
3. How do you copy config files to other nodes?
4. What security system do you follow? What is the difference without Kerberos?
5. What are JournalNodes (JN) and HA?
6. What is the usage of the Secondary NameNode (SNN)?
What is the usage of automatic failover? How do you do it, and what are the other methods?
8. How do you load data from Teradata into Hadoop?
Are you using Impala?
Could you describe your day-to-day activities?
What is the process to integrate a metastore with Hive? Could you explain the process?
Do you have any idea about dfs.name.dir?
4) What will happen when a datanode is down?
5) How will you test whether a datanode is working or not?
6) Do you have any idea about zombie processes?
7) How will the namenode know a datanode is down? Nagios alerts, the dfsadmin -report command, Cloudera Manager.
8) Heartbeat: is it sequential processing or parallel processing?
9) What is the volume of data you receive into the cluster? 40 to 50 GB.
10) How do you receive data into your cluster?
11) What is your cluster size?
12) What is the port number of the namenode?
13) What is the port number of the job tracker?
14) How do you install Hive, Pig and HBase?
What is the JVM?
16) How do you do rebalancing?

Company: Verizon    Date: 02-Oct-2017
1) How do you set up passwordless SSH in Hadoop?
2) Upgrades (have you done one at any time)?
3) Cloudera Manager port number.
4) What is your cluster size?
5) Versions.
6) MapReduce version.
7) Daily activities.
8) What operations do you normally use in Cloudera Manager?
9) Is the internet connected to your nodes?
10) Do you have different Cloudera Managers for dev and production?
11) What are the installation steps?

Company: HCL    Date: 22-Sep-2017
1) Daily activities.
2) Versions.
3) What is decommissioning?
4) What is the procedure to decommission a datanode?
5) Difference between MR1 and MR2.
6) Difference between Hadoop 1 and Hadoop 2.
7) Difference between RDBMS and NoSQL.
8) What is the use of Nagios?

Company: Collabera    Date: 14-Mar-2018
1) Provide your roles and responsibilities.
2) What do you do for cluster management?
3) At midnight, you got a call saying there is not enough space, i.e., the HDFS threshold has been reached. What is your approach to resolving this issue?
4) How many clusters and nodes are present in your project?
5) How will you know about the threshold? Do you check it manually every time? Do you know about Puppet, etc.?
6) Code was tested successfully in Dev and Test. When deployed to Production it is failing. As an admin, how do you track the issue?
What is decommissioning?
What is the file size you've used?
How long does it take to run your script on the production cluster?
What is the file size for the production environment?
Are you planning anything to improve the performance?
What size of file do you use for development?
What did you do to increase performance in Hive and Pig?
What is your cluster size?
What are the challenges you have faced in your project? Give two examples.
How do you debug a production issue (logs, scripts, counters, JVM)?
How do you select the ecosystem tools for your project?
How many nodes are you using currently?
What is the job scheduler you use in the production cluster?
More questions:
1) What are your day-to-day activities?
2) How do you add a datanode to the cluster?
Do you have any idea about dfs.name.dir?
YouTube videos of interview questions with explanations: https://www.youtube.com/c/SauravAgarwal