what is spark session?
null
why is rdd immutable?
null
what is partitioner?
null
what are the benefits of data frames?
null
what is dataset?
null
what are the benefits of data sets?
null
what is shared variable in apache spark?
Shared variables are variables that can be used in parallel operations. Spark supports two types of shared variables: broadcast variables, which can be used to cache a value in memory on all nodes, and accumulators, which are variables that are only added to, such as counters and sums.
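A minimal sketch of both kinds, assuming a live SparkContext named sc (the data and names are illustrative only):

val lookup = sc.broadcast(Map("a" -> 1, "b" -> 2)) // read-only value cached on every node
val errors = sc.longAccumulator("errors")          // counter that tasks can only add to

val data = sc.parallelize(Seq("a", "b", "x"))
data.foreach { key => if (!lookup.value.contains(key)) errors.add(1) }
println(errors.value) // 1 -- only the driver can read the accumulator's value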
how to accumulate metadata in apache spark?
null
what is the difference between dsm and rdd?
null
what is speculative execution in spark and how to enable it?
One more point: speculative execution does not stop the slow-running task; it launches a new copy of the task in parallel.
Tabular form: Spark Property >> Default Value >> Description
spark.speculation >> false >> Enables (true) or disables (false) speculative execution of tasks.
spark.speculation.interval >> 100ms >> The time interval to use before checking for speculative tasks.
spark.speculation.multiplier >> 1.5 >> How many times slower a task must be than the median to be considered for speculation.
spark.speculation.quantile >> 0.75 >> The fraction of tasks which must be complete before speculation is enabled for a particular stage.
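As a sketch, the same properties can be set when building a session; the values below simply restate the defaults from the table, with speculation switched on:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("speculation-demo") // hypothetical app name
  .config("spark.speculation", "true") // enable speculative execution
  .config("spark.speculation.interval", "100ms")
  .config("spark.speculation.multiplier", "1.5")
  .config("spark.speculation.quantile", "0.75")
  .getOrCreate()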
how is fault tolerance achieved in apache spark?
null
explain the combineByKey operation?
null
explain the mapPartitions and mapPartitionsWithIndex operations?
null
explain fold operation in spark?
null
difference between textFile vs wholeTextFiles?
null
what is the cogroup operation?
null
explain pipe operation?
null
explain coalesce operation?
null
explain the repartition operation?
null
explain the top and takeOrdered operations?
null
explain the lookup operation?
null
how to kill a running spark application?
null
how to stop info messages displaying on spark console?
null
where the logs are available in spark on yarn?
null
how to find out the different values in between two spark data frames?
null
what are security options in apache spark?
null
what is scala?
null
what are the types of variable expressions available in scala?
val (aka Values): You can name the result of an expression with the val keyword. Once a value is assigned, it is not re-computed and cannot be re-assigned.
Example:
val x = 1 + 1
x = 3 // This does not compile.
var (aka Variables): Variables are like values, except you can re-assign them. You define a variable with the var keyword.
Example:
var x = 1 + 1
x = 3 // This compiles.
what is the difference between methods and functions in scala?
null
what are case classes in scala?
null
what are traits in scala?
Traits are used to share interfaces and fields between classes. They are similar to Java 8's interfaces. Classes and objects can extend traits, but traits cannot be instantiated and therefore have no parameters. Traits are types containing certain fields and methods, and multiple traits can be combined. A minimal trait is simply the keyword trait and an identifier.
Example:
trait Greeter {
  def greet(name: String): Unit = println("Hello, " + name + "!")
}
what is a singleton object in scala?
An object that has exactly one instance is called a singleton object. Here's an example of a singleton object with a method:
object Logger {
  def info(message: String): Unit = println("INFO: " + message)
}
what are companion objects in scala?
An object with the same name as a class is called a companion object. Conversely, the class is the object's companion class. A companion class or object can access the private members of its companion. Use a companion object for methods and values which are not specific to instances of the companion class.
Example:
import scala.math._

case class Circle(radius: Double) {
  import Circle._
  def area: Double = calculateArea(radius)
}

object Circle {
  private def calculateArea(radius: Double): Double = Pi * pow(radius, 2.0)
}

val circle1 = Circle(5.0)
circle1.area
what are the special datatypes available in scala?
null
what are higher-order functions in scala?
null
what is currying (multiple parameter lists) in scala?
Methods may define multiple parameter lists. When a method is called with fewer parameter lists than it defines, this yields a function taking the missing parameter lists as its arguments. This is formally known as currying.
Example:
def foldLeft[B](z: B)(op: (B, A) => B): B

val numbers = List(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
val res = numbers.foldLeft(0)((m, n) => m + n)
print(res) // 55
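A smaller sketch of the same idea: calling a curried method with only its first parameter list yields a function that takes the remaining list (the method name is illustrative):

def multiply(x: Int)(y: Int): Int = x * y

val double = multiply(2) _ // partially applied: a function Int => Int
println(double(21))        // 42
println(multiply(3)(4))    // 12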
what is pattern matching in scala?
null
what are the basic properties available in spark?
null
what are the configuration properties in spark?
spark.executor.memory: The maximum possible is managed by the YARN cluster and cannot exceed the actual RAM available.
spark.executor.cores: Number of cores assigned per executor, which cannot be higher than the cores available in each worker.
spark.executor.instances: Number of executors to start. This property is acknowledged by the cluster only if spark.dynamicAllocation.enabled is set to false.
spark.memory.fraction: The default is set to 60% of the requested memory per executor.
spark.dynamicAllocation.enabled: Overrides the mechanism that Spark provides to dynamically adjust resources. Disabling it provides more control over the number of executors that can be started, which in turn impacts the amount of storage available for the session. For more information, please see the Dynamic Resource Allocation page on the official Spark website.
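For illustration only, a minimal sketch of setting these properties through a SparkConf; the sizes are placeholder values that must be tuned to the actual cluster:

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

val conf = new SparkConf()
  .set("spark.executor.memory", "4g")     // placeholder size
  .set("spark.executor.cores", "2")       // placeholder count
  .set("spark.executor.instances", "10")  // honored only with dynamic allocation off
  .set("spark.memory.fraction", "0.6")    // the default, shown explicitly
  .set("spark.dynamicAllocation.enabled", "false")

val spark = SparkSession.builder().appName("resource-config-demo").config(conf).getOrCreate()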
what are sealed classes?
Traits and classes can be marked sealed, which means all subtypes must be declared in the same file. This is useful for pattern matching because we don't need a catch-all case; it assures that all subtypes are known.
Example:
sealed abstract class Furniture
case class Couch() extends Furniture
case class Chair() extends Furniture

def findPlaceToSit(piece: Furniture): String = piece match {
  case a: Couch => "Lie on the couch"
  case b: Chair => "Sit on the chair"
}
what is type inference?
The Scala compiler can often infer the type of an expression, so you don't have to declare it explicitly.
Example:
val name = "Dineshkumar S" // inferred as String
val id = 1234 // inferred as Int
when not to rely on default type inference?
If a variable is initialized with null and no type annotation, the type inferred for it is Null. Since the only value of that type is null, it is impossible to assign a different value later.
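A minimal sketch of the pitfall and its fix (the names are illustrative):

var obj = null // inferred type is Null
// obj = "hello" // does not compile: String is not Null

var name: String = null // annotate the intended type explicitly
name = "hello" // compiles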
how can we debug spark application locally?
null
a map collection has keys and values; should the keys be mutable or immutable?
The behavior of a Map is not specified if the value of an object is changed in a manner that affects equals comparisons while the object is used as a key. So keys should be immutable.
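An illustrative sketch of why, using a mutable buffer as a key (the collections and values are arbitrary):

import scala.collection.mutable

val key = mutable.ArrayBuffer(1, 2, 3)
val m = mutable.HashMap(key -> "value")

key += 4 // mutate the object while it is a key
println(m.get(key)) // very likely None: the hash changed, so the entry is "lost"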
what is off-heap persistence in spark?
One of the most important capabilities in Spark is persisting (or caching) datasets in memory across operations. Each persisted RDD can be stored using a different storage level. One of the possibilities is to store RDDs in serialized format off-heap. Compared to storing data in the Spark JVM, off-heap storage reduces garbage collection overhead and allows executors to be smaller and to share a pool of memory. This makes it attractive in environments with large heaps or multiple concurrent applications.
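A sketch, assuming Spark 2.x: off-heap memory must be enabled and sized in the configuration before persisting with the OFF_HEAP storage level (the size is a placeholder):

import org.apache.spark.sql.SparkSession
import org.apache.spark.storage.StorageLevel

val spark = SparkSession.builder()
  .appName("offheap-demo") // hypothetical app name
  .config("spark.memory.offHeap.enabled", "true")
  .config("spark.memory.offHeap.size", "1g") // placeholder size
  .getOrCreate()

val rdd = spark.sparkContext.parallelize(1 to 1000000)
rdd.persist(StorageLevel.OFF_HEAP)
println(rdd.count())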
what is the difference between apache spark and apache flink?
null
how do we measure the impact of garbage collection?
Excessive GC can happen when too much memory is used on the driver or some executors, or garbage collection can become extremely costly and slow when large numbers of objects are created in the JVM. You can validate this by adding '-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps' to Spark's JVM options using the spark.executor.extraJavaOptions configuration parameter.
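A minimal sketch of passing those flags through the configuration parameter named above:

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .set("spark.executor.extraJavaOptions",
    "-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps") // GC logging on every executor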
apache spark vs apache storm?
null
how to overwrite the output directory in spark?
Refer to the below command using DataFrames:
import org.apache.spark.sql.SaveMode
df.write.mode(SaveMode.Overwrite).parquet(path)
how to read multiple text files into a single rdd?
You can specify whole directories, use wildcards, and even a comma-separated list of directories and wildcards, like below.
E.g.:
val rdd = sc.textFile("file:///D:/Dinesh.txt,file:///D:/Dineshnew.txt")
can we run spark without hdfs?
null
explain generic classes in scala?
null
how to enable tungsten-sort shuffle in spark 2.x?
null
how to prevent spark executors from getting lost when using yarn client mode?
If you're using YARN, the solution is to set --conf spark.yarn.executor.memoryOverhead=600; alternatively, if your cluster uses Mesos, you can try --conf spark.mesos.executor.memoryOverhead=600 instead.
what is the relationship between the yarn containers and the spark executors?
The first important fact is that the number of containers will always be the same as the number of executors created by a Spark application, e.g. via the --num-executors parameter in spark-submit.

Every container always allocates at least the amount of memory set by yarn.scheduler.minimum-allocation-mb. This means if the parameter --executor-memory is set to e.g. only 1g but yarn.scheduler.minimum-allocation-mb is e.g. 6g, the container is much bigger than needed by the Spark application.

The other way round, if the parameter --executor-memory is set to something higher than the yarn.scheduler.minimum-allocation-mb value, e.g. 12g, the container will allocate more memory dynamically, but only if the requested amount of memory is smaller than or equal to the yarn.scheduler.maximum-allocation-mb value.

The value of yarn.nodemanager.resource.memory-mb determines how much memory can be allocated in sum by all containers of one host. So setting yarn.scheduler.minimum-allocation-mb allows you to run smaller containers, e.g. for smaller executors (else it would be a waste of memory). Setting yarn.scheduler.maximum-allocation-mb to the maximum value (e.g. equal to yarn.nodemanager.resource.memory-mb) allows you to define bigger executors (more memory is allocated if needed, e.g. by the --executor-memory parameter).
how to allocate memory sizes for spark jobs in a cluster?
null
how can tab autocompletion be enabled in pyspark?
Please import the below libraries in the pyspark shell:
import rlcompleter, readline
readline.parse_and_bind("tab: complete")
can we execute two transformations on the same rdd in parallel in apache spark?
null
which cluster type should i choose for spark?
null
what are dstreams in spark streaming?
Spark Streaming uses a micro-batch architecture where the incoming data is grouped into micro batches called Discretized Streams (DStreams), which also serve as the basic programming abstraction. The DStreams internally contain Resilient Distributed Datasets (RDDs), and as a result standard RDD transformations and actions can be applied.
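A minimal sketch of the classic DStream word count, assuming a text source on localhost:9999 (host, port, and the 10-second batch interval are illustrative):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setAppName("dstream-demo").setMaster("local[2]")
val ssc = new StreamingContext(conf, Seconds(10)) // each batch becomes one RDD

val lines = ssc.socketTextStream("localhost", 9999) // hypothetical source
val counts = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
counts.print()

ssc.start()
ssc.awaitTermination()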
what is stateless transformation?
null
what is stateful transformation?
null
what is aws?
Answer: AWS stands for Amazon Web Services. AWS is a platform that provides on-demand resources for hosting web services, storage, networking, databases and other resources over the internet with pay-as-you-go pricing.
what are the components of aws?
Answer: EC2 (Elastic Compute Cloud), S3 (Simple Storage Service), Route53, EBS (Elastic Block Store), Cloudwatch, and Key-Pairs are a few of the components of AWS.
what are key pairs?
Answer: Key-pairs are secure login information for your instances/virtual machines. To connect to the instances we use key-pairs that contain a public key and a private key.
what is s3?
null
what are the pricing models for ec2 instances?
Answer: The different pricing models for EC2 instances are as below: On-demand, Reserved, Spot, Scheduled, Dedicated.
what are the types of volumes for ec2 instances?
null
what are ebs volumes?
Answer: EBS stands for Elastic Block Store. EBS volumes are persistent volumes that you can attach to instances. With EBS volumes, your data is preserved even when you stop your instances, unlike instance store volumes, where the data is deleted when you stop the instance.
what are the types of volumes in ebs?
null
what are the different types of instances?
Answer: Following are the types of instances: General purpose, Compute Optimized, Storage Optimized, Memory Optimized, Accelerated Computing.
what is an auto scaling and what are the components?
Answer: Auto scaling allows you to automatically scale up and scale down the number of instances depending on CPU utilization or memory utilization. There are 2 components in Auto scaling: Auto-scaling groups and Launch Configurations.
what are reserved instances?
null
what is an ami?
null
what is an eip?
null
what is cloudwatch?
Answer: Cloudwatch is a monitoring tool that you can use to monitor your various AWS resources, like health checks, network, applications, etc.
what are the types in cloudwatch?
Answer: There are 2 types in cloudwatch. Basic monitoring and detailed monitoring. Basic monitoring is free and detailed monitoring is chargeable.
what are the cloudwatch metrics that are available for ec2 instances?
null
what are the different storage classes in s3?
Answer: Following are the types of storage classes in S3: Standard frequently accessed, Standard infrequently accessed, One-zone infrequently accessed, Glacier, RRS (reduced redundancy storage).
what is the default storage class in s3?
Answer: The default storage class in S3 is Standard frequently accessed.
what is glacier?
Answer: Glacier is the backup or archival tool that you use to back up your data in S3.
how can you secure the access to your s3 bucket?
Answer: There are two ways that you can control the access to your S3 buckets: ACL (Access Control List) and bucket policies.
how can you encrypt data in s3?
Answer: You can encrypt the data by using the below methods: Server Side Encryption S3 (AES-256 encryption), Server Side Encryption KMS (Key Management Service), Server Side Encryption C (customer-provided keys).
what are the parameters for s3 pricing?
Answer: The pricing model for S3 is as below: Storage used, Number of requests you make, Storage management, Data transfer, Transfer acceleration.
what is the prerequisite to work with cross region replication in s3?
Answer: You need to enable versioning on both the source bucket and the destination bucket to work with cross region replication. Also, the source and destination buckets should be in different regions.
what are roles?
Answer: Roles are used to provide permissions to entities that you trust within your AWS account, for example users in another account. Roles are similar to users, but with roles you do not need to create a username and password to work with the resources.
what are policies and what are the types of policies?
null
what is cloudfront?
Answer: Cloudfront is an AWS web service that provides businesses and application developers an easy and efficient way to distribute their content with low latency and high data transfer speeds. Cloudfront is the content delivery network of AWS.
what are edge locations?
null
what is the maximum individual archive that you can store in glacier?
Answer: You can store a maximum individual archive of up to 40 TB.
what is vpc?
null
what is vpc peering connection?
null
what are nat gateways?
Answer: NAT stands for Network Address Translation. NAT gateways enable instances in a private subnet to connect to the internet but prevent the internet from initiating a connection with those instances.
how can you control the security to your vpc?
Answer: You can use security groups and NACL (Network Access Control List) to control the security to your VPC.
what are the different types of storage gateway?
Answer: Following are the types of storage gateway: File gateway, Volume gateway, Tape gateway.
what is a snowball?
Answer: Snowball is a data transport solution that uses secure appliances to transfer large amounts of data into and out of AWS. Using Snowball, you can move huge amounts of data from one place to another, which reduces your network costs and long transfer times, and also provides better security.
what are the database types in rds?
Answer: Following are the types of databases in RDS: Aurora, Oracle, MySQL, PostgreSQL, MariaDB, SQL Server.
what is redshift?
null
what is sns?
Answer: SNS stands for Simple Notification Service. SNS is a web service that makes it easy to send notifications from the cloud. You can set up SNS to receive email notifications or message notifications.
what are the types of routing policies in route53?
Answer: Following are the types of routing policies in route53: Simple routing, Latency routing, Failover routing, Geolocation routing, Weighted routing, Multivalue answer.