Spark & Distributed Machine Learning

Learning Notes of CMU Course 15-640 Distributed Systems

Posted by haohanz May 06, 2019 · Stay Hungry · Stay Foolish

Spark

  • What is lineage in Spark?
    • The dependencies between RDDs are recorded as a graph of the sequence of operations – the lineage graph
    • Calling “toDebugString” on an RDD prints its lineage graph (see the sketch after this list)
    • Enables fault tolerance
      • With the lineage graph, we can recompute the missing or damaged partitions due to node failure.
    • Aliases:
      • RDD operator graphs
      • RDD dependencies graph
      • The logical execution plan produced by applying transformations to an RDD
    • Source: https://data-flair.training/blogs/rdd-lineage/
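
A minimal PySpark sketch (the data and names are illustrative, not from the course) of inspecting an RDD’s lineage with toDebugString:

```python
from pyspark import SparkContext

sc = SparkContext("local[2]", "lineage-demo")

# Each transformation is recorded in the lineage graph; nothing runs yet.
nums = sc.parallelize(range(1, 1001), 4)
evens = nums.filter(lambda x: x % 2 == 0)
squares = evens.map(lambda x: x * x)

# toDebugString prints the chain of dependencies Spark would replay
# to recompute a lost partition.
print(squares.toDebugString().decode("utf-8"))

sc.stop()
```
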
  • MapReduce: Pros and Cons
    • Pros:
      • High throughput
      • Fault-tolerance
    • Cons:
      • High latency due to the I/O penalty, so interactive data analysis is not practical
      • Iterative applications (e.g. ML) are slow, with roughly 90% of the time spent on I/O and the network
      • The abstraction is not expressive enough, so different use cases now require different specialized systems
  • What is Spark?
    • Berkeley’s extensions to Hadoop
    • Goal 1: keep and share data sets in main memory
      • Which leads to the problem of fault tolerance
      • Which is solved by lineage and small partitions
    • Goal 2: Unified computation abstraction
      • Which makes iterations viable (local work + message passing = BSP, bulk synchronous parallel); see the sketch after this list
      • Which leads to the question of how to balance ease of use and generality
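
A toy PySpark sketch of both goals (my own example, not from the course): the dataset is kept in executor memory with persist and reused across iterations, instead of being re-read from disk on every pass as in a MapReduce-style job.

```python
from pyspark import SparkContext, StorageLevel

sc = SparkContext("local[2]", "iteration-demo")

# Synthetic (x, y) points; persist keeps them in executor memory so every
# iteration below reuses the cached partitions.
points = sc.parallelize([(float(i % 100), float(i % 7)) for i in range(100000)])
points = points.persist(StorageLevel.MEMORY_ONLY)

w = 0.0
for _ in range(10):
    # One toy gradient step per iteration (local work + aggregation ~ BSP);
    # the step size is illustrative, not a tuned algorithm.
    grad = points.map(lambda p: (p[1] - w * p[0]) * p[0]).mean()
    w += 0.001 * grad

print("w =", w)
sc.stop()
```
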
  • Spark: Pros and Cons
    • Pros
      • Fast response time
      • In-memory computation
      • Data sharing and fault tolerance
    • Cons
      • Not suited to fine-grained updates to shared state
      • Not suited to datasets too large to fit in memory
      • Not suited to tasks that need very high raw efficiency (SIMD or GPU workloads)
  • Why wasn’t in-memory computation and data sharing used before?
    • Because there was no efficient way to make in-memory computation fault-tolerant; you would have to
      • Checkpoint often
      • Replicate data across nodes
      • Log each operation to node-local persistent storage
    • And then the latency is back to the same level as disk-based approaches.
  • Spark makes fault tolerance (recovery) efficient
    • It logs coarse-grained operations – the lineage graph – instead of storing copies of the real data.
    • In contrast to Hadoop/GFS, RDDs provide an interface based on coarse-grained transformations (e.g., map, filter and join) that apply the same operation to many (small-size partition) data items. This allows them to efficiently provide fault tolerance by logging the transformations used to build a dataset (its lineage) rather than the actual data. If a partition of an RDD is lost, the RDD has enough information about how it was derived from other RDDs to recompute just that partition. Thus, lost data can be recovered, often quite quickly, without requiring costly replication.
    • Source: Zaharia et al., “Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing” (NSDI 2012)
  • Why Spark RDD is immutable?
    • Consistency model: the same RDD can be cached and shared across nodes, because it never changes (see the sketch after this list)
    • Enables lineage
      • An RDD can be recreated at any time
      • To be precise, transformations must be deterministic functions of their input
    • Compatible with HDFS’s storage interface, which is append-only; in-place modification (overwrite) is not allowed
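
A small illustrative sketch: transformations never modify an RDD in place, they return a new one, which is what keeps lineage deterministic.

```python
from pyspark import SparkContext

sc = SparkContext("local[2]", "immutability-demo")

base = sc.parallelize([1, 2, 3, 4])
doubled = base.map(lambda x: 2 * x)   # a new RDD; `base` itself never changes

print(base.collect())      # [1, 2, 3, 4]
print(doubled.collect())   # [2, 4, 6, 8]
sc.stop()
```
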
  • (HDFS) GFS’s fault tolerance
    • Chunkserver fault tolerance (see the toy sketch after this list)
      • The master grants a lease to the chunk’s primary replica (only for modification operations)
      • The lease is revoked if the file is being renamed or deleted
      • The chunk version number is updated each time a new lease is granted
      • The lease lasts 60 seconds and is refreshed if the modification is not yet finished
    • Why Lease/version number?
      • Network partition
      • Primary failure (what happens if the primary fails?)
      • The lease is revoked when it expires or when the file is renamed/deleted
      • Outdated chunkservers are detected via the version number: a server that was down during a lease grant (and missed the version bump) holds a stale replica
    • Master fault tolerance
      • Use master–replica (master-slave) replication
      • Use a write-ahead log (WAL)
      • The log must not grow too long
      • So the master checkpoints its state periodically
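
A toy Python sketch (my own simplification, not GFS code) of the two chunk-level mechanisms above: a 60-second lease on the primary replica, and a per-chunk version number used to spot stale replicas.

```python
import time

LEASE_SECONDS = 60  # GFS grants leases with a 60-second timeout

class ToyMaster:
    def __init__(self):
        self.lease = {}     # chunk_id -> (primary_server, expiry_time)
        self.version = {}   # chunk_id -> latest chunk version number

    def grant_or_renew(self, chunk_id, server):
        primary, expiry = self.lease.get(chunk_id, (None, 0.0))
        now = time.time()
        if primary is None or now >= expiry:
            # A fresh lease bumps the version; replicas that are down now
            # will miss this update and be detectable as stale later.
            self.version[chunk_id] = self.version.get(chunk_id, 0) + 1
            primary = server
        self.lease[chunk_id] = (primary, now + LEASE_SECONDS)
        return primary, self.version[chunk_id]

    def revoke(self, chunk_id):
        # e.g. when the file is being renamed or deleted
        self.lease.pop(chunk_id, None)

    def is_stale(self, chunk_id, reported_version):
        # A chunkserver that was down during a lease grant reports an old
        # version number and is treated as holding a stale replica.
        return reported_version < self.version.get(chunk_id, 0)

m = ToyMaster()
print(m.grant_or_renew("chunk-42", "server-A"))   # ('server-A', 1)
print(m.is_stale("chunk-42", 0))                  # True
```
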
  • Lazy evaluation:
    • 3 Operations:
      • Transformation (map)
      • Persist (cache to memory)
      • Action (reduce)
    • Execution (evaluation) is triggered by actions, not by transformations (maps); see the sketch below
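
A minimal sketch of the three operation types (the HDFS path is a placeholder): nothing executes until the action at the end.

```python
from pyspark import SparkContext

sc = SparkContext("local[2]", "lazy-eval-demo")

lines = sc.textFile("hdfs:///tmp/input.txt")      # placeholder path
errors = lines.filter(lambda l: "ERROR" in l)     # transformation: nothing runs yet
errors.cache()                                    # persist: still nothing runs

print("error lines:", errors.count())             # action: triggers the whole pipeline
sc.stop()
```
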
  • Deployment (a configuration sketch follows this list)
    • Master Server
    • Cluster manager
    • Worker nodes
      • Executor
        • Cache
        • Task1,…,Taskn
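
A sketch of how this maps to configuration (the master URL, memory size, and core count below are placeholders, and the job assumes a running standalone cluster): the driver asks the cluster manager to launch executors on the worker nodes; each executor holds a cache and runs tasks.

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("deployment-demo")
         .master("spark://master-host:7077")        # cluster manager (standalone mode)
         .config("spark.executor.memory", "4g")     # memory per executor (cache + tasks)
         .config("spark.executor.cores", "2")       # parallel tasks per executor
         .getOrCreate())

print(spark.sparkContext.parallelize(range(100)).sum())
spark.stop()
```
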
  • Scheduling
    • Partitioning- and cache-aware scheduling to minimize shuffles and mitigate data skew (see the sketch below)
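
One way this shows up in user code (an assumed illustration): pre-partitioning both sides of a join with the same partitioner lets Spark schedule the join on co-located partitions instead of shuffling them again.

```python
from pyspark import SparkContext

sc = SparkContext("local[4]", "partitioning-demo")

users = sc.parallelize([(i, "user-%d" % i) for i in range(1000)]) \
          .partitionBy(8).cache()
events = sc.parallelize([(i % 1000, "click") for i in range(10000)]) \
           .partitionBy(8).cache()

# Both RDDs share the same hash partitioner, so the join is a narrow
# dependency and does not trigger another shuffle.
print(users.join(events).count())
sc.stop()
```
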
  • Problems
    • Lineage can grow very long: manually checkpoint to HDFS (see the sketch after this list)
    • Immutability/lineage cannot easily support randomness, since recomputation requires deterministic transformations
    • When a job needs more memory than the cluster has, performance degrades
    • Time and space overhead of copying data, because there is no mutate-in-place
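
A minimal sketch of manual checkpointing (the checkpoint directory is a placeholder; on a real cluster it would be an HDFS path): checkpointing materializes the RDD and truncates its lineage.

```python
from pyspark import SparkContext

sc = SparkContext("local[2]", "checkpoint-demo")
sc.setCheckpointDir("/tmp/spark-checkpoints")   # on a cluster: an HDFS directory

rdd = sc.parallelize(range(1000))
for _ in range(50):                 # a long iterative chain -> a long lineage
    rdd = rdd.map(lambda x: x + 1)

rdd.checkpoint()                    # mark for checkpointing (truncates lineage)
rdd.count()                         # an action actually triggers the checkpoint
print(rdd.toDebugString().decode("utf-8"))   # lineage is now rooted at the checkpoint
sc.stop()
```
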

Distributed Machine Learning

  • Problem
    • Scale out: more iterations per second, i.e. higher throughput
    • The problem: communication overhead scales badly, growing as the number of machines increases
    • Solution: P2P
      • Higher network latency because of transmission time and more hops between nodes, but lower per-node bandwidth – so it is easier to scale out (see the cost sketch after this list)
      • Data center setting: putting all worker nodes in one data center leads to centralized monitoring problems
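
A back-of-the-envelope sketch (my own simplified cost model, not from the course) of why a single central node scales badly while a peer-to-peer ring keeps per-node traffic flat:

```python
def central_server_traffic(n_workers, model_size):
    # Every worker pushes updates to and pulls the model from one server,
    # so the server's link carries traffic proportional to the worker count.
    return 2 * n_workers * model_size

def ring_allreduce_per_node(model_size):
    # In a P2P ring all-reduce each node moves roughly 2x the model size
    # regardless of the worker count, at the cost of more hops (latency).
    return 2 * model_size

for n in (8, 64, 512):
    print(n, central_server_traffic(n, 1.0), ring_allreduce_per_node(1.0))
```
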
  • Problem 2 (and its solution)
    • BSP consistency requires a barrier: everyone waits for the stragglers
    • Nodes can accept slightly stale state and still converge
    • N-bounded-delay BSP (see the sketch below)
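
A toy sketch (assumed pseudocode, not a real framework API) of N-bounded-delay BSP: a worker may start its next iteration as long as the slowest worker is at most N iterations behind; N = 0 recovers strict BSP.

```python
STALENESS = 3   # the "N" in N-bounded delay

def can_proceed(my_clock, all_clocks, staleness=STALENESS):
    # Proceed only if no worker is more than `staleness` iterations behind me.
    return my_clock - min(all_clocks) <= staleness

clocks = {"w0": 10, "w1": 9, "w2": 8}            # w2 is a straggler
print(can_proceed(11, clocks.values()))          # True: 11 - 8 = 3 <= 3
print(can_proceed(12, clocks.values()))          # False: must wait for w2
```
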