High Performance Spark (English Reprint Edition)

By Holden Karau and Rachel Warren




Book Information


Publisher: Southeast University Press
ISBN: 9787564175184
Edition: 1
Product code: 12319832
Binding: Paperback
Original title: High Performance Spark
Trim size: 16开
Publication date: 2018-02-01
Paper: Offset paper


Book Description

About the Book

This book describes techniques for cutting data infrastructure costs and developer time, and is aimed at software engineers, data engineers, developers, and system administrators. From it you will not only gain a thorough understanding of Spark, but also learn how to make it run well.

In this book you will discover:
* How Spark SQL's new interfaces improve performance over SQL's RDD data structure
* The choices for joining data between Core Spark and Spark SQL
* Techniques for getting the most out of the standard RDD transformations
* How to work around performance issues with Spark's key/value pair paradigm (a brief sketch follows this list)
* Writing high-performance Spark code without Scala or the JVM
* How to test for functionality and performance when applying the suggested improvements
* Using the Spark MLlib and Spark ML machine learning libraries
* Spark's Streaming components and external community packages
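
As a rough illustration of the key/value point above (not an excerpt from the book), the minimal Scala sketch below contrasts groupByKey with reduceByKey for a simple per-key sum; the object name, sample data, and local master setting are invented for this example.

    import org.apache.spark.sql.SparkSession

    object KeyValueSketch {
      def main(args: Array[String]): Unit = {
        // Local session only, so the sketch is self-contained.
        val spark = SparkSession.builder()
          .appName("key-value-sketch")
          .master("local[*]")
          .getOrCreate()
        val sc = spark.sparkContext

        // Hypothetical (key, value) records standing in for real data.
        val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("a", 3), ("b", 4)))

        // groupByKey ships every value for a key across the shuffle before
        // aggregating, which is where many key/value performance problems start.
        val viaGroup = pairs.groupByKey().mapValues(_.sum)

        // reduceByKey combines values map-side first, so far less data is shuffled.
        val viaReduce = pairs.reduceByKey(_ + _)

        viaGroup.collect().foreach(println)
        viaReduce.collect().foreach(println)

        spark.stop()
      }
    }

Chapter 6 of the book ("Working with Key/Value Data", including "What's So Dangerous About the groupByKey Function") treats this pattern in much greater depth.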

About the Authors

Holden Karau is a transgender Canadian who works as a software development engineer at the IBM Spark Technology Center. She is a Spark contributor who commits code frequently, particularly to PySpark and the machine learning components. Holden has spoken on Spark-related topics at many international events.
Rachel Warren is a software engineer and data scientist at Alpine Data. In her day-to-day work she uses Spark to tackle real-world data and machine learning problems. She has also worked as an analyst and instructor in both industry and academia.

Praise

"High Performance Spark is an important resource for taking your Apache Spark solutions to production grade. From it you will understand not only the key Spark optimization techniques but also the underlying internal details."
-- Denny Lee (Senior Program Manager at Microsoft, Azure DocumentDB team)

Table of Contents

Preface
1.Introduction to High Performance Spark
What Is Spark and Why Performance Matters
What You Can Expect to Get from This Book
Spark Versions
Why Scala?
To Be a Spark Expert You Have to Learn a Little Scala Anyway
The Spark Scala API Is Easier to Use Than the Java API
Scala Is More Performant Than Python
Why Not Scala?
Learning Scala
Conclusion

2.How Spark Works
How Spark Fits into the Big Data Ecosystem
Spark Components
Spark Model of Parallel Computing: RDDs
Lazy Evaluation
In-Memory Persistence and Memory Management
Immutability and the RDD Interface
Types of RDDs
Functions on RDDs: Transformations Versus Actions
Wide Versus Narrow Dependencies
Spark Job Scheduling
Resource Allocation Across Applications
The Spark Application
The Anatomy of a Spark Job
The DAG
Jobs
Stages
Tasks
Conclusion

3.DataFrames, Datasets, and Spark SQL
Getting Started with the SparkSession (or HiveContext or SQLContext)
Spark SQL Dependencies
Managing Spark Dependencies
Avoiding Hive JARs
Basics of Schemas
DataFrame API
Transformations
Multi-DataFrame Transformations
Plain Old SQL Queries and Interacting with Hive Data
Data Representation in DataFrames and Datasets
Tungsten
Data Loading and Saving Functions
DataFrameWriter and DataFrameReader
Formats
Save Modes
Partitions (Discovery and Writing)
Datasets
Interoperability with RDDs, DataFrames, and Local Collections
Compile-Time Strong Typing
Easier Functional (RDD "like") Transformations
Relational Transformations
Multi-Dataset Relational Transformations
Grouped Operations on Datasets
Extending with User-Defined Functions and Aggregate Functions (UDFs, UDAFs)
Query Optimizer
Logical and Physical Plans
Code Generation
Large Query Plans and Iterative Algorithms
Debugging Spark SQL Queries
JDBC/ODBC Server
Conclusion

4.Joins (SQL and Core)
Core Spark Joins
Choosing a Join Type
Choosing an Execution Plan
Spark SQL Joins
DataFrame Joins
Dataset Joins
Conclusion

5.Effective Transformations
Narrow Versus Wide Transformations
Implications for Performance
Implications for Fault Tolerance
The Special Case of coalesce
What Type of RDD Does Your Transformation Return?
Minimizing Object Creation
Reusing Existing Objects
Using Smaller Data Structures
Iterator-to-Iterator Transformations with mapPartitions
What Is an Iterator-to-Iterator Transformation?
Space and Time Advantages
An Example
Set Operations
Reducing Setup Overhead
Shared Variables
Broadcast Variables
Accumulators
Reusing RDDs
Cases for Reuse
Deciding if Recompute Is Inexpensive Enough
Types of Reuse: Cache, Persist, Checkpoint, Shuffle Files
Alluxio (nee Tachyon)
LRU Caching
Noisy Cluster Considerations
Interaction with Accumulators
Conclusion

6.Working with Key/Value Data
The Goldilocks Example
Goldilocks Version 0: Iterative Solution
How to Use PairRDDFunctions and OrderedRDDFunctions
Actions on Key/Value Pairs
What's So Dangerous About the groupByKey Function
Goldilocks Version 1: groupByKey Solution
Choosing an Aggregation Operation
Dictionary of Aggregation Operations with Performance Considerations
Multiple RDD Operations
Co-Grouping
Partitioners and Key/Value Data
Using the Spark Partitioner Object
Hash Partitioning
Range Partitioning
Custom Partitioning
Preserving Partitioning Information Across Transformations
Leveraging Co-Located and Co-Partitioned RDDs
Dictionary of Mapping and Partitioning Functions PairRDDFunctions
Dictionary of OrderedRDDOperations
Sorting by Two Keys with SortByKey
Secondary Sort and repartitionAndSortWithinPartitions
Leveraging repartitionAndSortWithinPartitions for a Group by Key and Sort Values Function
How Not to Sort by Two Orderings
Goldilocks Version 2: Secondary Sort
A Different Approach to Goldilocks
Goldilocks Version 3: Sort on Cell Values
Straggler Detection and Unbalanced Data
Back to Goldilocks (Again)
Goldilocks Version 4: Reduce to Distinct on Each Partition
Conclusion

7.Going Beyond Scala
Beyond Scala within the JVM
Beyond Scala, and Beyond the JVM
How PySpark Works
How SparkR Works
Spark.jl (Julia Spark)
How Eclair JS Works
Spark on the Common Language Runtime (CLR): C# and Friends
Calling Other Languages from Spark
Using Pipe and Friends
JNI
Java Native Access (JNA)
Underneath Everything Is FORTRAN
Getting to the GPU
The Future
Conclusion

8.Testing and Validation
Unit Testing
General Spark Unit Testing
Mocking RDDs
Getting Test Data
Generating Large Datasets
Sampling
Property Checking with ScalaCheck
Computing RDD Difference
Integration Testing
Choosing Your Integration Testing Environment
Verifying Performance
Spark Counters for Verifying Performance
Projects for Verifying Performance
Job Validation
Conclusion

9.Spark MLlib and ML
Choosing Between Spark MLlib and Spark ML
Working with MLlib
Getting Started with MLlib (Organization and Imports)
MLlib Feature Encoding and Data Preparation
Feature Scaling and Selection
MLlib Model Training
Predicting
Serving and Persistence
Model Evaluation
Working with Spark ML
Spark ML Organization and Imports
Pipeline Stages
Explain Params
Data Encoding
Data Cleaning
Spark ML Models
Putting It All Together in a Pipeline
Training a Pipeline
Accessing Individual Stages
Data Persistence and Spark ML
Extending Spark ML Pipelines with Your Own Algorithms
Model and Pipeline Persistence and Serving with Spark ML
General Serving Considerations
Conclusion

10.Spark Components and Packages
Stream Processing with Spark
Sources and Sinks
Batch Intervals
Data Checkpoint Intervals
Considerations for DStreams
Considerations for Structured Streaming
High Availability Mode (or Handling Driver Failure or Checkpointing)
GraphX
Using Community Packages and Libraries
Creating a Spark Package
Conclusion
A.Tuning, Debugging, and Other Things Developers Like to Pretend Don't Exist
Index