Apache Flume: Distributed Log Collection for Hadoop


Steve Hoffman has 30 years of software development experience and holds a B.S. in computer engineering from the University of Illinois Urbana-Champaign and an M.S. in computer science from DePaul University. He is currently a Principal Engineer at Orbitz Worldwide.

More information on Steve can be found at http://bit.ly/bacoboy or on Twitter @bacoboy.

This is Steve's first book.

Publisher: Packt Publishing Ltd
Author: Steve Hoffman
Pages: 108
Publication date: 2013-7
Price: 0
ISBN: 9781782167914
Tags:
  • Distributed

Hadoop is a great open source tool for sifting tons of unstructured data into something manageable, so that your business can gain better insight into your customers' needs. It is cheap (can be mostly free), scales horizontally as long as you have space and power in your data center, and can handle problems your traditional data warehouse would be crushed under. That said, a little-known secret is that your Hadoop cluster requires you to feed it with data; otherwise, you just have a very expensive heat generator. You will quickly find, once you get past the "playing around" phase with Hadoop, that you will need a tool to automatically feed data into your cluster.

In the past, you had to come up with a solution for this problem, but no more! Flume started as a project out of Cloudera when their integration engineers had to keep writing tools over and over again for their customers to import data automatically. Today the project lives with the Apache Foundation, is under active development, and boasts users who have been using it in their production environments for years.

In this book I hope to get you up and running quickly with an architectural overview of Flume and a quick start guide. After that we'll deep-dive into the details on many of the more useful Flume components, including the very important File Channel for persistence of in-flight data records and the HDFS Sink for buffering and writing data into HDFS, the Hadoop Distributed File System. Since Flume comes with a wide variety of modules, chances are that the only tool you'll need to get started is a text editor for the configuration file.
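
As a minimal sketch of what such a configuration file can look like (the agent name, port, and directory paths below are illustrative assumptions, not values from the book), a single agent can wire a netcat source through a File Channel into an HDFS Sink:

  # Name the components of this agent (agent name and component names are arbitrary)
  agent.sources = r1
  agent.channels = c1
  agent.sinks = k1

  # Source: listen for newline-terminated events on a local TCP port (illustrative port)
  agent.sources.r1.type = netcat
  agent.sources.r1.bind = localhost
  agent.sources.r1.port = 44444
  agent.sources.r1.channels = c1

  # Channel: the File Channel persists in-flight events to disk (illustrative paths)
  agent.channels.c1.type = file
  agent.channels.c1.checkpointDir = /var/flume/checkpoint
  agent.channels.c1.dataDirs = /var/flume/data

  # Sink: buffer and write events into HDFS, partitioned by date
  agent.sinks.k1.type = hdfs
  agent.sinks.k1.channel = c1
  agent.sinks.k1.hdfs.path = /flume/events/%Y/%m/%d
  agent.sinks.k1.hdfs.fileType = DataStream
  # Use the local clock for the %Y/%m/%d escapes, since the netcat source adds no timestamp header
  agent.sinks.k1.hdfs.useLocalTimeStamp = true
  agent.sinks.k1.hdfs.rollInterval = 300

Such a file would typically be passed to the agent at startup with something like: flume-ng agent --conf conf --conf-file agent.conf --name agent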

By the end of the book, you should know enough to build out a highly available, fault-tolerant, streaming data pipeline feeding your Hadoop cluster.

