Bandit Algorithms for Website Optimization

Publisher: O'Reilly Media
Author: John Myles White
Pages: 88
Publication date: 2013-1-3
Price: USD 19.99
Binding: Paperback
ISBN: 9781449341336
Tags:
  • Algorithms 
  • Optimization 
  • Bandit 
  • Website 
  • Computer Science 
  • Computers 
  • Machine Learning 

This book shows you how to run experiments on your website using A/B testing, and then takes you a huge step further by introducing you to bandit algorithms for website optimization. Author John Myles White shows you how this family of algorithms can help you boost website traffic, convert visitors to customers, and increase many other measures of success. This is the first developer-focused book on bandit algorithms, which have previously only been described in research papers. You'll learn about several simple algorithms you can deploy on your own websites to improve your business, including the epsilon-greedy algorithm, the UCB algorithm, and a contextual bandit algorithm. All of these algorithms are implemented in easy-to-follow Python code and can be quickly adapted to your business's specific needs. You'll also learn about a framework for testing and debugging bandit algorithms using Monte Carlo simulations, a technique originally developed by nuclear physicists during World War II. Monte Carlo techniques allow you to decide whether A/B testing will work for your business needs or whether you need to deploy a more sophisticated bandit algorithm.
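To give a flavor of the algorithms the book covers, here is a minimal sketch of an epsilon-greedy arm selector in Python. The class name, method names, and the parameter epsilon are illustrative assumptions, not the book's actual code: with probability epsilon the algorithm explores a random arm (page variant), otherwise it exploits the arm with the best estimated reward so far.

```python
import random

class EpsilonGreedy:
    """Minimal epsilon-greedy bandit sketch (illustrative, not the book's code)."""

    def __init__(self, epsilon, n_arms):
        self.epsilon = epsilon        # probability of exploring a random arm
        self.counts = [0] * n_arms    # number of pulls per arm
        self.values = [0.0] * n_arms  # running mean reward per arm

    def select_arm(self):
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))  # explore
        return self.values.index(max(self.values))     # exploit

    def update(self, arm, reward):
        self.counts[arm] += 1
        n = self.counts[arm]
        # incremental update of the mean reward for this arm
        self.values[arm] += (reward - self.values[arm]) / n
```

In a website-optimization setting, each arm might be a page variant and the reward 1 for a conversion and 0 otherwise; the book's Monte Carlo simulation framework tests such an algorithm against simulated arms before deploying it.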

Reviews

The multi-armed bandit was originally a mathematical model abstracted from the multi-armed slot machines found in casinos. It is stateless (memoryless) reinforcement learning, and it is currently applied in operations research, robotics, website optimization, and other fields. arm: the lever of a slot machine. bandit: the set of levers, bandit = {arm1, arm2, ...}
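The arm/bandit vocabulary in this review maps naturally onto a tiny simulation. The sketch below assumes Bernoulli-distributed rewards (a reasonable model for click/no-click outcomes) and shows one way to represent a bandit as a collection of arms; the class and variable names are illustrative, not from the book.

```python
import random

class BernoulliArm:
    """One slot-machine lever: pays reward 1 with probability p, else 0."""
    def __init__(self, p):
        self.p = p

    def draw(self):
        return 1 if random.random() < self.p else 0

# A bandit is just a collection of arms, e.g. one arm per page variant.
bandit = [BernoulliArm(0.05), BernoulliArm(0.10), BernoulliArm(0.02)]

# Pulling arm i means drawing a reward from that arm's distribution.
reward = bandit[1].draw()
```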

User Reviews

Too shallow, and it even lectures you about life lessons.

An introduction to the multi-armed bandit problem: easy to pick up, but everything stays fairly shallow.

Pros: as a tutorial it is very well written and proceeds step by step, starting from the initial problem, proposing a solution, pointing out its shortcomings, and iterating to a new one; the explanations are clear. Cons: no theoretical foundations; the author's Python coding is only so-so.

Very introductory.
