Machine Learning: A Probabilistic Perspective (Adaptive Computation and Machine Learning)


by Kevin P. Murphy


A comprehensive introduction to machine learning that uses probabilistic models and inference as a unifying approach.

Today's Web-enabled deluge of electronic data calls for automated methods of data analysis. Machine learning provides these, developing methods that can automatically detect patterns in data and then use the uncovered patterns to predict future data. This textbook offers a comprehensive and self-contained introduction to the field of machine learning, based on a unified, probabilistic approach.

The coverage combines breadth and depth, offering necessary background material on such topics as probability, optimization, and linear algebra as well as discussion of recent developments in the field, including conditional random fields, L1 regularization, and deep learning. The book is written in an informal, accessible style, complete with pseudo-code for the most important algorithms. All topics are copiously illustrated with color images and worked examples drawn from such application domains as biology, text processing, computer vision, and robotics. Rather than providing a cookbook of different heuristic methods, the book stresses a principled model-based approach, often using the language of graphical models to specify models in a concise and intuitive way. Almost all the models described have been implemented in a MATLAB software package―PMTK (probabilistic modeling toolkit)―that is freely available online. The book is suitable for upper-level undergraduates with an introductory-level college math background and beginning graduate students.


ISBN13: 978-0262018029

ISBN: 0262018020

Author: Kevin P. Murphy

Category: Computers and Technology

Subcategory: Computer Science

Language: English

Publisher: The MIT Press; 1st edition (August 24, 2012)

Pages: 1104

ePUB size: 1552 kb

FB2 size: 1919 kb

Rating: 4.8

Votes: 357

Other Formats: azw rtf lrf docx

Reviews of Machine Learning: A Probabilistic Perspective (Adaptive Computation and Machine Learning)

Zicelik
In a nutshell, the value of reading Murphy's Machine Learning highly depends on what you expect to get out of it.
------------------
As a graduate student who had read a decent number of papers in the field, I feel very conflicted about this textbook.
------------------
If you expect to teach yourself machine learning from this textbook, this is in my opinion almost surely *not* the textbook to get. (0/5 Stars)
-The content of the textbook is highly disorganized. Future chapters are constantly referenced in the text (as if you have already read them!). Perplexingly, meaningful explanations of concepts are often delayed by multiple chapters. (For example, BIC is introduced in Ch. 6 but a mathematical justification is provided only in Ch. 8, when it could have (and should have) been in Ch. 6.)
-A number of topics are merely mentioned (like VC dimension) but not actually discussed at any reasonable length, making some sections of the textbook meaningless.
-I would instead recommend the related (but different) text Introduction to Statistical Learning with Applications in R as it is quite accessible.
-----------
However, if you are an instructor who wishes to use this textbook as a supplement to a course, or if you are a researcher, then Murphy's Machine Learning could, in my opinion, be a worthwhile purchase. (4/5 stars)
-The examples, references and illustrations give the textbook a particularly nice touch. (I particularly enjoyed the example of calculating the posterior probability of user ratings of two different items on Amazon).

In summary, if you are an instructor who wants their students to learn how to read challenging exposition in preparation for reading research papers in the field, or if you wish to use this as a reference, then this is a good choice. Otherwise, pass.
Anaragelv
(Disclaimer: I have worked with a draft of the book and been allowed to use the instructor's review copy for this review. I have bought the book from Amazon.co.uk, but apparently this Amazon.com review can't be tagged "verified purchase". I don't receive any compensation whatsoever for writing this review. I hope it will help you choose a machine learning textbook.)

Similar textbooks on statistical/probabilistic machine learning (links to book websites, not Amazon pages):
- Barber's Bayesian Reasoning and Machine Learning ("BRML", Cambridge University Press 2012)
- Koller and Friedman's Probabilistic Graphical Models ("PGM", MIT Press 2009)
- Bishop's Pattern Recognition and Machine Learning ("PRML", Springer 2006)
- MacKay's Information Theory, Inference and Learning Algorithms ("ITILA", CUP 2003)
- Hastie, Tibshirani and Friedman's Elements of Statistical Learning ("ESL", Springer 2009)

* Perspective: My perspective is that of a machine learning researcher and student, who has used these books for reference and study, but not as classroom textbooks.

* Audience/prerequisites: comparable across all the textbooks mentioned. BRML expects less commitment and specialization; PGM requires more scrupulous reading. The books differ in their topics and disciplinary approach: some are more statistical (ESL), some more Bayesian (PRML, ITILA), some focused on graphical models (PGM, BRML). K. Murphy compares MLAPP to the others on the book's website. For a detailed coverage comparison, read the tables of contents on the book websites.

* Main strength: MLAPP stands out for covering more advanced and current research topics: a full chapter on Latent Dirichlet Allocation, plus learning to rank, L1 regularization, and deep networks; among the basics, the decision-theory part is quite thorough (e.g., it covers Jeffreys/uninformative priors). The book is "open" and vivid, and doesn't shy away from current research and advanced concepts. This seems purposeful, as it shows in many aspects:

- quotes liberally from web sources, something usually not done in academic publications

- borrows "the best" from other authors (always with permission and acknowledgment, of course): most importantly pictures and diagrams, but also tables and recaps. Whereas other books produce their own illustrations (e.g., PRML has a distinctive clarity and style in its figures), MLAPP takes many of its colour illustrations from other people's publications, and can therefore select the most pithy and relevant picture to make a point. You might think that reproductions from such a variety of sources would be illegible or require extra effort to interpret; I have found that the benefit of having precisely the right image prevails.

- frequent references to the literature, mentions of extensions and open questions, as well as computational complexity considerations: for instance, the section on HMMs will mention duration modeling and variable-duration Markov models, and a comparison of the expressive power of hierarchical HMMs versus stochastic context-free grammars, complete with relevant citations, and a brief mention of the computational complexity results from the publications. All this connects the material with research and new ideas in a fine way -- which other textbooks don't achieve, I find. For instance, PGM defers references to a literature section at the end of each chapter, resulting in a more self-contained, but more poorly "linked" text.

* Didactic aids: Another distinctive feature is that the author clearly has tried to include didactic aids gathered over the years, such as recaps, comparative tables, diagrams, much in the spirit of the "generative model of generative models" (Roweis and Ghahramani): e.g. table comparing all models discussed, pros and cons of generative vs. discriminative models, recap of operations on HMMs (smoothing, filtering etc), list of parameter estimation methods for CRFs.

* Editorial features: Other editorial features worth mentioning are

- compared to others, helpful mentions of terminology, e.g. jargon, nomenclature, concept names, in bold throughout the text ("you could also devise a variant thus; this is called so-and-so")

- mathematical notation relatively clear and consistent, occasional obscurities. PGM stands out as excruciatingly precise on this aspect.

- boxes/layout: no "skill boxes" or "case study boxes" (PGM), not many roadmap/difficulty indications like ITILA or PGM, examples are present but woven into the text (not separated like PGM or BRML). Layout rather plain and homogeneous, much like PRML.

- sadly lacks list of figures and tables, but has index of code

* Complete accompanying material:

- interesting exercises (though fewer than PRML, BRML, or PGM); solutions, however, are only accessible to instructors (as with BRML and PGM), which in my experience makes them only half as useful for the self-learner. PRML has some solutions online, and ITILA includes some in the book.

- accompanying MATLAB/Octave source code, which I found more readily usable than BRML's. PGM and PRML have no accompanying source code, though the toolkit distributed with Koller's online PGM class might qualify. I find accompanying code a truly useful learning tool: there's nothing like trying to implement an algorithm, checking your implementation against a reference, and having boilerplate/utility code for the parts you're not interested in re-implementing. Code can also clarify an algorithm beyond what pseudo-code conveys. That said, MLAPP has rather few pseudo-code boxes (like BRML and PRML; PGM is very good here).

- MLAPP is not freely available as a PDF (unlike BRML, the closest topic-wise, or ESL and ITILA). This will no doubt reduce its diffusion. My own take on the underlying controversy favors distributing the PDF: it makes successful books widely popular and cited (think ITILA or Rasmussen and Williams' Gaussian Processes), increases the book's overall value, and equips readers with a weightless copy to annotate with e-ink or consult on the go. I believe PDF versions positively impact sales, too: neutral-to-positive for course-textbook and university-library sales, neutral in countries with very different purchasing power, and positive for all other segments thanks to the enormous diffusion and popularity a free PDF brings.

* Conclusion:
The closest contender to this book I believe is BRML. Both are excellent textbooks and have accompanying source code.

BRML is more accessible, has a free PDF version, and a stronger focus on graphical models.
MLAPP has all the qualities of an excellent graduate textbook (unified presentation, valuable learning aids), and yet is unafraid of discussing detail points (e.g. omnipresent results on complexity), as well as advanced and research topics (LDA, L1 regularization).
Arilak
I'm sure if you are a Google Research Scientist and are not learning the material for the first time, this book is amazing. For everyone else, I would not recommend it. I bought this book for my Fall 2013 COMPSCI 571 class, and I regret it. Before buying this book, consider the following:

1. Take a look at the online errata. This book is already in its 3rd printing and it just came out. The list of corrections for the 3rd printing is already mind-numbingly long. The 4th printing, coming out this month, will surely fix some errors, but there are just too many.
2. Our class has an online forum (for a 100 person class) where we discuss topics, and most questions are either (a) basic topics from the book that no one understood or (b) talking about how one figure in the book has multiple errors associated with it. At first I was really excited to find mistakes and submit them to the Errata - it was like I was part of the book! Now I just get frustrated and have already given up on submitting corrections.
3. Our instructor regrets using this book and modifies the examples before giving them to us in class. Our out-of-class readings now consist mostly of MetaAcademy.com.
4. There are hardly any worked-through examples, and many of those that are worked through have errors.
5. Many important concepts are skimmed over way too quickly. For example, there is a whole chapter on logistic regression, yet logistic regression itself is covered in exactly 2 pages. Then a weird 3D graph is presented but not explained (a common theme throughout the book: graphs that look absolutely amazing but convey little to a lay-person like me about exactly what's going on), and the rest of the chapter presents methods for doing the math, which I'm sure are useful in some sense, but I'm still thinking: "why is this MLE not in closed form, what is a Hessian doing here... and wtf is going on?!"

Most students just got the PDF for free online, and I would highly suggest doing something other than paying $55 for this book.
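[Editor's note] The confusion in point 5 is a fair place to pause: the logistic-regression log-likelihood genuinely has no closed-form maximizer (the score equations are nonlinear in the weights), so the "methods for doing the math" that fill the rest of that chapter are Newton-type iterations, where the Hessian supplies the curvature. A minimal sketch of that idea, assuming NumPy; the data and variable names here are invented for illustration, not taken from the book:

```python
import numpy as np

def fit_logistic_newton(X, y, n_iter=25):
    """Logistic regression MLE via Newton's method (a.k.a. IRLS).

    There is no closed form because the score equations
    X^T (y - sigmoid(X w)) = 0 are nonlinear in w, so we iterate
    w <- w + (X^T S X)^{-1} X^T (y - mu), where S = diag(mu * (1 - mu))
    comes from the (negative) Hessian of the log-likelihood.
    """
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        s = mu * (1.0 - mu)                 # Hessian weights
        H = X.T @ (s[:, None] * X)          # negative Hessian (PD here)
        g = X.T @ (y - mu)                  # gradient of the log-likelihood
        w = w + np.linalg.solve(H, g)       # Newton step
    return w

# Toy, noisy (non-separable) data: the label depends on the second column.
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])
y = (X[:, 1] + 0.3 * rng.normal(size=200) > 0).astype(float)
w = fit_logistic_newton(X, y)
```

The noise term keeps the classes overlapping; with perfectly separable data the MLE diverges and the Newton solve would eventually fail, which is one reason texts move quickly from the basic model to regularized variants.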