Author Larose, Daniel T., author.

Title Data mining and predictive analytics / Daniel T. Larose, Chantal D. Larose. [O'Reilly electronic resource]

Edition Second edition.
Publication Info. Hoboken, New Jersey : John Wiley & Sons, [2015]
©2015
Description 1 online resource (1 volume) : illustrations
Bibliography Includes bibliographical references and index.
Contents Series; Title Page; Copyright; Table of Contents; Dedication; Preface; What is Data Mining? What is Predictive Analytics?; Why is this Book Needed?; Who Will Benefit from this Book?; Danger! Data Mining is Easy to do Badly; "White-Box" Approach; Algorithm Walk-Throughs; Exciting New Topics; The R Zone; Appendix: Data Summarization and Visualization; The Case Study: Bringing it all Together; How the Book is Structured; The Software; Weka: The Open-Source Alternative; The Companion Web Site: www.dataminingconsultant.com; Data Mining and Predictive Analytics as a Textbook; Acknowledgments.
Machine-generated contents note: ch. 1 An Introduction To Data Mining And Predictive Analytics -- 1.1. What Is Data Mining? What Is Predictive Analytics? -- 1.2. Wanted: Data Miners -- 1.3. The Need For Human Direction Of Data Mining -- 1.4. The Cross-Industry Standard Process For Data Mining: CRISP-DM -- 1.4.1. CRISP-DM: The Six Phases -- 1.5. Fallacies Of Data Mining -- 1.6. What Tasks Can Data Mining Accomplish? -- 1.6.1. Description -- 1.6.2. Estimation -- 1.6.3. Prediction -- 1.6.4. Classification -- 1.6.5. Clustering -- 1.6.6. Association -- The R Zone -- R References -- Exercises -- ch. 2 Data Preprocessing -- 2.1. Why Do We Need To Preprocess The Data? -- 2.2. Data Cleaning -- 2.3. Handling Missing Data -- 2.4. Identifying Misclassifications -- 2.5. Graphical Methods For Identifying Outliers -- 2.6. Measures Of Centre And Spread -- 2.7. Data Transformation -- 2.8. Min-Max Normalization -- 2.9. Z-Score Standardization -- 2.10. Decimal Scaling -- 2.11. Transformations To Achieve Normality -- 2.12. Numerical Methods For Identifying Outliers -- 2.13. Flag Variables -- 2.14. Transforming Categorical Variables Into Numerical Variables -- 2.15. Binning Numerical Variables -- 2.16. Reclassifying Categorical Variables -- 2.17. Adding An Index Field -- 2.18. Removing Variables That Are Not Useful -- 2.19. Variables That Should Probably Not Be Removed -- 2.20. Removal Of Duplicate Records -- 2.21. A Word About ID Fields -- The R Zone -- R Reference -- Exercises -- ch. 3 Exploratory Data Analysis -- 3.1. Hypothesis Testing Versus Exploratory Data Analysis -- 3.2. Getting To Know The Data-Set -- 3.3. Exploring Categorical Variables -- 3.4. Exploring Numeric Variables -- 3.5. Exploring Multivariate Relationships -- 3.6. Selecting Interesting Subsets Of The Data For Further Investigation -- 3.7. Using EDA To Uncover Anomalous Fields -- 3.8. Binning Based On Predictive Value -- 3.9. Deriving New Variables: Flag Variables -- 3.10. Deriving New Variables: Numerical Variables -- 3.11. Using EDA To Investigate Correlated Predictor Variables -- 3.12. Summary Of Our EDA -- The R Zone -- R References -- Exercises -- ch. 4 Dimension-Reduction Methods -- 4.1. Need For Dimension-Reduction In Data Mining -- 4.2. Principal Components Analysis -- 4.3. Applying PCA To The Houses Data Set -- 4.4. How Many Components Should We Extract? -- 4.4.1. The Eigenvalue Criterion -- 4.4.2. The Proportion Of Variance Explained Criterion -- 4.4.3. The Minimum Communality Criterion -- 4.4.4. The Scree Plot Criterion -- 4.5. Profiling The Principal Components -- 4.6. Communalities -- 4.6.1. Minimum Communality Criterion -- 4.7. Validation Of The Principal Components -- 4.8. Factor Analysis -- 4.9. Applying Factor Analysis To The Adult Data-Set -- 4.10. Factor Rotation -- 4.11. User-Defined Composites -- 4.12. An Example Of A User-Defined Composite -- The R Zone -- R References -- Exercises -- ch. 5 Univariate Statistical Analysis -- 5.1. Data Mining Tasks In Discovering Knowledge In Data -- 5.2. Statistical Approaches To Estimation And Prediction -- 5.3. Statistical Inference -- 5.4. How Confident Are We In Our Estimates? -- 5.5. Confidence Interval Estimation Of The Mean -- 5.6. How To Reduce The Margin Of Error -- 5.7. Confidence Interval Estimation Of The Proportion -- 5.8. Hypothesis Testing For The Mean -- 5.9. Assessing The Strength Of Evidence Against The Null Hypothesis -- 5.10. Using Confidence Intervals To Perform Hypothesis Tests -- 5.11.
Hypothesis Testing For The Proportion -- Reference -- The R Zone -- R Reference -- Exercises -- ch. 6 Multivariate Statistics -- 6.1. Two-Sample T-Test For Difference In Means -- 6.2. Two-Sample Z-Test For Difference In Proportions -- 6.3. Test For The Homogeneity Of Proportions -- 6.4. Chi-Square Test For Goodness Of Fit Of Multinomial Data -- 6.5. Analysis Of Variance -- Reference -- The R Zone -- R Reference -- Exercises -- ch. 7 Preparing To Model The Data -- 7.1. Supervised Versus Unsupervised Methods -- 7.2. Statistical Methodology And Data Mining Methodology -- 7.3. Cross-Validation -- 7.4. Overfitting -- 7.5. Bias-Variance Trade-Off -- 7.6. Balancing The Training Data-Set -- 7.7. Establishing Baseline Performance -- The R Zone -- R Reference -- Exercises -- ch. 8 Simple Linear Regression -- 8.1. An Example Of Simple Linear Regression -- 8.1.1. The Least-Squares Estimates -- 8.2. Dangers Of Extrapolation -- 8.3. How Useful Is The Regression? The Coefficient Of Determination, R2 -- 8.4. Standard Error Of The Estimate, S -- 8.5. Correlation Coefficient R -- 8.6. ANOVA Table For Simple Linear Regression -- 8.7. Outliers, High-Leverage Points, And Influential Observations -- 8.8. Population Regression Equation -- 8.9. Verifying The Regression Assumptions -- 8.10. Inference In Regression -- 8.11. T-Test For The Relationship Between X And Y -- 8.12. Confidence Interval For The Slope Of The Regression Line -- 8.13. Confidence Interval For The Correlation Coefficient ρ -- 8.14. Confidence Interval For The Mean Value Of Y Given X -- 8.15. Prediction Interval For A Randomly-Chosen Value Of Y Given X -- 8.16. Transformations To Achieve Linearity -- 8.17. Box-Cox Transformations -- The R Zone -- R References -- Exercises -- ch. 9 Multiple Regression And Model-Building -- 9.1. An Example Of Multiple Regression -- 9.2. The Population Multiple Regression Equation -- 9.3. Inference In Multiple Regression -- 9.3.1. The T-Test For The Relationship Between Y And Xi -- 9.3.2. T-Test For Relationship Between Nutritional Rating And Sugars -- 9.3.3. T-Test For Relationship Between Nutritional Rating And Fibre Content -- 9.3.4. The F-Test For The Significance Of The Overall Regression Model -- 9.3.5. F-Test For Relationship Between Nutritional Rating And {Sugar And Fibre}, Taken Together -- 9.3.6. The Confidence Interval For A Particular Coefficient, βi -- 9.3.7. The Confidence Interval For The Mean Value Of Y, Given X1, X2, ..., Xm -- 9.3.8. The Prediction Interval For A Randomly-Chosen Value Of Y, Given X1, X2, ..., Xm -- 9.4. Regression With Categorical Predictors, Using Indicator Variables -- 9.5. Adjusting R2: Penalizing Models For Including Predictors That Are Not Useful -- 9.6. Sequential Sums Of Squares -- 9.7. Multicollinearity -- 9.8. Variable Selection Methods -- 9.8.1. The Partial F-Test -- 9.8.2. The Forward Selection Procedure -- 9.8.3. The Backward Elimination Procedure -- 9.8.4. The Stepwise Procedure -- 9.8.5. The Best Subsets Procedure -- 9.8.6. The All-Possible-Subsets Procedure -- 9.9. Gas Mileage Data-Set -- 9.10. An Application Of Variable Selection Methods -- 9.10.1. Forward Selection Procedure Applied To The Gas Mileage Data-Set -- 9.10.2. Backward Elimination Procedure Applied To The Gas Mileage Data-Set -- 9.10.3. The Stepwise Selection Procedure Applied To The Gas Mileage Data-Set -- 9.10.4. Best Subsets Procedure Applied To The Gas Mileage Data-Set -- 9.10.5. Mallows' Cp Statistic -- 9.11.
Using The Principal Components As Predictors In Multiple Regression -- The R Zone -- R References -- Exercises -- ch. 10 K-Nearest Neighbour Algorithm -- 10.1. Classification Task -- 10.2. K-Nearest Neighbour Algorithm -- 10.3. Distance Function -- 10.4. Combination Function -- 10.4.1. Simple Unweighted Voting -- 10.4.2. Weighted Voting -- 10.5. Quantifying Attribute Relevance: Stretching The Axes -- 10.6. Database Considerations -- 10.7. K-Nearest Neighbour Algorithm For Estimation And Prediction -- 10.8. Choosing K -- 10.9. Application Of K-Nearest Neighbour Algorithm Using IBM/SPSS Modeller -- The R Zone -- R References -- Exercises -- ch. 11 Decision Trees -- 11.1. What Is A Decision Tree? -- 11.2. Requirements For Using Decision Trees -- 11.3. Classification And Regression Trees -- 11.4. C4.5 Algorithm -- 11.5. Decision Rules -- 11.6. Comparison Of The C5.0 And CART Algorithms Applied To Real Data -- The R Zone -- R References -- Exercises -- ch.
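Note: the contents above list min-max normalization (2.8) and z-score standardization (2.9) among the preprocessing steps. As a rough illustration of those two transformations only (this is not code from the book, whose worked examples appear in its "R Zone" sections in R; the column name and values below are invented):

```python
import pandas as pd

# Hypothetical toy column; the book works through its own companion data sets.
df = pd.DataFrame({"sugars": [1.0, 3.0, 6.0, 11.0, 14.0]})
x = df["sugars"]

# Min-max normalization: rescale to the [0, 1] range via (x - min) / (max - min).
df["sugars_minmax"] = (x - x.min()) / (x.max() - x.min())

# Z-score standardization: subtract the mean and divide by the standard deviation.
df["sugars_zscore"] = (x - x.mean()) / x.std()

print(df)
```

Either rescaling puts variables measured on different scales onto a comparable footing before distance-based methods such as the k-nearest neighbour algorithm of chapter 10 are applied.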
Daniel's Acknowledgments; Chantal's Acknowledgments; Part I: Data Preparation; Chapter 1: An Introduction to Data Mining and Predictive Analytics; 1.1 What is Data Mining? What Is Predictive Analytics?; 1.2 Wanted: Data Miners; 1.3 The Need For Human Direction of Data Mining; 1.4 The Cross-Industry Standard Process for Data Mining: CRISP-DM; 1.5 Fallacies of Data Mining; 1.6 What Tasks can Data Mining Accomplish?; The R Zone; R References; Exercises; Chapter 2: Data Preprocessing; 2.1 Why do We Need to Preprocess the Data?; 2.2 Data Cleaning; 2.3 Handling Missing Data.
2.4 Identifying Misclassifications; 2.5 Graphical Methods for Identifying Outliers; 2.6 Measures of Center and Spread; 2.7 Data Transformation; 2.8 Min-Max Normalization; 2.9 Z-Score Standardization; 2.10 Decimal Scaling; 2.11 Transformations to Achieve Normality; 2.12 Numerical Methods for Identifying Outliers; 2.13 Flag Variables; 2.14 Transforming Categorical Variables into Numerical Variables; 2.15 Binning Numerical Variables; 2.16 Reclassifying Categorical Variables; 2.17 Adding an Index Field; 2.18 Removing Variables that are not Useful; 2.19 Variables that Should Probably not be Removed.
2.20 Removal of Duplicate Records; 2.21 A Word About ID Fields; The R Zone; R Reference; Exercises; Chapter 3: Exploratory Data Analysis; 3.1 Hypothesis Testing Versus Exploratory Data Analysis; 3.2 Getting to Know The Data Set; 3.3 Exploring Categorical Variables; 3.4 Exploring Numeric Variables; 3.5 Exploring Multivariate Relationships; 3.6 Selecting Interesting Subsets of the Data for Further Investigation; 3.7 Using EDA to Uncover Anomalous Fields; 3.8 Binning Based on Predictive Value; 3.9 Deriving New Variables: Flag Variables; 3.10 Deriving New Variables: Numerical Variables.
3.11 Using EDA to Investigate Correlated Predictor Variables; 3.12 Summary of Our EDA; The R Zone; R References; Exercises; Chapter 4: Dimension-Reduction Methods; 4.1 Need for Dimension-Reduction in Data Mining; 4.2 Principal Components Analysis; 4.3 Applying PCA to the Houses Data Set; 4.4 How Many Components Should We Extract?; 4.5 Profiling the Principal Components; 4.6 Communalities; 4.7 Validation of the Principal Components; 4.8 Factor Analysis; 4.9 Applying Factor Analysis to the Adult Data Set; 4.10 Factor Rotation; 4.11 User-Defined Composites.
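Note: chapter 4 of the listing above covers principal components analysis and the question of how many components to extract (the eigenvalue and proportion-of-variance criteria of 4.4). A minimal sketch of that decision, using scikit-learn on synthetic data rather than the book's houses data set (all names and numbers here are invented for illustration):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a predictor matrix with one redundant column.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
X[:, 3] = X[:, 0] + 0.1 * rng.normal(size=200)  # correlated column gives PCA something to compress

# PCA is usually run on standardized predictors so no single variable dominates the variance.
pca = PCA().fit(StandardScaler().fit_transform(X))

eigenvalues = pca.explained_variance_               # eigenvalue criterion: keep components with eigenvalue > 1
cum_var = np.cumsum(pca.explained_variance_ratio_)  # proportion-of-variance criterion: keep enough to explain, say, 85%

print("eigenvalues:", np.round(eigenvalues, 2))
print("cumulative proportion of variance explained:", np.round(cum_var, 2))
print("components kept under the eigenvalue > 1 rule:", int((eigenvalues > 1).sum()))
```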
Summary "This updated second edition serves as an introduction to data mining methods and models, including association rules, clustering, neural networks, logistic regression, and multivariate analysis. The authors apply a unified 'white box' approach to data mining methods and models. This approach is designed to walk readers through the operations and nuances of the various methods, using small data sets, so readers can gain an insight into the inner workings of the method under review. Chapters provide readers with hands-on analysis problems, representing an opportunity for readers to apply their newly-acquired data mining expertise to solving real problems using large, real-world data sets."-- Portion of summary from book
Language English.
Subject Data mining.
Prediction theory.
Business -- Data processing.
Data Mining
Exploration de données (Informatique)
Théorie de la prévision.
Gestion -- Informatique.
Méthodes statistiques.
Études de cas.
Manuels.
Fouille de données.
Business -- Data processing
Data mining
Prediction theory
Added Author Larose, Chantal D., author.
Other Form: Larose, Daniel T. Data mining and predictive analytics. Second edition. Hoboken, New Jersey : Wiley, [2015] 9781118116197 (DLC) 2014043340 (OCoLC)862096372
ISBN 1118116194
9781118116197
9781118868676
1118868676
1118868706
9781118868706