Statistical Reading Rainbow

For those of us who received statistical training outside of statistics departments, that training often emphasized procedures over principles. As a result, we learned about various statistical techniques and how to perform analyses in a particular statistical software package, but glossed over the mechanisms and mathematical statistics underlying these practices. While that training methodology (hereafter referred to as the ‘heuristic method’) has value, it has many drawbacks when the ultimate goal is to perform sound statistical analysis that is valid and thorough. Even in my current role as a data scientist at a technology company in the San Francisco Bay Area, I have had to go back and understand various procedures and metrics instead of just “doing data analysis”.

Given this realization, I have dedicated many hours outside of work over the last couple of years to “re-training” myself on many of the important concepts in both descriptive and inferential statistics. This post will briefly mention the books that have been most useful in helping me develop a fuller understanding of the statistical sciences. These books have also helped me fill in the gaps and deficiencies from my statistical training in university and graduate school. Furthermore, these are the texts that I often revisit when I need a reference on some statistical topic of interest. This is at a minimum a six-year journey, so I have a long way to go until I am able to stand solidly in my understanding of statistics. While I am sacrificing a lot of my free time to this undertaking, it will certainly improve my knowledge and help prepare me for graduate school (PhD) in biostatistics, which I hope to attend in around five to six years.[1]

Please note that I am not taking issue with the ‘heuristic method’ of statistical training. It certainly has its place and provides students with the immediate knowledge required to satisfactorily prepare for work in private industry. In fact, I prefer the ‘heuristic method’ and still rely on straightforward rules in my day-to-day work, as that ensures that best practices are followed and satisfactory analysis is performed. Furthermore, I certainly believe that it is superior to the hacky nature of data mining and data science education, but that is a different story.

Fundamentals:

Statistics in Plain English – Urdan
Clear, concise, and covers all the fundamental items that one would need to know. Everything from descriptive statistics to linear regression is covered, with many good examples. Even if you never use ANOVA or factor analysis, this is a good book to review and one that I strongly recommend to people who are interested in data science.

Principles of Statistics – Bulmer
This is a classic text that offers a good treatment of probability theory, distributions, and statistical inference. The text contains a bit more math than ‘Statistics in Plain English’, so I think it should be read after completing the previous book.

Fundamentals of Modern Statistical Methods – Wilcox
This book reviews ‘traditional’ parametric statistics and provides a good overview of robust statistical methods. There is a fair amount on the historical evolution of various techniques, which I found a bit unnecessary. But overall, this is still a solid introductory text for learning about statistical inference using robust techniques.

Regression Analysis:

Mostly Harmless Econometrics – Angrist
While I don’t regularly work with instrumental variables, the generalized method of moments, or regression discontinuity, this book is a great high-level introduction to econometrics. The chapters on regression theory and quantile regression are phenomenal.

Regression Modeling Strategies – Harrell
This is my most referenced book and the one that really helped in my overall development as an applied statistician. All the important topics are covered, from imputation to regression splines and beyond. This book includes R code for performing analyses using the rms package. I end up citing this book quite a lot; for example, in a recent work email, I mentioned that Harrell says on page 61 that “narrowly distributed predictor variables will require higher sample sizes.” Essential reading, in my opinion.

Data Analysis Using Regression and Multilevel/Hierarchical Models – Gelman and Hill
The first half of this book covers statistical inference using single-level models and the second half is dedicated to multilevel methods. Given that I rarely work with panel data, I use the first half of this book as a reference for things that I may need a quick refresher on. It is very accessible and has plenty of examples with R code.

Semiparametric Regression for the Social Sciences – Keele
This is one of my favorite statistical books. Well written and easy to comprehend, but still rigorous. Covers local regression, splines, and generalized additive models. There is also a solid chapter on the use of bootstrapping with semiparametric and nonparametric models.

Statistical Learning:

Statistical Learning from a Regression Perspective – Berk
As a skeptic who is wary of every hype machine, I really enjoyed Berk’s preface, in which he discusses the “dizzying array of new statistical procedures” that have been introduced over the past several decades with “the hype of a big-budget movie.” I got this text for its treatment of topics such as boosting, bagging, random forests, and support vector machines. I will probably need to reread this book several more times before I fully comprehend everything.

Time Series:

Time Series: a Biostatistical Introduction – Diggle
The lack of quality time series books is really infuriating. Don’t get me wrong, there are some good texts on forecasting, such as the free online book from Hyndman. However, I’ve yet to find a really good intermediate-level treatment of time series analysis besides this one. It contains good coverage of repeated measurements, ARIMA modeling, and forecasting.

Bayesian Methods:

Statistical Rethinking – McElreath
While I was introduced to robust techniques and nonparametric statistics in graduate school, there was nothing on Bayesian methods. Due to a fear of the topic, I avoided learning about it until this past year. This book by McElreath has been great as it is very accessible and provides code for understanding various principles. Over the next year, I am hoping to dive deeper into Bayesian techniques and this was a good first step.


SQL Cheat Sheet

I’ve been putting together a basic SQL cheat sheet that can be used as a reference guide. Here is a series of common procedures that should be of use to anyone who uses SQL to extract data. No explanations are provided, as these should largely be familiar to the end user.


-- COUNT OF DISTINCT VALUES 

SELECT COUNT(DISTINCT year) AS years_count,
       COUNT(DISTINCT month) AS months_count
FROM table;

-- SELECT DUPLICATED ROWS

SELECT name, email, COUNT(*)
FROM users
GROUP BY name, email
HAVING COUNT(*) > 1

SELECT name, email
    FROM users
    WHERE email in
    (
    SELECT email
    FROM users
    GROUP BY email
    HAVING COUNT(*)>1
    )

SELECT firstname, lastname, list.address FROM list
INNER JOIN (SELECT address FROM list
GROUP BY address HAVING count(id) > 1) dup ON list.address = dup.address

SELECT
	td.user_id,
	td.brand,
	td.order_dt,
	COUNT(*)
FROM training_details AS td
INNER JOIN users AS u ON u.user_id = td.user_id
GROUP BY 1, 2, 3
HAVING COUNT(*) >= 2
ORDER BY td.order_dt DESC;

-- USING CASE WHEN - ONE

SELECT rr.id,
       rr.created_at,
       rr.report_id,
       rr.account_id,
       rr.executed_by_id,
       rr.data_source_id,
       rr.state,
       MAX(CASE WHEN rs.id IS NOT NULL OR lsr.id IS NOT NULL THEN 1 ELSE 0 END) AS scheduled_run
 FROM report_runs rr
 LEFT JOIN report_schedule_runs rs ON rs.report_run_id = rr.id
 LEFT JOIN list_run_report_runs lrrr ON lrrr.report_run_id = rr.id
 LEFT JOIN list_schedule_runs lsr ON lsr.list_run_id = lrrr.list_run_id
 GROUP BY 1,2,3,4,5,6,7

-- COUNT THE ROWS IN A TABLE

SELECT COUNT(1) FROM table;

-- SEE THE FIRST FEW ROWS OF A TABLE

SELECT * FROM table LIMIT 5;

-- USING THE LIKE OPERATOR

SELECT * FROM student WHERE name LIKE 'd%n';
-- returns names such as 'dan' or 'den'

-- CHECKING QUERY PERFORMANCE

EXPLAIN SELECT * FROM student;            -- MySQL / PostgreSQL
EXPLAIN QUERY PLAN SELECT * FROM student; -- SQLite

-- SUBQUERY - ONE

SELECT SUM(Sales) FROM Store_Information
WHERE Store_Name IN
(SELECT Store_Name FROM Geography
WHERE Region_Name = 'West');

-- SUBQUERY - TWO

SELECT SUM(a1.Sales) FROM Store_Information a1
WHERE a1.Store_Name IN
(SELECT Store_Name FROM Geography a2
WHERE a2.Store_Name = a1.Store_Name);

-- SUBQUERY - THREE

SELECT sub.*
  FROM (
        SELECT *
          FROM table
         WHERE day_of_week = 'Friday'
       ) sub
 WHERE sub.resolution = 'NONE'

-- SUBQUERY - FOUR

SELECT *
  FROM table
 WHERE date IN (SELECT date
                 FROM table
                ORDER BY date
                LIMIT 5
              )

-- SUBQUERY - FIVE

SELECT incidents.*,
       sub.incidents AS incidents_that_day
  FROM tutorial.sf_crime_incidents_2014_01 incidents
  JOIN ( SELECT date,
          COUNT(incidnt_num) AS incidents
           FROM tutorial.sf_crime_incidents_2014_01
          GROUP BY 1
       ) sub
    ON incidents.date = sub.date
 ORDER BY sub.incidents DESC, time

-- SELECT RECORDS FROM A TIME FRAME

SELECT * FROM users WHERE TO_DAYS(last_login) = ( TO_DAYS(NOW()) - 1 )

SELECT * FROM users WHERE last_login >= DATE_SUB(NOW(), INTERVAL 90 MINUTE);

-- UNION OPERATOR

SELECT users.name
FROM users WHERE (users.name >= 'A' AND users.name < 'N')
UNION
SELECT banned_users.name FROM banned_users
WHERE (banned_users.name >= 'A' AND banned_users.name < 'N');

-- CONCATENATE DATA INTO ONE COLUMN

SELECT CONCAT(emp.firstname, '-', emp.lastname) AS emp_full_name FROM emp;

-- INDEXING STRINGS

SELECT LEFT(date, 10) AS cleaned_date,
       RIGHT(date, 17) AS cleaned_time
FROM table

SELECT SUBSTR(date, 4, 2) AS day
FROM table

-- OTHER

Select database: use [database];

Show all tables: show tables;

Show table structure: describe [table];

Counting and selecting grouped records:
SELECT [column], COUNT(*) AS count
FROM [table]
GROUP BY [column];

Select records containing [value]:
SELECT * FROM [table]
WHERE [column] LIKE '%[value]%';

Select records starting with [value]:
SELECT * FROM [table]
WHERE [column] LIKE '[value]%';

Select records starting with val and ending with ue:
SELECT * FROM [table]
WHERE [column] LIKE 'val%ue';

Select a range:
SELECT * FROM [table]
WHERE [column] BETWEEN [value1] and [value2];

Select with custom order and a limit:
SELECT * FROM [table]
ORDER BY [column] ASC
LIMIT [value];

-- INNER, LEFT, RIGHT, AND OUTER JOIN

INNER JOIN: returns rows when there is a match in both tables.

LEFT JOIN: returns all rows from the left table, even if there are no matches in the right table.

RIGHT JOIN: returns all rows from the right table, even if there are no matches in the left table.

FULL JOIN: returns all rows from both tables, with NULLs filling in the columns from whichever table has no match.

SELF JOIN: is used to join a table to itself as if the table were two tables, temporarily renaming at least one table in the SQL statement.

CARTESIAN JOIN: returns the Cartesian product of the sets of records from the two or more joined tables.
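
To make the definitions above concrete, here is a minimal sketch of a few of these joins in action. The employees and departments tables, along with their columns, are hypothetical names used purely for illustration:

-- Hypothetical tables: employees(emp_id, name, manager_id, dept_id)
--                      departments(dept_id, dept_name)

-- INNER JOIN: only employees with a matching department
SELECT e.name, d.dept_name
FROM employees e
INNER JOIN departments d ON e.dept_id = d.dept_id;

-- LEFT JOIN: all employees; dept_name is NULL when there is no match
SELECT e.name, d.dept_name
FROM employees e
LEFT JOIN departments d ON e.dept_id = d.dept_id;

-- SELF JOIN: pair each employee with their manager from the same table
SELECT e.name AS employee, m.name AS manager
FROM employees e
INNER JOIN employees m ON e.manager_id = m.emp_id;

-- CARTESIAN (CROSS) JOIN: every employee paired with every department
SELECT e.name, d.dept_name
FROM employees e
CROSS JOIN departments d;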