SQL Cheat Sheet

I’ve been putting together a basic SQL cheat sheet to use as a reference guide. Below is a series of common procedures that should be useful to anyone who uses SQL to extract data. No explanations are provided, as these patterns should largely be familiar to the end user.


-- COUNT OF DISTINCT VALUES 

SELECT COUNT(DISTINCT year) AS years_count,
       COUNT(DISTINCT month) AS months_count
FROM table;

-- SELECT DUPLICATED ROWS

SELECT name, email, COUNT(*)
FROM users
GROUP BY name, email
HAVING COUNT(*) > 1

SELECT name, email
    FROM users
    WHERE email in
    (
    SELECT email
    FROM users
    GROUP BY email
    HAVING COUNT(*)>1
    )

SELECT firstname, lastname, list.address FROM list
INNER JOIN (SELECT address FROM list
GROUP BY address HAVING count(id) > 1) dup ON list.address = dup.address

SELECT
	td.user_id,
	td.brand,
	td.order_dt,
	COUNT(*)
FROM training_details AS td
INNER JOIN users as u on u.user_id = td.user_id
GROUP BY 1, 2, 3
HAVING COUNT(*) >= 2
ORDER BY td.order_dt DESC;

-- USING CASE WHEN - ONE

SELECT rr.id,
       rr.created_at,
       rr.report_id,
       rr.account_id,
       rr.executed_by_id,
       rr.data_source_id,
       rr.state,
       MAX(CASE WHEN rs.id IS NOT NULL OR lsr.id IS NOT NULL THEN 1 ELSE 0 END) AS scheduled_run
 FROM report_runs rr
 LEFT JOIN report_schedule_runs rs ON rs.report_run_id = rr.id
 LEFT JOIN list_run_report_runs lrrr ON lrrr.report_run_id = rr.id
 LEFT JOIN list_schedule_runs lsr ON lsr.list_run_id = lrrr.list_run_id
 GROUP BY 1,2,3,4,5,6,7

-- COUNT THE ROWS IN A TABLE

SELECT COUNT(1) FROM table;

-- SEE THE FIRST FEW ROWS OF A TABLE

SELECT * FROM table LIMIT 5;

-- USING THE LIKE OPERATOR

SELECT * FROM student WHERE name LIKE 'd%n';
(returns names that start with 'd' and end with 'n', e.g. dan, den, dean)

-- CHECKING QUERY PERFORMANCE

EXPLAIN QUERY PLAN SELECT * FROM student;   -- SQLite syntax; most other databases use EXPLAIN

-- SUBQUERY - ONE

SELECT SUM(Sales) FROM Store_Information
WHERE Store_Name IN
(SELECT Store_Name FROM Geography
WHERE Region_Name = 'West');

-- SUBQUERY - TWO

SELECT SUM(a1.Sales) FROM Store_Information a1
WHERE a1.Store_Name IN
(SELECT Store_Name FROM Geography a2
WHERE a2.Store_Name = a1.Store_Name);

-- SUBQUERY - THREE

SELECT sub.*
  FROM (
        SELECT *
          FROM table
         WHERE day_of_week = 'Friday'
       ) sub
 WHERE sub.resolution = 'NONE'

-- SUBQUERY - FOUR

SELECT *
  FROM table
 WHERE date IN (SELECT date
                 FROM table
                ORDER BY date
                LIMIT 5
              )

-- SUBQUERY - FIVE

SELECT incidents.*,
       sub.incidents AS incidents_that_day
  FROM tutorial.sf_crime_incidents_2014_01 incidents
  JOIN ( SELECT date,
          COUNT(incidnt_num) AS incidents
           FROM tutorial.sf_crime_incidents_2014_01
          GROUP BY 1
       ) sub
    ON incidents.date = sub.date
 ORDER BY sub.incidents DESC, time

-- SELECT RECORDS FROM A TIME FRAME

SELECT * FROM users WHERE TO_DAYS(last_login) = ( TO_DAYS(NOW()) - 1 )

SELECT * FROM users WHERE last_login >= DATE_SUB(NOW(), INTERVAL 90 MINUTE);

-- UNION OPERATOR

SELECT users.name
FROM users WHERE users.name BETWEEN 'A' AND 'N'   -- names starting with A through M
UNION
SELECT banned_users.name FROM banned_users
WHERE banned_users.name BETWEEN 'A' AND 'N';

-- CONCATENATE DATA INTO ONE COLUMN

SELECT CONCAT(emp.firstname, '-', emp.lastname) AS emp_full_name FROM emp;

-- INDEXING STRINGS

SELECT LEFT(date, 10) AS cleaned_date,
       RIGHT(date, 17) AS cleaned_time
FROM table

SELECT SUBSTR(date, 4, 2) AS day
FROM table

-- OTHER

Select database: use [database];

Show all tables: show tables;

Show table structure: describe [table];

Counting and selecting grouped records:
SELECT [column], COUNT(*) AS count
FROM [table]
GROUP BY [column];

Select records containing [value]:
SELECT * FROM [table]
WHERE [column] LIKE '%[value]%';

Select records starting with [value]:
SELECT * FROM [table]
WHERE [column] LIKE '[value]%';

Select records starting with 'val' and ending with 'ue' (the underscore matches a single character):
SELECT * FROM [table]
WHERE [column] LIKE 'val_ue';

Select a range:
SELECT * FROM [table]
WHERE [column] BETWEEN [value1] AND [value2];

Select with custom order and a limit:
SELECT * FROM [table]
ORDER BY [column] ASC
LIMIT [value];

-- INNER, LEFT, RIGHT, AND OUTER JOIN

INNER JOIN: returns rows when there is a match in both tables.

LEFT JOIN: returns all rows from the left table, even if there are no matches in the right table.

RIGHT JOIN: returns all rows from the right table, even if there are no matches in the left table.

FULL (OUTER) JOIN: returns all rows from both tables, combining rows where a match exists.

SELF JOIN: is used to join a table to itself as if the table were two tables, temporarily renaming at least one table in the SQL statement.

CARTESIAN (CROSS) JOIN: returns the Cartesian product of the sets of records from the two or more joined tables. (A minimal example of the most common join types follows below.)
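As a quick illustration, here is how these joins look in practice; the employees and departments tables and their columns are hypothetical.

-- INNER JOIN: only employees with a matching department
SELECT e.name, d.dept_name
FROM employees e
INNER JOIN departments d ON e.dept_id = d.dept_id;

-- LEFT JOIN: all employees, with NULL department info where there is no match
SELECT e.name, d.dept_name
FROM employees e
LEFT JOIN departments d ON e.dept_id = d.dept_id;

-- SELF JOIN: pair each employee with co-workers in the same department
SELECT a.name AS employee, b.name AS coworker
FROM employees a
JOIN employees b ON a.dept_id = b.dept_id AND a.emp_id <> b.emp_id;

-- CARTESIAN (CROSS) JOIN: every employee paired with every department
SELECT e.name, d.dept_name
FROM employees e
CROSS JOIN departments d;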

Weekly R-Tips: Visualizing Predictions

Let’s say that we estimated a linear regression model on time series data with lagged predictors. The goal is to estimate sales as a function of inventory, search volume, and media spend from two months ago. After using the lm function to fit the regression, we predict sales using values from two months ago.


frmla <- sales ~ inventory + search_volume + media_spend
mod <- lm(frmla, data=dat)

# 'values' holds the lagged predictor values for the period being forecast
pred <- predict(mod, newdata=values, interval="prediction")

If this model is estimated weekly or monthly, we will eventually want to understand how well it predicted actual sales from month to month. To perform this task, we need to regularly maintain a spreadsheet or data structure (e.g. an RDS file) holding the actual and predicted sales figures for each time period. That data can then be used to create line graphs that visualize actual versus predicted values, as in the sketch below.
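Here is a minimal sketch of how that bookkeeping could work, assuming we append one row per period to an RDS file; the file name, column names, and figures are hypothetical.

# append the latest actual and predicted sales to an RDS file
log_prediction <- function(period, actual, predicted, file = "sales_predictions.rds") {
  new_row <- data.frame(Month = period, Actual = actual, Predicted = predicted)
  old <- if (file.exists(file)) readRDS(file) else NULL
  updated <- rbind(old, new_row)
  saveRDS(updated, file)
  updated
}

# e.g. after each monthly run:
# log_prediction("2015-05", actual = 24100, predicted = 23650)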

Here is what the original spreadsheet looked like.

[Screenshot: spreadsheet of actual and predicted sales by month]

Transform that data into long format using whatever package you prefer.


library(reshape)

# d1 is the data frame read in from the spreadsheet above
mydat <- melt(d1, id.vars = "Month")

This will provide a data frame with three columns: Month, variable, and value.

[Screenshot: melted data frame with Month, variable, and value columns]

We can utilize the ggplot2 package to create visualizations.


ggplot(mydat, aes(Month, value, group=variable, colour=variable)) +
  geom_line(lwd=1.05) + geom_point(size=2.5) +
  ggtitle("Sales (01/2010 to 05/2015)") +
  xlab(" ") + ylab(" ") + ylim(0, 30000) +
  theme(legend.title=element_blank()) +
  theme(axis.text.x=element_text(colour="black")) +
  theme(axis.text.y=element_text(colour="black")) +
  theme(legend.position=c(.4, .85))

[Figure: actual versus predicted sales over time]

Above is an example of what the final product could look like. Visualizing predicted against actual values is an important part of evaluating the quality of a model. Furthermore, such a visualization is valuable when interacting with business audiences and “selling” your analysis.

Extract Google Trends Data with Python

Anyone who has regularly worked with Google Trends data has had to deal with the slightly tedious task of grabbing keyword-level data and reformatting the spreadsheet provided by Google. After looking for a seamless way to pull the data, I came upon the PyTrends library on GitHub and sought to put together some quick user-defined functions to manage the task of pulling daily and weekly trends data.


# set working directory (location where code is)
import os
os.chdir("path")

import re
import csv
import time
import pandas as pd
from random import randint
from functools import reduce  # reduce is no longer a builtin in Python 3
from GT_Automation_Code import pyGTrends  # local module providing the pyGTrends connector

# set gmail credentials and path to extract data
google_username = "*****@gmail.com"
google_password = "*****"

Daily_Data = [ ]
        
# define daily pull code
def GT_Daily_Run(keys):
    
    path = 'path'

    # connect to Google
    connector = pyGTrends(google_username, google_password)
    # make request
    connector.request_report(keys, date="today 90-d", geo="US")
    # wait a random amount of time between requests to avoid bot detection
    time.sleep(randint(5, 10))
    # download file
    connector.save_csv(path, '_' + "GT_Daily" + '_' + keys.replace(' ', '_'))

    name = path + '_' + "GT_Daily" + '_' + keys.replace(' ', '_')
      
    with open(name + '.csv', 'rt') as csvfile:    
        csvReader = csv.reader(csvfile)
        data = []

        for row in csvReader:
            if any('2015' in s for s in row): 
                data.append(row)

        day_df = pd.DataFrame(data)
        cols = ["Day", keys]
        day_df.columns = cols
        Daily_Data.append(day_df)

keywords = ['soccer', 'football', 'baseball']    

# map() is lazy in Python 3, so use an explicit loop to run the pulls
for kw in keywords:
    GT_Daily_Run(kw)

rge = [Daily_Data[0], Daily_Data[1], Daily_Data[2]]    

df_final_daily = reduce(lambda left,right: pd.merge(left,right, on='Day'), rge)
df_final_daily = df_final_daily.loc[:, (df_final_daily != "0").any(axis=0)]
df_final_daily.to_csv("Daily_Trends_Data.csv", index=False)

Weekly_Data = [ ]     
        
# define weekly pull code
def GT_Weekly_Run(keys):

    path = 'path'
    
    # connect to Google
    connector = pyGTrends(google_username, google_password)
    # make request
    connector.request_report(keys, geo="US")
    # wait a random amount of time between requests to avoid bot detection
    time.sleep(randint(5, 10))
    # download file
    connector.save_csv(path, '_' + "GT_Weekly" + '_' + keys.replace(' ', '_'))

    name = path + '_' + "GT_Weekly" + '_' + keys.replace(' ', '_') 

    with open(name + '.csv', 'rt') as csvfile:    
        csvReader = csv.reader(csvfile)
        data = []
        datex = re.compile(r'(19|20)\d\d-(0[1-9]|1[012])-(0[1-9]|[12][0-9]|3[01])')

        for row in csvReader:
            if datex.search(str(row)):
                data.append(row)
       
        week_df = pd.DataFrame(data)
        cols = ["Week", keys]
        week_df.columns = cols
        Weekly_Data.append(week_df)
             
for kw in keywords:
    GT_Weekly_Run(kw)

rge = [Weekly_Data[0], Weekly_Data[1], Weekly_Data[2]]    

df_final_weekly = reduce(lambda left,right: pd.merge(left,right, on='Week'), rge)
df_final_weekly = df_final_weekly.loc[:, (df_final_weekly != "0").any(axis=0)] 
df_final_weekly.to_csv("Weekly_Trends_Data.csv", index=False)


Weekly R-Tips: Importing Packages and User Inputs

Number 1: Importing Multiple Packages

Anyone who has used R for some time has written code that required the use of multiple packages. In most cases, this will be done by using the library or require function to bring in the appropriate extensions.


library(forecast)
library(ggplot2)
library(stringr)
library(lubridate)
library(rockchalk)

That’s nice and gets the desired result, but can’t we just import all the packages we need in one or two lines? Yes we can, and here are two short ways to do it.


libs <- c("forecast", "ggplot2", "stringr", "lubridateee", "rockchalk")
sapply(libs, library, character.only=TRUE, logical.return=TRUE)

libs <- c("forecast", "ggplot2", "stringr", "lubridateee", "rockchalk")
lapply(libs, require, character.only=TRUE)

Number 2: User Input

One side project that I hope to start on is a process whereby I can interact with R and select options that result in particular outcomes. For example, let’s say you’re trying to put together a script that manages a weekly list. A good first step would be to present the user with a menu of options and prompt them to select one. Here is how R can be used to get user input in such circumstances.


cat("
             1. Add an item
             2. Delete an item
             3. Print the list
             4. Quit 
             ")

action <- readline("Choose an option: ")
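
As a possible next step (not shown in the original post), the chosen option could be dispatched with switch(); the branch bodies below are hypothetical placeholders for the real list-handling code.

# act on the user's choice captured by readline() above
switch(action,
       "1" = cat("Adding an item...\n"),
       "2" = cat("Deleting an item...\n"),
       "3" = cat("Printing the list...\n"),
       "4" = cat("Quitting.\n"),
       cat("Unrecognized option, please choose 1-4.\n"))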

Automate the Boring Stuff: GGPlot2

The majority of my interaction with the ggplot2 package involves the interactive execution of code to visualize data within the context of exploratory data analysis. This is often a manual process and quite laborious. I recently sought to improve these tasks by creating a series of user-defined functions that contain my most commonly used ggplot calls. These functions can then be sourced in and the appropriate arguments specified to generate the desired visualization.

While this is a fairly simple task, calling ggplot2 functions within a user-defined function requires some understanding of R’s evaluation procedures. The key thing to remember is that the aes mapping argument uses non-standard evaluation to specify variable names within ggplot. When programming, it is suggested that we utilize standard evaluation by using aes_string to map the properties of a geom. Here are some examples of how aes_string can be utilized within a function to create graphics.


library(ggplot2)

mydat <- data.frame(date = c(seq(as.Date("2010/01/01"), as.Date("2010/01/31"), by=1)),
                    value1 = abs(round(rnorm(31), 2)),
                    value2 = abs(round(rnorm(31), 2)),
                    value3 = abs(round(rnorm(31), 2)))

head(mydat)

viz_func <- function(data, x, y){
    ggplot(data, aes_string(x=x, y=y)) +
    geom_line(lwd=1.05) + geom_point(size=2.5) + 
    ggtitle("Insert Title Here") +
    xlab("Date") + ylab("Value") + ylim(0,5) + 
    theme(axis.text.x=element_text(colour="black")) +
    theme(axis.text.y=element_text(colour="black"))
}

viz_func(mydat, 'date', 'value1')

viz_func(mydat, 'date', 'value3') + 
  ggtitle("Insert Different Title Here") +
  xlab("Different Date") + ylab("Different Value")

viz_func <- function(data, x){
    ggplot(data, aes_string(x=x)) +
    geom_histogram() +
    ggtitle("Insert Title Here") +
    xlab("Date") + ylab("Value") + ylim(0,5) + 
    theme(axis.text.x=element_text(colour="black")) +
    theme(axis.text.y=element_text(colour="black"))
}

viz_func(mydat, 'value1')

viz_func(mydat, 'value3') + 
  ggtitle("Insert Different Title Here") +
  xlab("Different Date") + ylab("Different Value")

Applied Statistical Theory: Quantile Regression

This is part two of the ‘applied statistical theory’ series that will cover the bare essentials of various statistical techniques. As analysts, we need to know enough about what we’re doing to be dangerous and explain approaches to others. It’s not enough to say “I used X because the misclassification rate was low.”

Standard linear regression summarizes the average relationship between a set of predictors and the response variable. \beta_1 represents the change in the mean value of Y given a one unit change in X_1 . A single slope is used to describe the relationship, so linear regression only provides a partial view of the link between the response variable and the predictors. This is often inadequate when there is heterogeneous variance between X and Y . In such cases, we need to examine how the relationship between X and Y changes depending on the value of Y . For example, the impact of education on income may be more pronounced for those at higher income levels than for those at lower income levels. Likewise, the effect of parental care on mean infant birth weight can be compared to its effect on other quantiles of infant birth weight. Quantile regression addresses these problems by looking at changes in the different quantiles of the response. The parameter estimates for this technique represent the change in a specified quantile of the response variable produced by a one unit change in the predictor variable. One major benefit of quantile regression is that it makes no assumptions about the error distribution.
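
For reference, here is the standard objective behind those estimates (a textbook result, not spelled out in the original post): the coefficients for the \tau -th quantile solve

\hat{\beta}(\tau) = \arg\min_{\beta} \sum_{i} \rho_{\tau}(y_i - x_i^{T}\beta) , where \rho_{\tau}(u) = u(\tau - I(u < 0))

is the check (pinball) loss. Setting \tau = 0.5 recovers median (least absolute deviation) regression, while other values of \tau weight positive and negative residuals asymmetrically.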


library(quantreg)

head(mtcars)

frmla <- mpg ~ .
u=seq(.02,.98,by=.02)

mm = rq(frmla, data=mtcars, tau=u) # for a series of quantiles
mm = rq(frmla, data=mtcars, tau=0.50) # for the median

summ <- summary(mm, se = "boot")
summ

plot(summ)

Applied Statistical Theory: Belief Networks

Applied statistical theory is a new series that will cover the basic methodology and framework behind various statistical procedures. As analysts, we need to know enough about what we’re doing to be dangerous and explain approaches to others. It’s not enough to say “I used X because the misclassification rate was low.” At the same time, we don’t need to have doctoral level understanding of approach X. I’m hoping that these posts will provide a simple, succinct middle ground for understanding various statistical techniques.

Probabilistic graphical models represent the conditional dependencies between random variables through a graph structure. Nodes correspond to random variables and edges represent statistical dependencies between the variables. Two variables are said to be conditionally dependent if they have a direct impact on each other’s values. Therefore, a graph with a directed edge from parent A_p to child B_c denotes a causal relationship. Two variables are conditionally independent if the link between them is conditional on another variable. For a graph with directed edges from A to B and from B to C , this suggests that A and C are conditionally independent given variable B . Each node fits a probability distribution function that depends only on the value(s) of the variables with edges leading into it. For example, the probability distribution for variable C in the following graphic depends only on the value of variable B .

[Figure: directed graph with edges from A to B and from B to C]

Let’s consider a graphical model with variables K = (k_1, k_2, ... , k_n) and a set of dependencies between the variables, A = (a_1, a_2, ... , a_n) . For each K and A , we denote a set of conditional probability distributions for each K given its parent variable(s). In such a directed acyclic graph, we see that P(A|B,C) = P(A|B) . This means that A is conditionally dependent only on B , and knowing the value of C adds no further information about A . For belief networks, inference involves computing the probability of each value of a node in the network, as in the sketch below.
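
To make the factorization concrete, here is a minimal sketch (not from the original post) of exact inference in the chain A -> B -> C , using made-up conditional probability tables: it computes P(C) by summing the factorized joint P(A)P(B|A)P(C|B) over A and B .

# hypothetical conditional probability tables for the chain A -> B -> C
p_A <- c(T = 0.3, F = 0.7)                        # P(A)
p_B_given_A <- matrix(c(0.8, 0.2,                 # P(B | A): rows = A, cols = B
                        0.1, 0.9),
                      nrow = 2, byrow = TRUE,
                      dimnames = list(A = c("T", "F"), B = c("T", "F")))
p_C_given_B <- matrix(c(0.9, 0.1,                 # P(C | B): rows = B, cols = C
                        0.4, 0.6),
                      nrow = 2, byrow = TRUE,
                      dimnames = list(B = c("T", "F"), C = c("T", "F")))

# P(B) = sum over A of P(A) * P(B | A)
p_B <- as.vector(p_A %*% p_B_given_A)

# P(C) = sum over B of P(B) * P(C | B); C depends on A only through B
p_C <- as.vector(p_B %*% p_C_given_B)
names(p_C) <- c("T", "F")
p_C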

There you go; the absolute basics. And below is a presentation on belief networks that I made last year.

[Embedded slides: presentation on belief networks]