Turning Data Into Awesome With sqldf and pandasql

Both R and Python have libraries for using SQL statements to interact with data frames. While both languages have native facilities for manipulating data, the sqldf and pandasql packages provide a simple, elegant interface for performing these tasks with a query syntax that most analysts already know well.
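The screenshots in the original post presumably showed the construction of the two example data frames, df1 and df2. As a stand-in, here is a minimal pandas sketch with the column layout the queries below assume (df1 with a personid key and a numeric var1; df2 with id, firstname, lastname, and state). The rows are purely illustrative and will not reproduce the exact output shown; analogous R data frames are assumed for the sqldf examples.

import pandas as pd

# Illustrative stand-ins for the data frames queried below -- the column
# names are taken from the queries, but the rows themselves are made up.
df1 = pd.DataFrame({
    "personid": [1, 2, 3, 4, 5],
    "var1": [-2.09, 1.16, 0.47, -0.38, 1.52],
})

df2 = pd.DataFrame({
    "id": [1, 2, 3, 4, 5],
    "firstname": ["David", "Joe", "Jane", "Sam", "Alex"],
    "lastname": ["Spade", "Montana", "Doe", "Smith", "Jones"],
    "state": ["TX", "TX", "CA", "CA", "NY"],
})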

R and sqldf

sqldf("SELECT COUNT(*) FROM df2 WHERE state = 'CA'")
 
  COUNT(*)
1        4
 
sqldf("SELECT 
            df2.firstname, 
            df2.lastname, 
            df1.var1, 
            df2.state 
      FROM df1
      INNER JOIN df2 ON df1.personid = df2.id 
      WHERE df2.state = 'TX'")
 
  firstname lastname  var1 state
1     David    Spade -2.09    TX
2       Joe  Montana  1.16    TX
 
sqldf("SELECT 
            df2.state, 
            COUNT(df1.var1) 
      FROM df1
      INNER JOIN df2 ON df1.personid = df2.id WHERE df1.var1 > 0
      GROUP BY df2.state")
 
   state COUNT(df1.var1)
1     AZ               1
2     CA               1
3     GA               1
4     IL               1
5     NC               1
6     NY               1
7     OK               1
8     SC               1
9     TX               1
10    VT               1

Python and pandasql

import pandasql as ps
 
 
q1 = """SELECT COUNT(*) FROM df2 WHERE state = 'CA'"""
 
print(ps.sqldf(q1, locals()))
 
   COUNT(*)
0         4
 
q2 = """
    SELECT 
        df2.firstname, 
        df2.lastname, 
        df1.var1, 
        df2.state 
    FROM df1 INNER JOIN df2 ON df1.personid = df2.id 
    WHERE df2.state = "TX";
    """
 
print(ps.sqldf(q2, locals()))
 
  firstname lastname  var1 state
0     David    Spade -2.09    TX
1       Joe  Montana  1.16    TX
 
q3 = """SELECT 
            df2.state, 
            COUNT(df1.var1) 
      FROM df1
      INNER JOIN df2 ON df1.personid = df2.id WHERE df1.var1 > 0
      GROUP BY df2.state"""
 
print(ps.sqldf(q3, locals()))
 
  state  COUNT(df1.var1)
0    AZ                1
1    CA                1
2    GA                1
3    IL                1
4    NC                1
5    NY                1
6    OK                1
7    SC                1
8    TX                1
9    VT                1
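For comparison, the same join-and-aggregate logic can be written with pandas' native operations. This is a minimal sketch using merge and groupby on the illustrative df1 and df2 frames sketched above; the column names are the ones referenced in the queries.

# Equivalent of q2: inner join on the id columns, then keep the Texas rows
joined = df1.merge(df2, left_on="personid", right_on="id", how="inner")
tx = joined.loc[joined["state"] == "TX", ["firstname", "lastname", "var1", "state"]]
print(tx)

# Equivalent of q3: count positive var1 values by state
print(joined[joined["var1"] > 0].groupby("state")["var1"].count())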

 

Graphviz by Example: Part Two

My previous post introduced the dot language and how it can be used to create flowcharts. For part two, I set out to partially reproduce a more demanding visualization to highlight what Graphviz can do. The original graphic, a quick reference for choosing an estimator, comes from the website of the Python scikit-learn library and can be found here.

digraph Cheat_Sheet {
 
    graph [fontsize=10 fontname="Verdana" compound=true];
    node [shape=record fontsize=10 fontname="Verdana"];
 
    A1 [label="START", shape=box, fontcolor=black, color=Yellow, style=filled]; 
    A2 [label=">50\nsamples", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
    A3 [label="get\nmore\ndata", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
    A4 [label="predicting a\ncategory", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
    A5 [label="predicting a\nquantity", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
    A6 [label="do you have\nlabeled\ndata", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
    A7 [label="just\nlooking", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
    A8 [label="predicting\nstructure", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
    A9 [label="tough\nluck", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
 
    A1 -> A2 [color=yellow];
    A2 -> A3 [color=red, fontcolor=red, label="NO", fontsize=10];
    A2 -> A4 [color=green, fontcolor=green, label="YES", fontsize=10];
    A4 -> A5 [color=red, fontcolor=red, label="NO", fontsize=10];
    A4 -> A6 [color=green, fontcolor=green, label="YES", fontsize=10];
    A5 -> A7 [color=red, fontcolor=red, label="NO", fontsize=10];
    A7 -> A8 [color=red, fontcolor=red, label="NO", fontsize=10];
    A8 -> A9 [color=yellow];
    A6 -> H [color=green, fontcolor=green, label="YES", fontsize=10];
    A6 -> P [color=red, fontcolor=red, label="NO", fontsize=10];
    A5 -> B [color=green, fontcolor=green, label="YES", fontsize=10];
    R -> A9 [color=red, fontcolor=red, label="NO", fontsize=10];
    A7 -> X [color=green, fontcolor=green, label="YES", fontsize=10];
 
    subgraph cluster_1 {
	label=<<B>regression</B>>;
        color=lightgrey;
        style=filled;
 
	B [label="<100k\nsamples", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
        C [label="SGD\nRegressor", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
        D [label="few features\nshould be\nimportant", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
        E [label="ElasticNet\nLasso", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
        F [label="Ridge Regression\nSVR\n(kernel='linear')", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
	G [label="SVR(kernel='rbf')\nEnsembleRegressors", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
 
	B -> C [color=red, fontcolor=red, label="NO", fontsize=10];
	B -> D [color=green, fontcolor=green, label="YES", fontsize=10];
	D -> E [color=green, fontcolor=green, label="YES", fontsize=10];
	D -> F [color=red, fontcolor=red, label="NO", fontsize=10];
	F -> G [color=yellow, fontcolor=yellow, label="NOT\nWORKING", fontsize=10];
    }
 
    subgraph cluster_2 {
        label=<<B>classification</B>>;
        color=lightgrey;
        style=filled;
 
        H [label="<100k\nsamples", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
        I [label="SGD\nClassifier", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
        J [label="Linear\nSVC", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
        K [label="Text\nData", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
        L [label="KNeighbors\nClassifier", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
	M [label="Naive\nBayes", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
	N [label="kernal\napproximation", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
	O [label="SVC\nEnsemble\nClassifiers", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
 
	H -> I [color=red, fontcolor=red, label="NO", fontsize=10];
	H -> J [color=green, fontcolor=green, label="YES", fontsize=10];
	J -> K [color=yellow, fontcolor=yellow, label="NOT\nWORKING", fontsize=10];
	I -> N [color=yellow, fontcolor=yellow, label="NOT\nWORKING", fontsize=10];
	K -> M [color=green, fontcolor=green, label="YES", fontsize=10];
	K -> L [color=red, fontcolor=red, label="NO", fontsize=10];
	L -> O [color=yellow, fontcolor=yellow, label="NOT\nWORKING", fontsize=10];       
    }
 
    subgraph cluster_3 {
        label=<<B>clustering</B>>;
        color=lightgrey;
        style=filled;
 
        P [label="number of\ncategories\nknown", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
	Q [label="<10k\nsamples", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
        R [label="<10k\nsamples", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
        S [label="MeanShift\nVBGMM", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
        T [label="MiniBatch\nKMeans", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
	U [label="KMeans", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
	V [label="Spectral\nClustering\nGMM", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
 
	P -> Q [color=green, fontcolor=green, label="YES", fontsize=10];
	P -> R [color=red, fontcolor=red, label="NO", fontsize=10];
	R -> S [color=green, fontcolor=green, label="YES", fontsize=10];
	Q -> U [color=green, fontcolor=green, label="YES", fontsize=10];
	Q -> T [color=red, fontcolor=red, label="NO", fontsize=10];
	U -> V [color=yellow, fontcolor=yellow, label="NOT\nWORKING", fontsize=10];       
    }
 
    subgraph cluster_4 {
        label=<<B>dimensionality reduction</B>>;
        color=lightgrey;
        style=filled;
 
	W [label="<10k\nsamples", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
        X [label="Randomized\nPCA", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
        Y [label="Isomap\nSpectral\nEmbedding", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
        Z [label="kernel\napproximation", shape=box, fontcolor=black, color=cadetblue2, style=filled]; 
	AA [label="LLE", shape=box, fontcolor=black, color=cadetblue2, style=filled];
 
	X -> W [color=yellow, fontcolor=yellow, label="NOT\nWORKING", fontsize=10];       
	W -> Y [color=green, fontcolor=green, label="YES", fontsize=10];
	W -> Z [color=red, fontcolor=red, label="NO", fontsize=10];
	Y -> AA [color=yellow, fontcolor=yellow, label="NOT\nWORKING", fontsize=10];       
    }
 
}

Blog_Graphviz.dot

 

Graphviz by Example: Part One

Introduction
Graphviz is an open-source software package originally developed at AT&T Labs for generating directed graphs and flowcharts. Outputs are described in DOT, a plain-text graph description language that ships with the Graphviz package. Graphviz is a powerful application that allows users to create appealing flowcharts without getting hung up on the layout or positioning of the nodes. Graphs in Graphviz are composed of nodes and edges (lines) and can be directed (lines with arrows) or undirected (lines without arrows). To create a flowchart with dot, the user specifies the type of graph and its name, and then defines the contents of the graph within braces.

Basics
To generate visualizations with Graphviz, create a dot source file and save it with a .dot extension. In the terminal, set the current working directory to the location of that file and render it by running 'dot -Tpng -O file_name.dot'.
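If you would rather drive this from a script, the same workflow can be automated. Below is a minimal Python sketch, not part of the original post, that writes a small dot source file and then calls the dot binary, which is assumed to be installed and on your PATH.

import subprocess

# Write a minimal dot source file to disk...
dot_source = """digraph name1 {
    Node1 [label="Node_1"];
    Node2 [label="Node_2"];
    Node1 -> Node2;
}
"""

with open("example.dot", "w") as f:
    f.write(dot_source)

# ...then render it to example.dot.png, the same as running
# 'dot -Tpng -O example.dot' in the terminal.
subprocess.run(["dot", "-Tpng", "-O", "example.dot"], check=True)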

Example One:

graph name1 {
     Node1 [label="Node_1"];
}

blog_one.dot

Example Two:

digraph name1 { 
      Node1 [label="Node_1"];
      Node2 [label="Node_2"];
      Node1 -> Node2;
}

blog_two.dot

Example Three:

digraph name1 { 
     Node1 [label="Node_1"];
     Node2 [label="Node_2"];
     Node3 [label="Node_3"];
     Node1 -> Node2 -> Node3;
     Node1 -> Node3;
     Node2 -> Node1;
}

blog_three.dot

Example Four:

digraph name1 { 
   Node1 [label="Node_1"];
   Node2 [label="Node_2", shape=diamond];
   Node3 [label="Node_3", color=Blue, fontcolor=Navy, shape=box, style=filled, 
               color="#d3edea"]; 
   Node4 [label="Node_4"];
   Node1 -> Node2 -> Node3;
   Node1 -> Node3 [color=red, label="Best Choice", fontsize=10];
   Node2 -> Node1;
   Node2 -> Node4;
}

blog_four.dot


 

Using csvkit to Summarize Data: A Quick Example

As data analysts, we're frequently handed comma-separated value (CSV) files and asked to report insights. While it's tempting to import that data directly into R or Python for data munging and exploratory analysis, there are also a number of utilities for examining, fixing, slicing, transforming, and summarizing data from the command line. In particular, csvkit is a suite of Python-based utilities for working with CSV files from the terminal. In this post, we will grab data with wget, subset the rows containing a particular value, and summarize the data in different ways. The goal is to take data on criminal activity, group it by a particular offense type, and develop counts to understand the frequency distribution.

Let's start by installing csvkit. Go to your command line and type the following command.

$ pip install csvkit

One: Set the working directory.

$ cd /home/abraham/Blog/Chicago_Analysis

Two: Use the wget command to grab the data and save it as a CSV file named rows.csv.

$ wget --no-check-certificate --progress=dot -O rows.csv "https://data.cityofchicago.org/api/views/ijzp-q8t2/rows.csv?accessType=DOWNLOAD"

This dataset contains reported incidents of crime that occurred in the city of Chicago from 2001 to the present. The data comes from the Chicago Police Department's Citizen Law Enforcement Analysis and Reporting (CLEAR) system.

Three: Let's check which files are now in the working directory and how many rows the downloaded file contains. We will also use the csvcut command to identify the names of each column in that file.

$ ls
$ wc -l rows.csv
$ csvcut -n rows.csv

Four: Using csvsql, let's find the unique values in the sixth column of the file, Primary Type. Since we're interested in incidents of prostitution, those observations will be subset using the csvgrep command and written to a CSV file named rows_pros.csv.

$ csvsql --query "SELECT [Primary Type], COUNT(*) FROM rows GROUP BY [Primary Type]" rows.csv | csvlook
$ csvgrep -c 6 -m PROSTITUTION rows.csv > rows_pros.csv

Five: Use csvlook and head to look at the first few rows of the new CSV file. The Primary Type column should now only contain incidents of crime involving prostitution.

$ wc -l rows_pros.csv
$ csvlook rows_pros.csv | head

Six: We now have the data we need, so let's do a quick count of each description associated with the prostitution offense. This is done using the csvsql and csvlook command-line tools.

$ csvsql --query "SELECT [Primary Type], Description, COUNT(*) FROM rows_pros GROUP BY Description" rows_pros.csv | csvlook


This has been a quick example of how the various csvkit utilities can be used to take a large CSV file, extract specific observations, and generate summary statistics by executing a SQL query on that data. While this same analysis could have been performed more efficiently in R or Python, it's important for analysts to remember that the command line offers a variety of utilities that can simplify their daily work.
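For comparison, here is a minimal pandas sketch of the same analysis. The file name rows.csv and the Primary Type and Description columns come from the steps above; everything else is an illustrative assumption.

import pandas as pd

# Read the full extract (the file is large, so this can take a while)
crimes = pd.read_csv("rows.csv")

# Keep only prostitution incidents, mirroring the csvgrep step
pros = crimes[crimes["Primary Type"] == "PROSTITUTION"]

# Count each description associated with the offense, mirroring the csvsql query
print(pros.groupby("Description").size().sort_values(ascending=False))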