Collecting data from Zillow with R

My mom has been house hunting over the past couple of weeks, so I decided to try and use R to look at the local market. Here’s what I’ve learned:

Collecting data from Zillow was pretty easy, overall. I mostly used the R packages rvest, xml2, and tidyr.

library(rvest)
library(tidyr)
library(xml2)

Next, I went to Zillow and searched for homes in Denver, CO. I zoomed in on an area that I wanted to analyze and then copied the link and pulled the data in R:

url<-"https://www.zillow.com/homes/for_sale/Denver-CO_rb/?fromHomePage=true&shouldFireSellPageImplicitClaimGA=false&fromHomePageTab=buy"
webpage<-read_html(url)

The next part gets pretty complicated to explain. You essentially have to find the information you want in the page source, which looks like a bunch of scrambled text. It’s helpful to go back to the webpage, right click, and select “View Page Source.” This will help you identify the structure of the webpage and pull the data you want. I started by parsing the pagination links out of the metadata so that I could build a URL for each page of results. You’ll have to strip out characters to parse the data, which I show below:

houses<- webpage %>%
  html_nodes(".zsg-pagination a") %>%
  html_attr("href")

houses<-houses[!is.na(houses)]
houses <-strsplit(houses,"/")
houses<-lapply(houses, function(x) x[length(x)])
houses<-as.numeric(gsub('[_p]','',houses))
houses <-max(houses)
urls<-c(url,paste0(url,2:houses,'_p/'))
urls

Then I used Jonkatz2’s parser function to strip the data down even further. The rest of his functions didn’t work for me =/

getZillow <- function(urls) {
  lapply(urls, function(u) {
    cat(u, '\n')
    houses <- read_html(u) %>%
      html_nodes("article")
    houses
  })
}
zdata<- getZillow(urls)

Instead, I ended up breaking down different parts of his function to get the data that I needed. The reason I had to write all of this complicated syntax is that the data is stored in lists within lists.
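
To make the nesting concrete, here is a quick way to inspect the structure (assuming the scrape above returned at least one page with at least one listing):

length(zdata)        #zdata has one element per results page
length(zdata[[1]])   #each element is a nodeset of <article> tags, one per listing
zdata[[1]][[1]]      #the first listing (an xml_node) on the first page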

#to pull ID
getID<- function(data) {
  ldata<-1:length(data)
  lapply(1:length(ldata), function(x) {
    num<-data[[x]]
    ids<-num %>% html_attr("id")
  })
}
id<-getID(zdata)

#get latitude
getLAT<- function(data) {
  ldata<-1:length(data)
  lapply(1:length(ldata), function(x) {
    num<-data[[x]]
    lat<-num %>% html_attr("data-latitude")
    
  })
}
lats<-getLAT(zdata)

#get longitude
getLONG<- function(data) {
  ldata<-1:length(data)
  lapply(1:length(ldata), function(x) {
    num<-data[[x]]
    long<-num %>% html_attr("data-longitude")
    
  })
}
longs<-getLONG(zdata)

#get price
getPrice<- function(data) {
  ldata<-1:length(data)
  lapply(1:length(ldata), function(x) {
    num<-data[[x]]
    price<-num %>%  
      html_node(".zsg-photo-card-price") %>%
      html_text() 
  })
}
price<-getPrice(zdata)

#house description
getHdesc<- function(data) {
  ldata<-1:length(data)
  lapply(1:length(ldata), function(x) {
    num<-data[[x]]
    Hdesc<-num %>%  
      html_node(".zsg-photo-card-info") %>%
      html_text() %>%
      strsplit("\u00b7")
  })
}
hdesc<-getHdesc(zdata)

#needs to be stripped down further
hdesc[[1]][[1]]
ldata2<-length(hdesc[[1]]) #number of descriptions on the first page

beds<-list()
getBeds<- function(data) {
  for(i in 1:length(data)) {
    t1<-data[[i]]
     beds[[i]]<- t1 %>%
       purrr::map_chr(1)
  }
  return(beds)
}
beds<-getBeds(hdesc)

baths<-list()
getBath<- function(data) {
  for(i in 1:length(data)) {
    t1<-data[[i]]
    baths[[i]]<- t1 %>%
      purrr::map_chr(2)
  }
  return(baths)
}
baths<-getBath(hdesc)

sqft<-list()
getSQft<- function(data) {
  for(i in 1:length(data)) {
    t1<-data[[i]]
    sqft[[i]]<- t1 %>%
      purrr::map_chr(3)
  }
  return(sqft)
}
sqft<-getSQft(hdesc)

#house type
getHtype<- function(data) {
    ldata<-1:length(data)
    lapply(1:length(ldata), function(x) {
      num<-data[[x]]
      Htype<-num %>%  
        html_node(".zsg-photo-card-spec") %>%
        html_text()
  })
}
htype<-getHtype(zdata)

#address
getAddy<-function(data) {
  ldata<- 1:length(data)
  lapply(1:length(ldata),function(x) {
    num<-data[[x]]
    addy<- num %>%
      html_nodes(".zsg-photo-card-address") %>%
    html_text() %>%
      strsplit("\u00b7")
  })
}

address<-getAddy(zdata)

#listing type
getLtype<-function(data) {
  ldata<-1:length(data)
  lapply(1:length(ldata), function(x) {
    num<-data[[x]]
    ltype<-num %>% html_attr("data-pgapt")
    
  })
}
list_type<-getLtype(zdata)

Now you can unlist one level:

address<-lapply(address, function(x) unlist(x))
htype<-lapply(htype, function(x) unlist(x))
id<-lapply(id, function(x) unlist(x))
lats<-lapply(lats,function(x) unlist(x))
longs<-lapply(longs,function(x) unlist(x))
list_type<-lapply(list_type,function(x) unlist(x))
price<-lapply(price,function(x) unlist(x))

Then, I put it all in a data frame:

df<-data.frame()
list<-list(id, price, address, beds, baths, sqft, list_type,longs, lats, htype)
makeList<-function(data) {
  ldata<-1:length(data)
  lapply(1:length(ldata), function(x) {
    num<-data[[x]]
    ll<-num %>% unlist(recursive=FALSE) 
  })
}
List<-makeList(list)
df<-data.frame(id=c(List[[1]]), price=c(List[[2]]), address=c(List[[3]]),
               beds=c(List[[4]]), baths=c(List[[5]]), sqft=c(List[[6]]),
               l_type=c(List[[7]]), long=c(List[[8]]), lat=c(List[[9]]),
               h_type=c(List[[10]]))

Some of these variables are not correctly formatted. For example, latitude and longitude values were stripped of their decimal points, so I need to add them back in by first removing the factor formatting and then doing some division.

df$long <-as.numeric(as.character(df$long)) / 1000000
df$lat<-as.numeric(as.character(df$lat)) / 1000000

Also, some of my other variables have characters in them, so I want to remove that too:

df$beds <-as.numeric(gsub("[^0-9]", "",df$beds, ignore.case = TRUE))
df$baths <-as.numeric(gsub("[^0-9]", "",df$baths, ignore.case = TRUE))
df$sqft <-as.numeric(gsub("[^0-9]", "",df$sqft, ignore.case = TRUE))
df$price <-as.numeric(gsub("[^0-9]", "",df$price, ignore.case = TRUE))
#replace NAs with 0
df[is.na(df)]<-0
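
Before mapping, I like to run a quick sanity check to confirm that the conversions behaved (an optional step, not part of the original cleanup):

#quick check that the cleaned variables are numeric and in a sensible range
str(df)
summary(df[, c("price", "beds", "baths", "sqft", "long", "lat")])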

Now I can map my data, in addition to conducting any analyses that I may want to do. Since there’s a ton of stuff out there on conducting analyses in R, I’ll just show you how I mapped my data using the leaflet package:

library(leaflet)
m <- leaflet() %>%
  addTiles() %>%
  addMarkers(lng=df$long, lat=df$lat, popup=df$id) 
m

It should look like this:

DenverZillow

If you click on the markers, they will show you the house IDs that they are associated with. You can see the web version by going to my OSF account, where I also posted the R program that I used.
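
If you want the popups to show more than the house ID, here is a small variation on the map above (my own tweak, not part of the original script) that passes the data frame to leaflet() and builds the popup text from the address and price columns:

m2 <- leaflet(df) %>%
  addTiles() %>%
  addMarkers(lng = ~long, lat = ~lat,
             popup = ~paste0(address, "<br>$", trimws(format(price, big.mark = ",")))) #popups accept HTML
m2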

Collecting Twitter Data using the twitteR package in Rstudio

Last week, I wrote a blog post about collecting data using Tweepy in Python. Like usual, I decided to recreate my work in R, so that I can compare my experience using different analytical tools. I will walk you through what I did, but I assume that you already have Rstudio installed. If not, and you wish to follow along, here’s a link to a good resource that explains how to download and install Rstudio.

Begin by loading the following libraries–download them if you don’t have them already installed.

#To download:
#install.packages(c("twitteR", "purrr", "dplyr", "stringr"),dependencies=TRUE)

library(twitteR)
library(purrr)
suppressMessages(library(dplyr))
library(stringr)

Next, initiate the OAuth protocol. This of course assumes that you have registered your Twitter app. If not, here’s a link that explains how to do this.


api_key <- "your_consumer_api_key"
api_secret <-"your_consumer_api_secret"
token <- "your_access_token"
token_secret <- "your_access_secret"

setup_twitter_oauth(api_key, api_secret, token, token_secret)

Now you can use the package twitteR to collect the information that you want. For example, #rstats or #rladies <–great hashtags to follow on Twitter, btw 😉

tw = searchTwitter('#rladies + #rstats', n = 20)

which will return a list of (20) tweets that contain the two search terms that I specified:

RstatsRlaides

*If you want more than 20 tweets, simply increase the number following n=

Alternatively, you can collect data on a specific user. For example, I am going to collect tweets from this awesome R-Lady, @Lego_RLady:

Again, using the twitteR package, type the following:


LegoRLady <- getUser("LEGO_RLady") #for info on the user
RLady_tweets<-userTimeline("LEGO_RLady",n=30,retryOnRateLimit=120) #to get tweets
tweets.df<-twListToDF(RLady_tweets) #turn into data frame
write.csv(tweets.df, "Rlady_tweets.csv", row.names = FALSE) #export to a CSV file (opens in Excel)

Luckily, she only has 27 tweets total. If you are collecting tweets from a user that has been on Twitter for longer, you’ll likely have to use a loop to continue collecting every tweet because of the rate limit. If you export to Excel, you should see something like this:
Excel

*Note: I bolded the column names and created the border to help distinguish the data
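
If you do need to page through a longer timeline, the sketch below shows one way to loop with maxID until you run out of tweets. I haven’t tested this on a large account, the 200-tweet batch size and the de-duplication step are my own choices, and keep in mind that the API only reaches back roughly 3,200 tweets per user:

all_tweets <- list()
max_id <- NULL
repeat {
  batch <- userTimeline("LEGO_RLady", n = 200, maxID = max_id, retryOnRateLimit = 120)
  if (length(batch) == 0) break
  all_tweets <- c(all_tweets, batch)
  if (length(batch) < 200) break         #nothing older left to fetch
  max_id <- batch[[length(batch)]]$id    #oldest ID in this batch becomes the new ceiling
}
all_tweets.df <- twListToDF(all_tweets)
all_tweets.df <- all_tweets.df[!duplicated(all_tweets.df$id), ] #maxID is inclusive, so drop the overlap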

If you’re interested in the retweets and replies to @LEGO_RLady, then you can search for that specifically. To limit the amount of data, let’s limit it to any replies since the following tweet:

target_tweet<-"991771358634889222"
atRLady <- searchTwitter("@LEGO_RLady", 
                       sinceID=target_tweet, n=25, retryOnRateLimit = 20)
atRLady.df<-twListToDF(atRLady)

The atRLady.df data frame should look like this:

atRlady

There’s much more data if you scroll right. You should have 16 variables total.
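
If scrolling through the viewer is a pain, you can also list the columns in the console:

names(atRLady.df) #the 16 column names
str(atRLady.df)   #structure and type of each column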

Sometimes there are characters in the tweet that result in errors. To make sure that the tweet is in plain text, you can do the following:

replies <- unlist(atRLady) #make sure to use the list and not the data frame

#helper function to remove characters:
clean_tweets <- function (tweet_list) {
  lapply(tweet_list, function (x) {
    x <- x$getText() # get text alone
    x <- gsub("&amp", "", x) # rm ampersands
    x <- gsub("(f|ht)(tp)(s?)(://)(.*)[.|/](.*) ?", "", x) # rm links
    x <- gsub("#\\w+", "", x) # rm hashtags
    x <- gsub("@\\w+", "", x) # rm usernames
    x <- iconv(x, "latin1", "ASCII", sub="") # rm emojis
    x <- gsub("[[:punct:]]", "", x) # rm punctuation
    x <- gsub("[[:digit:]]", "", x) # rm numbers
    x <- gsub("[ \t]{2}", " ", x) # rm tabs
    x <- gsub("\\s+", " ", x) # rm extra spaces
    x <- trimws(x) # rm leading and trailing white space
    x <- tolower(x) # convert to lower case
  })
}
tweets_clean <- unlist(clean_tweets(replies))
# If you want to recombine the text with the metadata (user, time, favorites, retweets)
tweet_data <- data.frame(text=tweets_clean)
tweet_data <- tweet_data[tweet_data$text != "",]
tweet_data<-data.frame(tweet_data)
tweet_data$user <-atRLady.df$screenName
tweet_data$time <- atRLady.df$created
tweet_data$favorites <- atRLady.df$favoriteCount
tweet_data$retweets <- atRLady.df$retweetCount
tweet_data$time_bin <- cut.POSIXt(tweet_data$time, breaks="3 hours", labels = FALSE)
tweet_data$isRetweet <- atRLady.df$isRetweet

You can pull other information from the original data frame as well, but I don’t find that information very helpful since it is usually NA (e.g., latitude and longitude). The final data frame should look like this:

cleantweets
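
For completeness, if you do want those extra fields, they can be copied over the same way as the columns above (an optional addition of mine, assuming the standard twListToDF column names):

tweet_data$lat <- atRLady.df$latitude         #usually NA for these tweets
tweet_data$long <- atRLady.df$longitude       #usually NA as well
tweet_data$source <- atRLady.df$statusSource  #the client the reply was sent from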

Now you can analyze the data. For example, you can graph the number of retweets for each reply:

library(ggplot2)
ggplot(data = tweet_data, aes(x = retweets)) +
  geom_bar(aes(fill = ..count..)) +
  theme(legend.position = "none") +
  xlab("Retweets") +
  scale_fill_gradient(low = "midnightblue", high = "aquamarine4")
dev.copy(png,'myplot.png')
dev.off()

myplot

If you have more data, you can conduct a sentiment analysis of all the words in the text of the tweets or create a wordcloud (example below).

BigDataWordMap-1264x736
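
Here is a minimal sketch of how you could build a wordcloud from the cleaned tweet text using the wordcloud package (my own sketch, not the code behind the example image, which implies a larger dataset):

library(wordcloud)
library(RColorBrewer)

#split the cleaned tweet text into words and tabulate their frequencies
words <- unlist(strsplit(as.character(tweet_data$text), "\\s+"))
word_freq <- sort(table(words), decreasing = TRUE)

wordcloud(names(word_freq), as.numeric(word_freq),
          min.freq = 2, max.words = 100,
          random.order = FALSE, colors = brewer.pal(8, "Dark2"))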

Overall, using R to collect data from Twitter was really easy. Honestly, it was pretty easy to do in Python too. However, I must say that the R community is slightly better when it comes to sharing resources and blogs that make it easy for beginners to follow what they’ve done. I really love the open source community and I’m excited that I am a part of this movement!

PS- I forgot to announce that I am officially an R-Lady (see directory)! To all my fellow lady friends, I encourage you to join!

Mining Data from Twitter (and replies to Tweets) with Tweepy

I recently met someone who is interested in mining data from Twitter. In addition to mining data from Twitter, however, they’re also interested in collecting all of the replies. I thought that I would try giving it a shot and sharing what I learn.

Note: This post assumes that Python is installed on your computer. If you haven’t installed Python, this Python Wiki walks you through the process.

To scrape tweets from Twitter, I recommend using Tweepy, but there are several other options. To install tweepy:

pip install tweepy

*Note: If your environments are configured like mine, you may need to type: conda install -c conda-forge tweepy

Now, go to Twitter’s developer page to register your app (you will have to sign in with your username and password, or sign up with a new username). You should see a button on the right-hand side of the page that says “Create New App.” Fill out the necessary fields (i.e., the name of the app, its description, and your website) and then check the box that says you agree to their terms, which I linked to above. If you don’t have a publicly accessible website, just list the web address that is hosting your app (e.g., a link to your school profile or your work website). You can likely ignore the Callback URL field, unless you are allowing users to log into your app to authenticate themselves, in which case enter the URL where they would be returned after they’ve given Twitter permission to use your app.

After registering your app, you should see a page where you can create your access token. Click the “Create my access token” button. If you don’t see this button after a few seconds, refresh the page. The next page will ask you what type of access you need. For this example, we will need Read, Write, and Access Direct Messages. Now, note your OAuth settings, particularly your Consumer Key, Consumer Secret, OAuth Access Token, and OAuth Access Token Secret. Don’t share this information with anyone!

Next, import tweepy and use the OAuth interface to collect data.

import tweepy
from tweepy import OAuthHandler

consumer_key = 'YOUR-CONSUMER-KEY'
consumer_secret = 'YOUR-CONSUMER-SECRET'
access_token = 'YOUR-ACCESS-TOKEN'
access_secret = 'YOUR-ACCESS-SECRET'

auth = OAuthHandler(consumer_key, consumer_secret)
auth.set_access_token(access_token, access_secret)

api = tweepy.API(auth,  wait_on_rate_limit=True)

Now you can collect tweets that are currently being shared on Twitter—for example, tweets under “StarWarsDay”. Note that I’ve asked Python to format each tweet by first listing the user’s screen name and then the text of the tweet. If you want ALL the metadata, remove the formatting that I specified.

#you'll need to import json to run this script
import json
class PrintListener(tweepy.StreamListener):
    def on_data(self, data):
        # Decode the JSON data
        tweet = json.loads(data)

        # Print out the Tweet
        print('@%s: %s' % (tweet['user']['screen_name'], tweet['text'].encode('ascii', 'ignore')))

    def on_error(self, status):
        print(status)


if __name__ == '__main__':
    listener = PrintListener()

    # Show system message
    print('I will now print Tweets containing "StarWarsDay"! ==>')

    # Authenticate
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_secret)

    # Connect the stream to our listener
    stream = tweepy.Stream(auth, listener)
    stream.filter(track=['StarWarsDay'], async=True)

Here’s a brief glimpse of what I got:

I will now print Tweets containing "StarWarsDay"! ==>
@Amanda33441401: b'RT @givepennyuk: This #StarWarsDay we want you to feel inspired by the force to think up a Star Wars fundraiser!\n\n  Star Wars Movie Marat'
@jin_keapjjang: b'RT @HamillHimself: People around the world are marking #StarWarsDay in spectacular style. #MayThe4thBeWithYou https://t.co/BM02D965Xa via @'
@Bradleyg1996G: b'RT @NHLonNBCSports: MAY THE PORGS BE WITH YOU\n\n#StarWarsDay #MayThe4thBeWithYou https://t.co/HWAmptYND5'
@thays_jeronimo: b"RT @g1: 'May the 4th'  celebrado por fs de 'Star Wars' https://t.co/ggNhaEQCPV #MayThe4thBeWithYou #StarWarsDay #G1 https://t.co/wUY74DZL"
@DF_SomersetKY: b"If you're a fan of the franchise, you're going to love all of this Star Wars gear for your car! Tweet us your favor https://t.co/PteAtqS1Ui"
@zakrhssn: b'RT @williamvercetti: #StarWarsDay https://t.co/fgHZzTZ0Fm'
@kymaticaa: b'RT @Electric_Forest: May The Forest be with you.  #ElectricForest #StarWarsDay #StarWars https://t.co/bfQnZHI8eX'
@hullodave: b'"Only Imperial Stormtroopers are this precise" How precise? Not very? But why? Science! #StarWarsDay https://t.co/niZ2h6ssnp'

To store the data you just collected, rather than just printing it, you’ll have to add some extra code that appends each tweet to a CSV file (the try/except block around the file write below):

import csv
import json
class PrintListener(tweepy.StreamListener):
    def on_data(self, data):
        # Decode the JSON data
        tweet = json.loads(data)

        # Print out the Tweet
        print('@%s: %s' % (tweet['user']['screen_name'], tweet['text'].encode('ascii', 'ignore')))

        # Append the raw JSON for each tweet to a CSV file
        try:
            with open('StarWarsDay.csv','a') as f:
                f.write(data)
        except BaseException:
            pass

    def on_error(self, status):
        print(status)


if __name__ == '__main__':
    listener = PrintListener()

    # Show system message
    print('I will now print Tweets containing "StarWarsDay"! ==>')

    # Authenticate
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_secret)

    # Connect the stream to our listener
    stream = tweepy.Stream(auth, listener)
    stream.filter(track=['StarWarsDay'], async=True)

Now, let’s see if it’s possible to get replies to a tweet. I checked these users (listed above) and none of them have any replies, so I decided to just search #StarWarsDay on Twitter instead. I immediately found the Twitter handle for Arrested Development, which has replies (9 at the time of this writing) to its tweet that includes #StarWarsDay:

Look at the hyperlink: https://twitter.com/bluthquotes/status/992433028155654144
You can see that the user we are interested in is @bluthquotes and that the id for this particular tweet is “992433028155654144”.

To get tweets from just @bluthquotes, you would type

bluthquotes_tweets = api.user_timeline(screen_name = 'bluthquotes', count = 100)

for status in bluthquotes_tweets:
    print(status)

To see all the replies to the @bluthquotes tweet posted above:


import sys

replies=[]
non_bmp_map = dict.fromkeys(range(0x10000, sys.maxunicode + 1), 0xfffd)

for full_tweets in tweepy.Cursor(api.user_timeline,screen_name='bluthquotes',timeout=999999).items(10):
  for tweet in tweepy.Cursor(api.search,q='to:bluthquotes', since_id=992433028155654144, result_type='recent',timeout=999999).items(1000):
    if hasattr(tweet, 'in_reply_to_status_id_str'):
      if (tweet.in_reply_to_status_id_str==full_tweets.id_str):
        replies.append(tweet.text)
  print("Tweet :",full_tweets.text.translate(non_bmp_map))
  for elements in replies:
       print("Replies :",elements)
  replies.clear()

*Note: change the .items(10) line to get more replies. Remember that Twitter limits you to 100 per hour (at least at the time of this writing).

This is what I got:

Tweet : @HulkHogan You ok hermano?
Tweet : Go see a Star War #MayTheForceBeWithYou #StarWarsDay https://t.co/OLcmCAEl30
Replies : @bluthquotes @mlot11
Replies : @bluthquotes Oh my yes
Replies : @bluthquotes Star Wars needs more gay characters #jarjarXobama
Replies : @bluthquotes @TheSAPeacock May the 4th be with you!!!!
Replies : @bluthquotes @SaraAnneGill
Replies : @bluthquotes @kurkobains Por q xuxa la sacaron de #Netflix
Replies : @bluthquotes  https://t.co/Qv4KJJ6dFU
Replies : @bluthquotes @hiagorecanello
Replies : @bluthquotes No it’s #CincoDeCuatro
Replies : @bluthquotes @auburnhays 😂
Replies : @bluthquotes Go see a Star War on Cinco de Quatro! 🤠🌶️🍹
Replies : @bluthquotes @jmdroberts
Tweet : RT @arresteddev: Hey, hermanos! It's Cinco de Cuatro! Season 4 Remix is now streaming. https://t.co/Alw0Z2Zwlm
Tweet : Keep fighting little guy #StarWarsDay  #MayThe4thBeWithYou https://t.co/Uim4D2BP49
Replies : @bluthquotes "worth every penny"
Replies : @bluthquotes You’re still doin’ that?
Replies : @bluthquotes Im crying 😭
Replies : @bluthquotes @theJdog 😂😂😂😂😂
Tweet : @JTHM8008 @herooine @JeffEisenband @ZachAJacobson I say HUZZAH! like this at least 5 times a week.
Tweet : @herooine @JeffEisenband @ZachAJacobson Checks out ✅
Tweet : @gjb512 It’s there already. Huzzah!
Tweet : @drkatiemd_ @MitchHurwitz It’s a wonderful program!
Tweet : I prematurely blue myself  #EmbarrassmentIn4Words https://t.co/QYUFeSKFT2
Tweet : @sebastrivi @VICE @arresteddev I’m not on board

You can see that it’s not quite what I wanted: rather than just the responses to the Star Wars tweet, the loop also returned replies to several of @bluthquotes’ other recent tweets. According to the API reference page, there should be a way to limit the returned text to replies to the specific tweet we are interested in, but I will have to continue tinkering with it. I’ll post an update when I figure it out.

US Fertility Heat Map DIY

The US fertility heat maps that I made a couple of weeks ago received a lot of attention, and one of the questions I’ve been asked is how I produced them, which I describe in this post.

As I mentioned in my previous post, I simply followed the directions specified in this article, but I limited the UN data to the US. Overall, I think the article does a good job of explaining how they created their heat map in Tableau. The reason I remade the heat map in R is that I was frustrated with the process of trying to embed the visualization into WordPress. Both Tableau and WordPress charge you to embed visualizations in a format that is aesthetically pleasing. Luckily, recreating the heat map in R was extremely easy and just as pretty, at least in my opinion. Here’s how I did it:

First, download the data from the UN website–limit the data to the US only. Alternatively, I’ve linked to the (formatted) data on my OSF account, which also provides access to my code.

Now type the following in Rstudio:


#load libraries:
#if you need to install first, type: install.packages("package_name",dependencies=TRUE)
library(tidyverse)
library(viridis)
library(ggthemes) #for theme_tufte()

#set your working directory to the folder your data is stored in
setwd("C:/Users/Stella/Documents/blog/US birth Map")
#if you don't know what directory is currently set to, type: getwd()

#now import your data
us_fertility<-read.csv("USBirthscsv.csv", header=TRUE) #change the file name if you did not use the data I provided (osf.io/h9ta2)

#limit to relevant data
dta<-us_fertility %>% select(Year, January:December)

#gather (i.e., reshape to long format) the data of interest, in preparation for graphing
bb2<-dta %>%
  gather(Month, births, January:December) %>%
  arrange(Year)

#ranking the months within each year by the frequency of births
bb2<-bb2 %>%
  group_by(Year) %>%
  mutate(rank=dense_rank(desc(births)))

#plot the data
plot<- ggplot(bb2, aes(x = fct_rev(Month),
                       y = Year,
                       fill = rank)) +
  scale_x_discrete(name="Months", labels=c("Jan", "Feb", "Mar",
                                           "Apr", "May", "Jun",
                                           "Jul", "Aug", "Sep",
                                           "Oct", "Nov", "Dec")) +
  scale_fill_viridis(name = "Births", option="magma") + #optional command to change the colors of the heat map
  geom_tile(colour = "White", size = 0.4) +
  labs(title = "Heat Map of US Births",
       subtitle = "Frequency of Births from 1969-2014",
       x = "Month",
       y = "Year",
       caption = "Source: UN Data") +
  theme_tufte()

plot + aes(x=fct_inorder(Month))

#if you want to save the graph
dev.copy(png, "births.png")
dev.off()

And that’s it! Simple, right?!

Gapminder gif with Rstudio

I decided to remake the Gapminder gif that I made the other day in Python, but in Rstudio this time. I’ll probably continue doing this for a while, as I try to figure out the advantages of using one program over the other. Here’s a walk-through of what I did to recreate it:

#install these packages if you haven't already
install.packages(c("devtools", "dplyr", "ggplot2", "readr"))
devtools::install_github("dgrtwo/gganimate",force=TRUE)

library(devtools)
library(dplyr)
library(readr)
library(viridis)
library(ggplot2)
library(gganimate)
library(animation)

#Set up ImageMagick --for gifs
install.packages("installr",dependencies = TRUE)
library(installr)

#Configure your environment--change the location
Sys.setenv(PATH = paste("C:/Program Files/ImageMagick-7.0.7-Q16", Sys.getenv("PATH"), sep = ";")) #change the path to where you installed ImageMagick
#Again, change the location:
magickPath <- shortPathName("C:/Program Files/ImageMagick-7.0.7-Q16/magick.exe")
#ani.options(convert=magickPath)

If you need to download ImageMagick, go to this link

Load data and create plot

Once you’ve installed the appropriate packages and configured ImageMagick to work with Rstudio, you can load your data and plot as usual.

gapminder_data<-read.csv("https://python-graph-gallery.com/wp-content/uploads/gapminderData.csv", header=TRUE)

glimpse(gapminder_data) #print to make sure it loaded correctly
## Observations: 1,704
## Variables: 6
## $ country    Afghanistan, Afghanistan, Afghanistan, Afghanistan, ...
## $ year       1952, 1957, 1962, 1967, 1972, 1977, 1982, 1987, 1992...
## $ pop        8425333, 9240934, 10267083, 11537966, 13079460, 1488...
## $ continent  Asia, Asia, Asia, Asia, Asia, Asia, Asia, Asia, Asia...
## $ lifeExp    28.801, 30.332, 31.997, 34.020, 36.088, 38.438, 39.8...
## $ gdpPercap  779.4453, 820.8530, 853.1007, 836.1971, 739.9811, 78...
# Helper function for string wrapping. 
# Default 40 character target width.
swr = function(string, nwrap=40) {
  paste(strwrap(string, width=nwrap), collapse="\n")
}
swr = Vectorize(swr)

gapminder_plot<-ggplot(gapminder_data) +
  aes(x = gdpPercap,
      y = lifeExp,
      colour = continent,
      size = pop, 
      frame=year) +
      scale_x_log10() +
  scale_size_continuous(guide =FALSE) + #suppresses the second legend (size=pop)
  geom_point() +
  scale_color_viridis(discrete=TRUE)+ #optional way to change colors of the plot
  theme_bw() +
  labs(title=swr("Relationship Between Life Expectancy and GDP per Capita"),
       x= "GDP Per Capita",
       y= "Life expectancy",
      caption="Data: Gapminder")
  theme(legend.position = "none",
        axis.title.x=element_text(size=.2),
        axis.title.y=element_text(size=.2),
        plot.caption = element_text(size=.1))</

#getOption("device") #try running this if your plot doesn't immediately show gapminder_plot

#if you want to save the plot:
ggsave("title.png", 
       plot = last_plot(), # or give ggplot object name as in myPlot,
       width = 5, height = 5, 
       units = "in", # other options c("in", "cm", "mm"), 
       dpi = 300)

Notice that I created the swr function to wrap the title text. If I don’t include that function, the title runs off the plot, like this:

gapminderplot
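
If you want to see what the wrapper actually does, you can call it on the title string by itself; the newline it inserts is where ggplot will break the line:

#wrap the long title at the default 40 characters
swr("Relationship Between Life Expectancy and GDP per Capita")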

Animate the plot

Now you can animate the plot using gganimate. Also, if you want to change any of the axis-titles or any other feature of the plot, I like to reference STHDA.

#remember to assign a working directory first:
#setwd() <--use this to change the working directory, if needed
gganimate(gapminder_plot,interval=.5,"gapminderplot.gif")

All in all, I’d say that creating the gif was equally easy in Python and R. Although I had more trouble initially configuring Python with ImageMagick, I might have found it easier in R simply because I had already used Python to figure this out the first time. On the other hand, I like the way the Python gif looks much more than the gif that Rstudio rendered.

animated_gapminder

Looks like I’ll have to continue experimenting.

Responding to the article, “Why Are Data Science Leaders Running for the Exit?”

I have some objections to an article that was written by guest contributor Edward Chenard and posted on Data Science Central. The purpose of his opinion piece is to present an argument for why Data Science Leaders are leaving the field. He makes three points: (1) Academia can’t do for-profit, (2) Wrong Expectations, and (3) Bad Methods. I don’t have much of an opinion on the other two claims; I mainly take issue with his second point. I would also like to see more evidence for the premise that inspired his article. Full disclaimer: I’m a graduate student.

With respect to why he is writing this article: Drawing from anecdotes, Mr. Chenard states that he knows “a lot of people currently running data science teams at large organizations and the vast majority of them…want to leave their jobs.” While I don’t doubt that he has had these conversations, surveys don’t seem to support his observation. For example—assuming that Data Science Leaders consider themselves Data Scientists, although Mr. Chenard is not clear about the particular job title he is referring to—Glassdoor ranks Data Scientist as the number 1 job in America, as measured by median salary, job satisfaction, and number of job openings. In fact, Data Scientist has been ranked the number 1 job for the past 3 consecutive years.

I can’t find evidence that supports his point, which is that data scientists are dissatisfied with their jobs. There is certainly some evidence that indicates that there is volatility in this field, but we cannot assume that this is because these employees are dissatisfied. For example, it could be that these individuals decided to return to school (or plan to return to school in the near future) or that they started working in a tech firm that failed or cut employees, which is quite common for tech startups (e.g., Forbes 2015). That said, in support of his claim, I did find an article in the Financial Times that states, “According to Kaggle’s survey, most people working in the field spend 1-2 hours a week looking for a new job.” “Most” is not an objective number, however, and the article did not link to the source, so there is no way to investigate what the author meant by “most.” The statistics that the article does go on to report actually undermine Mr. Chenard’s argument: based on Stack Overflow data (n=64,000 developers), machine learning specialists top the list of employees who reported looking for a new job (at 14.3%), and data scientists were second, at 13.2%. This is far from “a lot.” The evidence that I found therefore suggests that most data scientists are, in fact, satisfied with their jobs, which makes the premise of his article a bit dubious.

But that isn’t the main issue that I have with Mr. Chenard’s post. I would like to dispute his second claim, which is that data scientists with PhDs (who possess an academic mindset) “can be more of a liability than an asset,” particularly when “your drive is profits and customer satisfaction.” I’m not sure who he is referring to when he says “your,” which I assume to mean the employer. The statement suggests that academics are somehow less adept at considering both profit margins and customer satisfaction. Furthermore, with the disclaimer that he doesn’t have a PhD, he argues that the type of work PhD students excel in isn’t useful in the private market. This is a bold and ridiculous claim. I believe that Mr. Chenard is misattributing the “mass exodus” of data scientists to employers’ elevation or privileging of Data Science Leaders who have PhDs. Although I do not know why employers may be doing this, I’m sure they have their (well-researched) reasons.

To respond to this argument: I’m not sure why Mr. Chenard solely targets PhDs over someone with a Master’s degree or a Bachelor’s. What is the threshold? When does more education become a liability? He does not make this clear. Regardless of where he draws that line, I disagree. Graduate students are dedicated and disciplined workers. Indeed, the well-documented positive relationships between higher levels of education and outcomes such as longevity, income, and cognitive ability suggest that, broadly speaking, more education results in positive returns. (The argument has more to do with selection, but that’s neither here nor there.)

With respect to data science, I concede that not all PhDs would be spectacular in Data Science, just as not every person looking for a job would excel as a Data Scientist. But just considering the types of graduate students who may be interested in data science (e.g., those who study statistics, computer science, physics, economics, demography, or sociology), I think that it’s ridiculous to make such strong claims about their characteristics and their ability to excel in a particular field. Consider the two main arguments that he made: an inability or limited ability to consider profits and customer satisfaction. All graduate students are skilled researchers, meaning that they are trained to critically consider all aspects of projects, including budgets. As someone who has written grant proposals (and successfully won them), I can say that writing a grant proposal—which is a lot like a business proposal—requires carefully budgeting out the research project, which includes things like calculating the cost of data collection, compensating researchers and/or research participants, and paying for data storage. Grant writing is a vital part of conducting research; in many ways, a lot of our careers depend on it, which is why I think it’s outrageous that Mr. Chenard is suggesting that PhDs are somehow bad at considering the monetary aspects of managing projects and teams. Also, side note: graduate students are generally poor! Learning to carefully manage money is a critical part of earning a PhD. Therefore, I’m pretty confident in saying that most graduate students are capable of managing profits at least as effectively as—if not more effectively than—a data scientist without a PhD, in an apples-to-apples comparison.

As for dealing with customers, graduate students have to deal with some of the most critical and cantankerous “customers” one could ever deal with, such as highly opinionated researchers, policymakers with an agenda, national and international grant institutions, university deans, and of course, college students and their guardians. These groups absolutely “consume” our products, and we are obliged to consider their satisfaction of our projects. In that respect, graduate students and those who have earned their PhDs certainly know how to respond to customers. I think what Mr. Chenard may be conflating is any observed differences that result from the knowledge accrued while working in a particular field. Sure, someone who has been working in industry rather than spending the last few years earning their PhD will of course be more familiar with handling “customers” that consume their products. However, they likely learned this skill over time on the job. Someone with a PhD will likely also gain or sharpen these skills, over time on the job. Training in academia does not hinder this ability, and one cannot assume so.