
Scraping the PyOhio Schedule

The twelfth annual PyOhio conference was held on July 27-28 and…it. was. awesome!

Now, when it comes to planning for a conference, I must admit that I’m a bit “old school.” A day or two before the gathering, I like to print out the schedule and carefully research each session so that I can choose the ones that best meet my work and personal objectives. Often, a conference will let you download a printable schedule; however, I didn’t find any such file on the PyOhio website. No matter, I can write some Python to scrape the schedule from the website and create my own CSV for printing. Here’s what I did:

Step 1: Import the requisite packages

import requests
from bs4 import BeautifulSoup
import csv

Step 2: Grab the schedule page

result = requests.get('https://www.pyohio.org/2019/events/schedule/')
soup = BeautifulSoup(result.content, 'lxml')
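
If you adapt this code, one small guard is worth adding (a minimal tweak, not in my original script): requests won't raise an error on a 404 or 500 by itself, so failing fast before parsing saves some head-scratching:

result.raise_for_status()  # raises requests.HTTPError if the page came back as a 4xx/5xx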

Step 3: Parse out the sessions

Unfortunately, I can only attend Saturday, so my code just focuses on pulling the Saturday sessions:

day_2_list = [['start_end', 'slot1', 'slot2', 'slot3', 'slot4']]
day_2 = soup.select('div.day')[1]  # get just Saturday
talks_section = day_2.find('h3', string='Keynotes, Talks, & Tutorials').parent

# iterate across each time block
for time_block in talks_section.select('div.time-block'):
    start_end = time_block.find('div', {'class': 'time-wrapper'}).get_text().replace('to', ' - ')
    time_rec = [start_end, '', '', '', '']
    # now, iterate across each slot within a time block.  a time block can have 1-4 time slots
    for slot in time_block.select('div.time-block-slots'):
        for i, card in enumerate(slot.select('div.schedule-item')):
            class_title = card.select_one('h3').get_text()
            presenter = (card.select('p')[0]).get_text()
            location = (card.select('p')[1]).get_text()
            time_rec[i+1] = '{0}\n{1}\n{2}'.format(class_title, presenter, location)
    day_2_list.append(time_rec)  # after grabbing each slot, write the block to my "day 2" list
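
Before writing anything to disk, a quick spot check (just an illustration) confirms the parse looks sane:

# print the header row plus the first scraped time block
for row in day_2_list[:2]:
    print(row)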

Step 4: Write the scraped results to a CSV

csv_file = 'pyohio_20190727_schedule.csv'

with open(csv_file, 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerows(day_2_list)
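
And as one last sanity check (again, just a sketch), you can read the file back and confirm every row has the five expected columns:

# read the CSV back and verify each row has start_end plus four slots
with open(csv_file, newline='') as f:
    for row in csv.reader(f):
        assert len(row) == 5, row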

Sweet! Now I can choose just the right sessions to attend. Get my complete code here.

Excuses

I love a good comedic excuse. When appropriate, I like to use one or two at work just to keep the work environment light and fun. A classic one is Priscilla Presley's line from The Naked Gun.

Recently, I was re-watching The Blues Brothers and was reminded of the fantastic list of excuses Jake Blues rattled off to the girl he left at the altar. I just had to reproduce that here:

  • I ran out of gas
  • I had a flat tire
  • I didn’t have enough money for cab fare
  • My tux didn’t come back from the cleaners
  • An old friend came in from out of town
  • Someone stole my car
  • There was an earthquake
  • There was a terrible flood
  • Locusts!

Slope Charts in Python

I continue to explore the different charts from Machine Learning Plus's Top 50 matplotlib visualizations post and look for good opportunities to recreate them with data sets I care about. Recently, I thought it might be interesting to create a slope chart where I simply match objects on one side of the chart to objects on the other, without using the Y axis to convey any meaning. For my data set, I grabbed CollegeChoice.net's 25 Best Colleges in Ohio. I didn't dig into how they decide one college is better than another, although they do provide a description of their methodology. What I found interesting was that they list the 4-5 most popular majors at each of the colleges. So, I thought I could create a slope chart with the top 10 Ohio colleges on one side (all 25 would make the chart too cluttered), their most popular majors on the other, and lines drawn in between. How common are these majors among the top 10? My chart should be able to tell that story.

Step 1: Bring in all the packages I’ll need

Since I’m pulling in a parsing a web page for its data, requests and BeautifulSoup are in. numpy and math will help with spacing out the points in my chart and, of course, matplotlib will render the chart:

import requests
from bs4 import BeautifulSoup
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.lines as mlines
from matplotlib import cm
import math

Step 2: Grab the page and parse out the data

# grab the page
result = requests.get("https://www.collegechoice.net/rankings/best-colleges-in-ohio/")
soup = BeautifulSoup(result.content, 'lxml')

# parse out just the top 10 schools and their popular majors
ranking_divs = soup.find_all('div', 'ranking-box')
top_10_schools = {}

for ranking_div in ranking_divs[:10]:
    school = ranking_div.select_one('div.rb-list-title h3').text
    majors = [maj.text for maj in ranking_div.select('div.rb-ranking-body ul li')]
    top_10_schools[school] = majors
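
A quick peek (illustrative only) verifies that all ten schools came through with their majors attached:

print(len(top_10_schools))  # expect 10
first_school = next(iter(top_10_schools))
print(first_school, top_10_schools[first_school])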

Step 3: Do some data cleanup

As I scanned the results, I noticed that CollegeChoice used two slightly different names for the same major: "Visual and Performing Arts" and "Visual & Performing Arts". I had to write some code to clean that up:

for school, majors in top_10_schools.items():
    if 'Visual and Performing Arts' in majors:
        top_10_schools[school] = ['Visual & Performing Arts' if maj=='Visual and Performing Arts' else maj for maj in majors]
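
If CollegeChoice ever introduces more naming inconsistencies, a small alias map would scale better than one-off checks like the above. Here's a sketch; the aliases dictionary is hypothetical beyond the single mismatch I actually found:

# map known aliases to a canonical major name; unknown names pass through unchanged
aliases = {'Visual and Performing Arts': 'Visual & Performing Arts'}
for school, majors in top_10_schools.items():
    top_10_schools[school] = [aliases.get(maj, maj) for maj in majors]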

Step 4: Finally, build the chart

Now, I can build the chart, and that Machine Learning Plus article really helped out. The one difference was that, in their slope chart, they used GDP dollars on their Y axis. My Y axis wouldn’t have any sort of meaning: just a list of colleges. So, I used the numpy and math packages to help me evenly space out my points along the axis. Here’s what I came up with:

# if, say, you have a count of 11 and you want to round up to the nearest 5, this will return 15
def roundupto(your_count, round_up_to_nearest):
    return int(math.ceil(your_count / round_up_to_nearest)) * round_up_to_nearest

# draws a line between points
def newline(p1, p2, color='black'):
    ax = plt.gca()
    l = mlines.Line2D([p1[0], p2[0]], [p1[1], p2[1]], color=color, marker='o', markersize=6)
    ax.add_line(l)
    return l
    

fig, ax = plt.subplots(1, 1, figsize=(14, 14))

# get school and major lists and calculate the scale of the chart
school_list = list(top_10_schools.keys())
school_list.reverse()  # matplotlib will then put the #1 school at the top of the chart
major_list = list(set(sum(top_10_schools.values(), [])))
major_list.sort(reverse=True)  # reverse-sorted so matplotlib lists majors alphabetically down the chart
scale = roundupto(max(len(school_list), len(major_list)), 5)

# write the vertical lines
ax.vlines(x=1, ymin=0, ymax=scale, color='black', alpha=0.7, linewidth=1, linestyles='dotted')
ax.vlines(x=3, ymin=0, ymax=scale, color='black', alpha=0.7, linewidth=1, linestyles='dotted')

# plot the points; unlike the slope chart in the MachineLearningPlus.com article, my Y axis has no meaning, so
# I use numpy's linspace function to help me evenly space each point
school_y_vals = np.linspace(1, scale-1, num=len(school_list))
major_y_vals = np.linspace(1, scale-1, num=len(major_list))
ax.scatter(y=school_y_vals, x=np.repeat(1, len(school_list)), s=10, color='black', alpha=0.7)
ax.scatter(y=major_y_vals, x=np.repeat(3, len(major_list)), s=10, color='black', alpha=0.7)

# write the lines and annotation
for school, school_y_val in zip(school_list, school_y_vals):
    ax.text(1-0.05, school_y_val, school, horizontalalignment='right', verticalalignment='center', fontdict={'size':10})
    for major in top_10_schools[school]:
        major_y_val = major_y_vals[major_list.index(major)]
        newline([1, school_y_val], [3, major_y_val], color=cm.get_cmap('tab20')(school_list.index(school)))
        
for major, major_y_val in zip(major_list, major_y_vals):
    ax.text(3+0.05, major_y_val, major, horizontalalignment='left', verticalalignment='center', fontdict={'size':10})
    
# vertical line annotations
ax.text(1-0.05, scale-0.25, 'College', horizontalalignment='right', verticalalignment='center', 
        fontdict={'size':14, 'weight':700})
ax.text(3+0.05, scale-0.25, 'Major', horizontalalignment='left', verticalalignment='center', 
        fontdict={'size':14, 'weight':700})

# misc cleanup
ax.set(xlim=(0, 4), ylim=(0, scale))
ax.axis('off')

plt.title("Most Popular Majors at Ohio's Top 10 Colleges")
plt.show()
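
If you also want the chart as an image file, a savefig call placed just before plt.show() does the trick (the filename here is my own invention; saving after show() can produce a blank image with some backends):

plt.savefig('ohio_top10_slope_chart.png', dpi=150, bbox_inches='tight')  # hypothetical filename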

And the result is at the top of this post. Get my full code here.

Conclusions?

Well, I did note that over half of the popular majors are popular at only one of the Top 10 schools. I expected to see many of the same majors appear repeatedly across multiple schools. I guess maybe that’s a good thing: if, say, you want to study Finance, it would seem The Ohio State University and only The Ohio State University is the best place to study the discipline.

More importantly, the slope chart is now another cool visual I (and you) can add to the toolbox. Happy sloping!
