
How do you hide secrets in Jupyter Notebooks?

Often in my notebooks, I will connect to a relational database or other data store, query the system for data, and then do all sorts of amazing operations with said data. Many times, these data stores are restricted to select users, and I must authenticate myself to the system–usually with an id and password. One might be inclined to code such connection strings inline in his Jupyter Notebook. However, I usually check my notebooks into source control and/or hand them in to management as reports or documentation. Thus, any number of people might see my notebooks, potentially compromising my personal id and password were I to code the credentials inline.

So, how can I hide my secrets–my connection strings and other sensitive information–so I can still safely share the good work I do in my notebooks? The way I do it is by moving my connection strings to configuration files. Allow me to demonstrate:

Step 1: Import my packages

from sqlalchemy import create_engine
import pandas as pd
from configparser import ConfigParser

I import the usual suspects–SQLAlchemy for database management and pandas for my dataframe work–but I’m also loading in configparser. It’s this last package that will help me pull my secret stuff out into a separate file that I can protect.

Step 2: Create my configuration file

Now, I need to create that separate configuration file. In the same directory as my notebook, I’ll create a text file. I usually name my file nb.cfg–as in, notebook config. For my example, which stores the connection string to my SQLite database, my configuration file looks like so:

[my_db]
conn_string: sqlite:///mwc.db

Although SQLite databases don’t have authentication requirements, you can imagine, say, a connection string to a PostgreSQL database that would contain an id and password.
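
For illustration, here’s what a hypothetical PostgreSQL entry might look like–note that the user, password, host, and database name below are all made up:

[my_pg_db]
conn_string: postgresql://some_user:some_password@dbhost.example.com:5432/some_db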

Step 3: Load the configuration file

Back in your notebook, load your configuration file:

parser = ConfigParser()
# read() returns the list of files it successfully parsed; assigning that
# result to _ keeps the notebook from echoing the list as cell output
_ = parser.read('nb.cfg')

Step 4: Access the secrets in your configuration file

Now you’re ready to access those secrets! In this example, I’ll pass my secret connection string to my database engine object:

engine = create_engine(parser.get('my_db', 'conn_string'))

Step 5: Profit!

That’s basically it. In my example, I can now use my database engine object to query a table in my database and load the results into a dataframe:

qry = """
SELECT *
FROM people
"""

df_mwc_people = pd.read_sql(qry, engine)

Check out the complete code example here.

Postscript

You might ask yourself, “self, do I need to do anything else to protect my config file from getting into the hands of my enemies?” Well, since I often use Git for source control, I do want to make sure I don’t accidentally check my configuration file into my source code repository. To avoid that problem, I create a .gitignore file and add the name of my configuration file to it. From then on, Git will simply ignore my configuration file and never offer to commit it.
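
For example, since my configuration file is named nb.cfg, my .gitignore needs only a single line:

nb.cfg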

Scraping the PyOhio Schedule

The twelfth annual PyOhio conference was held on July 27-28 and…it. was. awesome!

Now, when it comes to planning for a conference, I must admit that I’m a bit “old school.” A day or two before the gathering, I like to print out the schedule and carefully research each session so that I can choose the ones that best meet my work and personal objectives. Often, a conference will let you download a printable schedule; however, I didn’t find any such file on the PyOhio website. No matter, I can write some Python to scrape the schedule from the website and create my own CSV for printing. Here’s what I did:

Step 1: Import the requisite packages

import requests
from bs4 import BeautifulSoup
import csv

Step 2: Grab the schedule page

result = requests.get('https://www.pyohio.org/2019/events/schedule/')
soup = BeautifulSoup(result.content, 'lxml')

Step 3: Parse out the sessions

Unfortunately, I can only attend Saturday, so my code just focuses on pulling the Saturday sessions:

day_2_list = [['start_end', 'slot1', 'slot2', 'slot3', 'slot4']]
day_2 = soup.select('div.day')[1]  # get just Saturday
talks_section = day_2.find('h3', string='Keynotes, Talks, & Tutorials').parent

# iterate across each time block
for time_block in talks_section.select('div.time-block'):
    start_end = time_block.find('div', {'class': 'time-wrapper'}).get_text().replace('to', ' - ')
    time_rec = [start_end, '', '', '', '']
    # now, iterate across each slot within a time block.  a time block can have 1-4 time slots
    for slot in time_block.select('div.time-block-slots'):
        for i, card in enumerate(slot.select('div.schedule-item')):
            class_title = card.select_one('h3').get_text()
            presenter = (card.select('p')[0]).get_text()
            location = (card.select('p')[1]).get_text()
            time_rec[i+1] = '{0}\n{1}\n{2}'.format(class_title, presenter, location)
    day_2_list.append(time_rec)  # after grabbing each slot, write the block to my "day 2" list
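
Each row appended to day_2_list is a five-element list: the time range followed by up to four session slots, with empty strings standing in for unused slots. For illustration, a row might look like this (the session details here are made up):

['10:15 - 11:00',
 'Some Talk Title\nJane Presenter\nRoom 120',
 'Another Talk Title\nJoe Presenter\nRoom 125',
 '', '']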

Step 4: Write the scraped results to a CSV

csv_file = 'pyohio_20190727_schedule.csv'

with open(csv_file, 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerows(day_2_list)

Sweet! Now I can choose just the right sessions to attend. Get my complete code here.

Slope Charts in Python

I continue to explore the different charts from Machine Learning Plus’s Top 50 matplotlib visualizations post and look for good opportunities to recreate them with data sets I care about. Recently, I thought it might be interesting to create a slope chart where I simply match objects on one side of the chart to objects on the other side, without using the Y axis to convey any meaning. For my data set, I grabbed CollegeChoice.net’s 25 Best Colleges in Ohio. I didn’t dig into how they decide one college is better than another, although they do provide a description of their methodology. What I found interesting was that they list the four or five most popular majors at each college. So, I thought I could create a slope chart with the top 10 Ohio colleges on one side (all 25 would make the chart too cluttered) and their most popular majors on the other, with lines drawn in between. How common are these majors among the top 10? My chart should be able to tell that story.

Step 1: Bring in all the packages I’ll need

Since I’m pulling in and parsing a web page for its data, requests and BeautifulSoup are in. numpy and math will help with spacing out the points in my chart and, of course, matplotlib will render the chart:

import requests
from bs4 import BeautifulSoup
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.lines as mlines
from matplotlib import cm
import math

Step 2: Grab the page and parse out the data

# grab the page
result = requests.get("https://www.collegechoice.net/rankings/best-colleges-in-ohio/")
soup = BeautifulSoup(result.content, 'lxml')

# parse out just the top 10 schools and their popular majors
ranking_divs = soup.find_all('div', 'ranking-box')
top_10_schools = {}

for ranking_div in ranking_divs[:10]:
    school = ranking_div.select_one('div.rb-list-title h3').text
    majors = [maj.text for maj in ranking_div.select('div.rb-ranking-body ul li')]
    top_10_schools[school] = majors
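
At this point, top_10_schools maps each school name to its list of popular majors. A quick sanity check you might run in the notebook–just peeking at a couple of entries–would show that structure:

# print the first two schools and their popular majors
for school, majors in list(top_10_schools.items())[:2]:
    print('{0}: {1}'.format(school, majors))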

Step 3: Do some data cleanup

As I scanned the results, I noticed that CollegeChoice used two slightly different names for the same major: Visual and Performing Arts and Visual & Performing Arts. I had to write some code to clean that up:

for school, majors in top_10_schools.items():
    if 'Visual and Performing Arts' in majors:
        top_10_schools[school] = ['Visual & Performing Arts' if maj=='Visual and Performing Arts' else maj for maj in majors]

Step 4: Finally, build the chart

Now, I can build the chart, and that Machine Learning Plus article really helped out. The one difference was that, in their slope chart, they used GDP dollars on their Y axis. My Y axis wouldn’t have any sort of meaning: just a list of colleges. So, I used the numpy and math packages to help me evenly space out my points along the axis. Here’s what I came up with:

# if, say, you have a count of 11 and you want to round up to the nearest 5, this will return 15
def roundupto(your_count, round_up_to_nearest):
    return int(math.ceil(your_count / round_up_to_nearest)) * round_up_to_nearest

# draws a line between points
def newline(p1, p2, color='black'):
    ax = plt.gca()
    l = mlines.Line2D([p1[0], p2[0]], [p1[1], p2[1]], color=color, marker='o', markersize=6)
    ax.add_line(l)
    return l
    

fig, ax = plt.subplots(1, 1, figsize=(14, 14))

# get school and major lists and calculate the scale of the chart
school_list = list(top_10_schools.keys())
school_list.reverse()  # matplotlib will then put the #1 school at the top of the chart
major_list = list(set(sum(top_10_schools.values(), [])))
major_list.sort(); major_list.reverse()  # to help matplotlib list majors alphabetically down the chart
scale = roundupto(max(len(school_list), len(major_list)), 5)

# write the vertical lines
ax.vlines(x=1, ymin=0, ymax=scale, color='black', alpha=0.7, linewidth=1, linestyles='dotted')
ax.vlines(x=3, ymin=0, ymax=scale, color='black', alpha=0.7, linewidth=1, linestyles='dotted')

# plot the points; unlike the slope chart in the MachineLearningPlus.com article, my Y axis has no meaning, so
# I use numpy's linspace function to help me evenly space each point
school_y_vals = np.linspace(1, scale-1, num=len(school_list))
major_y_vals = np.linspace(1, scale-1, num=len(major_list))
ax.scatter(y=school_y_vals, x=np.repeat(1, len(school_list)), s=10, color='black', alpha=0.7)
ax.scatter(y=major_y_vals, x=np.repeat(3, len(major_list)), s=10, color='black', alpha=0.7)

# write the lines and annotation
for school, school_y_val in zip(school_list, school_y_vals):
    ax.text(1-0.05, school_y_val, school, horizontalalignment='right', verticalalignment='center', fontdict={'size':10})
    for major in top_10_schools[school]:
        major_y_val = major_y_vals[major_list.index(major)]
        newline([1, school_y_val], [3, major_y_val], color=cm.get_cmap('tab20')(school_list.index(school)))
        
for major, major_y_val in zip(major_list, major_y_vals):
    ax.text(3+0.05, major_y_val, major, horizontalalignment='left', verticalalignment='center', fontdict={'size':10})
    
# vertical line annotations
ax.text(1-0.05, scale-0.25, 'College', horizontalalignment='right', verticalalignment='center', 
        fontdict={'size':14, 'weight':700})
ax.text(3+0.05, scale-0.25, 'Major', horizontalalignment='left', verticalalignment='center', 
        fontdict={'size':14, 'weight':700})

# misc cleanup
ax.set(xlim=(0, 4), ylim=(0, scale))
ax.axis('off')

plt.title("Most Popular Majors at Ohio's Top 10 Colleges")
plt.show()

And the result is at the top of this post. Get my full code here.

Conclusions?

Well, I did note that over half of the popular majors are popular at only one of the Top 10 schools. I expected to see many of the same majors appear repeatedly across multiple schools. I guess maybe that’s a good thing: if, say, you want to study Finance, it would seem The Ohio State University and only The Ohio State University is the best place to study the discipline.

More importantly, the slope chart is now another cool visual I (and you) can add to the toolbox. Happy sloping!
