Musings of a dad with too much time on his hands and not enough to do. Wait. Reverse that.


Finding sub-ranges in my dataset

File this under: there-has-to-be-a-simpler-way-to-do-this-in-pandas-but-I-haven’t-found-what-that-is

Recently, I’ve been playing with some financial data to get a better understanding of the yield curve. Related to yield and inverted yield curves are the periods of recession in the US economy. In my work, I wanted to first build a chart that indicated the periods of recession and ultimately overlay that with yield curve data. Little did I realize the challenge of just coding that first part.

I downloaded a dataset of recession data, which contains a record for every calendar quarter from the 1960s to present day and a 0 or 1 to indicate whether the economy was in recession for that quarter ("1" indicating that it was). What I needed to do was pull all the records with a "1" indicator and find the start and end times for each of those ranges so that I could paint them onto a chart.

I’ve heard it said before that any time you have to write a loop over your pandas dataframe, you’re probably doing it wrong. I’m certainly doing a loop here and I have a nagging suspicion there’s probably a more elegant way to achieve the solution. Nevertheless, here’s what I came up with to solve my recession chart problem:

Step 1: Bring in the necessary packages

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from datetime import date  # used later to set the chart's x-axis limits

%matplotlib inline  # for easy chart display in jupyter notebook

Step 2: Load in my downloaded recession dataset and take a peek

# recession dates: https://fred.stlouisfed.org/series/JHDUSRGDPBR
df_recessions = pd.read_csv('./data/JHDUSRGDPBR_20220327.csv')

df_recessions['DATE'] = pd.to_datetime(df_recessions.DATE)
df_recessions.head()
The first records of the Recession dataset
df_recessions[df_recessions.JHDUSRGDPBR==1.0].head()
The first records in the dataset where the economy was in recession

Step 3: Mark the start of every period of recession in the dataset

So, now I'm asking myself, "how do I extract the start and stop dates for every period of recession identified in the dataset?" Let's start by just finding the start dates of recessions. That shouldn't be too difficult: if I filter to just the recession quarters and calculate the date difference from one row to the next, then any difference greater than three months (I estimated 93 days as three months) means there was a gap in quarters before the current record, so the current record must be the start of a new recession. Here's what I came up with (one further note: my yield curve data only starts in 1990, so I filtered the recession data for 1990 to present):

df_spans = df_recessions[(df_recessions.DATE.dt.year>=1990) & (df_recessions.JHDUSRGDPBR==1.0)].copy()
df_spans['days_elapsed'] = df_spans.DATE - df_spans.shift(1).DATE
df_spans['ind'] = df_spans.days_elapsed.dt.days.apply(lambda d: 's' if d > 93 else '')
df_spans.iloc[0, 3] = 's'  # mark first row as a recession start
df_spans
“s” indicates the start of a new recession

Step 4: Find the end date of each recession

Here’s where my approach starts to go off the rails a little. The only way I could think to find the end dates of each recession is to:

  1. Loop through a list of the start dates
  2. In each loop, get the next start date and then grab the date of the record immediately before that one
  3. When I hit the last loop, just consider the last record to be the end date of the most recent recession
  4. With every stop date, add three months since the stop date is only the first day of the quarter and, presumably, the recession more or less lasts the entire quarter

Confusing? Here’s my code:

start_stop_dates = []
start_dates = df_spans.loc[df_spans.ind=='s', ].DATE.tolist()

for i, start_date in enumerate(start_dates):
    if i < len(start_dates)-1:
        stop_date = df_spans.loc[df_spans.DATE < start_dates[i+1]].iloc[-1].DATE
    else:
        stop_date = df_spans.iloc[-1].DATE
        
    # add 3 months to each stop date to stretch it to cover the full quarter
    # (newer versions of pandas reject np.timedelta64(3, 'M'), so use DateOffset)
    start_stop_dates.append((start_date, stop_date + pd.DateOffset(months=3)))
    
start_stop_dates
Recessions from 1990 to the beginning of 2022

Step 5: Build my chart

With that start/stop list, I can build my underlying recession chart:

fig, ax = plt.subplots(figsize=(12,6))

_ = ax.plot()
_ = ax.set_xlim([date(1990, 1, 1), date(2022, 4, 1)])
_ = ax.set_ylim([0, 10])

for st, sp in start_stop_dates:
    _ = ax.axvspan(st, sp, alpha=0.2, color='gray')
US Recessions: 1990 – 2021

Phew. All that work and I’m only at the starting point of my yield curve exploration, but that will have to wait for a future post. However, if you can think of a more elegant way to identify these date ranges without having to resort to looping, I’d love to hear it!
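One loop-free possibility I've been toying with (just a sketch, using a made-up toy indicator column rather than the real FRED series): treat every gap of more than a quarter between recession records as the start of a new span, label the spans with a cumulative sum of those gap flags, and let groupby pull out each span's start and stop dates.

```python
import pandas as pd

# toy quarterly data standing in for the FRED series (1 = recession quarter)
df = pd.DataFrame({
    'DATE': pd.date_range('1990-01-01', periods=12, freq='QS'),
    'FLAG': [0, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1],
})

rec = df[df.FLAG == 1].copy()
# a gap of more than ~93 days since the prior recession quarter starts a new span;
# cumsum over that boolean gives each span its own id
rec['span_id'] = (rec.DATE.diff().dt.days > 93).cumsum()
spans = rec.groupby('span_id').DATE.agg(['min', 'max'])
# stretch each stop date to cover its full quarter
spans['max'] = spans['max'] + pd.DateOffset(months=3)
start_stop_dates = list(spans.itertuples(index=False, name=None))
print(start_stop_dates)
```

The trick is the same gap test as in Step 3, but the cumulative sum turns it into a grouping key, so no explicit loop over rows is needed.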

Parsing Word documents with Python

If you've ever had a need to programmatically examine the text in a Microsoft Word document, you know that getting the text out in the first place can be challenging. Sure, you can manually save your document to a plain text file that's much easier to process, but if you have multiple documents to examine, that can be painful.

Recently I had such a need and found this Towards Data Science article quite helpful. But let's take the challenge a little further: suppose you had a document with multiple sections and needed to pull the text from specific sections.

Page 1 has my table of contents
Page 2 contains a variety of sections

Let's suppose I need to pull just the text from the "sub-sections". In my example, I have three sub-sections: Sub-Section 1, Sub-Section 2, and Sub-Section 3. In my Word document, I've styled these headers as "Heading 2" text. Here's how I went about pulling out the text for each of these sections.

Step 1: Import your packages

For my needs, I only need to import zipfile and ElementTree, which is nice as I didn't need to install any third-party packages:

import zipfile
import xml.etree.ElementTree as ET

Step 2: Parse the document XML

doc = zipfile.ZipFile('./data/test.docx').read('word/document.xml')
root = ET.fromstring(doc)

Step 3: Explore the XML for the sections and text you want

You’ll spend most of your time here, trying to figure out what elements hold the contents in which you are interested. The XML of Microsoft documents follows the WordprocessingML standard, which can be quite complicated. I spent a lot of time manually reviewing my XML looking for the elements I needed. You can write out the XML like so:

ET.tostring(root)
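If scrolling through one giant string gets tedious, the standard library's minidom module can pretty-print the XML for easier review. Here's a small sketch using a tiny stand-in for word/document.xml (a real document's XML is far larger):

```python
import xml.etree.ElementTree as ET
import xml.dom.minidom as minidom

# a tiny stand-in for the contents of word/document.xml
doc = (b'<w:document xmlns:w="http://schemas.openxmlformats.org/'
       b'wordprocessingml/2006/main"><w:body><w:p><w:r>'
       b'<w:t>Hello, Word!</w:t></w:r></w:p></w:body></w:document>')
root = ET.fromstring(doc)

# re-serialize and indent the XML so nested elements are easier to scan
pretty = minidom.parseString(ET.tostring(root)).toprettyxml(indent='  ')
print(pretty)
```

Writing that `pretty` string out to a file and opening it in a text editor beats squinting at one long, unindented line.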

Step 4: Find all the paragraphs

To solve my problem, I first decided to pull together a collection of all the paragraphs in the document so that I could later iterate across them and make decisions. To make that work a little easier, I also declared a namespace object used by Microsoft’s WordprocessingML standard:

# Microsoft's XML makes heavy use of XML namespaces; thus, we'll need to reference that in our code
ns = {'w': 'http://schemas.openxmlformats.org/wordprocessingml/2006/main'}
body = root.find('w:body', ns)  # find the XML "body" tag
p_sections = body.findall('w:p', ns)  # under the body tag, find all the paragraph sections

It can be helpful to actually see the text in each of these sections. Through researching Microsoft’s XML standard, I know that document text is usually contained in “t” elements. So, if I write an XPath query to find all the “t” elements within a given section, I can join the text of all those elements together to get the full text of the paragraph. This code does that:

for p in p_sections:
    text_elems = p.findall('.//w:t', ns)
    print(''.join([t.text for t in text_elems]))
    print()

Step 5: Find all the “Heading 2” sections

Now, let’s iterate through each paragraph section and see if we can figure out which sections have been styled with “Heading 2”. If we can find those Heading 2 sections, we’ll then know that the subsequent text is the text we need.

Through more research into the XML standard, I found that if I search for pStyle elements that contain the value "Heading2", these will be the sections I'm after. To make my code a little cleaner, I wrote functions to both evaluate each section for the Heading 2 style and extract the full text of the section:

def is_heading2_section(p):
    """Returns True if the given paragraph section has been styled as a Heading2"""
    return p.find(".//w:pStyle[@w:val='Heading2']", ns) is not None


def get_section_text(p):
    """Returns the joined text of the text elements under the given paragraph tag"""
    # findall always returns a list (possibly empty), so no None check is needed;
    # guard against empty "t" elements, whose .text attribute is None
    return ''.join([t.text or '' for t in p.findall('.//w:t', ns)])


section_labels = [get_section_text(s) if is_heading2_section(s) else '' for s in p_sections]

Now, if I print out my section_labels list, I see this:

My section_labels list

Step 6: Finally, extract the Heading 2 headers and subsequent text

Now, I can use a simple list comprehension to glue together both the section headers and associated text of the three sub-sections I'm after:

section_text = [{'title': t, 'text': get_section_text(p_sections[i+1])} for i, t in enumerate(section_labels) if len(t) > 0]

And that list looks like this:

My section_text list

You can download my code here.

SQLite in PowerShell

A while back, I wrote about how I use SQLite to do, among other things, some database modeling. Typically, I will write all my table creation scripts in a single SQL file and then run that script at the command line like so:

C:\sqlite-tools-win32-x86-3300100\sqlite3.exe orders.db < orders_db.sql

This approach works great if you run your command in the Windows command shell or even in Linux, but will it work in PowerShell?

The answer is: no. PowerShell doesn't use the "<" sign the way the other two shells do (it's actually reserved for future use) and won't honor that command.

Both the Windows shell and Linux treat that “less than” sign as an “input redirection” instruction. You are redirecting your input, the SQL file, into the database you’re generating with the sqlite3 command utility.

So, in PowerShell, how might we accomplish this same sort of input redirection? One answer is to use PowerShell’s Get-Content cmdlet (using its alias “cat”) and pipe the content of the SQL script into the sqlite3 utility:

cat orders_db.sql | & C:\sqlite-tools-win32-x86-3300100\sqlite3.exe orders.db

Note that I’m using the ampersand operator as a “call operator” to launch the sqlite3 utility.
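And if you'd rather sidestep shell differences altogether, Python's built-in sqlite3 module can run the same kind of script file via executescript. This is only a sketch; the file names simply mirror the post's example, and the SQL here is a tiny stand-in for a real modeling script:

```python
import sqlite3
from pathlib import Path

# write a tiny stand-in for orders_db.sql
Path('orders_db.sql').write_text(
    'CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, item TEXT);'
)

# executescript runs every statement in the file against orders.db
con = sqlite3.connect('orders.db')
con.executescript(Path('orders_db.sql').read_text())
con.close()
```

Because this goes through Python rather than the shell, it behaves identically in cmd, Linux shells, and PowerShell.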

So, now you know how to use SQLite to load scripts in three different command shells!


© 2024 DadOverflow.com
