
Music to Drive By: Python Edition

If only Wayne and Garth had a thumb drive of music to listen to

In the past, I’ve written about using PowerShell to help build a thumb drive of music to listen to in the car. Recently, I took a crack at converting that work to Python with the help of the pymediainfo package. Here’s what I did:

Load the requisite packages

import os
import pandas as pd
import json
import shutil
from pymediainfo import MediaInfo

Build your music inventory

The to_data function of pymediainfo makes it very easy to gather all the important properties of your music files. Optionally, I wrote code to save that inventory out to a json file for later analysis, but you don’t have to do that to build your thumb drive. I hard-coded the path to my music folder (D:\music_backup), and my code assumes you only want to process mp3 files (the endswith check below).

music_col = []
for dirpath, dirs, files in os.walk("D:\\music_backup"):
    for filename in files:
        if filename.lower().endswith('.mp3'):
            fname = os.path.join(dirpath, filename)
            mi = MediaInfo.parse(fname)
            # keep just the "General" track, which holds the file-level properties
            music_col.append([t for t in mi.tracks if t.track_type == 'General'][0].to_data())

# save collection to file if needed
with open('music_col.json', 'w') as f:
    json.dump(music_col, f)

Build a pandas dataframe

Yes, pandas is my go-to “hammer” for solving most of my coding problems. I use the fillna function to replace any null values with empty strings, which makes filtering easier later on.

df_music = pd.DataFrame(music_col)
df_music = df_music.fillna('')

Filter on just the music I want to listen to in the car

As I’ve said before, I have a lot of music in my library, but my car stereo has its technical limits. So, I have to make certain decisions about what music to copy. Dataframe filtering makes that fast and easy. To make things interesting, I’m leveraging the pandas sample function to randomly sort my music. Here’s the code I came up with:

genres_to_include = ["Pop", "Rock", "Hard Rock & Metal"]

album_artists_to_exclude = ["ABBA", "Disney", "Vanilla Ice"]
albums_to_exclude = ["Frozen [Original Motion Picture Soundtrack]", "High School Musical 2 [Original Soundtrack]", "The Smurfs 2- Music from and Inspired By"]
# exclude any "songs" that might actually be talking of some sort
bad_titles = 'interview|speech'

# note: duration is in milliseconds, so the last test filters out anything 30 seconds or shorter
df_usb = df_music[(df_music.genre.isin(genres_to_include)) & ~(df_music.performer.isin(album_artists_to_exclude)) & 
                  ~(df_music.album.isin(albums_to_exclude)) & ~(df_music.title.str.contains(bad_titles, case=False)) & 
                  (df_music.duration>30000)].sample(frac=1)

Don’t forget about the size constraints of the thumb drive

I’m using a 16 GB thumb drive and I have well over 50 GB of music, so I need to make sure I only copy over enough files to fill up the drive and nothing more. The pandas cumsum function will help me easily figure that out:

df_usb['file_size_cumsum'] = df_usb.file_size.cumsum()
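
If cumsum is new to you, here’s a quick sketch with made-up numbers: each row gets the running total of every file size up to and including that row, so filtering on the running total keeps only as many files as will fit.

sizes = pd.Series([4_000_000, 6_000_000, 3_000_000])  # made-up file sizes in bytes
print(sizes.cumsum().tolist())  # [4000000, 10000000, 13000000]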

Finally, write to the thumb drive

Now, I’m ready to write my randomized music, filtered just how I want, to my thumb drive:

# set a max byte count of about 15.7 GB
max_bytes = 15700000000
usb_drive = 'E:\\'

for f in df_usb[df_usb.file_size_cumsum<max_bytes].complete_name.tolist():
    shutil.copy(f, usb_drive)
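
One caveat: shutil.copy drops every file into the root of the drive, so two tracks with the same filename (think of how many albums have an “01 - Intro.mp3”) would silently overwrite one another. A hypothetical guard is to prefix each destination filename with a counter:

# sketch: avoid filename collisions by prefixing a unique counter (hypothetical)
for i, f in enumerate(df_usb[df_usb.file_size_cumsum<max_bytes].complete_name.tolist()):
    shutil.copy(f, os.path.join(usb_drive, f'{i:05d}_{os.path.basename(f)}'))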

Lists in your Dataframes

Not long ago, I had a challenge involving a dataframe of users where each record included a list of the different security groups to which that user belonged. I wanted to do some simple analysis on how many groups were represented in the dataframe and how many users belonged to each group. A simple horizontal bar chart would suffice.

To provide a more real-life example of my problem and solution, imagine you wanted to do some analysis on the three main UEFA titles (Champions League, Europa League, and UEFA Super Cup) and wanted to know how many English teams have won each. You might first collect the title winners for each of the contests into a single dataframe. Following that approach, we now have a dataframe that looks like this:

Our dataframe with “Club” as a string and “title” as a list of strings
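
If you want to follow along without scraping the data yourself, here’s a minimal, hypothetical stand-in (the club/title combinations are illustrative, not a complete record):

import pandas as pd

# hypothetical stand-in: "Club" is a string, "title" is a list of strings
df_combo = pd.DataFrame({
    'Club': ['Liverpool', 'Chelsea', 'Manchester United'],
    'title': [['UEFA Champions League champ', 'UEFA Super Cup champ'],
              ['UEFA Champions League champ', 'UEFA Europa League champ'],
              ['UEFA Europa League champ']],
})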

Start with a unique set of titles

Since I want my chart to show each UEFA title, let’s get a list of those titles like so:

unique_title_list = list(set([item for sublist in df_combo.title.tolist() for item in sublist if len(item)>0]))

This code performs several operations in a single line (a small worked example follows the list):

  1. It converts the title column into a list. Since each value is already a list, the result is a list of lists.
  2. Next, I use some clever list comprehension to iterate over each sublist and then over each item in that sublist. The result is one large list of all titles won. Note that I also add a “length greater than 0” test just to make sure I avoid empty strings.
  3. Next, I use Python’s set function to produce a group of just the unique titles.
  4. Finally, I cast the set back to a list.
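
Here’s a tiny illustration of those steps with hypothetical data:

# hypothetical stand-in for df_combo.title.tolist(): a list of lists
titles = [['UEFA Champions League champ', 'UEFA Super Cup champ'],
          ['UEFA Europa League champ', ''],
          []]

# steps 1 and 2: flatten the sublists, skipping empty strings
flat = [item for sublist in titles for item in sublist if len(item) > 0]

# steps 3 and 4: dedupe with set, then cast back to a list
unique_title_list = list(set(flat))  # order is arbitrary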

Count the teams that have won each title

To get the count of teams winning each title, I iterate across my unique list, filter down the dataframe by each title, and count the results:

title_counts = {}

for u in unique_title_list:
	winner_count = df_combo[df_combo.title.apply(lambda t: u in t)].shape[0]
	title_counts[u] = winner_count
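
As an aside, newer versions of pandas (0.25 and up) offer the explode function, which I believe can produce the same counts more directly; here’s a minimal sketch, assuming the df_combo dataframe from above:

# explode() gives each list element its own row; value_counts() then tallies the titles
title_counts_alt = df_combo.explode('title')['title'].value_counts().to_dict()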

Nicely sort your results

A good looking bar chart usually sorts the bars low-to-high or high-to-low, so I take this additional step to sort my results:

title, c = [], []
for k,v in sorted(title_counts.items(), key=lambda x: x[1]):
	title.append(k)
	c.append(v)

Finally, chart the results

Last, I wrote this code to produce a horizontal bar chart showing a count of the English teams winning each UEFA title:

import matplotlib.pyplot as plt  # if you haven't already imported it

fig, ax = plt.subplots(figsize=(10,6))
_ = ax.barh(title, c)
_ = ax.set_xlabel('Number of English Teams')
_ = ax.set_ylabel('Title')
_ = ax.set_title('Number of English football teams winning UEFA titles')

So, this chart is a little lackluster, but what an accomplishment to have five different English teams winning these titles!

Pulling public data into dataframes

Some of what I write about here is inspired by challenges I encounter at work. Often the hardest part in describing those challenges is substituting public data and scenarios for my work-specific ones. Sites like kaggle.com and wikipedia.org really come to the rescue.

Recently, I had a circumstance where I needed to process a dataframe in which one of the columns contained a list of values in each field. I think I came up with a clever way of dealing with that obstacle and would like to discuss it in these pages, but first…what sort of public data can I collect to replicate my scenario? How about some English football?! Let’s take three football titles (the UEFA Champions League, the UEFA Europa League, and the UEFA Super Cup) and build a dataframe that shows which English football clubs have ever won any of these titles.

Step 1: Collect the raw data for each of these titles

I’ve written a few times before on how to use Python to collect public data from the Internet. This time around, I went with a slightly more manual approach. I went to each of the three Wikipedia pages, hit F12 to bring up my developer tools, and used those tools to copy just the HTML code for the data tables I was interested in–the ones listing each club team and the number of times they won the particular title. I simply copied the HTML to three different data files for subsequent processing.
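
If you’d rather not do the developer-tools dance, you could also fetch each page programmatically and save its HTML to a data file; here’s a minimal sketch assuming the requests package (the URL is illustrative). Just note that read_html on a full page will find every table on it, so you’d have to pick out the right one rather than simply taking the first.

import requests

# illustrative URL; substitute the actual Wikipedia page for each title
url = 'https://en.wikipedia.org/wiki/List_of_European_Cup_and_UEFA_Champions_League_finals'
resp = requests.get(url)
resp.raise_for_status()

with open('./data/uefa_champions.txt', 'w', encoding='utf-8') as f:
    f.write(resp.text)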

Step 2: Read in the data files

The pandas read_html function is such a time saver here: it returns a list of every table it finds in a document, which is why each call below is indexed with [0]. Here’s the code I came up with to read in my data files:

import pandas as pd
from bs4 import BeautifulSoup
import numpy as np
import matplotlib
import matplotlib.pyplot as plt

# for jupyter notebook
%matplotlib inline
matplotlib.style.use('seaborn')

df_champions = pd.read_html('./data/uefa_champions.txt')[0]
df_champions = df_champions[df_champions.Titles>0]
df_champions['title'] = 'UEFA Champions League champ'

df_europa = pd.read_html('./data/uefa_europa_league.txt')[0]
df_europa = df_europa[df_europa.Winners>0]
df_europa['title'] = 'UEFA Europa League champ'

df_super = pd.read_html('./data/uefa_super.txt')[0]
df_super = df_super[df_super.Winners>0]
df_super['title'] = 'UEFA Super Cup champ'

The df_champions dataframe looks like this:

The last five records in the df_champions dataframe. Which of these are English teams?

These dataframes are looking good, but how do I know which teams are the English teams? Wikipedia identifies each team’s nationality with a flag icon, but pandas isn’t pulling in that data. Time for a little HTML parsing with BeautifulSoup.

Step 3: Collect the names of the English teams

Since pandas didn’t pull in the nationality information, I had to revisit each of the HTML data files and parse out that information with BeautifulSoup:

epl_teams = []
for filepath in ['./data/uefa_champions.txt', './data/uefa_europa_league.txt', './data/uefa_super.txt']:
    with open(filepath, 'r') as f:
        soup = BeautifulSoup(f, 'html.parser')

        # each club name sits in a table header cell alongside a flag icon
        for th in soup.find_all('th'):
            span = th.find('span', {'class': 'flagicon'})
            if span:
                a = span.find('a', {'title': 'England'})
                if a:
                    epl_teams.append(th.text.strip())

Step 4: Filter each title list down to just the English teams

With the names of the English teams, I can filter my dataframes down accordingly:

df_champions = df_champions[df_champions.Club.isin(epl_teams)]
df_europa = df_europa[df_europa.Club.isin(epl_teams)]
df_super = df_super[df_super.Club.isin(epl_teams)]

Step 5: Merge the dataframes together

I need to merge my three dataframes into a single one. The pandas merge function did the trick:

df_combo = df_champions[['Club','title']].merge(df_europa[['Club','title']], on='Club', how='outer')
df_combo = df_combo.merge(df_super[['Club','title']], on='Club', how='outer')

And the results:

Merging dataframes with columns of the same name forces pandas to add suffixes to those column names

Step 6: Combine the “title” columns together into one

The final step, just to get this public data into a shape that replicates my original problem, is to merge the three “title” columns into a single one. Two lines do the deed:

df_combo['title'] = df_combo.apply(lambda r: [t for t in [r.title_x, r.title_y, r.title] if t is not np.nan], axis=1)
df_combo = df_combo[['Club', 'title']]

A dataframe with a list of values in the “title” column

Phew. That was a fair amount of work just to pull together some public data to replicate one of my work datasets. And the actual code I wrote to analyze a dataframe containing lists in a column? Well, that will have to wait for a future post.

