This Is Not The Course Website You Are Looking For

This course website is from a previous semester. If you are currently in the class, please make sure you are viewing the latest course website instead of this old one.

Everyone:

Next week, we will continue to explore using the Python programming language for data processing. In particular, we will review regular expressions and learn how to utilize them in Python and then examine ways of processing structured information such as CSV and JSON files.

TL;DR

The focus of this reading is to introduce data processing in Python.

Readings

The readings for this week are:

  1. Automate The Boring Stuff

Optional Resources

Here are some additional resources:

Scripts

This week, there is no reading quiz. Instead, you are to complete three Python scripts: courses.py, users.py, and machines.py.

To test these scripts, you will need to download the Makefile and test scripts:

$ git checkout master                 # Make sure we are on the master branch
$ git pull --rebase                   # Make sure we are up-to-date with GitHub

$ git checkout -b reading06           # Create reading06 branch and check it out

$ cd reading06                        # Go into reading06 folder

# Download Reading 06 Makefile
$ curl -LO https://raw.githubusercontent.com/nd-cse-20289-sp21/cse-20289-sp21-assignments/master/reading06/Makefile

# Execute tests (and download them)
$ make

Script: courses.py

For the first script, courses.py, you are to use regular expressions to extract all the CSE courses from the Computer Science Curriculum webpage.

After counting how many courses there are per academic year (i.e. Sophomore, Junior, and Senior), print the totals in descending order as shown below:

$ ./courses.py
      5 Sophomore
      4 Junior
      2 Senior

It should basically be the Python equivalent of the following pipeline:

$ curl -sL https://cse.nd.edu/undergraduate/computer-science-course-map/ \
  | grep -Eo 'CSE [234][0-9]{4}' \
  | sed -E -e 's/CSE 2.*/Sophomore/' -e 's/CSE 3.*/Junior/' -e 's/CSE 4.*/Senior/' \
  | sort | uniq -c | sort -rn
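To illustrate the counting technique behind this pipeline (not the full solution), here is a minimal sketch that runs the same regular expression over a hypothetical HTML snippet and tallies matches with collections.defaultdict; the sample markup and the YEARS mapping are assumptions for the example, while the real script fetches the live page:

```python
import collections
import re

# Hypothetical HTML snippet standing in for the live course map page
html = '''
<td>CSE 20110</td> <td>CSE 20232</td>
<td>CSE 30331</td> <td>CSE 40113</td>
'''

# Map the leading course digit to an academic year
YEARS = {'2': 'Sophomore', '3': 'Junior', '4': 'Senior'}

counts = collections.defaultdict(int)
for course in re.findall(r'CSE [234][0-9]{4}', html):
    counts[YEARS[course[4]]] += 1         # course[4] is the leading digit

# Sort by count in descending order, then display
for year, count in sorted(counts.items(), key=lambda p: p[1], reverse=True):
    print(f'{count:>7} {year}')
```

The same structure carries over to the skeleton below: only the source of the text and the exact output change.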

Skeleton

To help you get started, we have provided you with the following courses.py skeleton code:

import collections
import re
import requests

# Globals

URL = 'https://cse.nd.edu/undergraduate/computer-science-course-map/'

# TODO: Initialize a default dictionary with integer values
counts = None

# TODO: Make an HTTP request to URL
response = None

# TODO: Access text from response object
data = None

# TODO: Compile regular expression to match CSE courses (i.e. CSE XXXXX)
regex = None

# TODO: Search through data using compiled regular expression and count up all
# the courses per class year
for course in re.findall(None, None):
    pass

# TODO: Sort items in counts dictionary by value in reverse order and
# display counts and class year
for year, count in sorted(counts.items(), key=lambda p: p[1], reverse=True):
    print(f'{count:>7} {year}')

Implement the TODO sections in the code in order to complete the courses.py script.

Hints

Script: users.py

For the second script, users.py, you are to use csv.reader to loop through the records in your local /etc/passwd file and extract all the user descriptions (i.e. the fifth field).

It should basically be the Python equivalent of the following pipeline:

$ cat /etc/passwd | cut -d : -f 5 | sed '/^\s*$/d' | env LC_ALL=C sort
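To see how csv.reader handles a non-comma delimiter, here is a minimal sketch over a few made-up passwd-style records; the real script reads /etc/passwd from PATH rather than this in-memory sample:

```python
import csv
import io

# Hypothetical /etc/passwd-style records (the real script opens PATH instead)
sample = io.StringIO(
    'root:x:0:0:root:/root:/bin/bash\n'
    'sync:x:5:0:sync:/sbin:/bin/sync\n'
    'ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin\n'
    'nobody:x:99:99::/:/sbin/nologin\n'   # empty description, skipped
)

descriptions = []
for record in csv.reader(sample, delimiter=':'):
    if record[4]:                         # drop empty descriptions
        descriptions.append(record[4])    # fifth field is the description

for description in sorted(descriptions):
    print(description)
```

Note that Python's default string sort compares byte-wise like LC_ALL=C sort, so uppercase descriptions (e.g. FTP User) come before lowercase ones.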

For instance, here is the output of users.py on student05.cse.nd.edu:

$ ./users.py
Account used by the trousers package to sandbox the tcsd daemon
Anonymous NFS User
Apache
Condor Batch System
FTP User
GlusterFS daemons
Guest
LDAP Client User
MariaDB Server
NSCD Daemon
...
mail
operator
qemu user
root
shutdown
sync
systemd Bus Proxy
systemd Network Management
tog-pegasus OpenPegasus WBEM/CIM services
usbmuxd user

Note: The output does not contain any empty lines.

Skeleton Code

To help you get started, we have provided you with the following users.py skeleton code:

import csv

# Constants

PATH = '/etc/passwd'

# TODO: Loop through ':' delimited data in PATH and extract the fifth field
# (user description)
pass

# TODO: Print user descriptions in sorted order
pass

Implement the TODO sections in the code in order to complete the users.py script.

Hints

Script: machines.py

For the third script, machines.py, you are to parse the JSON data from http://catalog.cse.nd.edu:9097/query.json, which contains a listing of all the machines and services registered with the Cooperative Computing Lab, and display the names of the machines with the type wq_factory.

It should basically be the Python equivalent of the following pipeline:

$ curl -sL http://catalog.cse.nd.edu:9097/query.json | sed -En 's/\{"name":"([^"]+)".*"type":"wq_factory".*/\1/p'
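Assuming the endpoint returns a JSON array of objects (as the sed pipeline suggests), the filtering step can be sketched against a made-up snippet of that data; with requests, response.json() would yield the same kind of list:

```python
import json

# Hypothetical snippet of the catalog's JSON response (a list of records)
data = json.loads('''
[
  {"name": "earth.crc.nd.edu",    "type": "wq_factory"},
  {"name": "disc01.crc.nd.edu",   "type": "wq_master"},
  {"name": "condorfe.crc.nd.edu", "type": "wq_factory"}
]
''')

# Display only the machines whose type is "wq_factory"
for record in data:
    if record.get('type') == 'wq_factory':
        print(record['name'])
```

Using record.get('type') rather than record['type'] avoids a KeyError if some records lack a type field.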

For instance, here is the output of machines.py:

$ ./machines.py
103-165-135-150.dynamic.arizona.edu
126-165-135-150.dynamic.arizona.edu
barricade.cri.uchicago.edu
condorfe.crc.nd.edu
crcfe01.crc.nd.edu
crcfe02.crc.nd.edu
earth.crc.nd.edu
vm142-121.cyverse.org
Yeti.lifemapper.org

Skeleton

To help you get started, we have provided you with the following machines.py skeleton code:

import requests

# Constants

URL = 'http://catalog.cse.nd.edu:9097/query.json'

# TODO: Make an HTTP request to URL
response = None

# TODO: Access json representation from response object
data = None

# TODO: Display all machine names with type "wq_factory"
pass

Implement the TODO sections in the code in order to complete the machines.py script.

Hints

Testing

To test all of these scripts, you can use the provided test_scripts.sh, which should have been downloaded by the Makefile:

$ ./test_scripts.sh
Testing scripts...
 courses.py                               ... Success
 users.py                                 ... Success
 machines.py                              ... Success

   Score 4.00

We'll Do It Live

Because each of these scripts pulls its data from live sources in real time, the output may change between runs.

Submission

To submit your work, follow the same process outlined in Reading 01:

#--------------------------------------------------
# BE SURE TO DO THE PREPARATION STEPS ABOVE
#--------------------------------------------------

$ cd reading06                        # Go into reading06 folder

$ $EDITOR courses.py                  # Edit your courses.py file
$ $EDITOR users.py                    # Edit your users.py file
$ $EDITOR machines.py                 # Edit your machines.py file

$ ./test_scripts.sh                   # Test your reading 06 scripts

$ git add Makefile                    # Add Makefile to staging area
$ git add courses.py                  # Add courses.py to staging area
$ git add users.py                    # Add users.py to staging area
$ git add machines.py                 # Add machines.py to staging area

$ git commit -m "Reading 06: Scripts" # Commit work

$ git push -u origin reading06        # Push branch to GitHub

Pull Request

Remember to create a Pull Request and assign the appropriate TA from the Reading 06 TA List.

DO NOT MERGE your own Pull Request. The TAs use open Pull Requests to keep track of which assignments to grade. Closing them yourself will cause a delay in grading and confuse the TAs.