Everyone:
Next week, we will continue to explore using the Python programming language for data processing. In particular, we will review regular expressions and learn how to utilize them in Python and then examine ways of processing structured information such as CSV and JSON files.
The readings for this week are:
Chapter 7. Pattern Matching with Regular Expressions
Focus on the use of re.findall.
Chapter 12. Web Scraping
Focus on the usage of requests.
Chapter 16. Working with CSV Files and JSON Data
Focus on csv.reader and csv.DictReader and using requests with JSON.
This week, there is no reading quiz. Instead, you are to complete three Python scripts: courses.py, users.py, and machines.py.
To test these scripts, you will need to download the Makefile and test scripts:
$ git checkout master # Make sure we are in master branch
$ git pull --rebase # Make sure we are up-to-date with GitHub
$ git checkout -b reading06 # Create reading06 branch and check it out
$ mkdir reading06 # Create reading06 folder
$ cd reading06 # Go into reading06 folder
# Download Reading 06 Makefile
$ curl -LO https://raw.githubusercontent.com/nd-cse-20289-sp23/cse-20289-sp23-assignments/master/reading06/Makefile
# Execute tests (and download them)
$ make
courses.py
For the first script, courses.py, you are to use regular expressions to extract all the CSE courses from the Computer Science Curriculum webpage.
After counting up how many courses there are per academic year (ie. Sophomore, Junior, and Senior), print out the totals in descending sorted order as shown below:
$ ./courses.py
5 Sophomore
4 Junior
2 Senior
It should basically be the Python equivalent of the following pipeline:
$ curl -sL https://cse.nd.edu/undergraduate/computer-science-curriculum/ \
| grep -Eo 'CSE [234][0-9]{4}' \
| sed -E -e 's/CSE 2.*/Sophomore/' -e 's/CSE 3.*/Junior/' -e 's/CSE 4.*/Senior/' \
| sort | uniq -c | sort -rn
To help you get started, we have provided you with the following courses.py skeleton code:
import collections
import re
import requests
# Globals
URL = 'https://cse.nd.edu/undergraduate/computer-science-curriculum/'
# Initialize a default dictionary with integer values
counts = None
# TODO: Make a HTTP request to URL
response = None
# TODO: Access text from response object
data = None
# TODO: Compile regular expression to match CSE courses (ie. CSE XXXXX)
regex = None
# TODO: Search through data using compiled regular expression and count up all
# the courses per class year
for course in re.findall(None, None):
    pass
# TODO: Sort items in counts dictionary by value in reverse order and
# display counts and class year
for year, count in sorted(counts.items(), key=lambda p: p[1]):
    print(f'{count:>7} {year}')
Implement the TODO sections in the code in order to complete the courses.py script.
A collections.defaultdict object is just a dict that has a default value according to the specified type. For instance, because counts is a collections.defaultdict(int), accessing counts[key] will automatically insert a 0 (the default int value) for that key if it does not exist in the dict yet.
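For example, here is a minimal sketch of counting items with a collections.defaultdict(int) (the list of words is made up purely for illustration):
import collections
counts = collections.defaultdict(int)       # Missing keys default to 0
for word in ['apple', 'banana', 'apple']:   # Hypothetical data
    counts[word] += 1                       # No need to check whether the key exists first
print(counts['apple'])                      # 2
print(counts['cherry'])                     # 0 (key is created with the default value)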
You can use requests.get to make an HTTP request. The response object you get back has a text attribute you can use to access the content or data of the request.
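For instance, a minimal sketch (example.com is just a placeholder URL, assuming the request succeeds):
import requests
response = requests.get('https://example.com')    # Make an HTTP GET request
data = response.text                              # Response body as a string
print(data[:80])                                  # Peek at the first 80 characters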
You can use the re.compile function to compile regular expressions. Likewise, you can use the re.findall function to search for all matches in a string. For instance:
# Match phone numbers in the format XXX-XXX-XXXX, while grouping just the
# area codes.
regex = re.compile(r'(\d{3})-\d{3}-\d{4}')
for areacode in re.findall(regex, data):
    print(areacode)
You can set the reverse keyword argument of the sorted function to sort in descending order (the default is ascending).
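For example, a small sketch with a made-up dictionary of counts, sorted by value from largest to smallest:
counts = {'Sophomore': 5, 'Junior': 4, 'Senior': 2}    # Hypothetical data
for year, count in sorted(counts.items(), key=lambda p: p[1], reverse=True):
    print(count, year)                                 # 5 Sophomore, 4 Junior, 2 Senior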
users.py
For the second script, users.py, you are to use csv.reader to loop through the records in your local /etc/passwd file and extract all the user descriptions (ie. the fifth field).
It should basically be the Python equivalent of the following pipeline:
$ cat /etc/passwd | cut -d : -f 5 | sed '/^\s*$/d' | env LC_ALL=C sort
For instance, here is the output of users.py on student10.cse.nd.edu:
$ ./users.py
Account used by the trousers package to sandbox the tcsd daemon
Anonymous NFS User
Apache
Condor Batch System
FTP User
GlusterFS daemons
Guest
LDAP Client User
MariaDB Server
NSCD Daemon
...
mail
operator
qemu user
root
shutdown
sync
systemd Bus Proxy
systemd Network Management
tog-pegasus OpenPegasus WBEM/CIM services
usbmuxd user
Note: The output does not contain any empty lines.
To help you get started, we have provided you with the following users.py skeleton code:
import csv
# Constants
PATH = '/etc/passwd'
# TODO: Loop through ':' delimited data in PATH and extract the fifth field
# (user description)
users = None
for record in csv.reader(None):
    pass
# TODO: Print user descriptions in sorted order
for user in sorted(users):
    pass
Implement the TODO sections in the code in order to complete the users.py script.
The csv.reader function takes a file stream (ie. use open) and can optionally take a delimiter keyword argument to specify what separates the fields in each row.
for row in csv.reader(open(path), delimiter=','):
    print(row[0])
You can check if a string is non-empty by simply doing the following:
if string: # Non-empty strings evaluate to True
    do_the_thing()
machines.py
For the third script, machines.py, you are to parse the JSON data from http://catalog.cse.nd.edu:9097/query.json, which contains a listing of all the machines and services registered with the Cooperative Computing Lab, and display the names of the machines with the type wq_factory.
It should basically be the Python equivalent of the following pipeline:
$ curl -sL http://catalog.cse.nd.edu:9097/query.json | sed -En 's/\{"name":"([^"]+)".*"type":"wq_factory".*/\1/p'
For instance, here is the output of machines.py:
$ ./machines.py
67-165-135-150.dynamic.arizona.edu
earth.crc.nd.edu
gpu06.cyverse.org
joe-124.lifemapper.org
To help you get started, we have provided you with the following machines.py skeleton code:
import requests
# Constants
URL = 'http://catalog.cse.nd.edu:9097/query.json'
# TODO: Make a HTTP request to URL
response = None
# TODO: Access json representation from response object
data = None
# TODO: Display all machine names with type "wq_factory"
for machine in data:
    pass
Implement the TODO sections in the code in order to complete the machines.py script.
The response object returned by the requests.get function has a json() method that parses the response text as JSON data and returns an appropriate Python data structure.
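For instance, a minimal sketch (the URL is a placeholder, and the shape of the returned data depends on the JSON document):
import requests
response = requests.get('http://example.com/data.json')   # Hypothetical URL
data = response.json()                                     # Parse the response body as JSON
print(type(data))                                          # list or dict, depending on the document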
You may wish to use the pprint.pprint function to examine and explore the JSON data to determine how to access the 'name' and 'type' information of each machine.
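For example, assuming data holds the structure returned by response.json():
import pprint
pprint.pprint(data)   # Pretty-print the parsed JSON to inspect its structure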
To test all of these scripts, you can use the provided test_scripts.sh, which should have been downloaded by the Makefile:
$ ./test_scripts.sh
Testing scripts...
courses.py ... Success
users.py ... Success
machines.py ... Success
Score 4.00 / 4.00
Status Success
Because the data used by each of these scripts is pulled from live data sources, the output may change between runs.
To submit your work, follow the same process outlined in Reading 01:
#--------------------------------------------------
# BE SURE TO DO THE PREPARATION STEPS ABOVE
#--------------------------------------------------
$ cd reading06 # Go into reading06 folder
$ $EDITOR courses.py # Edit your courses.py file
$ $EDITOR users.py # Edit your users.py file
$ $EDITOR machines.py # Edit your machines.py file
$ ./test_scripts.sh # Test your reading 06 scripts
$ git add Makefile # Add Makefile to staging area
$ git add courses.py # Add courses.py to staging area
$ git add users.py # Add users.py to staging area
$ git add machines.py # Add machines.py to staging area
$ git commit -m "Reading 06: Scripts" # Commit work
$ git push -u origin reading06 # Push branch to GitHub
Remember to create a Pull Request and assign the appropriate TA from the Reading 06 TA List.
DO NOT MERGE your own Pull Request. The TAs use open Pull Requests to keep track of which assignments to grade. Closing them yourself will cause a delay in grading and confuse the TAs.