CSE 40657/60657
Homework 3

2021/04/09 5:00pm

In this assignment, you will build and improve a simple parser trained on the ATIS portion of the Penn Treebank. The ATIS (Air Travel Information System) portion consists of short queries and commands spoken by users of a simulated robot travel agent.


Visit this GitHub Classroom link, which will create a Git repository for you, and clone the repository to your computer. Initially, it contains the following files:

data/train.trees Training data
data/dev.strings Development data (strings)
data/dev.trees Development data (trees)
data/test.strings Test data (strings): don't peek!
data/test.trees Test data (trees): don't peek!

The repository also contains these programs:

Preprocessor
Script that replaces one-count words with <unk>
Postprocessor
Tree library
Script that computes labeled precision/recall
Skeleton of the neural parser

You may write code in any language you choose. You are free to use any of the provided Python code in your own solutions to this assignment. (In particular, the tree library has useful code for handling trees.)

1. Training a PCFG

First, some preliminaries (nothing to report for these steps):

  1. When run as a script, the tree library pretty-prints trees:
    $ head -1 train.trees | python
                 S                                          PUNC
                 │                                           │
                 VP                                          .
      VB                    NP
      │    ┌───────────┬────┴────┬───────────┐
    List   NP          PP        PP         SBAR
        ┌──┴──┐      ┌─┴──┐    ┌─┴─┐      ┌──┴───┐
        DT   NNS     IN   NP   TO  NP    WHNP    S
        │     │      │    │    │   │      │      │
       the flights from  NNP  to  NNP    WDT     VP
                          │        │      │   ┌──┴──┐
                      Baltimore Seattle that VBP    PP
                                              │  ┌──┴──┐
                                            stop IN    NP
                                                 │     │
                                                in    NNP
  2. Run train.trees through the preprocessor and save the output to train.trees.pre. This script makes the trees strictly binary branching: when it binarizes, it inserts nodes with labels of the form X*, and when it removes unary nodes, it fuses labels so they look like X_Y.
  3. Run train.trees.pre through the postprocessor and verify that the output is identical to the original train.trees. This script reverses all the modifications made by the preprocessor.
  4. Run train.trees.pre through the unknown-word replacement script and save the output to train.trees.pre.unk. This script replaces all words that occurred only once with the special symbol <unk>.
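The one-count replacement in step 4 can be sketched as follows. The function name and the token-list representation are hypothetical, not the provided script's interface:

```python
from collections import Counter

def replace_singletons(sentences):
    """Replace words occurring exactly once across the corpus with <unk>.

    `sentences` is a list of token lists (a simplifying assumption; the
    provided script operates on trees, replacing only the leaves).
    """
    counts = Counter(w for sent in sentences for w in sent)
    return [[w if counts[w] > 1 else "<unk>" for w in sent]
            for sent in sentences]

# "list" occurs twice, "flights" and "fares" once each:
# replace_singletons([["list", "flights"], ["list", "fares"]])
# -> [["list", "<unk>"], ["list", "<unk>"]]
```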

Next, we will learn a probabilistic CFG from trees, and store it in the following format:

NP -> DT NN # 0.5
NP -> DT NNS # 0.5
DT -> the # 1.0
NN -> boy # 0.5
NN -> girl # 0.5
NNS -> boys # 0.5
NNS -> girls # 0.5

  1. Write code to read in trees and to count all the rules used in each tree. Run your code on train.trees.pre.unk. Please report: How many unique rules are there?1 What are the top five most frequent rules, and how many times did each occur?2
  2. Write code to compute the conditional probability of each rule and print the grammar out in the above format. Please report: What are the top five highest-probability rules with left-hand side NNP, and what are their probabilities?2
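Steps 1 and 2 amount to relative-frequency estimation: count each rule, then divide by the total count of its left-hand side. A minimal sketch, assuming trees are represented as nested (label, children...) tuples with string leaves (the provided tree library's representation may differ):

```python
from collections import Counter, defaultdict

def rules(tree):
    """Yield (lhs, rhs) rules from a tree given as (label, children...)
    tuples, where leaves are plain strings."""
    label, *children = tree
    rhs = tuple(c if isinstance(c, str) else c[0] for c in children)
    yield (label, rhs)
    for c in children:
        if not isinstance(c, str):
            yield from rules(c)

def estimate(trees):
    """Relative-frequency estimates: count(lhs -> rhs) / count(lhs)."""
    counts = Counter(r for t in trees for r in rules(t))
    lhs_totals = defaultdict(int)
    for (lhs, _), n in counts.items():
        lhs_totals[lhs] += n
    return {(lhs, rhs): n / lhs_totals[lhs]
            for (lhs, rhs), n in counts.items()}

probs = estimate([("NP", ("DT", "the"), ("NNS", "flights"))])
for (lhs, rhs), p in sorted(probs.items()):
    print(f"{lhs} -> {' '.join(rhs)} # {p}")
```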

2. Parsing

Now write a CKY parser that takes your grammar and a sentence as input, and outputs the highest-probability parse.6 If you can't find any parse, output a blank line. Don't forget to replace unknown words with <unk>. Don't forget to use log-probabilities to avoid underflow.
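One way to sketch the Viterbi CKY recurrence with log-probabilities is below. The grammar representation (a lexical dict plus a list of binary rules) and all names are illustrative assumptions, not a required interface, and backpointers for recovering the tree are omitted:

```python
import math

def cky(words, lexical, binary):
    """Viterbi CKY over a strictly binarized PCFG.

    lexical: dict mapping word -> list of (tag, log_prob)
    binary:  list of (lhs, left, right, log_prob) rules
    Returns a chart where best[i][k][X] is the best log-probability of
    deriving words[i:k] from X; keep backpointers to recover the parse.
    """
    n = len(words)
    best = [[{} for _ in range(n + 1)] for _ in range(n + 1)]
    # Fill in the diagonal, falling back to <unk> for unknown words.
    for i, w in enumerate(words):
        for tag, lp in lexical.get(w, lexical.get("<unk>", [])):
            best[i][i + 1][tag] = max(best[i][i + 1].get(tag, -math.inf), lp)
    # Combine adjacent spans bottom-up, widest spans last.
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            k = i + span
            for j in range(i + 1, k):
                for lhs, left, right, lp in binary:
                    if left in best[i][j] and right in best[j][k]:
                        score = lp + best[i][j][left] + best[j][k][right]
                        if score > best[i][k].get(lhs, -math.inf):
                            best[i][k][lhs] = score
    return best
```

Looping over all binary rules at every split point works but is slow; indexing rules by their right-hand sides is a common speedup.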

  1. Run your parser on dev.strings. Run the parses through the postprocessor and save the result to dev.pcfg_parses. Show the output of your parser on the first five lines of dev.strings, along with their log-probabilities (base e).3
  2. Show a plot of parsing time ($y$ axis) versus sentence length ($x$ axis). Use a log-log scale. If the times are too short, you can use the timeit module to get more accurate timings. Estimate the value of $k$ for which $y \approx cx^k$ (you can do a least-squares fit or just eyeball it). Is it close to 3, and why or why not?3
  3. Run your parser and the postprocessor again on test.strings and save the output to test.pcfg_parses. Evaluate your parser output against the correct trees in test.trees using the evaluation script:
    python test.pcfg_parses test.trees
    Show the output of this script, including your F1 score, which should be at least 90%.3
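For step 2, the exponent $k$ can be estimated with an ordinary least-squares fit on the logs of the timings, since $y \approx cx^k$ implies $\log y \approx k \log x + \log c$. The timing numbers below are made up for illustration; substitute your own measurements:

```python
import math

# Hypothetical (sentence length, seconds) measurements -- replace with
# your parser's actual timings.
data = [(4, 0.002), (8, 0.017), (16, 0.130), (32, 1.05)]

# Least-squares slope of log y against log x gives the exponent k.
xs = [math.log(n) for n, _ in data]
ys = [math.log(t) for _, t in data]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
k = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
     / sum((x - mx) ** 2 for x in xs))
print(f"estimated exponent k = {k:.2f}")
```

(numpy.polyfit on the logged data computes the same slope in one call.)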

3. Training a neural parser

  1. The provided skeleton program is a simple neural parser with one missing function, Model.parse(). See the docstring of this function for what it is required to do. Adapt your code from Part 2 to implement this function.5
  2. Train the model and save it to a file.1 Please include the trainer's output in your report.1
  3. Parse test.strings, run the result through the postprocessor, and save the output.1 Please report your score,1 which should be at least 89%.1

Please read these submission instructions carefully.

  1. Add and commit your submission files to the repository you created in the beginning. The repository should contain:
    • All of the code that you wrote.
    • Your grammar from Part 1, your outputs from Part 2, and your final model and outputs from Part 3.
    • A file with:
      • Instructions on how to build/run your code.
      • Your responses to all of the instructions/questions in the assignment.
  2. After you complete each part, create a commit and tag it with git tag -a part1, git tag -a part2, etc. If you make the final submission late, we'll use these tags to compute the per-part late penalty. (You can also create the tags after the fact, with git tag -a part1 abc123, where abc123 is the commit's checksum.)
  3. Push your repository and its tags to GitHub (git push --tags origin HEAD).
  4. Submit your repository to Gradescope under assignment HW3. If you submit multiple times, the most recent submission will be graded. If you make changes to your repository after submission, you must resubmit if you want us to see and grade your changes.