Student Projects - EG Spring 2022

The following is a listing of demos of student projects done in the Spring 2022 Engineering Computing course. The students chose their own data sources from online resources and decided how they wanted to examine, analyze, and visualize the data.

  • 08-24 - Lord of the Rings Analysis
    Matthew Daly, Obioha Onwuanibe
    entertainment

    Data is gathered from the website through various requests throughout our project. Most of our code is dedicated to turning these responses into graphable data frames. The biggest issue was that much of the data was not standardized, so some information had to be parsed.

  • 06-07 - Correlation between Pollution and Climate Change
    Declan Brown, TJ Weber
    Climate Change/Weather

    Our project’s data on climate change was researched and downloaded from a number of internet databases. Once the data was downloaded, we wrote programs that allowed us to easily sort and view the data. We then displayed this sorted data in graphs and figures that made the information simple to understand.

  • 06-16 - Human Trafficking
    Izak Schmidlkofer, Brisny Rodriguez Flores
    social, crime, justice

    The data came from CSV files and APIs from six different sources. They were refined into lists for matplotlib at first. Then, they were refined into dataframes for plotly express. Plotly express revealed that percentages in the form of pie charts and choropleths were the most intuitive visuals, and that subplots were needed to capture complexity across multiple dimensions.
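
    A minimal sketch (not the project's actual code) of the plotly express step described above, using a made-up dataframe; the column names and values are hypothetical.

      import pandas as pd
      import plotly.express as px

      # Hypothetical summary table; the real project combined six CSV/API sources.
      df = pd.DataFrame({
          "iso_alpha": ["USA", "MEX", "PHL"],   # ISO-3 country codes
          "cases": [1200, 800, 650],
      })

      # Percentages as a pie chart.
      px.pie(df, names="iso_alpha", values="cases").show()

      # Choropleth keyed on the ISO-3 codes.
      px.choropleth(df, locations="iso_alpha", color="cases").show()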

  • 06-12 - You Better Beelieve it
    Rebecca Gilpatric, Lucy Ordway
    nature

    We first found data about weather patterns across 15 years, and data about bees and honey for the same years across all states. We then used Python to put this data into data frames, which allowed us to easily analyze the contents. From there, we made graphs of various correlations between the data. We also added new qualifiers, such as classifying states as hot or dry and by their region. From there, we found insights based on patterns in the graphs.

  • 06-15 - @elonmusk: Style and Influence
    John Flanagan, Garrett Young
    social, financial

    Our project analyzes the influence of Elon Musk’s tweets from 2010 through March 5, 2022. Our project initially determines Elon Musk’s most popular tweets, what days and months he tweets most frequently, his overall sentiment and emotions while tweeting, and specific words that he commonly uses in his tweets. Our project then analyzes the influence of Elon Musk’s tweets on “Bitcoin”, “Dogecoin”, “Tesla”, and “Twitter”. Finally, our project analyzes the influence of Elon Musk’s tweets on Google Search Trends of “Bitcoin”, “Dogecoin”, “Elon Musk”, “Tesla”, and “SpaceX”.

  • 06-08 - Airplane Crashes
    Samuel Hillenmeyer, Mark Van Kirk
    education

    Our data-to-information journey began when we started writing code to pull data from the plane crashes website. After our code succeeded in that task, we formed lists and dictionaries from certain subsets of that data, which we then used to create dataframes. Lastly, we created visual representations from those dataframes that allow the audience to garner valuable information on the history of plane crashes. There were many issues we faced along the way, but we stuck to this rough outline and were victorious in the end.

  • 06-14 - Motels4Now and State of Girls in Indiana
    Kyle Phan, Letty Orozco
    social

    Our project's data raises awareness of two social problems: homelessness and child maltreatment. In this case, the data is local. The first part of the project provides data for homelessness in South Bend, Indiana, and the second part compares child maltreatment rates in the state of Indiana to the rates of child maltreatment in the United States. For homelessness in South Bend, we worked with the data set from the non-profit organization Motels4Now. Motels4Now is a housing-first program created in August 2020 that houses the chronically homeless in dignity and is helping many move into more long-term, month-to-month housing. Because the organization is so new, its data set was difficult to work with and manipulate. Even so, we found that so far the organization has been successful. For child maltreatment rates in Indiana and nationally, we worked with data sets provided by a researcher from St. Mary's, Prof. Sianne Vijay. It was interesting to see how the rates of maltreatment compared between Indiana and the nation, and by gender. In some areas child maltreatment has improved, but in others it has gotten worse. This was eye-opening; we realized how severe this problem is and how important it is to fix it.

  • 06-11 - The various factors that impact happiness
    Conner Rauguth, Rena Steele
    social, financial, international, crime

    The project began by researching websites that had the specific information we were looking for. Once we found those websites, we downloaded the data as CSV files and copied them into the Google Colab notebook, creating internal CSV files in the notebook itself with the data from each website. The next step depended on the dataset, since some did not have many years of data, but for all of them we used Python’s pandas library to convert the CSV into a dataframe. We then took the iso_alpha dataset and appended the 3-digit iso_alpha code to each of the other datasets. If a dataset had multiple years, we broke it into individual years and then concatenated them so that all the years of data sat in a single column. With these newly organized datasets, we first made heat maps of the data using Python’s plotly; combined with the iso_alpha codes, this let us make interactive visualizations for each individual dataset. We then made scatter plots comparing each country's happiness with the other attributes collected in the other dataframes, and compared factors that had similar correlations to see if there was an obvious correlation between those datasets as well. Finally, to oversimplify greatly, we took all these interactive graphs and put them into an HTML file with features such as accordions and text boxes. This final HTML file creates a website that displays all of the information in one place, making it very easy to read and interact with.
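
    A rough sketch of the reshaping and mapping steps described above, using a small made-up happiness table; the students describe splitting multi-year data by year and concatenating it, and pandas' melt is one equivalent way to get the same single-column result.

      import pandas as pd
      import plotly.express as px

      # Hypothetical wide table: one row per country, one column per year.
      wide = pd.DataFrame({
          "country": ["Finland", "Denmark"],
          "2019": [7.8, 7.6],
          "2020": [7.9, 7.6],
      })
      iso = pd.DataFrame({"country": ["Finland", "Denmark"],
                          "iso_alpha": ["FIN", "DNK"]})

      # Collapse the per-year columns into a single year/value pair of columns.
      long = wide.melt(id_vars="country", var_name="year", value_name="happiness")

      # Append the 3-letter ISO code so plotly can draw the world map.
      long = long.merge(iso, on="country")

      px.choropleth(long, locations="iso_alpha", color="happiness",
                    animation_frame="year", hover_name="country").show()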

  • 06-05 - What buys happiness?
    Caroline Landry, Rex Stayer-Suprick
    social, financial, international

    It started with figuring out which datasets we found most intriguing in relation to a person's happiness. After that, each CSV file for each dataset and its given year was broken down and condensed into its target information. With that information extracted, it was combined into a bigger list containing all of the datasets with their year, country, and ISO3 key. These were then used to create the visualizations.

  • 06-02 - Music and Happiness
    Varun Taneja, Antonio Karam
    social, entertainment

    Our data-to-information journey was a long and technical one. We started with the Spotify website's top 200 tracks for the years 2016-2021. Using web scraping, a technique to retrieve data from a website using code, we got the names and rankings of songs from multiple countries and downloaded them as CSV (comma-separated value) files. Our next step was to use the Spotify API, with the name of the song and the artist from each CSV file, to retrieve the valence of each song (beat, rhythm, timbre, and lyrics all play a vital role in the calculation of a song's valence score). The Spotify API required a private key, meaning it needed an account to be used, so once we used the API to extract the data we added it to a CSV file. After this process, we were left with numerous files that we needed to compound into one, so we created a Python script that combines all of these distinct CSVs for each country into one main CSV with the headers 'index', 'positon', 'streams', 'year', 'country', 'title', 'artist'. Once we had done this, we downloaded CSV files from the UN website containing happiness ratings, GDP per capita, crime, and education levels for all the countries. Once we cleaned the data for each of the CSVs, removing unnecessary values and formatting it so that it was all homogeneous, we had information that we could use to make our visualizations.
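
    A small sketch of the "compound the per-country CSVs into one main CSV" step, assuming hypothetical column names, with tiny literal frames standing in for the scraped files.

      import pandas as pd

      # In the real project each frame came from a scraped per-country CSV,
      # e.g. pd.read_csv("charts_us.csv"); literal frames stand in here.
      us = pd.DataFrame({"position": [1, 2], "streams": [9_000_000, 7_500_000],
                         "year": [2021, 2021], "title": ["Song A", "Song B"],
                         "artist": ["Artist A", "Artist B"]})
      us["country"] = "us"
      fr = us.copy()
      fr["country"] = "fr"

      # One main CSV holding every country's chart data.
      combined = pd.concat([us, fr], ignore_index=True)
      combined.to_csv("all_countries.csv", index_label="index")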

  • 06-13 - Crimes in Chicago Interactive Plotting
    Owen Grimaldi, Kyle Newman
    crime

    As soon as we saw the suggestion for a project relating to crime in Chicago, both of us were hooked. The City of Chicago provides plenty of detail on recent crimes, meaning that we could use this data to show many insights about crime in the city. After we found a way to map our data in Python (using libraries named folium and GeoPandas), we were able to create amazing interactive maps that showed great detail about the crimes. We sorted this data based on a few select criteria, namely type of crime, location, and frequency. Once plotted, several insights both challenging and confirming common conceptions about crime in Chicago were very clear.
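
    A minimal folium sketch of the kind of interactive crime map described above; the coordinates and crime types are made up, and the real project also used GeoPandas for the geographic work.

      import folium

      # Hypothetical sample of crime records (the real data comes from the
      # City of Chicago's open data portal).
      crimes = [
          {"type": "THEFT",   "lat": 41.8781, "lon": -87.6298},
          {"type": "BATTERY", "lat": 41.8919, "lon": -87.6051},
      ]

      # Interactive map centered on Chicago, one marker per crime.
      m = folium.Map(location=[41.8781, -87.6298], zoom_start=11)
      for c in crimes:
          folium.CircleMarker([c["lat"], c["lon"]], radius=4,
                              popup=c["type"]).add_to(m)
      m.save("chicago_crime_map.html")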

  • 06-01 - Affects of Crime on the City of Chicago: A Statistical Analysis
    Miles Roberts, Bobby Rizzo
    crime, education

    All of our data could be found in JSON format, so we used the requests library and JSON functions to access the data as lists of dictionaries. Then we put those dictionaries into dataframes. From this, we began sorting and analyzing.
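
    A sketch of the requests/JSON-to-dataframe flow described above; the endpoint shown is one of the City of Chicago's public Socrata crime endpoints, but it may not be the exact URL or fields the project used.

      import requests
      import pandas as pd

      # JSON comes back as a list of dictionaries, one per crime record.
      url = "https://data.cityofchicago.org/resource/ijzp-q8t2.json"
      records = requests.get(url, params={"$limit": 1000}).json()

      # Straight into a dataframe for sorting and analysis.
      df = pd.DataFrame.from_records(records)
      print(df.head())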

  • 06-17 - Pokefacts
    Clayton O'Dell, Anthony Battaglia
    entertainment

    The data was generated from API requests and JSON-based requests, then compiled into lists, dictionaries, and combinations of the two. The requests were used to create dictionaries; from those dictionaries, lists and other dictionaries were made. From there, we displayed the data through graphs, text, tables, and images. Associated data was also organized into classes, which were then utilized in interactive formats such as sliders and dropdown menus for easy user input.
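
    A small sketch of the dropdown idea mentioned above, assuming a notebook environment with ipywidgets; the Pokémon values here are a hypothetical stand-in for the data fetched from the API.

      import ipywidgets as widgets
      from IPython.display import display

      # Hypothetical stats keyed by name; the real values came from API requests.
      POKEMON = {"bulbasaur": {"hp": 45}, "charmander": {"hp": 39}, "squirtle": {"hp": 44}}

      dropdown = widgets.Dropdown(options=sorted(POKEMON), description="Pokémon:")

      def show_stats(change):
          # Print the stats for whichever Pokémon the user selects.
          print(change["new"], POKEMON[change["new"]])

      dropdown.observe(show_stats, names="value")
      display(dropdown)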

  • 06-04 - Plane Crashes
    Jeffery Yang, Leo Herman
    international, International/Aviation

    The data started as text files on a different website for each crash. We read all of the data into separate files from each website. Each of these files had a location, but it was just a name, not anything we could plot. We used a Google API to give each of these files a latitude and longitude. We then put all of the files together into one big JSON file that could be read all at once to make graphs such as the ones shown on the website, depending on which variables we graph. Finally, for the Aircraft Manufacturers section, we used another data source with manufacturers for different plane types to add manufacturers to each data file.
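
    A hedged sketch of the geocoding step, assuming a Google Geocoding API key; the record layout and file names are hypothetical, while the endpoint and response structure are Google's standard geocoding API.

      import json
      import requests

      API_KEY = "YOUR_KEY_HERE"   # assumes a valid Google Geocoding API key

      def add_coordinates(record):
          # Look up a crash record's place name and attach latitude/longitude.
          resp = requests.get(
              "https://maps.googleapis.com/maps/api/geocode/json",
              params={"address": record["location"], "key": API_KEY},
          ).json()
          if resp.get("results"):
              loc = resp["results"][0]["geometry"]["location"]
              record["latitude"], record["longitude"] = loc["lat"], loc["lng"]
          return record

      crashes = [{"location": "Mount Erebus, Antarctica"}]   # illustrative record
      crashes = [add_coordinates(c) for c in crashes]
      with open("crashes_with_coords.json", "w") as f:
          json.dump(crashes, f, indent=2)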

  • 06-03 - Does Money Buy Happiness?
    Brendan Goodrow, Carson Feldman
    social, financial, international

    Our project sought to answer the question: does money buy happiness? We studied this first by accessing public sources for key financial and social information for countries globally, including data collected by the United Nations and Gallup polls. The data in these sources was stored in Excel sheet format, so we downloaded the files and uploaded them locally into our code. Once we had the data downloaded, we read through and parsed out which information we wanted to use. Through code we separated out a specific year (2017) of data to standardize our information. Using this data we then aligned each category's data (GDP, life expectancy, etc.) to the appropriate country in the form of lists. These lists of a country, followed by its specific data for a given year, allowed us to create visualizations (graphs) mapping key attributes against each other (such as happiness vs. GDP for every country). By creating these visualizations we were able to make observations about trends and correlations, from which we drew the modest conclusion that, while both are important, happiness is more dependent upon social factors than on per capita income.

  • 06-10 - Analyzing death causes in France from 2001 - 2008
    Ana Beatriz Ribeiro Soares, Annie Zhao
    social

    Our project began with a long CSV data file, and after converting it from CSV to data frames, we were able to see the information clearly and begin to work on creating visualizations. We looked at the data as a whole and decided upon six categories into which it made sense to divide the data: death by drugs, death by gender, death by cancer, death by congenital conditions, deaths related to external causes, and deaths by assault. From these categories, we created graphs and charts using plotly and gained many insights from them.

  • 06-09 - Popular Baby Names in Indiana
    Isabela Novoa
    social

    When we were collecting our data from the website of the Social Security Administration, we noticed that the CSV files provided for each state contained an extremely large amount of data on the popular names recorded from 1910 to 2020. Because this was too much data to visualize at once, we decided to look only at the popularity of names in Indiana every ten years over the span of 1910 to 2020. We created separate CSV files in our Google Colab notebooks to make it easier to observe the data ourselves and create numerous visualizations for it.

  • 06-06 - A Changing World: Climate Change and Endangered Animals
    Dorothy Crumlish, Matthew LaCapra
    international, Climate, Biodiversity

    Our data came from various sources, such as the IUCN Red List, the OECD, Climate.gov, and a few others. We downloaded the data as CSV files, which are similar to Excel sheets. We went line by line and extracted the data, which we then stored in a data type called a dataframe, which is just like a table of values. We usually had one column for years, one column for countries, and one for each of the values we wanted to graph. We used this data type to put our values into whatever type of visualization we needed; the country column (of official codes) was used for the maps and the years were used for the animation. This is how we created our various visualizations, which span almost every country over a large range of time.
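
    A minimal sketch of the year/country/value dataframe and animated map described above, with made-up numbers and a hypothetical column layout.

      import pandas as pd
      import plotly.express as px

      # One row per country per year, matching the layout described above.
      df = pd.DataFrame({
          "year": [2000, 2010, 2000, 2010],
          "iso_code": ["USA", "USA", "BRA", "BRA"],
          "threatened_species": [450, 520, 380, 610],
      })

      # Country codes drive the map, years drive the animation slider.
      px.choropleth(df, locations="iso_code", color="threatened_species",
                    animation_frame="year").show()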

  • 08-01 - Are Depressants Actually Depressing
    Jack Wilson, Collin Bowers
    social, international, political

    For our project, we needed data related to alcohol consumption and happiness for countries around the world. To do this, we searched the internet for reputable sources that contained the data we needed in a csv format (a data source with the values separated by commas). We then loaded these files into our program and used Python to parse the data into various lists (such as a countries list, alcohol consumption list, happiness list, etc). We then compiled all of these lists into one single data structure called a dataframe. This dataframe then allowed us to use a powerful visualization tool called Plotly to display interactive scatterplots and world maps of our data.

  • 08-12 - Impact of Sanctions on Foreign Economies
    Anne Howard, Grace Collins
    financial, political

    First, we worked to find relevant data sources that would reflect the impact of sanctions imposed on a country. We decided to focus on the change in Gross Domestic Product and Foreign Direct Investment, and in the case of Russia the Moscow stock exchange, to see the immediate impact. We downloaded the CSV files, uploaded them to Google Colab, and read them in there. For some CSV files, cleaning and removing certain characters was necessary. From there we extracted the data for Russia and Iran and created lists and dataframes in order to prepare to make meaningful visualizations. Once we had the data in the proper format, we thought about what comparisons we wanted to make: GDP and FDI over a longer time period, to see trends before and after sanctions; GDP and FDI together, to see if there was a correlation between the two increasing and decreasing; and trading volume for the two years before the invasion, to see the stock trend before and after sanctions.

  • 08-11 - Unemployment in the US
    Jerrick Ban, Andrew Mitchell
    financial, political

    We wanted to show data that would be useful for politicians and legislators when they make decisions related to unemployment. We browsed the internet for reliable sources, such as the U.S. Bureau of Labor Statistics and Federal Reserve Economic Data from the St. Louis Fed, to find datasets that pertained to our project. After discovering the sources, we accessed them by downloading the datasets as CSV files or through application programming interfaces, and stored the data in code. We took the data we stored and made various charts and graphs by plotting the information against each other. Thus, from the raw data we obtained, we were able to construct graphs that visualized trends and relationships between unemployment and various other economic factors.

  • 08-02 - Time Usage and Macroeconomic Trends
    Russell Henggeler, Daniel Blittschau
    international

    We were able to download CSV files from the websites of the data sources and upload them to Google Colab. We then used a CSV reader and pd.read_csv to convert the raw data from CSV format to dataframes. Then, we used more Python code to combine datasets into multiple larger dataframes. Next, we used plotly express to create interactive visualizations and finally used plotly.io to export those into an HTML document and publish the website.
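
    A compressed sketch of that pipeline (read, combine, plot, export), with two tiny hypothetical tables standing in for the downloaded CSVs.

      import pandas as pd
      import plotly.express as px
      import plotly.io as pio

      # Stand-ins for the CSVs uploaded to Colab, e.g. pd.read_csv("time_use.csv").
      hours = pd.DataFrame({"country": ["US", "FR"], "leisure_hours": [4.9, 5.3]})
      gdp = pd.DataFrame({"country": ["US", "FR"], "gdp_per_capita": [63000, 40000]})

      # Combine the datasets into one larger dataframe.
      merged = hours.merge(gdp, on="country")

      # Interactive visualization, then export to HTML for the published site.
      fig = px.scatter(merged, x="gdp_per_capita", y="leisure_hours", text="country")
      pio.write_html(fig, "time_vs_gdp.html")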

  • 08-15 - The Correlation Between Crimes and Educational Environment
    Tin Bui, Emma Speyrer
    crime, education

    Most of our project's data-to-information journey stemmed from finding the correct data and the proper way to visualize it. There were two crime data sources, one covering everything since 2001 and one covering only the past year. Due to the size of the former, we decided to use the crime data from the past year. For education, we had established what we considered the educational environment, so we needed data to build that picture, which came in the form of school reports and socioeconomic data. Parsing the data was not a significant issue for us, but we did have to recognize that different data types require different parsing approaches. Displaying information from the data was a little troublesome due to its size and the parts that were missing. However, we got past this by working with the data we did have, which was still a substantial amount, and it was good to look at the raw data rather than jumping straight into visualizations. As a result, when we created the visualizations, they confirmed trends we had already seen and revealed additional ones.

  • 08-05 - Olympic Performance vs. Country Demographics
    Walker Bagley, Eamon Tracey
    financial, sports, entertainment, international

    Our data came in as a 135,000 row spreadsheet and was cleaned, tallied into medal counts, and compared with demographics before being plotted in various visuals.
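
    A sketch of the tallying step, with a few made-up rows standing in for the 135,000-row spreadsheet; the column names are illustrative.

      import pandas as pd

      # Stand-in for the athlete-level spreadsheet.
      athletes = pd.DataFrame({
          "NOC":   ["USA", "USA", "CHN", "GBR"],
          "Medal": ["Gold", None, "Silver", "Gold"],
      })

      # Tally medals per country, dropping athletes who did not medal.
      medal_counts = (athletes.dropna(subset=["Medal"])
                              .groupby(["NOC", "Medal"])
                              .size()
                              .unstack(fill_value=0))
      print(medal_counts)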

  • 08-08 - COVID-19's Impact On US Economy
    Will Hoppe, Patrick Schlosser
    financial, COVID-19

    We began by examining data on the unemployment changes of every state from 2020 to 2021, and this raised questions concerning why some states performed better than others. This led us to a sector-by-sector examination of the US economy by fiscal quarter from Q1 2005 to Q3 2021 (with particular emphasis on 2018-2021) to visualize how COVID-19 affected different aspects of the U.S. economy. To finish, we returned to a state-by-state examination of GDP, looking at percent changes from 2019 to 2020 and from 2020 to 2021.

  • 08-13 - Marriage and Divorce in the United States
    Jonathan D'Alonzo, Matt Cornelius
    social

    Data was gathered from the CDC and from a study on the relationships between marriage rates and other social factors. The data was in the form of CSV files, so we uploaded these files to Google Colab. We changed the CSV files into dataframes that we could graph, which required us to manipulate the data in ways that would allow for animation, maps, etc. Once the dataframes were created, we made many different visualizations from the data we had gathered. From these visualizations, we developed insights into how they could be interpreted and what information they conveyed.

  • 08-03 - County Mask Mandate Effectiveness
    Zach Ebner, Richard Nin
    health

    We went from having a bunch of raw data in excel spreadsheets, which was disorganized, duplicated, and difficult to read, to having one singular data file with information for every single day for every single county.

  • 08-16 - Causes of Robotics
    Meghan Finnerty, DeVante Brooks
    financial, education

    After already picking out a few data sources we wanted, we first had to make the data actually readable and find the useful parts within it. The next step was using our somewhat sorted data to make basic visual graphs just to get a general idea of what was happening. The basic graphs guided us to make a fully sorted data set which was useful for all future graphs. Next, we used our new data set to make more specific graphs, which we then had to analyze in order to draw noteworthy conclusions from all the work we had done. Lastly, we sorted the graphs along with our conclusions into a single website so that general audiences can get the key takeaways we had.

  • 08-14 - Exploring the Effect of Tech on Europe (renamed from original)
    Brendan McGinn, Thomas Kusak
    social, financial, technology, health

    From the Eurostat database, we downloaded a JSON file and uploaded it as an object in Python. One central issue, however, was that the specific cells of the spreadsheet were in random order (since they were dictionary keys) and the keys were arbitrary numbers, so we had to create a method to correctly sort the data into various lists. From there, we continued to filter the data, scraping the label names and recovering potentially lost pieces of data. After that, we converted the lists of data we scraped into dataframes, with the country names as indexes and the various statistics as columns. Using these dataframes, we were able to generate graphs and visually represent trends in the data. The trends we observed were then recorded in paragraph format and uploaded to our website.
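
    A simplified sketch of the sorting problem described above: cell values keyed by arbitrary numeric strings, realigned by sorting the keys and indexed by country name. The structure here is illustrative, not the exact Eurostat schema.

      import json
      import pandas as pd

      # Stand-in for the downloaded JSON object.
      raw = json.loads('{"value": {"2": 71.5, "0": 88.1, "1": 79.4},'
                       ' "countries": ["Germany", "France", "Italy"]}')

      # Sort the numeric keys so each value lines up with its country again.
      ordered = [raw["value"][k] for k in sorted(raw["value"], key=int)]

      # Dataframe with country names as the index and the statistic as a column.
      df = pd.DataFrame({"internet_access_pct": ordered}, index=raw["countries"])
      print(df)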

  • 08-23 - Death Around the World
    Anna Fent, Anna VanAvermaete
    health

    We made the change from data to information by first identifying what we could do with the data at hand. We looked at the age groups, countries, and country categories and asked what kinds of questions we could answer with this data. We then cleaned the data and turned it into useful dataframes. We then turned these dataframes into charts so that we could further see the data-to-information process, through maps that color-coded countries and graphs that had trend lines.

  • 08-17 - Analysis of Senator's Stock Actions vs Average American's
    Matheus de Andrade Barros, Juan Raul Porras Estripeaut
    financial, political

    The data was collected from the public Senate Stock Tracker website. We then sifted through the data, separating it into various lists that tracked the amount bought by each senator, what state they're from, and the amounts bought for each stock. After this, the data was compared to private stock market data, and several visualizations were made (choropleths, pie and bar charts) in order to compare the stock actions of senators to those of the average American.

  • 08-09 - Buying a house
    Ella Gerczak, Allie Bowen
    financial

    Most of our data was originally in CSV files. We used a function called readlines(), which allowed us to make a list of the lines in the file. From there, we made a list of every term/piece of data in each line (usually split by commas or spaces) and made sub-lists. Data was pulled from the sub-lists and put into DataFrames. From there, the data was loaded into graphs using Python-supported imports like Plotly.
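
    A sketch of the readlines()-and-split approach described above; a small in-memory CSV stands in for the real files, and the column names are hypothetical.

      import io
      import pandas as pd

      csv_text = "price,bedrooms,year\n250000,3,2001\n410000,4,2015\n"
      lines = io.StringIO(csv_text).readlines()

      header = lines[0].strip().split(",")                     # column names
      rows = [line.strip().split(",") for line in lines[1:]]   # sub-list per line

      df = pd.DataFrame(rows, columns=header)
      print(df)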

  • 08-07 - Rhetorical Strategies in Politicians' Twitter
    Andrew Clements, Jack McGarrity
    social, political

    First, we needed to find the data. We searched the internet for open data sources containing tweet information for Trump, AOC, and Obama. Once we found all of our sources, we downloaded the data and then converted it into lists. The lists were then scrubbed so that only the necessary information was included. For example, our data source included replies to AOC's tweets rather than her actual tweets, so we excluded those during the scrubbing process. Then, we loaded the organized lists into data frames that could easily be understood. Each person (Trump, AOC, and Obama) had an individual data frame.

  • 08-06 - Bills or Bills: Analyzing the Stock Trading of Politicians
    Peter Hofley, Zach Brown
    financial, political

    For our project, we collected data from House Stock Watcher's CSV file containing transactions made on the stock market by US politicians over the past 3 years. To clean and process the data, we used Python to read the file data and create lists of each column that we needed while ensuring that any mistakes in the file were properly skipped. Then, the transactions were put into a more efficiently organized file type before adding the ticker prices for each transaction and other informative columns. Next, we created multiple tables of data with columns necessary to plot various visualizations. Informative statistics were calculated by iterating through the data tables and summarizing the results. These statistics could be plotted in multiple visualizations, making a complex dataset understandable and interesting to the public.

  • 08-04 - Speech Rhetoric Project
    Chad Brown, Blake Caven
    social

    We began by taking the speech compilations and turning them into pure text files without any headers or page numbers. We then read these text files and used them to track the number of occurrences of each word in the speeches, which gave us the word frequency for each speaker. We used a public API to find overall word frequency, and then used graphs to compare each speaker's word frequency to the overall word frequency.
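
    A minimal sketch of the word-frequency step, with a one-line hypothetical speech standing in for the cleaned text files.

      from collections import Counter

      # Hypothetical cleaned speech text (headers and page numbers already stripped).
      speech = "we will build and we will grow and we will lead"

      # Count occurrences of each word, then convert to relative frequency.
      counts = Counter(speech.lower().split())
      total = sum(counts.values())
      frequency = {word: n / total for word, n in counts.items()}

      print(frequency["we"])   # 3 of the 11 words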

  • 08-18 - Analyzing NYC Yellow Taxi Trends
    Zach Vincent, Kabir Madan
    financial, transportation

    The richness of the raw data that we were working with made for some technical challenges; we had to be creative about how we could parse on the order of millions of data points while keeping runtimes down and not exceeding our computers' limits. The benefit of having so many data points, however, was that we were able to generate graphs that show fairly clear trends and tell convincing stories about matters such as the impact of the COVID-19 pandemic on taxi usage in New York City. As part of generating these graphs, we often had to manipulate the data so that it could be formed into meaningful data tables (known as dataframes) that could be graphed to convey a specific message. This meant being strategic about our use of the given data (e.g., timestamps) to extract the data that we desired (e.g., day of the week).
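
    A small sketch of the timestamp-to-day-of-week step described above, on a two-row stand-in for the trip data; the column names are hypothetical.

      import pandas as pd

      trips = pd.DataFrame({
          "pickup_datetime": ["2020-03-01 08:15:00", "2020-03-07 23:40:00"],
          "fare_amount": [12.5, 27.0],
      })

      # Derive day-of-week from the raw timestamps, then aggregate.
      trips["pickup_datetime"] = pd.to_datetime(trips["pickup_datetime"])
      trips["day_of_week"] = trips["pickup_datetime"].dt.day_name()

      print(trips.groupby("day_of_week")["fare_amount"].mean())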

  • 08-10 - Orange County School District Analysis
    Eriberto Gonzales, Michelle Leon-Cigarroa
    education

    In analyzing the factors that play into a quality education, there are many sources of information. We set out to look for all the major factors that play into having good educational opportunities and started looking for data to gather. We found a couple of open data portals and were able to extract crucial data such as median income, average daily attendance, work status, and average ACT scores for each district in Orange County. After we parsed and cleaned the data, we looked for relationships within it to create different visualizations. We both wanted to bring awareness to the districts that were not up to par with others and show that there is still a gap in educational opportunities.

  • 08-21 - Global Terrorism Tracker
    William Erdman, James Srail
    international, political, crime

    The massive Global Terrorism Database, which logs over 180,000 distinct terrorist attacks and 135 categories of information, had to be pared down into a usable format. This was done by focusing on much smaller groups of related columns about these many attacks that could collectively be processed into an individual visualization. In this way, rather than navigate a monstrous table of technical data, the user can easily choose to see just the economic impact of terrorism on a country in a given year, the popularity of different kinds of terror attacks over time, the general trends of deaths or number of attacks over time, or focus on the United States' fight against terror.

  • 06-18 - Socioeconomic Analysis of Indiana's Schoolchildren
    Sam Martin, Danny Mikolay
    social, political, education

    Our data was taken from online publications produced by the National Center for Educational Statistics and the Indiana DOE. All sets were tabular in format, so we downloaded them and cleaned the data so that it would be usable. We deleted data that we were not going to use for our visualizations, as well as incomplete data. We then used a short program to combine these separate data sources into a singular collection, with all of the data corresponding to a certain district in the same place. With this collection, we were able to use plotting software to create visualizations and place those visualizations within a website.

  • 06-19 - Arbitrage Sports Betting
    Andrew Myers, Nicholas Salem
    sports, entertainment

    Our group wanted to use this project not only to learn about data visualization in Python, but also to learn how we can gather data in creative ways. We landed on making a bot that scrapes a day's worth of data across several online sports betting books and uses that data to visualize information that can lead to monetary gain for the user. The bot works by getting the URL of the website and sorting through that website's internal HTML code. This was a strenuous process for our group, as we had to learn the many nuances of creating such a bot, a topic we had not covered in class. The data collection journey was extremely formative for our coding knowledge as a whole, as we were able to work with numerous languages and libraries to get the dataset.
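
    A hedged sketch of the scraping step: fetch a page and walk its HTML. The URL and the CSS class are placeholders, not a real sportsbook's markup, and the real bot was considerably more involved.

      import requests
      from bs4 import BeautifulSoup

      url = "https://example.com/todays-odds"          # placeholder URL
      html = requests.get(url, timeout=10).text
      soup = BeautifulSoup(html, "html.parser")

      # Pull the text out of every element carrying the (placeholder) odds class.
      odds = [cell.get_text(strip=True) for cell in soup.find_all("span", class_="odds")]
      print(odds)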

  • 08-20 - Analyzing the Gender Gap in CS
    Lily Geisen, Steven Conaway
    gender, computer science

    We imported data from the StateOfJS 2021 survey into our project. Users completed a section on demographics, so we accessed this data. We added each user’s demographics section (removing all the other portions of the survey) to a list. When there was a range of values, like an age range, we picked the average; for example, the 25 to 34 age range became 30 years old. Then we filtered by each variable we wanted to plot, and plotted!
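
    A tiny sketch of the bracket-to-midpoint rule ("25 to 34 becomes 30"); the bracket labels and respondent records here are hypothetical, not the actual StateOfJS field names.

      # Hypothetical mapping from survey age brackets to midpoints.
      AGE_MIDPOINTS = {"range_20_24": 22, "range_25_34": 30,
                       "range_35_44": 40, "range_45_54": 50}

      respondents = [{"gender": "woman", "age": "range_25_34"},
                     {"gender": "man", "age": "range_35_44"}]

      # Replace each bracket with its numeric midpoint before filtering and plotting.
      for person in respondents:
          person["age"] = AGE_MIDPOINTS[person["age"]]

      print(respondents)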

  • 08-22 - Kanye West Music Analysis
    Anar Bayanzul, Daniel Yu
    entertainment, Music

    We turned the Wikipedia HTML tables into JSON data and used it almost directly, and for the lyrics we counted the number of words and the frequency of each word.
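
    The write-up describes converting Wikipedia's HTML tables to JSON; one pandas-based alternative for that table step is read_html, sketched here with an illustrative URL (it assumes lxml or html5lib and a network connection), plus a toy word count for the lyrics side.

      import pandas as pd
      from collections import Counter

      # Pull every HTML table on the page into a list of dataframes.
      tables = pd.read_html("https://en.wikipedia.org/wiki/Kanye_West_discography")
      print(tables[0].head())

      # Word count and per-word frequency for a (hypothetical) lyric string.
      lyrics = "I am a god I am a god"
      words = lyrics.lower().split()
      print(len(words), Counter(words))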

  • 08-19 - NCAA Division 1 Men's Basketball Statistics vs. Team Success in NBA Draft
    Emily Colleran, Anna Biramonte
    sports

    We organized, calculated, and analyzed statistics for over 350 Division I men’s college basketball teams as well as data from drafts of the National Basketball Association for the 2013 to 2018 seasons in order to determine whether or not statistically “good” college basketball teams are successful in the draft for some given year; i.e., is it possible that, say, “blue blood” teams (Kentucky, UCLA, UNC, Duke, Indiana, etc.) succeed in the draft despite being statistically outperformed by other teams on the season? (And if we are looking to investigate further, are certain statistics perhaps better indicators of draft success for teams than others? Additionally, which teams have performed the best in terms of specific statistics over the past several years?) Analyses like these of NCAA Division I men’s basketball statistics and NBA draft data are most likely of great help to high school recruits when deciding which school to play for, NBA teams and coaches looking to assess the benefits of drafting players from certain teams after a given season, analysts of the sport at both the collegiate and professional levels, and those of us who simply are interested in basketball. For the purposes of this project, we calculated and then used a given team's adjusted efficiency margin (AdjEM) to compare performance with other teams. (AdjEM is the difference between adjusted offensive efficiency [AdjO] and adjusted defensive efficiency [AdjD], with AdjO being an estimate of the number of points the given team scores against the average DI defense over 100 possessions and AdjD being an estimate of the number of points the given team allows to the average DI offense over 100 possessions.) In order to "calculate" teams' "performance" in the draft, we used data from our NBA CSV to calculate what we call "magnitude of success": we used the CSV to total the number of picks teams had in a given year as well as their average pick number, and divided the total number by the average number to find this magnitude. The greater this fraction, the more successful we determined a team to be in a given draft.
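
    A worked example of the two measures defined above, with made-up numbers.

      # Adjusted efficiency margin: offense minus defense (points per 100 possessions).
      adj_o, adj_d = 118.2, 95.7
      adj_em = adj_o - adj_d                             # 22.5

      # "Magnitude of success" in a draft year: total picks / average pick number.
      pick_numbers = [3, 14, 27]                         # hypothetical picks for one school
      total_picks = len(pick_numbers)                    # 3
      average_pick = sum(pick_numbers) / total_picks     # ~14.67
      magnitude_of_success = total_picks / average_pick  # ~0.20

      print(round(adj_em, 1), round(magnitude_of_success, 2))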