Your username and password are your CNET ID and CNET password. This will create a new, empty folder titled cmsc13600-submit. There is similarly a course repository where all of the homework materials will be stored. You should clone this repository as well:
In this assignment, you will extract meaningful information from unstructured data.
Due Date: *Friday April 9, 2021 11:59 pm*
## Initial Setup
These initial setup instructions assume you've done ``hw0``. Before you start an assignment you should sync your cloned repository with the online one:
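```
$ cd cmsc13600-materials
$ git pull
```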
It is up to you to read the documentation on the Python `xml` module if you are confused. You will write a function:
```
def _reddit_extract(file):
```
That returns a Pandas DataFrame with three columns (*title*, *link*, *updated*). On `reddit.xml` your output should be a 25 row, 3 column Pandas DataFrame.
Hint: if you are getting 26 rows, you are probably extracting the first dummy header row as well--this can be safely skipped.
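For illustration, here is a minimal sketch of `_reddit_extract`, assuming `reddit.xml` is a standard Atom feed (the namespace and tag layout below are assumptions based on that format, not a required design):
```
import xml.etree.ElementTree as ET
import pandas as pd

ATOM = '{http://www.w3.org/2005/Atom}'  # Atom namespace (assumed)

def _reddit_extract(file):
    rows = []
    for entry in ET.parse(file).getroot().iter(ATOM + 'entry'):
        rows.append((entry.find(ATOM + 'title').text,
                     entry.find(ATOM + 'link').attrib['href'],
                     entry.find(ATOM + 'updated').text))
    return pd.DataFrame(rows, columns=['title', 'link', 'updated'])
```
Iterating only over `<entry>` elements is one way to avoid picking up the dummy header row mentioned above.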
### TODO 2. Extract Ticker Symbols
Each title of a Reddit post might mention a stock of interest, and most use a consistent format to denote a ticker symbol (starting with a dollar sign). For example: "$ISWH Takes Center Stage at Crypto Conference". You will now write a function called `extract_ticker` which, given a single title, extracts all of the ticker symbols present in the title:
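A minimal sketch, assuming a ticker symbol is a dollar sign followed by one or more capital letters (the exact pattern is an assumption):
```
import re

def extract_ticker(title):
    # find every '$' followed by one or more capital letters
    return re.findall(r'\$[A-Z]+', title)

# extract_ticker("$ISWH Takes Center Stage at Crypto Conference") -> ['$ISWH']
```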
This homework assignment is an introduction to Python programming and to some basic concepts of encoding and decoding.
Due Date: *Friday April 17, 2020 11:59 pm*
## Initial Setup
These initial setup instructions assume you've done ``hw0``. Before you start an assignment you should sync your cloned repository with the online one:
```
$ cd cmsc13600-materials
$ git pull
```
Copy the folder ``hw1`` to your newly cloned submission repository. Enter that repository from the command line and enter the copied ``hw1`` folder. In this homework assignment, you will only modify ``encoding.py``. Once you are done, you must add ``encoding.py`` to git:
```
$ git add encoding.py
```
After adding your files, to submit your code you must run:
```
$ git push
```
We will NOT grade any code that is not added, committed, and pushed to your submission repository. You can confirm your submission by visiting the [web interface](https://mit.cs.uchicago.edu/cmsc13600-spr-20/skr).
## Delta Encoding
Delta encoding is a way of storing or transmitting data in the form of differences (deltas) between sequential data rather than complete files. In this assignment, you will implement a delta encoding module in Python. The module will:
* Load a file of integers
* Delta encode them
* Write back a file in binary form
The instructions in this assignment are purposefully incomplete: you are expected to read Python's API documentation to understand how the different functions work. All of the necessary parts that you need to write are marked with *TODO*.
## TODO 1. Loading the data file
In `encoding.py`, your first task is to write `load_orig_file`. This function reads from a specified filename and returns a list of the integers in the file. You may assume the file is formatted like ``data.txt`` provided with the code, where each line contains a single integer. The input of this function is a filename and the output is a list of numbers. If the file does not exist, you must raise an exception.
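A minimal sketch, assuming that letting `open` raise its built-in `FileNotFoundError` satisfies the exception requirement:
```
def load_orig_file(filename):
    # open() raises FileNotFoundError when the file does not exist
    with open(filename) as f:
        return [int(line) for line in f if line.strip()]
```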
## TODO 2. Delta Encoding
In `encoding.py`, your next task is to write `delta_encoding`. This function takes a list of numbers and computes the delta encoding. The delta encoding encodes the list in terms of successive differences from the previous element. The first element is kept as is in the encoding.
For example:
```
> data = [1,3,4,3]
> enc = delta_encoding(data)
1,2,1,-1
```
Or,
```
> data = [1,0,6,1]
> enc = delta_encoding(data)
1,-1,6,-5
```
Your job is to write a function that computes this encoding. Pay close attention to how Python passes around references and where you make copies of lists vs. modify a list in place.
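A minimal sketch that builds a new list rather than mutating its argument:
```
def delta_encoding(numbers):
    # keep the first element, then store successive differences
    encoded = [numbers[0]]
    for prev, curr in zip(numbers, numbers[1:]):
        encoded.append(curr - prev)
    return encoded
```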
## TODO 3. Integer Shifting
When we write this data to a file, we will want to represent each encoded value as a single unsigned byte. To do so, we have to "shift" all of the values upwards so there are no negatives. You will write a function `shift` that adds a pre-specified offset to each value.
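A one-line sketch, assuming the offset is chosen elsewhere so every shifted value lands in the 0-255 range of a byte:
```
def shift(numbers, offset):
    # add a fixed offset to every value so none are negative
    return [n + offset for n in numbers]
```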
## TODO 4. Write Encoding
Now, we are ready to write the encoded data to disk. In the function `write_encoding`, you will do the following steps:
* Open the specified filename in the function arguments for writing
* Convert the encoded list of numbers into a bytearray
* Write the bytearray to the file
* Close the file
Reading from such a file is a little tricky, so we've provided that function for you.
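A sketch of those four steps, assuming every value has already been shifted into the 0-255 range:
```
def write_encoding(encoded, filename):
    with open(filename, 'wb') as f:    # open for writing in binary mode
        f.write(bytearray(encoded))    # each value must fit in one byte
    # the with-block closes the file automatically
```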
## TODO 5. Delta Decoding
Finally, you will write a function that takes a delta encoded list and recovers the original data. This should do the opposite of what you did before. Don't forget to unshift the data when you are testing!
For example:
```
> enc = [1,2,1,-1]
> data = delta_decoding(enc)
1,3,4,3
```
Or,
```
> enc = [1,-1,6,-5]
> data = delta_decoding(enc)
1,0,6,1
```
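A minimal sketch that inverts the encoding with a running sum (unshifting, where needed, is left to the caller):
```
def delta_decoding(encoded):
    decoded = []
    total = 0
    for delta in encoded:
        total += delta           # running sum recovers each original value
        decoded.append(total)
    return decoded
```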
## Testing
We've provided a sample dataset ``data.txt`` which can be used to test your code, as well as an autograder script `autograder.py` which runs a bunch of interesting tests. The autograder is not comprehensive but it is a good start. It's up to you to figure out what the tests do and why they work.
Entity Resolution is the task of disambiguating manifestations of real world entities in various records or mentions by linking and grouping. For example, there could be different ways of addressing the same person in text, different addresses for businesses, or photos of a particular object. In this assignment, you will link two product catalogs.
Due Date: *Friday April 23, 2020 11:59 PM*
## Getting Started
First, pull the most recent changes from the cmsc13600-public repository:
```
$ cd cmsc13600-materials
$ git pull
```
Then, copy the `hw2` folder to your submission repository. Change directories to enter your submission repository. Your code will go into `analyze.py`. You can add the files to the repository using `git add`:
```
$ git add analyze.py
$ git commit -m'initialized homework'
```
You will also need to fetch the datasets used in this homework assignment. Download each of the files and put them into your `hw2` folder.
Before we can get started, let us understand the main APIs in this project. We have provided a file named `core.py` for you. This file loads and processes the data that you've just downloaded. For example, you can load the Amazon catalog with the `amazon_catalog()` function. This returns an iterator over data tuples in the Amazon catalog. The fields are id, title, description, mfg (manufacturer), and price if any:
```
>>> for a in amazon_catalog():
...     print(a)
...     break
```
You can similarly do the same for the Google catalog:
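By analogy, assuming `core.py` also exposes a `google_catalog()` function (our reading of "similarly"):
```
>>> for g in google_catalog():
...     print(g)
...     break
```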
A matching is a pairing between ids in the Google catalog and the Amazon catalog that refer to the same product. The ground truth is listed in the file `Amzon_GoogleProducts_perfectMapping.csv`. Your job is to construct a list of pairs (or an iterator of pairs) of `(amazon.id, google.id)`. These matchings can be evaluated for accuracy using the `eval_matching` function: false positive refers to the false positive rate, false negative refers to the false negative rate, and accuracy refers to the overall accuracy.
## Assignment
Your job is to write the `match` function in `analyze.py`. You can run your code by running:
```
python3 auto_grader.py
```
Running the code will print out a result report as follows (accuracy, precision, and recall):
*For full credit, you must write a program that achieves at least 50% accuracy in less than 5 mins on a standard laptop.*
The project is completely unstructured and it is up to you to figure out how to make this happen. Here are some hints; a sketch follows the list:
* The amazon product database is redundant (multiple same products); the google database is essentially unique.
* Jaccard similarity will be useful, but you may have to consider "n-grams" of words (look at the lecture notes!) and "cleaning" up the strings to strip formatting and punctuation.
* Price and manufacturer will also be important attributes to use.
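As a starting point, here is a rough sketch of one possible `match` built on the hints above; the import path, the attribute access on catalog records, the title-only Jaccard comparison, and the 0.5 threshold are all illustrative assumptions, not the required design:
```
import re

# catalog iterators provided by the course (assumed import path)
from core import amazon_catalog, google_catalog

def _tokens(text):
    # lowercase, strip punctuation, and split into a set of words
    return set(re.sub(r'[^a-z0-9 ]', ' ', (text or '').lower()).split())

def match():
    google = [(g.id, _tokens(g.title)) for g in google_catalog()]
    matches = []
    for a in amazon_catalog():
        a_toks = _tokens(a.title)
        best_id, best_sim = None, 0.0
        for g_id, g_toks in google:
            union = a_toks | g_toks
            sim = len(a_toks & g_toks) / len(union) if union else 0.0
            if sim > best_sim:
                best_id, best_sim = g_id, sim
        if best_sim >= 0.5:
            matches.append((a.id, best_id))
    return matches
```
Note that this nested loop is quadratic in the catalog sizes; to stay within the five-minute budget you may need a blocking step (for example, only comparing records that share a rare token), and folding in price and manufacturer should improve accuracy.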
## Submission
After you finish the assignment you can submit your code with:
```
$ git push
```
This homework assignment introduces an advanced use of hashing called a Bloom filter.
Due Date: *5/14/20 11:59 PM*
## Initial Setup
These initial setup instructions assume you've done ``hw0``. Before you start an assignment you should sync your cloned repository with the online one:
```
$ cd cmsc13600-materials
$ git pull
```
Copy the folder ``hw4`` to your newly cloned submission repository. Enter that repository from the command line and enter the copied ``hw4`` folder. In this homework assignment, you will only modify ``bloom.py``. Once you are done, you must add ``bloom.py`` to git:
```
$ git add bloom.py
```
After adding your files, to submit your code you must run:
```
$ git push
```
We will NOT grade any code that is not added, committed, and pushed to your submission repository. You can confirm your submission by visiting the [web interface](https://mit.cs.uchicago.edu/cmsc13600-spr-20/skr).
## Bloom filter
A Bloom filter is a space-efficient probabilistic data structure, conceived by Burton Howard Bloom in 1970, that is used to test whether an element is a member of a set. False positive matches are possible, but false negatives are not; in other words, a query returns either "possibly in set" or "definitely not in set." Elements can be added to the set, but not removed (though this can be addressed with the counting Bloom filter variant); the more items added, the larger the probability of false positives. All of the necessary parts that you need to write are marked with *TODO*.
Here's how the basic Bloom filter works:
### Initialization
* An empty Bloom filter is initialized with an array of *m* elements each with value 0.
### Adding An Item e
* For each hash function calculate the hash value of the item "e" (should be a number from 0 to m).
* Treat those calculated hash values as indices for the array and set each corresponding index in the array to 1 (if it is already 1 from a previous addition keep it as is).
### Contains An Item e
* For each hash function calculate the hash value of the item "e" (should be a number from 0 to m).
* Treat those calculated hash values as indices for the array and retrieve the array value for each corresponding index. If any of the values is 0, we know that "e" could not have possibly been inserted in the past.
## TODO 1. Generate K independent Hash Functions
Your first task is to write the function `generate_hashes`. This function is a higher-order function that returns a list of *k* random hash functions, each with a range from 0 to *m*. Here are some hints that will help you write this function; a sketch follows the list.
* Step 1. Review the "linear" hash function described in lecture and write a helper function that generates such a hash function for a pre-defined A and B. How would you restrict the range of this hash function to be within 0 to m?
* Step 2. Generate k of such functions with different random settings of A and B. Pay close attention to how many times you call "random.x" because of how the seeded random variable works.
* Step 3. Return the functions themselves so they can be applied to data. Look at the autograder to understand what inputs these functions should take.
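A minimal sketch, assuming the "linear" hash from lecture has the form h(x) = ((A*x + B) mod P) mod m for a fixed large prime P; the prime below and the use of Python's built-in `hash()` to turn a string into an integer are assumptions for illustration:
```
import random

P = 2**31 - 1  # a large prime (an assumed choice)

def generate_hashes(k, m):
    hashes = []
    for _ in range(k):
        a = random.randint(1, P - 1)
        b = random.randint(0, P - 1)
        # default arguments bind this iteration's a and b to the closure
        def h(x, a=a, b=b):
            return ((a * hash(x) + b) % P) % m
        hashes.append(h)
    return hashes
```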
## TODO 2. Put
Write a function that uses the algorithm listed above to add a string to the bloom filter. In pseudo-code:
* For each of the k hash functions:
  * Compute the hash code of the string, and store the code in i
  * Set the ith element of the array to 1
## TODO 3. Get
Write a function that uses the algorithm listed above to test whether the bloom filter possibly contains the string. In pseudo-code:
* For each of the k hash functions:
  * Compute the hash code of the string, and store the code in i
  * If the ith element is 0, return false
* If all code-indices are 1, return true
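A minimal sketch of `put` and `get` built on `generate_hashes`; representing the filter as a plain Python list and wrapping the state in a class are illustrative assumptions, since the required structure of `bloom.py` is defined by the autograder:
```
class BloomFilter:
    def __init__(self, m, k):
        self.array = [0] * m                   # m zeros
        self.hash_fns = generate_hashes(k, m)

    def put(self, s):
        for h in self.hash_fns:
            self.array[h(s)] = 1               # set every hashed index

    def get(self, s):
        # "possibly in set" only if every hashed index is 1
        return all(self.array[h(s)] == 1 for h in self.hash_fns)
```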
## Testing
We've provided an autograder script `autograder.py` which runs a bunch of interesting tests. The autograder is not comprehensive but it is a good start. It's up to you to figure out what the tests do and why they work.