Last year, Netflix published 10 million movie rankings by 500,000 customers, as part of a challenge for people to come up with better recommendation systems than the one the company was using. The data was anonymized by removing personal details and replacing names with random numbers, to protect the privacy of the recommenders.

Arvind Narayanan and Vitaly Shmatikov, researchers at the University of Texas at Austin, de-anonymized some of the Netflix data by comparing rankings and timestamps with public information in the Internet Movie Database, or IMDb.

Their research (.pdf) illustrates some inherent security problems with anonymous data, but first it's important to explain what they did and did not do.

They did not reverse the anonymity of the entire Netflix dataset. What they did was reverse the anonymity of the Netflix dataset for those sampled users who also entered some movie rankings, under their own names, in the IMDb. (While IMDb's records are public, crawling the site to get them is against the IMDb's terms of service, so the researchers used a representative few to prove their algorithm.)

The point of the research was to demonstrate how little information is required to de-anonymize information in the Netflix dataset.

On one hand, isn't that sort of obvious? The risks of anonymous databases have been written about before, such as in this 2001 paper published in an IEEE journal. The researchers working with the anonymous Netflix data didn't painstakingly figure out people's identities (as others did with the AOL search database last year); they just compared it with an already identified subset of similar data: a standard data-mining technique.

But as opportunities for this kind of analysis pop up more frequently, lots of anonymous data could end up at risk.

Someone with access to an anonymous dataset of telephone records, for example, might partially de-anonymize it by correlating it with a catalog merchant's telephone order database. Or Amazon's online book reviews could be the key to partially de-anonymizing a public database of credit card purchases, or a larger database of anonymous book reviews.

Google, with its database of users' internet searches, could easily de-anonymize a public database of internet purchases, or zero in on searches of medical terms to de-anonymize a public health database. Merchants who maintain detailed customer and purchase information could use their data to partially de-anonymize any large search engine's data, if it were released in an anonymized form. A data broker holding databases of several companies might be able to de-anonymize most of the records in those databases.

What the University of Texas researchers demonstrate is that this process isn't hard, and doesn't require a lot of data. It turns out that if you eliminate the top 100 movies everyone watches, our movie-watching habits are all pretty individual. This would certainly hold true for our book reading habits, our internet shopping habits, our telephone habits and our web searching habits.

The obvious countermeasures for this are, sadly, inadequate. Netflix could have randomized its dataset by removing a subset of the data, changing the timestamps or adding deliberate errors into the unique ID numbers it used to replace the names. It turns out, though, that this only makes the problem slightly harder. Narayanan's and Shmatikov's de-anonymization algorithm is surprisingly robust, and works with partial data, data that has been perturbed, even data with errors in it.

With only eight movie ratings (of which two may be completely wrong), and dates that may be up to two weeks in error, they can uniquely identify 99 percent of the records in the dataset.
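To make the matching idea concrete, here is a toy sketch of similarity-based record linkage: score each anonymous record against a public, identified one by counting movies whose ratings roughly agree and whose dates fall within about two weeks, then accept the best match only if it clearly beats the runner-up. This is an illustration of the general technique, not the researchers' actual algorithm; the thresholds, function names, and data are all invented for the example.

```python
from datetime import date

# Each record maps movie title -> (rating, date rated).
DATE_SLOP_DAYS = 14   # illustrative tolerance, echoing the two-week figure
RATING_SLOP = 1       # tolerate small errors in the ratings themselves

def similarity(aux, anon):
    """Count movies in the identified (auxiliary) record that plausibly
    match the anonymous record, allowing noisy ratings and dates."""
    score = 0
    for movie, (rating, when) in aux.items():
        if movie not in anon:
            continue
        r2, w2 = anon[movie]
        if abs(rating - r2) <= RATING_SLOP and abs((when - w2).days) <= DATE_SLOP_DAYS:
            score += 1
    return score

def link(aux, anonymized):
    """Return the ID of the best-matching anonymous record, but only if it
    stands out clearly from the second-best candidate."""
    scores = {rid: similarity(aux, rec) for rid, rec in anonymized.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    best, runner_up = ranked[0], ranked[1]
    if scores[best] - scores[runner_up] >= 2:  # illustrative margin
        return best
    return None  # no confident match

# "Anonymized" dataset: names replaced with random ID numbers.
anonymized = {
    1337: {"Brazil": (5, date(2006, 3, 1)), "Heat": (4, date(2006, 3, 9)),
           "Big": (2, date(2006, 4, 2)), "Alien": (5, date(2006, 5, 5))},
    2048: {"Brazil": (1, date(2006, 6, 1)), "Big": (5, date(2006, 6, 2))},
}
# Public, identified ratings (e.g. posted under a real name), with one
# wrong rating and dates that are off by several days.
aux = {"Brazil": (5, date(2006, 3, 6)), "Heat": (4, date(2006, 3, 2)),
       "Alien": (4, date(2006, 5, 12))}

print(link(aux, anonymized))  # -> 1337
```

The margin check matters: a match is only trusted when one candidate is markedly more similar than every other, which is why rare, individual tastes (rather than the top 100 movies everyone watches) do the identifying.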