Pending Bills Aim to Define Student Privacy in Digital Age
July 22, 2015

Student privacy is currently governed by a law that is more than forty years old. The Family Educational Rights and Privacy Act (FERPA), passed in 1974, created a national framework for student privacy. It tells schools what they can and cannot do with student data, which FERPA refers to as “education records.” But while new digital tools have transformed educational systems, student privacy law has largely remained the same. Indeed, according to a Chamber of Commerce report, the percentage of schools connected to the Internet rose from 35% in 1994 to 95% in 1999, and connected schools are using the Internet ever more extensively. This raises a question: what should student privacy look like in the digital age?
The new crop of digital educational tools ranges from rudimentary websites offering online homework help to advanced “adaptive learning” tools that tailor lesson plans and assignments in real time based on individual students’ information, performance, and behavior. Despite their many different forms and levels of sophistication, these tools have one common denominator: they produce a lot of new data about students, and this data doesn’t flow only to the schools.
Instead, with these new tools, student data flows into the hands of third-party companies, some of which are well-established and others of which are startups that may be experimenting with new business models. This is a significant problem under FERPA, because it’s unclear when or how digital data qualifies as a student’s “education record.” Though the Department of Education has issued guidance, as the Center for Democracy and Technology notes, “the Department’s response to most questions regarding the proper evaluation of FERPA … was ‘it depends.’ This is likely because FERPA’s exceptions significantly complicate the analysis.”
That policy gap has naturally sparked a conversation among education privacy advocates, policymakers, and education technology companies about what student privacy should look like in the digital age. The key question in this conversation is how to balance student privacy against the educational benefits, and commercial rewards, that can stem from using or sharing such data more widely. To be sure, data collection and sharing can help tailor lesson plans and assignments to each student’s needs. But increased collection and sharing can also pose risks to the security of student information, and might turn each record of classroom struggles or misbehavior into fodder for targeted ads, future job evaluations, or other potentially sensitive uses. Education privacy advocates are focused on mandating that students and parents be given strong veto options over the use of student data, particularly when it comes to non-educational uses such as targeted advertising, the creation of marketing profiles, or resale of student data to commercial data brokers.
Members of Congress have introduced several bills that aim to resolve these questions, most of which take a fairly similar approach. These bills would prohibit companies from using student data to target ads on their services and also prohibit them from selling information they obtain to commercial data brokers. The Center for Democracy and Technology calls one such bill, the SAFE KIDS Act, “a welcome development” because it “would provide clear guidelines for the use, sharing, and protection of student data.”
In tandem with these legislative efforts, the education technology industry has begun to take self-regulatory steps. For example, a “Student Privacy Pledge,” now signed by more than 100 companies that work on education technology, promises to empower students and parents to choose how their data will be used. But, as The New York Times has reported, not all signatories are living up to the pledge. More importantly, it’s not clear that making all uses of student data optional, allowing a use only if parents or students say yes, would even be a good idea in the first place. Instead, it may make sense to prohibit some uses outright. For example, if students have the option to provide every detail of their school records to prospective employers, those employers might require all job seekers to provide such information, or strongly prefer applicants who do. Under those circumstances, giving a student more choice would ultimately reduce her privacy.
Prohibiting these companies from targeting ads or selling student data is certainly a laudable legislative goal, but there are other important policy concerns that these bills don’t address. As danah boyd writes, the current proposals may turn out to be myopic, focused on the wrong risks:
the risks that we’re concerned about are shaped by the fears of privileged parents, not the risks of those who are already under constant surveillance, those who are economically disadvantaged, and those who are in the school-prison pipeline.
Legislation more focused on those concerns, she argues, might restrict other data uses. For example, the risk that student data might “be used to fuel the student debt ecosystem,” or the possibility that student performance data could help the police build new risk assessment models, are concerns that go unaddressed in the bills.
The conversation regarding student privacy in the digital age is an important one and the pending bills represent a good first step. But as boyd argues, conversations about “student privacy” are “really about who has the right to monitor which youth,” an issue that goes beyond advertising or selling data to a commercial data broker. A broader debate, leading to more fully considered legislation, could benefit generations of students.