Inspiration

Over the last decade, computer science programs around the world have had to confront a question: are we training ethically competent computer scientists? The need for ethics training in computer science curricula is, of course, unquestionable. Seemingly every day, there are new controversies caused by technologies, technology companies, and technologists. But teaching ethics is necessarily difficult, especially in a field that has only recently begun to grapple with its impact on society. At the University of Illinois Urbana-Champaign (UIUC), ethics is currently delivered through a standalone course, CS210: Ethics & Professional Issues, which is required for all engineering majors. The course begins with an introduction to ethical philosophy and logical argumentation, then proceeds to topics such as privacy, professional ethics, and intellectual property. But what became clear to Ryan Cunningham, the professor for CS210 (and our mentor), was that students felt a disconnect between the highly technical content of their core CS classes and the non-technical structure of their lone ethics course. As undergraduates, we felt strongly that programming-based assignments are a powerful way to learn about ethical dilemmas in computing. And so, we set out to introduce a new programming assignment into CS210, focused on the ethics of content moderation.

What it does

Ethical Moderation is a programming project that forces students to grapple with the ethics of content moderation. The project was built for students, by students, and it is designed to be open source: we want to be part of a larger wave of revamped ethics education in computer science.

How we built it

  • Jekyll - Framework for public-facing website
  • Jupyter - Software to build the Ethical Moderation Lab
  • Python - Language for creating the naive algorithm

Challenges we ran into

One challenge we ran into was how to properly introduce the topic of human content moderation into our lab. Human content moderation poses a serious ethical dilemma, and we wanted to make sure we didn't trivialize the psychological toll that human content moderators endure, while at the same time not making the topic too heavy for students. We found a middle ground by discussing when it is appropriate to use human moderation, while also explaining why it is important to keep human review to a minimum when possible.

Accomplishments that we're proud of

We are most proud of how unique and extensible our project truly is. Ethical Moderation is a lab that can be used in any university's ethics course. We are proud that we created a lab that helps students understand the delicate issues around content moderation. We built an entirely open-source lab and a public-facing website; we created an illustrative yet practical example with real social media data; and we designed thoughtful discussion questions that computer science students everywhere ought to grapple with. All in all, we had an immensely rewarding weekend.

What we learned

In a technical sense, we came to understand the ins and outs of frameworks such as Jekyll, as well as bag-of-words algorithms like the Naive Bayes classifier. We also learned more about the importance of unbiased training data. In a more ethical, non-technical sense, we got to see that the ethics behind content moderation are not as black and white as we once thought.
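To illustrate the kind of bag-of-words approach the lab explores, here is a minimal Naive Bayes sketch in pure Python. The toy posts, the `remove`/`keep` labels, and the function names are hypothetical stand-ins, not the lab's actual dataset or code:

```python
import math
from collections import Counter

# Hypothetical toy data; the real lab uses actual social media posts.
TRAIN = [
    ("you are an idiot", "remove"),
    ("i hate you so much", "remove"),
    ("what a terrible awful person", "remove"),
    ("have a great day", "keep"),
    ("i love this photo", "keep"),
    ("great game last night", "keep"),
]

def train(examples):
    """Build the bag-of-words model: per-label word counts and label counts."""
    counts = {"remove": Counter(), "keep": Counter()}
    labels = Counter()
    for text, label in examples:
        labels[label] += 1
        counts[label].update(text.split())
    return counts, labels

def classify(text, counts, labels):
    """Return the label with the highest log-probability for the text."""
    vocab = {word for c in counts.values() for word in c}
    best, best_score = None, float("-inf")
    for label in labels:
        # Log prior: how common this label is overall.
        score = math.log(labels[label] / sum(labels.values()))
        total = sum(counts[label].values())
        for word in text.split():
            # Add-one (Laplace) smoothing avoids zero probabilities
            # for words never seen under this label.
            score += math.log((counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

counts, labels = train(TRAIN)
print(classify("you are terrible", counts, labels))  # → remove
print(classify("great day", counts, labels))         # → keep
```

Even this tiny example shows why unbiased training data matters: the classifier can only echo the labeled examples it was given, so skewed training posts produce skewed moderation decisions.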

What's next for Ethical Moderation

One of the best parts of Ethical Moderation is its scope. This is not a one-and-done project. With the help of our mentor, Ryan Cunningham, we aim to roll out Ethical Moderation in CS210 this fall semester. We also hope to set a precedent for what robust ethics education could look like. We will keep this project open source, and we encourage universities to adopt or adapt it to fit their own ethics courses.
