Artificial Intelligence for monitoring behavior is making its way onto the work floor and into the classroom. While monitoring employees and students has long been practiced (e.g. with timesheets and attendance lists), the minute-to-minute observations of AI technology are heavily contested for their questionable impact on people’s privacy and psychological wellbeing. Is this simply a more extensive way of monitoring behavior, or can it be characterized as a fundamental game changer?
Ever since Napoleon, our society has grown more and more accustomed to the collection of personal data by governments and, later on, corporations and educational institutions, albeit within the confines of privacy laws. The first and simplest kind of employee monitoring emerged in the late 19th century, with the invention of the timesheet in 1888 by a clock jeweler, whose company later merged with other time-equipment companies to form IBM. Analyses of the psychological effects of systems like timesheets or productivity track boards have shown that, while workers sometimes experience more pressure and stress, a workplace atmosphere based on objective verification is often also perceived as fairer. In general, these monitoring methods have been accepted by employees and students.

Today, the rise of highly advanced AI monitoring technologies, such as facial recognition software in schools, is causing controversy because of their chilling effects on people’s freedom of expression, creativity, trust and, ultimately, productivity. One of the main differences with traditional methods is that AI technology can collect data at a continuous minute-to-minute rate. In the Isaak system, for example, touching one’s computer is categorized as working, while not touching it for five minutes when logged in is interpreted as not working. Furthermore, these systems often lack any human interaction, because the algorithms determine whether someone is following the rules and requirements. This has been exemplified by Ibrahim Diallo, who was fired from his job by a machine because of a broken key card, or transgender Uber drivers who are kicked off the app because of their changing physical appearance. In these types of cases, important decisions are made automatically by the firm’s algorithm without any human involvement.
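The kind of binary activity rule attributed to systems like Isaak above can be made concrete with a minimal sketch. This is not Isaak’s actual code; the function name, the five-minute threshold as a hard cutoff, and the status labels are illustrative assumptions based only on the description in the text.

```python
from datetime import datetime, timedelta

# Hypothetical idle threshold, taken from the five-minute rule described above.
IDLE_THRESHOLD = timedelta(minutes=5)

def classify_activity(last_input: datetime, now: datetime, logged_in: bool) -> str:
    """Label a user as 'working' or 'not working' based solely on how long
    ago their last keyboard/mouse input occurred -- an illustrative sketch,
    not the vendor's real logic."""
    if not logged_in:
        return "logged out"
    if now - last_input < IDLE_THRESHOLD:
        return "working"
    # Five idle minutes while logged in count as "not working", regardless
    # of what the person is actually doing (reading, thinking, meeting...).
    return "not working"
```

Even this toy version makes the essay’s point visible: the rule reduces all workplace activity to what a timestamp can measure, with no channel for human context.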
What is more, whereas employers used to collect basic data such as attendance or sales figures per worker, they now have the ability to look into every minor activity or highly personal information (e.g. biometric information) and even monitor a worker’s emotions. Such data is not solely collected to monitor workers, but also to influence them through, for example, gamification. Finally, the collected data is so detailed that it can be of value to parties beyond the work floor or classroom, which creates a vulnerable position for those who are monitored but have no control over their personal data.

Monitoring behavior with AI technology can still simply be characterized as a more advanced, but not fundamentally different, method for governments, companies or educational institutions to collect personal data for organizational purposes, as they have done for ages. However, the position of those being monitored might undergo a more fundamental change. This is caused not only by the decline of human interference or the collection of very detailed personal data, but also by automation bias: the tendency of people (in this case employers, teachers or officials) to believe in the validity of recommendations made by algorithms over human testimony (e.g. an employee who claims he was not skipping work but taking time to get inspiration for a work-related matter). An employee’s testimony might pale in comparison to conclusions drawn by an algorithm, with its superior calculation power and lack of human subjectivity. A lack of (verbal) skills, context or time to evaluate whether the computed conclusion constitutes the right “verdict” can be a problem for both the employee and the employer. Those being monitored might feel powerless against the reduction of their actual activities to activities that can be measured by an algorithm, and employers might not be able to justify a conclusion about their employees that differs from the one given by an algorithm.