A few days ago, the ACM U.S. Public Policy Council (USACM) released a statement and a list of seven principles aimed at addressing the potentially harmful bias of algorithmic solutions. This effort was initiated by the USACM’s Algorithmic Accountability Working Group. Algorithmic solutions are now widely deployed to make decisions that affect our lives: recommendations for movies, targeted ads on the web, autonomous vehicles, suggested contacts or reading in social networks, and so on. We have all come across systems making decisions targeted at us individually, and I am sure that many of us have wondered how a given recommendation was made, on the basis of which information and what kind of profile. Typically, no explanation is made available to us! Nor is there any means to track the origin of such decisions!
Interestingly, emerging regulatory frameworks, such as the EU General Data Protection Regulation, are introducing a “right to explanation” (see https://arxiv.org/abs/1606.08813), in particular in relation to Article 22 on automated individual decision-making, including profiling. So the regulatory framework is evolving, even though there is still no consensus on how to achieve this in practice.
Furthermore, algorithmic bias is a phenomenon that has been observed in various contexts (see, for instance, two recent articles in the New York Times and The Guardian). Given the pervasive nature of these systems, the ACM U.S. Public Policy Council acknowledges that it is imperative to address “challenges associated with the design and technical aspects of algorithms and preventing bias from the onset”. On this basis, it proposes seven principles, compatible with its code of ethics.
As a provenance researcher, I have always regarded the need to log flows of information and activities, and to ascribe responsibility for them, as crucial steps towards making systems accountable. This view was echoed by Danny Weitzner and his team in their seminal paper on Information Accountability. I was therefore delighted to see that “Data Provenance” was listed as an explicit principle in the USACM list of seven principles. So, instead of paraphrasing the principles, I take the liberty of copying them below.

Figure 1: ACM US Public Policy Council list of seven principles for Algorithmic Transparency and Accountability
However, I feel that provenance, as I understand it, encompasses several of these principles, something that I propose to investigate in the rest of this post. To illustrate this, Figure 2 shows a block diagram outlining the high-level architecture of a transparent and accountable system. At the heart of such a system, we find its Business Logic, which provides its primary functionality (e.g. recommendations, analytics). In provenance-aware systems, applications log their activities and data flows, out of which a semantic representation is constructed, which I refer to as provenance. PROV is a standardised representation for provenance, recently published by the World Wide Web Consortium and seeing strong adoption in various walks of life. In this context, provenance is defined as “a record that describes the people, institutions, entities, and activities involved in producing, influencing, or delivering a piece of data or a thing”.
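To make this concrete, here is a minimal sketch of what recording such provenance could look like for a single (hypothetical) recommendation, using the Python prov package, an implementation of the PROV data model. All identifiers (ex:profile-alice, ex:recommend-42, and so on) are invented for illustration; a real system would capture these records automatically from its logs.

```python
from prov.model import ProvDocument

# Build a small provenance document for one hypothetical recommendation.
doc = ProvDocument()
doc.add_namespace('ex', 'http://example.org/recsys/')

# The data and the outcome: a user profile used to produce a recommendation.
doc.entity('ex:profile-alice')
doc.entity('ex:recommendation-42')

# The activity: one run of the recommendation algorithm.
doc.activity('ex:recommend-42')

# The responsible agents: the software component and the organisation
# legally accountable for it.
doc.agent('ex:recommender-service')
doc.agent('ex:acme-corp')

# Relate them: what was used, what was generated, and who is responsible.
doc.used('ex:recommend-42', 'ex:profile-alice')
doc.wasGeneratedBy('ex:recommendation-42', 'ex:recommend-42')
doc.wasDerivedFrom('ex:recommendation-42', 'ex:profile-alice')
doc.wasAssociatedWith('ex:recommend-42', 'ex:recommender-service')
doc.actedOnBehalfOf('ex:recommender-service', 'ex:acme-corp')

print(doc.get_provn())  # human-readable PROV-N serialisation
```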
There is no point constructing such a semantic representation if it is not exploited. Various capabilities can be built on top of such a provenance repository, including query interfaces, audit functionality, explanation services, redress mechanisms, and validation, which we now discuss in light of the seven principles.
The first principle (Awareness) identifies a variety of stakeholders: Owners, Designers and Builders, and Users; the second principle also mentions the role of Regulators, and we believe that potential third-party Auditors are also relevant in this context. While technology makes rapid progress with algorithmic solutions, society is much slower to react, and there is indeed work required to increase awareness and to establish what users’ rights are, and what the obligations on owners should be, whether by means of regulation or self-regulation. The SmartSociety project recently published a Social Charter for Smart Platforms, an illustration of what rights and obligations in “smart” platforms can look like.
The second principle (Access and Redress) recommends mechanisms by which systems can be questioned and redress enabled for individuals. This principle points to the ability to query the system and its past actions, which is a typical provenance-based functionality. For those seeking redress, there is a need to be able to refer to the event that resulted in an unsatisfactory outcome; PROV-based provenance mandates that all outcomes, data, and activity instances are uniquely identified. Furthermore, we are of the view that such a redress mechanism, including any resolutions reached, should itself be open to inspection; thus, the provenance of redress requests and their resolutions should also be recorded.
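The sketch below illustrates, in Python, the kind of query a redress mechanism relies on. The triples are a deliberately simplified stand-in for a real PROV store (which one would query with, for instance, SPARQL over PROV-O), and all identifiers are invented.

```python
# A simplified stand-in for a provenance store: each PROV relation is a
# (relation, subject, object) triple; identifiers are invented.
records = [
    ('wasGeneratedBy',    'ex:recommendation-42', 'ex:recommend-42'),
    ('used',              'ex:recommend-42',      'ex:profile-alice'),
    ('wasAssociatedWith', 'ex:recommend-42',      'ex:recommender-service'),
]

def trace_outcome(outcome_id, records):
    """Given the identifier of a contested outcome, find the activity that
    generated it and the data it used: the starting point of redress."""
    activity = next((a for rel, o, a in records
                     if rel == 'wasGeneratedBy' and o == outcome_id), None)
    inputs = [e for rel, a, e in records if rel == 'used' and a == activity]
    return activity, inputs

print(trace_outcome('ex:recommendation-42', records))
# -> ('ex:recommend-42', ['ex:profile-alice'])
```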
The third principle (Accountability) is concerned with holding institutions responsible for the decisions made by their algorithmic systems. For this, one needs a non-repudiable account of what has happened, and a suitable attribution of decisions to system components, their owners, and those legally responsible for the system’s actions. Again, such an account is exactly what PROV offers; we therefore see the third principle being implemented technically, with queries over the provenance representation, and socially, with suitable regulatory and enforcement mechanisms.
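As an illustration of such an attribution query, here is a sketch over the same kind of simplified triples: starting from a decision activity, it follows association and delegation links up to the agents that can be held responsible. Identifiers are again invented.

```python
# Simplified provenance triples; identifiers are invented for illustration.
records = [
    ('wasAssociatedWith', 'ex:recommend-42',        'ex:recommender-service'),
    ('actedOnBehalfOf',   'ex:recommender-service', 'ex:acme-corp'),
]

def responsible_agents(activity_id, records):
    """Collect the agents associated with an activity, then follow
    actedOnBehalfOf chains up to the ultimately responsible parties."""
    agents = {a for rel, act, a in records
              if rel == 'wasAssociatedWith' and act == activity_id}
    frontier = set(agents)
    while frontier:
        frontier = {sup for rel, sub, sup in records
                    if rel == 'actedOnBehalfOf' and sub in frontier} - agents
        agents |= frontier
    return agents

print(responsible_agents('ex:recommend-42', records))
# -> {'ex:recommender-service', 'ex:acme-corp'}
```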
The fourth principle (Explanation) requires explanations to be produced about the unfolding of activities and decisions. There is emerging evidence that provenance can serve as a form of computer-based narrative, out of which textual explanations can be composed and presented to users. We recently conducted user studies on the perceived legibility of natural-language explanations by casual users. We also used a similar technique to provide explanations of user ratings in a Ride Share application.
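The sketch below conveys the gist of template-based narration over simplified provenance triples; the wording and identifiers are invented, and our actual studies used richer templates and aggregation.

```python
# Simplified provenance triples for one decision (invented identifiers).
records = [
    ('wasGeneratedBy',    'ex:recommendation-42', 'ex:recommend-42'),
    ('used',              'ex:recommend-42',      'ex:profile-alice'),
    ('wasAssociatedWith', 'ex:recommend-42',      'ex:recommender-service'),
]

# One sentence template per PROV relation; a real explanation service
# would tailor the vocabulary to the intended audience.
TEMPLATES = {
    'wasGeneratedBy':    '{0} was produced by activity {1}.',
    'used':              'Activity {0} took into account {1}.',
    'wasAssociatedWith': 'Activity {0} was carried out by {1}.',
}

def explain(records):
    """Render a provenance trace as a simple natural-language narrative."""
    return ' '.join(TEMPLATES[rel].format(x, y) for rel, x, y in records)

print(explain(records))
```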
The fifth principle (Data Provenance) focuses explicitly on the data used to train so-called “machine-learning” algorithms. We believe that it is not just training data that is relevant, but any external data that the business logic and its designers may rely upon. Public scrutiny of such data offers an opportunity to correct potential bias and, in general, any concern that may affect decisions. To operationalise this principle, one needs access to a description of the data (and potentially the data itself), but also to how it is used in training algorithms, and how this potentially affects decisions. PROV-based provenance, queries, and explanations are required here to allow such scrutiny. Some of our recent work has focused on analytics techniques to assess the quality of data using provenance information; such a mechanism becomes useful to ensure some form of quality control in systems.
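A sketch of what recording such lineage could look like, again with the Python prov package and invented identifiers: the dataset, the training run, and the resulting model are all first-class, queryable resources.

```python
from prov.model import ProvDocument

doc = ProvDocument()
doc.add_namespace('ex', 'http://example.org/recsys/')

# Record the external dataset, the training run, and the resulting model,
# so that the model's lineage can later be scrutinised for potential bias.
doc.entity('ex:ratings-dump-2017-01')
doc.activity('ex:train-model-7')
doc.entity('ex:model-7')

doc.used('ex:train-model-7', 'ex:ratings-dump-2017-01')
doc.wasGeneratedBy('ex:model-7', 'ex:train-model-7')
doc.wasDerivedFrom('ex:model-7', 'ex:ratings-dump-2017-01')

print(doc.get_provn())
```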
The sixth principle (Auditability) demands that models, algorithms, data, and decisions be recorded so that they can be audited. All of these can readily be described in PROV as “PROV entities”, which can be used or generated by “PROV activities” under the supervision of responsible agents. Specific auditing functions (aimed at various stakeholders) can query the provenance to expose individual entities, but also their aggregate characteristics over longer periods of time. Techniques we have developed, such as provenance summarisation, become critical in this context, since they enable us to investigate the aggregate behaviour of applications, instead of individual circumstances.
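To give a flavour of such aggregate auditing, here is a small sketch over hypothetical rows extracted from a provenance store; provenance summarisation operates over the full graph, but the move from individual records to aggregate behaviour is the same.

```python
from collections import Counter

# Hypothetical rows extracted from a provenance store:
# (activity type, responsible agent, date).
activities = [
    ('ex:Recommend', 'ex:recommender-service', '2017-01-03'),
    ('ex:Recommend', 'ex:recommender-service', '2017-01-04'),
    ('ex:Retrain',   'ex:ml-pipeline',         '2017-01-04'),
]

def audit_summary(activities):
    """Aggregate behaviour over time: how often each kind of activity
    ran, and under which agent's responsibility."""
    return Counter((kind, agent) for kind, agent, _ in activities)

for (kind, agent), count in audit_summary(activities).items():
    print(kind, agent, count)
```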
The seventh principle (Validation and Testing) recommends regular validation of models and testing for harmful outcomes. This suggests that processing over provenance, checking whether expected criteria have been met, can be implemented by policy-based approaches: detecting whether past executions comply with expectations, expressed as policies. We have applied this technique to decide whether processing was performed in compliance with usage policies. And if it is good practice to undertake validation and testing, then it also becomes necessary to document that practice, to be able to demonstrate that such validation and testing take place.
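Here is a minimal sketch of such a policy check over simplified provenance triples (invented identifiers): the policy says that every model used by a decision activity must have been generated by a validation activity.

```python
# Simplified provenance triples (invented identifiers). The policy:
# every model used by a decision must come out of a validation activity.
records = [
    ('used',           'ex:recommend-42', 'ex:model-7'),
    ('wasGeneratedBy', 'ex:model-7',      'ex:validate-model-7'),
]

def policy_violations(decision_activity, records):
    """Return the models used by a decision that were NOT generated by a
    validation activity (crudely recognised here by its identifier)."""
    used_models = [m for rel, a, m in records
                   if rel == 'used' and a == decision_activity]
    validated = {m for rel, m, act in records
                 if rel == 'wasGeneratedBy' and act.startswith('ex:validate')}
    return [m for m in used_models if m not in validated]

print(policy_violations('ex:recommend-42', records))  # [] means compliant
```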
So, overall, the provenance research community has been investigating issues around capturing, storing, representing, querying, and exploiting provenance information, all of which have a critical role to play in the principles of Algorithmic Transparency and Accountability. There is still much to research, however, including critical issues around: (1) agreed domain-specific extensions of PROV to support transparency and accountability; (2) better integration of software engineering methodologies with provenance; (3) enforceable compliance with the architecture; (4) non-repudiation of provenance; (5) querying and auditing facilities; (6) compliance checks over provenance; (7) user-friendly explanations of complex algorithmic decisions; and (8) the scalability of all of the above.
In the spirit of Principle 1, I hope this blog post contributes to raising awareness of these issues. Feedback and comments welcome!