Directing

We guide our clients towards the responsible implementation of AI.

Deploying AI means thinking of the bigger picture

AI Governance

The question of how to ensure that technological innovation in machine learning and artificial intelligence leads to ethically desirable impacts on business has generated much public debate in recent years.

While the media often refers to this field as ‘AI ethics’, we focus on the practical aspects of ‘AI governance’, guiding our clients to acceptable outcomes before they move straight into technological innovation.

We ensure that all AI decisions are explainable, transparent, and fair, and we steer our clients towards human-centric, ethically trained systems designed to augment people and processes.

Our Approach

We work with your teams across four key pillars, arming your organisation with the right collateral to answer the important questions about its AI systems.

Explainability

Having an explanation for why an AI system behaves in a certain way can go a long way towards building people’s confidence and trust.

Safety Considerations

It is essential to take precautions against both accidental and deliberate misuse of AI that poses risks to safety.

Human-AI Collaboration

“Human in the loop” is shorthand for systems which include people at one or more points in the decision-making process of an otherwise automated system.

Liability Frameworks

Regulated organisations should remain responsible for the decisions they make and the manner in which they act on them (whether using AI, humans, or both).

What We Do

Click on a heading below to find out more. By working closely with clients across the following areas, we help them with their PR and their over-arching technical strategy, and we help them anticipate the potential pitfalls that the future of automation might hold.

Explainability

Having an explanation for why an AI system behaves in a certain way can go a long way towards building people’s confidence and trust in the accuracy and appropriateness of its predictions. It is also important for ensuring accountability, not least in giving grounds for contesting the system’s output.
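
To make this concrete, here is a minimal sketch of one widely used explainability technique, permutation feature importance, using scikit-learn. The model and data below are hypothetical stand-ins, not a client system:

    # Illustrative only: score how much each input feature contributes
    # to a model's predictions. Shuffling an important feature should
    # noticeably degrade accuracy.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    # Hypothetical data standing in for a real business dataset.
    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for i, importance in enumerate(result.importances_mean):
        print(f"feature_{i}: {importance:.3f}")

Scores like these give a first, auditable answer to the question “which inputs drove this prediction?”, which can then be weighed against the standard of explainability a given application demands.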

What we do:

  • Assemble a collection of best-practice explanations and industry examples, along with commentary on their positive and negative characteristics, to provide practical inspiration and direction.

What you get:

  • Guidelines for hypothetical industry use cases so clients can calibrate how to balance the benefits of using AI systems against the practical constraints that different standards of explainability impose.
  • A set of proposed minimum acceptable standards mapped against existing regulation and application contexts.

Safety Considerations

It is essential to take precautions against both accidental and deliberate misuse of AI that poses risks to safety. These precautions must remain within reason: proportionate to the damage that could ensue and to the viability of the preventative steps proposed, across technical, legal, economic, and cultural dimensions.
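
As a sketch of what proportionality can look like in practice (the labels and thresholds below are assumptions for illustration, not a prescribed standard), a simple likelihood-times-severity score can determine how heavy a safeguard is warranted:

    # Illustrative risk triage: pair the likelihood of a failure with
    # the severity of its consequences to choose a proportionate response.
    LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
    SEVERITY = {"minor": 1, "serious": 2, "critical": 3}

    def safeguard_level(likelihood: str, severity: str) -> str:
        score = LIKELIHOOD[likelihood] * SEVERITY[severity]
        if score >= 6:
            return "halt deployment pending redesign and human sign-off"
        if score >= 3:
            return "add monitoring, rate limits, and periodic human audit"
        return "standard QA checks are proportionate"

    # A likely but minor failure merits monitoring, not a halt.
    print(safeguard_level("likely", "minor"))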

What we do:

  • Outline basic workflows and standards for your common application contexts, sufficient to demonstrate due diligence in the checks carried out on automated systems.

What you get:

  • An overview of the QA process for any rules-based AI system, demonstrating that it carries out its workflow to the same standard as, or better than, a human counterpart.

Human-AI Collaboration

“Human in the loop” is shorthand for systems which include people at one or more points in the decision-making process of an otherwise automated system. The challenge is in determining whether and where in the process people should play a role, and what precisely that role should entail, taking into account the purpose of the system and the wider context of its application (including, where relevant, a comparison to whatever existing process it is replacing).

Ultimately, AI systems and humans have different strengths and weaknesses. Selecting the most prudent combination comes down to a holistic assessment of how best to ensure that an acceptable decision is made, given the circumstances.
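
As a concrete illustration, here is a minimal sketch of one common “human in the loop” pattern, in which the system decides automatically only when its confidence clears a threshold and defers to a person otherwise. The names and the 0.85 threshold are illustrative assumptions:

    # Illustrative only: route a model's class scores to an automated
    # decision or to human review, based on confidence.
    from dataclasses import dataclass

    CONFIDENCE_THRESHOLD = 0.85  # hypothetical; tuned per application context

    @dataclass
    class Decision:
        label: int
        automated: bool
        confidence: float

    def decide(scores: list[float]) -> Decision:
        confidence = max(scores)
        label = scores.index(confidence)
        if confidence >= CONFIDENCE_THRESHOLD:
            # High confidence: decide automatically, keeping an audit trail.
            return Decision(label, automated=True, confidence=confidence)
        # Low confidence: flag the case for a person to make the final call.
        return Decision(label, automated=False, confidence=confidence)

    print(decide([0.05, 0.95]))  # confident -> automated
    print(decide([0.45, 0.55]))  # borderline -> human review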

What we do:

  • Determine the contexts in which decision-making should not be fully automated by an AI system, but instead requires a meaningful “human in the loop”.

What you get:

  • Market research and input from your audience on how they feel AI should be used.
  • An assessment of different approaches to enabling human review and supervision of AI systems by staff.

Liability Frameworks

Regulated organisations should remain responsible for the decisions they make and the manner in which they act on them (whether using AI, humans, or both). Currently, most policies covering commercial general liability, auto liability, professional liability, and products liability do not properly address the potential risks of automated systems.

No matter how simple or complex an AI system, the persons or organisations that design or control it must remain ultimately responsible for its actions.

What we do:

  • Evaluate potential weaknesses in existing liability rules and explore complementary rules for specific high-risk applications.

What you get:

  • An exploration of potential insurance alternatives for settings in which traditional liability rules are inadequate or unworkable.
Get In Touch
Our experts work closely with the Government APPG, industry regulators such as the FCA, and leading figures in the field of AI governance. We regularly advise philanthropic foundations, the third sector, industry leaders, and governments.