Digital Perspectives
New views. Applied now.

July 03, 2018
Intelligent automation: The future platform architecture and ecosystem
By: Sabyasachi Roy

How can we get people to trust AI-automated processes? How do we best explain the results? Is our data veracity really good enough? Does our human intuition stand any chance of spotting when it isn’t?

These are big questions for our future AI platform architectures and ecosystems. So when I got the chance to sit down at CogX with Vasilis Tsolis, CEO of Cognitiv+, and Mark Holmes, CEO at Waymark – two people deeply immersed in these issues – it was an excellent opportunity to scope out some of the answers.

Many of the challenges surrounding AI come back to confidence in the data. And ensuring that confidence means dealing with three things in particular – data provenance, data context, and data integrity.

The first, provenance, means ensuring the traceability of data across its entire lifecycle. The second, context, means ensuring the data is suitable for a particular scenario. In practice, that can mean establishing a Data Intelligence practice that leverages an organisation's data science and cybersecurity capabilities to bring top-down accountability and industrialised repeatability to the grading of data.
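
A minimal sketch of what provenance can look like in practice – with illustrative names only, not a tool either speaker mentioned – is to log a content hash, source, and timestamp for every dataset, so any later change is traceable:

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_record(name: str, data: bytes, source: str) -> dict:
    """One provenance entry: where the data came from, what it contained, and when."""
    return {
        "name": name,
        "source": source,
        "sha256": hashlib.sha256(data).hexdigest(),  # detects silent changes later
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

# Hypothetical file and source names; in practice, append each record to an audit log.
record = provenance_record("contracts.csv", b"id,clause\n1,termination\n", "vendor-extract")
print(json.dumps(record, indent=2))
```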

The third, data integrity, is clearly a major challenge for many organisations. Vasilis made an interesting point about how sensitive AIs can be to seemingly innocuous aspects of the data. In the context of NLP, he talked about how something as simple as the underlining of passages in documents can throw an AI off course. Because neural networks consider so many different data points, they may end up using any one of them to draw correlations and classify importance.
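
To illustrate how that kind of artifact can be handled, here's a minimal preprocessing sketch. It assumes – and this is my assumption, not a detail Vasilis gave – that underlining surfaces in extracted text as HTML tags or runs of underscores:

```python
import re

def strip_formatting_artifacts(text: str) -> str:
    """Remove underline markup so a model can't treat it as a signal."""
    text = re.sub(r"</?u>", "", text)   # HTML underline tags
    text = re.sub(r"_{2,}", " ", text)  # runs of underscores from document extraction
    text = re.sub(r"\s+", " ", text)    # collapse leftover whitespace
    return text.strip()

print(strip_formatting_artifacts("The <u>termination clause</u> applies ___ here."))
# -> "The termination clause applies here."
```

Cleaning like this removes the spurious signal before the model ever sees it, so the network can't latch onto formatting instead of meaning.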

I thought Mark had some important things to say about combating bias. It's vital to be hyper-aware of where it can appear in the data. One way of doing so is to ensure you've got enough diversity of viewpoints in your data science talent pool. Setting up the correct tests and iterations before you begin, and conducting audits to root out any malicious data manipulation, is equally essential.
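
One simple audit along those lines is to compare a model's accuracy across subgroups and flag any gaps. The data below is hypothetical, purely to show the shape of the check:

```python
from collections import defaultdict

def accuracy_by_group(groups, y_true, y_pred):
    """Per-subgroup accuracy: a large gap between groups warrants investigation."""
    hits, totals = defaultdict(int), defaultdict(int)
    for g, t, p in zip(groups, y_true, y_pred):
        totals[g] += 1
        hits[g] += int(t == p)
    return {g: hits[g] / totals[g] for g in totals}

print(accuracy_by_group(
    ["a", "a", "b", "b"],  # subgroup labels
    [1, 0, 1, 0],          # ground truth
    [1, 0, 0, 0],          # model predictions
))
# -> {'a': 1.0, 'b': 0.5}
```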

We also talked about the big shift underway in skills requirements. With so many rapid advances in platforms and architectures – API calls, microservices, cloud – creating AI solutions is getting ever more straightforward. For Mark, ensuring the right mix of people is key. As he put it, not everyone’s going to be a data scientist and aware of the impacts AI can have. So there’s a need to ensure the explainability of AI decisions.

So how will we improve explainability and solve AI’s “black box” problem? For Mark, it’s about trust as much as anything – giving the end users confidence that you’ve followed best practice and the relevant codes of ethics.

Vasilis made the valid point that today’s neural networks simply weren’t designed with these kinds of explanations in mind. Back then, it was all about getting better results. Factors like compliance, regulation, and explainability just weren’t considered.

That makes explaining algorithmic decision-making a real challenge. But even if you can’t explain how an AI came to a decision, you can explain why. In NLP, for example, you can show which words in a document the algorithm based its decision on. And that’s a step forward.
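
Here's a toy sketch of that idea – my own illustration in Python with scikit-learn, not the speakers' system. With a linear model over bag-of-words features, each word's contribution to a decision is directly inspectable:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Tiny made-up corpus: 1 = flag for review, 0 = routine.
docs = ["urgent termination clause breach", "routine renewal schedule",
        "breach of contract penalty", "standard renewal terms"]
labels = [1, 0, 1, 0]

vec = TfidfVectorizer()
X = vec.fit_transform(docs)
clf = LogisticRegression().fit(X, labels)

def top_words(doc, k=3):
    """Words in `doc` with the largest contribution to the model's decision."""
    row = vec.transform([doc])
    vocab = vec.get_feature_names_out()
    contribs = {vocab[i]: row[0, i] * clf.coef_[0][i]
                for i in row.nonzero()[1]}
    return sorted(contribs, key=lambda w: abs(contribs[w]), reverse=True)[:k]

print(top_words("penalty for breach of termination clause"))
```

It doesn't explain the "how" of a deep network, but it does surface the "why" in terms an end user can check against the document itself.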

With the tech evolving so quickly, it’s important our thinking around data governance, architectures, and platforms keeps evolving at the same pace. It’s a fascinating space to keep watching.

To view Accenture’s other speakers featured across CogX, take a look at the review site here.
