March 21, 2021

Silicon and Carbon - together

So far, we have seen how RPA can bring more value when integrated with AI. How much value depends on your business model and on how you have embedded AI into your automation, be it a machine learning model that predicts an outcome, a translation service that translates content, or another cognitive service that performs sentiment analysis.

Regardless of which model you choose, the question remains... Can you trust AI? Or can you trust it fully?

I think the answer is still far from an "OF COURSE, WHY NOT?"

For sure, the RPA & AI combination brings huge value and makes our lives easier. But even though the AI model we use is "narrow" and is expected to handle only the task it was trained for, we cannot let it run our business-critical automation until it proves that it can do the job flawlessly. "Flawlessly" is only possible if the model is well trained, and until we see the outcome, we should keep the AI under control.

By control, we mean bringing a human into the process and letting the silicon work together with the carbon, while in the meantime the carbon trains the silicon...

This means you can still embed AI into your solution and set some rules, and the confidence level can be one of them. The robot can, for instance, read a text, analyze it, and classify it as positive or negative only if it is really certain of its finding, say with a score of at least 80%. For an ML model, exposing a confidence score is a no-brainer.
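As a rough sketch, such a rule is just a threshold check around the model call. Everything here is illustrative: `classify()` stands in for whatever ML model or cognitive service your automation actually calls.

```python
# A minimal confidence-threshold rule (all names are illustrative).

CONFIDENCE_THRESHOLD = 0.80

def classify(text):
    # Placeholder model: in practice this would call your real ML model
    # or cognitive service and return (label, confidence).
    return ("positive", 0.92)

def robot_decision(text):
    label, confidence = classify(text)
    if confidence >= CONFIDENCE_THRESHOLD:
        return label   # the robot trusts its own finding
    return None        # below the threshold: defer the decision
```

The robot only commits to an answer when the model clears the bar; anything else falls through to a separate path, which is exactly where the human comes in.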

Here comes a tricky question... What happens if the score is below 80%? The simple answer is that you can program the robot to escalate and ask a human for help. This situation is what we call the "human-in-the-loop" state. When a robot ends up in it, it can send an email, warn someone with a pop-up message, or find another way to ask for help. Regardless of the method, the receiver of the escalation is a human. The human's input can then be used to train the robot so that it does not need to escalate when the same case occurs again.
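A hedged sketch of that escalation path, assuming the escalation channel is simply a work queue (in a real setup it could be an email, a pop-up, or a task in your RPA platform):

```python
# Hypothetical human-in-the-loop escalation: below the threshold,
# the robot queues the item for a person instead of deciding on its own.
from queue import Queue

THRESHOLD = 0.80
human_queue = Queue()  # stand-in for an email, pop-up, or task inbox

def classify(text):
    # Placeholder for the real model call.
    return ("negative", 0.55)

def handle(text):
    label, confidence = classify(text)
    if confidence >= THRESHOLD:
        return label
    human_queue.put(text)   # escalate: ask a human for help
    return "escalated"
```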

You might also want to ask another question... "What happens if the human does not answer in a timely manner?" This is a very relevant one, since it might take a while until someone gets back to the robot. Should it wait until it receives an answer? Of course not. The robot should be able to put the process on hold until the answer comes and continue with another one; otherwise it gets locked unnecessarily, as it is impossible to know when the input will arrive.

In short, the robot can run another process once it has escalated the earlier one to a human, and as soon as it receives the input, it takes it from there and continues with the rest of the process.
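The hold-and-resume pattern can be sketched with a simple pending store; the names are mine, not any particular RPA product's API. The point is that escalating parks the item instead of blocking the robot, and the human's answer later resumes exactly that item.

```python
# Sketch of "put on hold and continue": escalated items are parked so the
# robot stays free for other work, then resumed once the answer arrives.

pending = {}   # item_id -> text awaiting a human answer
results = {}   # item_id -> final label

def escalate(item_id, text):
    pending[item_id] = text   # park it; the robot is NOT blocked

def human_answers(item_id, label):
    pending.pop(item_id)      # the answer arrived...
    results[item_id] = label  # ...so the robot picks up from here
```

Between `escalate` and `human_answers` the robot is free to process any other item in its queue, which is what prevents the unnecessary lock described above.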

Assume that I have an Excel sheet with some movie critiques in it, and I want my robot to read the content and give me feedback on every critique, either positive or negative. I want it to provide this info if it is at least 80% confident and otherwise expect it to escalate the case to get human input.
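The whole workflow might look like the sketch below. To keep it dependency-free I read the sheet as CSV rather than a real Excel file, and `classify()` is again a stand-in for the actual sentiment model.

```python
# Illustrative end-to-end pass: read critiques, classify each one,
# and mark low-confidence rows for human input.
import csv, io

THRESHOLD = 0.80

def classify(text):
    # Placeholder model: pretend very short critiques are uncertain.
    return ("positive", 0.95) if len(text) > 20 else ("positive", 0.40)

def process_sheet(csv_text):
    feedback = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        critique = row["critique"]
        label, confidence = classify(critique)
        feedback[critique] = label if confidence >= THRESHOLD else "needs human input"
    return feedback
```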

Let's see how it works...



By using an ML model, the robot reads the content and classifies it if the confidence score is above 80%. If not, it expects to get human feedback and also uses that feedback to train itself, so the next time it does not need to involve a human for the same critique. Consider that my input gets a confidence score of 100%, since the robot fully respects my entry :)
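One simple way to realize that "100% confidence" behavior, sketched with made-up names: cache the human answers and consult the cache before the model, so an identical critique never escalates twice.

```python
# Sketch of reusing human input: answers are remembered with full
# confidence, so the same critique is never escalated again.

human_labels = {}   # critique -> label provided by a person

def record_human_feedback(text, label):
    human_labels[text] = label   # also usable later to retrain the model

def classify_with_memory(text, model_classify):
    if text in human_labels:
        return human_labels[text], 1.0   # the robot fully respects the human entry
    return model_classify(text)
```

In a real solution the recorded pairs would feed periodic retraining of the model itself, not just a lookup table, but the control flow is the same.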

This is a good example of human-robot cooperation, where you can feel more comfortable as you are in control and train the robot to become smarter.

If we look at it from a scenario perspective, it fits the sixth scenario in the picture below:






