Augmented Intelligence
When people think of AI, they often think of machines that are way smarter than humans and compute things we couldn’t possibly understand. Some believe AI will save us; others think it will destroy us.
The truth is, artificial intelligence is a lot like artificial light. Artificial light opened up great possibilities by illuminating things we couldn’t see and extending our productive days. But it also had negative consequences, like disrupting migratory patterns and upsetting our sleep cycles.
It all comes down to how we humans choose to apply artificial light.
The same goes for AI.
The All-Knowing Black Box
People are so used to machines computing better than we do that when AI spits out results we don’t understand, we often shrug our shoulders and say, “Well, it must be right. It must be good.”
But it may not be. The problem is that much of AI, including its popular Deep Learning incarnation, is a black box: there is often no way to know why you get the results you do, to assess their validity, or, therefore, to determine the best way to apply them.
The Human Touch
AI systems aren’t independent entities. They are programmed by humans. The data fed into them are generated or selected by humans. The algorithms that process those data are chosen by people. And the AI applications’ goals are determined by us.
Which means everything is biased. It also means we humans must pay special attention to what we at Nara Logics call the AI Trinity: the data, algorithms and results that an organization’s business leaders and developers choose for each AI application.
AI Trinity: Eggs, Chicken and Bacon
Nara Logics equates the AI Trinity to eggs, chicken and bacon. All are equally critical to AI success. And all are driven by humans.
Eggs: The eggs are the data. Rather than focusing on the amount of data your AI application uses, focus on data relevant to the problem you want to solve. Massive data sets can be redundant or generic and can actually get in the way of AI. Garbage in, garbage out. The key is quality, not quantity. Less data is often better at the start anyway, because it lets you test your AI system effectively and make sure it’s delivering the kind of information you intended.
Chicken: The chicken is the algorithm, or algorithms. There is no one-size-fits-all choice. The algorithms you pick will depend on your objectives and your available data. You can’t separate the algorithm from the data; they depend on each other.
Bacon: In our AI Trinity, bacon refers to business results. It’s essential to define the results and requirements you want up front, then measure at the end to make sure you’re getting them. Only then can you “bring home the bacon.” The short sketch below ties all three ingredients together.
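To make this concrete, here is a minimal sketch of the trinity in code. It is illustrative only, not Nara Logics code: the scikit-learn sample dataset, the two candidate models, and the 0.80 accuracy target are all assumptions chosen for the example.

```python
# Illustrative sketch only, not Nara Logics code. The dataset, the
# candidate models, and the 0.80 target are assumptions for this example.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Bacon: define the required business result up front.
REQUIRED_ACCURACY = 0.80

# Eggs: start with a small, relevant, well-understood dataset
# rather than the biggest pile of data available.
X, y = load_breast_cancer(return_X_y=True)

# Chicken: the right algorithm depends on the data, so compare
# candidates on this data instead of picking one by default.
candidates = {
    "logistic_regression": LogisticRegression(max_iter=5000),
    "random_forest": RandomForestClassifier(n_estimators=100),
}

for name, model in candidates.items():
    mean_accuracy = cross_val_score(model, X, y, cv=5).mean()
    verdict = "meets" if mean_accuracy >= REQUIRED_ACCURACY else "misses"
    print(f"{name}: {mean_accuracy:.3f} ({verdict} the {REQUIRED_ACCURACY} target)")
```

Measuring both candidates against the same up-front target keeps the choice of algorithm accountable to the business result, not the other way around.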
Explainable AI
So how do we humans know if our AI applications are really delivering the bacon? We can’t accept the AI black box. We need explainable AI.
That’s why we at Nara Logics built our Synaptic Intelligence Platform to be fully transparent. Every recommendation it delivers comes with multiple “whys.” The reasoning is clear, so it’s easy to assess the results, use them to augment your own knowledge base, and determine how to apply them for your organization’s good.
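Conceptually, a recommendation that carries its “whys” with it might look like the hypothetical sketch below. The Recommendation structure, its field names, and the sample reasons are illustrative assumptions, not the Synaptic Intelligence Platform’s actual API.

```python
# Hypothetical sketch of an explainable recommendation. The structure,
# field names, and reasons are illustrative, not Nara Logics's actual API.
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    item: str
    score: float
    whys: list[str] = field(default_factory=list)  # human-readable reasons

def explain(rec: Recommendation) -> str:
    """Render a recommendation alongside the reasons behind it."""
    reasons = "\n".join(f"  - {why}" for why in rec.whys)
    return f"{rec.item} (score {rec.score:.2f}) because:\n{reasons}"

rec = Recommendation(
    item="Supplier B",
    score=0.92,
    whys=[
        "matches 4 of 5 required certifications",
        "strong delivery record with similar customers",
        "price within the category budget",
    ],
)
print(explain(rec))
```

The point is the shape, not the scoring: when every result travels with its reasons, a human can check the logic before acting on it.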