Since ChatGPT hit the scene, AI has been getting a lot of attention. AI chatbots are being portrayed in the media as everything from the technology that will end humanity to the most useful thing since sliced bread. But not all AIs are the same.
There are two essential types of AI – black box and white box – and there is a distinct difference between how a black box AI like ChatGPT solves problems and how the white box AI in Moovila’s Perfect Project and Activate helps you manage projects and work streams.
When you are implementing an AI model to help you manage essential business processes, it’s important to choose one you can trust.
The difference between black and white
When you pose a question to a black box AI model, you are putting your trust in an artificial intelligence that has become too complex for humans to understand. Your question – the input – goes into it. And an answer – the output – comes out. But you can’t examine, and probably wouldn’t understand, anything in between: the AI’s methodology, algorithm, or the vast amounts of data it studied to arrive at that answer. You can’t know if it picked up bad ideas, learned false assumptions, or is confidently making things up. It is impenetrable.
White box AI, though, is open about how it works. You ask your question. It comes up with an answer. But if you have doubts about that answer, you can examine its methodology and the data that informed its results. It is transparent.
Let’s imagine, for example, that you are managing a large, complicated project. You ask both a white box and a black box AI model the question that’s keeping you up at night: “What is the probability this project will be completed on time?”
Can AI manage projects?
The black box AI has studied vast archives of project data and taught itself to identify potential problems in a project plan. It says there is a high probability the project will be completed on time. Wanting to share the AI’s confidence, you peer into the model to see how it arrived at that answer. The explanations it offers are so complicated and alien that you can’t understand most of them, and the ones you can understand have nothing to do with your project or the people working on it.
The white box AI also does complex calculations to determine the health of a project using best practices in project management and your industry. It also scans the schedules of your staff, collects historical data about how quickly all types of tasks are completed in your company, and keeps copious notes on your team’s skills and workload.
When you ask it your question, it says there are too many unknowns in your plan to make an educated guess. It shows you a timescale where it sees things that could go wrong and why – because of vacations and other projects that impact the resources allocated to this one. And it flags where you didn’t provide information – or where it thinks you made mistakes – and offers suggestions on how to fix these problems. You fix your errors, reallocate some resources, and ask your question again. It gives you an answer very close to the one the black box AI gave.
Obviously, the white box AI is the one you want to work with – even though, in this example, both gave similar answers. There are too many nuances, specifics, dynamics, and uncertainties in project management to leave everything to a large, unmonitored calculator. With a white box AI, you can monitor, change, and understand the work the AI does. And it will tell you when it isn’t confident about a prediction – even as the project evolves. A black box AI will ignore the nuances and doubts and give you a prediction as if it were a certainty.
Your business processes are too important for that kind of uncertainty. The AI in Moovila’s Perfect Project is a white box AI, and we wouldn’t have it any other way. Neither should you.
Black box AIs make big mistakes
Because of the power of machine learning, black box AIs can create impressive results. They also make scary mistakes.
“Give a man [ChatGPT], and you feed him for a day. Teach a man [Moovila], and you feed him for a lifetime.”
— Lao Tzu, slightly paraphrased
Sam Altman, CEO of OpenAI, the company behind ChatGPT, said of it in a tweet: “It's a mistake to be relying on it for anything important right now.” And ChatGPT is not the only black box AI whose own creator has decided it can’t be trusted with anything important.
Amazon famously abandoned a hiring AI tool it developed because the tool discriminated against women. An innocent man was arrested because of a mistake made by a facial recognition AI, and many states have stopped using these AI models because they are unreliable.
There are many stories like this, which is why this form of AI is currently most often used to answer open-ended questions where the answer isn’t crucially important or doesn’t need to be precise.
An AI you can trust
When you are working on a project with thousands of tasks, worth millions of dollars, and involving dozens or hundreds of contributors, it would be delightful if a superintelligence could tell you when things need to happen, who is doing what, what’s necessary today, and how to keep the entire mission moving forward on schedule and on budget. But only if the AI is always right – or at least admits what it doesn’t know.
Moovila’s interpretable AI is the one you can trust.
When the RPAX score in Perfect Project tells you that something is going off the rails, you can examine the relevant tasks, milestones, durations, and deadlines. You can reallocate resources, cut some fat out of your time estimates, or assign tasks to be worked on simultaneously to shorten the time to delivery. You can decide if the problem the AI is flagging is critical in this instance and if the assumptions the AI is making are accurate for your situation.
Want to see how our white box AI can help you automate work processes, save time, and identify problems in your project or resourcing plans before anything goes wrong? Sign up for a demo today.