Active-inference AI promises to pioneer a new frontier of adaptive intelligence. That, at least, is the belief.
Experts believe human-like machine intelligence may be decades away, putting the odds of its emergence by 2059 at 50-50. Yet some imagine slashing this timeline. The catalyst? Active inference: a cognitive model that keeps its beliefs in a constant state of update, minimizing uncertainty and honing its predictions about the world.
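The core loop of that cognitive model can be sketched very simply. The toy below is an illustrative assumption on my part, not any vendor's implementation: an agent holds a belief over two hidden states, and each new observation updates that belief via Bayes' rule while the agent tracks its "surprise" (negative log evidence), the quantity active inference says it works to minimize. The likelihood numbers are made up for the example.

```python
import numpy as np

# Hypothetical likelihood model P(observation | hidden state).
# Rows are hidden states, columns are observations; values are assumptions.
likelihood = np.array([[0.8, 0.2],   # state 0: mostly emits observation 0
                       [0.3, 0.7]])  # state 1: mostly emits observation 1

belief = np.array([0.5, 0.5])  # uniform prior over the two hidden states

def update_belief(belief, obs):
    """One perception step: posterior is proportional to likelihood x prior."""
    joint = likelihood[:, obs] * belief
    evidence = joint.sum()           # P(obs) under the current belief
    surprise = -np.log(evidence)     # the quantity the agent tries to keep low
    return joint / evidence, surprise

# A stream of observations: belief sharpens, surprise falls as predictions improve.
for obs in [1, 1, 0, 1]:
    belief, surprise = update_belief(belief, obs)
    print(belief.round(3), round(surprise, 3))
```

The point of the sketch is only the loop itself: unlike a frozen model, the belief state never stops changing as evidence arrives.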
Such an AI stays curious, continuing to learn about the world after its initial training. That goes beyond current systems like OpenAI’s ChatGPT or Google’s Gemini, which stop learning once training ends. The belief is that active inference also makes AI decision-making transparent rather than hidden.
Active-inference AI unfolds in stages, and we are told these stages outline a roadmap towards an AI future more integrated with human needs and ethical considerations. The claim is that each stage builds on the previous one, moving towards an era where AI contributes meaningfully and ethically to society.
Active-Inference AI in Stages
In the first and current stage, Systemic AI forms the base, reacting to what it has learned. It responds using set patterns, as many of today’s AI systems do. While it can interpret data and instructions, it does not advance after its final training.
The second stage, Sentient AI, is the curious explorer: it constantly seeks new information to better understand the world. It does not rely solely on what it was initially taught; it uses fresh data to keep improving. This type of AI adapts to new situations and challenges on its own.
The third stage, Sophisticated AI, is the innovator: it plans and tests new ideas. It does not just gather information; it experiments to learn and grow. By testing its ideas and learning from the results, it deepens its knowledge and adapts, moving towards intelligence that mirrors human thinking.
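"Experimenting to learn" has a simple formal reading that a short sketch can make concrete. The setup below is hypothetical (the hypotheses, experiments, and probabilities are all assumed numbers): an agent facing two candidate experiments picks the one expected to shrink its uncertainty the most, which is one standard way to model curiosity-driven, epistemic action.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector (uncertainty in nats)."""
    p = p[p > 0]
    return -(p * np.log(p)).sum()

# Assumed scenario: equal belief in two hypotheses, and two experiments
# whose outcome likelihoods differ in how well they tell the hypotheses apart.
belief = np.array([0.5, 0.5])
experiments = {
    "A": np.array([[0.9, 0.1],    # P(outcome | hypothesis); rows: hypotheses
                   [0.1, 0.9]]),  # A discriminates strongly
    "B": np.array([[0.6, 0.4],
                   [0.4, 0.6]]),  # B barely discriminates
}

def expected_posterior_entropy(belief, lik):
    """Average uncertainty left after observing the experiment's outcome."""
    total = 0.0
    for o in range(lik.shape[1]):
        p_o = (lik[:, o] * belief).sum()          # probability of this outcome
        posterior = lik[:, o] * belief / p_o      # Bayes update for this outcome
        total += p_o * entropy(posterior)
    return total

# The 'innovator' move: choose the experiment expected to reduce uncertainty most.
best = min(experiments,
           key=lambda k: expected_posterior_entropy(belief, experiments[k]))
print(best)  # experiment A, since it discriminates between the hypotheses better
```

Nothing here is specific to any deployed system; it only illustrates the difference between passively gathering data and actively choosing the test that teaches you the most.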
In the fourth stage, human intelligence is needed more than ever. Sympathetic AI, the empath, senses and grasps emotions. This significant advancement allows it to be aware of both its own feelings and those of others, and to make decisions with that understanding, which is vital for interacting ethically with people. Such an AI would excel in roles that demand emotional insight, like customer service, therapy, and education.
The fifth stage, Shared or Super AI, is the collective: it arises from the unity of many minds. It represents the peak of AI progress, born from AI systems and humans working together. More than the sum of its parts, it brings unmatched computational power, creativity, and problem-solving. This AI could blend the physical and digital realms, with implications for our humanity.
Active-Inference AI Requires HI
Artificial intelligence is influencing our lives, stirring excitement and concern in equal measure. It raises a critical question: which AI future do we create? One remains hidden, cloaked in uncertainty, challenging our trust in technology. Alternatively, we aim for clarity, crafting AI that is accountable and transparent, integrating only what we need, continuing as the creators of AI, and never giving away our role as conscious leaders.
We also need to map out the phases of human intelligence crucial today, integrating critical thinking with curiosity, courage, and high self-awareness. Engaging in more dialogue, questioning, and listening becomes as essential as our fascination with machines. Our obsession with productivity, however intense, does not define a healthy life.
Because we are not machines, we need to ensure we have more opportunity creators on this planet who know how to collaborate and align with the vast natural intelligence we are part of. But who is outlining the phases of our evolution into empathetic, conscious community builders as carefully as AI’s phases are being outlined? Certainly not the people championing transhumanism and life in virtual reality.
Consider the implications: we allow AI to influence job markets, education systems, and governance. In work, we steer AI to augment human roles, not replace them, focusing on high-impact work while menial tasks are automated. In learning, we direct AI to personalize education while we bring Socratic methods of questioning and listening. And in governance, we integrate AI to improve decisions and services, but we govern in far healthier ways than today.
We’re not just selecting technologies; we’re choosing our societal values. Our AI future is more than a technical path: it is a reflection of the principles that will guide us forward. Maybe this is the responsibility of the Architects of Humanity (each of us) … but we have set up more AI councils than councils focused on our collective future.