OntoBrain Explained

OntoBrain objectives

  • Automated Reasoning: AI Legal Reasoning established the goal of transparent, fully auditable automated reasoning
  • Symbiotic: collaborates with humans on security, safety, and ethics
  • Standards-based: built on W3C standards for compliance, audit, and investment protection
  • Modular: to easily incorporate new AI component technologies as they emerge, e.g., quantum reasoning
  • Multi-domain: includes sub-domains, e.g., legal reasoning includes employment law, fraud, etc.
  • Architecture: AI agents and Digital Twins manage domain requirements of an Automated Reasoning core that can be applied to all domains of knowledge including
    • law, litigation, and regulation
    • medical research
    • safe social media management
    • industrial automation
    • any mission-critical application.
  • A path to AGI? Self-learning, human-like reasoning, safe and secure digital guardians, and knowledge companions.

“We see Neuro-symbolic AI as a pathway to achieve artificial general intelligence. By augmenting and combining the strengths of statistical AI, like machine learning, with the capabilities of human-like symbolic knowledge and reasoning, we're aiming to create a revolution in AI, rather than an evolution.” – IBM Research

“Neurosymbolic AI” denotes machine-learning technologies (based on Neural Networks) combined with knowledge-representation technologies (Symbolic AI).

In reality, Neural Networks are one AI subsystem and Symbolic AI is another. To provide a complete AI solution, both may be combined with other AI subsystems such as

  • Natural Language Processing
  • Deep Learning
  • Knowledge Representation
  • Automated Reasoning
  • AI Agents.
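
The combination of subsystems listed above can be sketched in miniature. The example below is purely illustrative (the classifier stub, rule names, and facts are invented for this sketch, not taken from any real system): a stand-in "neural" component proposes a label with a confidence score, and a symbolic rule layer confirms or rejects it, producing a human-readable trace.

```python
# Illustrative neurosymbolic pattern (hypothetical names throughout):
# a statistical component proposes, a symbolic component disposes.

def neural_classifier(text):
    """Stand-in for a trained model: returns (label, confidence)."""
    if "dismissal" in text:
        return ("employment_law", 0.87)
    return ("unknown", 0.30)

# Symbolic layer: explicit, inspectable constraints keyed by label.
RULES = {
    "employment_law": lambda facts: "employee" in facts and "employer" in facts,
}

def classify(text, facts):
    label, conf = neural_classifier(text)
    rule = RULES.get(label)
    if rule is None or not rule(facts):
        # No symbolic confirmation: escalate rather than guess.
        return {"label": "needs_human_review", "confidence": conf,
                "trace": f"no symbolic rule confirmed '{label}'"}
    return {"label": label, "confidence": conf,
            "trace": f"rule for '{label}' satisfied by facts {sorted(facts)}"}

result = classify("claim of unfair dismissal", {"employee", "employer"})
```

The point of the sketch is auditability: the symbolic layer's decision can be traced to a named rule and a set of facts, unlike the opaque statistical step.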

Recently, the term “Agentic AI” has also emerged to signify the use of one or more autonomous software programs that use Large Language Models to carry out various tasks in an AI solution. For example, an AI agent may invoke tasks in response to a specific user interaction.

Neurosymbolic AI addresses shortcomings of current approaches as follows:

  • Generative Pre-trained Transformer (GPT) technology, like ChatGPT, Claude, Gemini and other “chatbots”, relies on Deep Learning statistical methods to perform pattern matching. When responding to prompts (instructions or questions), these AI systems look for likely matches (words and/or images) to the prompt. If they can’t find an exact match, they may respond with something that is a close match, rather than a correct one. Such incorrect responses are broadly referred to as hallucinations.
  • GPT technologies are Large Language Models (LLMs) trained on very large datasets drawn from online sources that may contain misinformation, which can exacerbate the hallucination problem. Small Language Models, such as those used in IBM’s “Granite” platform, can improve “guardrails” for chatbot deployments to minimise hallucinations and improve auditability.
  • Training an LLM is a very inefficient task, consuming massive amounts of electricity and expensive processing hardware in data centres, whereas neurosymbolic AI uses more targeted techniques that can run on a PC.
  • Finally, GPT technologies deploy Artificial Neural Networks (ANNs), which were an attempt to emulate the structure of the human brain, but we now know the brain is far more complex than once thought. ANNs are complex and opaque, so their output is not auditable.

If neurosymbolic AI is the next generation, why hasn’t it emerged sooner?

1. Neurosymbolic AI balances the probability-based Neural Networks and Large Language Models of Generative AI with logic-based Symbolic AI. Symbolic AI, often referred to as Traditional AI or Good Old-Fashioned AI (GOFAI), is based on mathematically rigorous symbolic logic. Until a few years ago it was known for being rule-based and dependent on stringent programming for its intended output.

“Symbolic AI used tools such as logic programming, production rules, semantic nets and frames, and it developed applications such as knowledge-based systems (in particular, expert systems), symbolic mathematics, automated theorem provers, ontologies, the semantic web, and automated planning and scheduling systems.” [Wikipedia]
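
Among the tools the quote lists, production rules are easy to show concretely. The sketch below is a minimal forward-chaining engine in the GOFAI style; the rules and facts are invented for illustration and do not come from any real legal system.

```python
# Minimal forward-chaining production-rule engine (illustrative only).
# Each rule is (set of premises, conclusion); firing a rule adds its
# conclusion to the working set of facts.

rules = [
    ({"contract", "breach"}, "claim_possible"),
    ({"claim_possible", "within_limitation_period"}, "claim_actionable"),
]

def forward_chain(facts, rules):
    """Repeatedly fire rules whose premises hold until nothing changes."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)  # rule fires; conclusion is now a fact
                changed = True
    return facts

derived = forward_chain({"contract", "breach", "within_limitation_period"}, rules)
```

Every derived fact can be traced back through the chain of rules that produced it, which is exactly the auditability property that rule-based Symbolic AI was known for.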

2. Conventional wisdom, however, remains that building ontologies is a manual process of complex hand-coding.

  • Writing about Palantir’s competitive advantage as a virtual monopoly on future AI, Morgan Stanley made these comments on 21 March 2026:
  • “The source of that growing confidence is Palantir’s Ontology, the technology that sits at the core of everything the company builds.”
  • “Building a high-quality Ontology cannot be automated or purchased off the shelf. It requires deep, organization-specific domain knowledge captured over a lengthy period of hands-on engagement.”
  • These statements reflect a common belief that is now outdated.

3. To develop our OntoBrain neurosymbolic AI solution, we invented technologies for automating the generation of ontologies. Our ontologies do not require hands-on engagement and are of sufficient quality to match a litigation casefile to relevant legislation, case precedents, arguments and judgements. In theory at least, they can go even further to analyse the quality of a court judgement for consideration of appeals.

4. We believe that OntoBrain is a fundamental AI breakthrough that represents a paradigm shift in the practice of law and regulation. Ontologies that are specific to legal subdomains can be automatically generated and utilised for automated human-like reasoning based on logic, with expert-level peer review and/or collaboration.
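
To make the idea of a subdomain ontology concrete: W3C ontologies (RDF/OWL) are at bottom subject-predicate-object triples over which a reasoner computes closures. The fragment below is a toy illustration only; the class names are invented and this is not OntoBrain's actual representation or output.

```python
# Toy ontology fragment as W3C-style subject-predicate-object triples
# (hypothetical class names; a real legal ontology would be far richer).

TRIPLES = {
    ("UnfairDismissal", "subClassOf", "EmploymentLaw"),
    ("EmploymentLaw",   "subClassOf", "Law"),
    ("Fraud",           "subClassOf", "Law"),
}

def ancestors(cls, triples):
    """All superclasses reachable via subClassOf (transitive closure)."""
    out = set()
    frontier = [cls]
    while frontier:
        c = frontier.pop()
        for s, p, o in triples:
            if s == c and p == "subClassOf" and o not in out:
                out.add(o)
                frontier.append(o)
    return out

supers = ancestors("UnfairDismissal", TRIPLES)
```

Classifying a casefile under "UnfairDismissal" thereby also places it under "EmploymentLaw" and "Law", which is the kind of inference that lets an ontology connect a specific matter to the broader legislation and precedent that govern it.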

5. Similarly, OntoBrain can facilitate the creation of self-learning and intelligent Digital Twins that can support human learning, reasoning, and safe online engagement.