fundamental AGI

putting the fun into a fundamental augmented general intelligence framework as funAGI

funAGI is a development branch of easyAGI. easyAGI was not being easy, and the SimpleMind neural network was proving not to be simple. For that reason it was necessary to remove reasoning.py and take easyAGI back to its roots of BDI Socratic reasoning from belief, desire, and intention. This back-to-basics release should be taken as a verbose logging audit of SocraticReasoning and logic, creating fundamental funAGI as a modular point of departure towards a reasoning machine and an autonomous general intelligence framework. funAGI is an exercise in AGI fundamentals. Here, AGI is defined as augmented general intelligence.

Central to the reasoning engine is an understanding of the draw_conclusion(self) method:

def draw_conclusion(self):
    # Bail out early when there are no premises to reason over.
    if not self.premises:
        self.log('No premises available for logic as conclusion.', level='error')
        return "No premises available for logic as conclusion."

    # List each premise on its own line, prefixed with a dash.
    premise_text = "\n".join(f"- {premise}" for premise in self.premises)
    prompt = f"Premises:\n{premise_text}\nConclusion?"

    # Ask the language model (chatter) for a conclusion and log it.
    self.logical_conclusion = self.chatter.generate_response(prompt)
    self.log(f"{self.logical_conclusion}")  # Log the conclusion directly

    # Check the conclusion against the truth tables in logic.py.
    if not self.validate_conclusion():
        self.log('Invalid conclusion. Please revise.', level='error')

    return self.logical_conclusion

The draw_conclusion method in the SocraticReasoning class processes the premises and generates a conclusion. 

Workflow:

    Check for Premises:
        The method begins by checking whether any premises are available (if not self.premises:).
        If no premises are available, it logs an error message (self.log('No premises available for logic as conclusion.', level='error')) and returns the string "No premises available for logic as conclusion.".

    Prepare Premise Text:
        If premises are available, it constructs a string (premise_text) that lists all the premises, each prefixed with a dash (-). This is done using a join operation on the list of premises ("\n".join(f"- {premise}" for premise in self.premises)).

    Formulate the Prompt:
        It then creates a prompt string for the language model by combining the premise text with a query for the conclusion (prompt = f"Premises:\n{premise_text}\nConclusion?").

    Generate the Conclusion:
        The method calls the generate_response method of the chatter object (an instance of a class such as GPT4o, GroqModel, or OllamaModel) with the formulated prompt. This call interacts with an external AI service to generate a conclusion (self.logical_conclusion = self.chatter.generate_response(prompt)).

    Log the Conclusion:
        The generated conclusion is logged directly (self.log(f"{self.logical_conclusion}")).

    Validate the Conclusion:
        It then validates the conclusion using the validate_conclusion method, which checks whether the conclusion is logically valid against the truth tables in logic.py (if not self.validate_conclusion():). A sketch of what such a check might look like follows this list.
        If the conclusion is not valid, it logs an error message (self.log('Invalid conclusion. Please revise.', level='error')).

    Return the Conclusion:
        Finally, the method returns the generated conclusion (return self.logical_conclusion).
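
The validate_conclusion method itself is not shown above. A minimal sketch of how such a truth-table check could look, assuming a LogicTables helper in logic.py (the add_variable and validate_truth names are assumptions for illustration, not the confirmed funAGI API):

from logic import LogicTables  # logic.py ships with funAGI; exact API assumed

def validate_conclusion(self):
    # Hypothetical sketch: load the premises into a LogicTables instance
    # and ask it whether the generated conclusion holds as a valid truth.
    tables = LogicTables()
    for premise in self.premises:
        tables.add_variable(premise)  # assumed method name
    return tables.validate_truth(self.logical_conclusion)  # assumed method name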

+-----------------+
| draw_conclusion |
+-----------------+
          |
          v
+-----------------------------+
| Check if premises are empty |
+-----------------------------+
          |
          v
+--------------------------------------------+
| Construct premise_text by joining premises |
+--------------------------------------------+
          |
          v
+------------------------------------+
| Formulate the prompt with premises |
+------------------------------------+
          |
          v
+--------------------------------------------------+
| Generate response from chatter (external AI API) |
+--------------------------------------------------+
          |
          v
+--------------------+
| Log the conclusion |
+--------------------+
          |
          v
+-----------------------------------+
| Validate the generated conclusion |
+-----------------------------------+
          |
          v
+-------------------------------+
| Return the logical conclusion |
+-------------------------------+

From the input premises:

  • “Premise 1: All humans are mortal.”
  • “Premise 2: Socrates is a human.”

The prompt created would be:

Premises:
- All humans are mortal.
- Socrates is a human.
Conclusion?

The response:

"Socrates is mortal."

Welcome to the funAGI project, which was designed to create a solid fundamental understanding of AGI reasoning from SocraticReasoning and logic. More information about funAGI can be found at:

logic.py truth_tables
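
As a rough illustration of the truth-table idea behind logic.py, here is a self-contained sketch (this is not the funAGI LogicTables implementation; the function names and the eval-based evaluation are illustrative assumptions):

from itertools import product

def truth_table(variables):
    # Yield every combination of True/False assignments for the variables.
    for values in product([True, False], repeat=len(variables)):
        yield dict(zip(variables, values))

def is_tautology(expression, variables):
    # expression is a Python boolean expression over the variable names,
    # e.g. "(not p) or q"; eval is restricted to the row's bindings.
    return all(eval(expression, {"__builtins__": {}}, row)
               for row in truth_table(variables))

print(is_tautology("(not p) or q", ["p", "q"]))  # False: p -> q is not a tautology
print(is_tautology("p or (not p)", ["p"]))       # True: law of excluded middle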

Related articles

Autonomous Generative Intelligence Framework

Autonomous General Intelligence (AGI) framework

As we celebrate the establishment of the easy Autonomous General Intelligence (AGI) framework, it’s essential to appreciate the intricate steps that transform a user’s input into a well-reasoned response. This article provides a verbose detailing of this entire workflow, highlighting each component’s role and interaction. Let’s delve into the journey from user input to the final output. Stage one is nearly complete: reasoning from logic, 1000 versions later. This is the basic framework so far. […]

Learn More
Symbolic Logic

LogicTables Class: Managing Logic and Beliefs

The LogicTables class in logic.py is designed to handle logical expressions, evaluate their truth values, and manage beliefs as valid truths. It integrates with the SimpleMind or similar neural network system to process and use truths effectively. Key Features: Initialization and Logging The LogicTables class initializes with logging configuration to capture debug information: Adding Variables and Expressions Truth tables are generated to evaluate logical expressions: Expressions are evaluated using logical operators: def evaluate_expression(self, expr, values): allowed_operators […]

Learn More

RAGE MASTERMIND with aGLM

RAGE MASTERMIND with aGLM: A Comprehensive Analysis In the rapidly evolving field of artificial intelligence and machine learning, the integration of advanced generative models with autonomous systems has become a focal point for developers and researchers. One such integration is the RAGE MASTERMIND with aGLM (Autonomous General Learning Model), a pioneering approach in AI development. This report delves into the specifics of this integration, exploring its components, functionalities, and potential implications in the broader context […]

Learn More