fundamental AGI

putting the fun into a fundamental augmented general intelligence framework as funAGI

funAGI is a development branch of easyAGI. easyAGI was not proving easy, and the SimpleMind neural network was not proving simple. For that reason it was necessary to remove reasoning.py and take easyAGI back to its roots of BDI Socratic reasoning from belief, desire, and intention. This back-to-basics release should be taken as a verbose logging audit of SocraticReasoning and logic, creating fundamental funAGI as a modular point of departure toward a reasoning machine and an autonomous general intelligence framework. funAGI is an exercise in AGI fundamentals. Here, AGI is defined as augmented general intelligence.

Central to the reasoning engine is an understanding of draw_conclusion(self)

def draw_conclusion(self):
    if not self.premises:
        self.log('No premises available for logic as conclusion.', level='error')
        return "No premises available for logic as conclusion."

    premise_text = "\n".join(f"- {premise}" for premise in self.premises)
    prompt = f"Premises:\n{premise_text}\nConclusion?"

    self.logical_conclusion = self.chatter.generate_response(prompt)
    self.log(f"{self.logical_conclusion}")  # Log the conclusion directly

    if not self.validate_conclusion():
        self.log('Invalid conclusion. Please revise.', level='error')

    return self.logical_conclusion

The draw_conclusion method in the SocraticReasoning class processes the premises and generates a conclusion. 

Workflow:

    Check for Premises:
        The method begins by checking whether any premises are available (if not self.premises:).
        If no premises are available, it logs an error message (self.log('No premises available for logic as conclusion.', level='error')) and returns the string "No premises available for logic as conclusion.".

    Prepare Premise Text:
        If premises are available, it constructs a string (premise_text) that lists all the premises, each prefixed with a dash (-). This is done using a join operation on the list of premises ("\n".join(f"- {premise}" for premise in self.premises)).

    Formulate the Prompt:
        It then creates a prompt string for the language model by combining the premise text with a query for the conclusion (prompt = f"Premises:\n{premise_text}\nConclusion?").

    Generate the Conclusion:
        The method calls the generate_response method of the chatter object (an instance of a class such as GPT4o, GroqModel, or OllamaModel) with the formulated prompt. This method interacts with an external AI service to generate a conclusion (self.logical_conclusion = self.chatter.generate_response(prompt)).

    Log the Conclusion:
        The generated conclusion is logged directly (self.log(f"{self.logical_conclusion}")).

    Validate the Conclusion:
        It then validates the conclusion using the validate_conclusion method. This checks if the conclusion is logically valid using truth tables (if not self.validate_conclusion():).
        If the conclusion is not valid, it logs an error message (self.log('Invalid conclusion. Please revise.', level='error')).

    Return the Conclusion:
        Finally, the method returns the generated conclusion (return self.logical_conclusion).
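The workflow above can be exercised end-to-end with a minimal, self-contained sketch. EchoChatter below is a hypothetical stand-in for a real backend such as GPT4o, GroqModel, or OllamaModel, and the log and validate_conclusion bodies here are simplified placeholders, not the project's actual implementations.

```python
class EchoChatter:
    """Hypothetical stand-in for a chatter backend (GPT4o, GroqModel, OllamaModel)."""

    def generate_response(self, prompt):
        # A real implementation would call an external LLM API with the prompt.
        return "Socrates is mortal."


class SocraticReasoning:
    def __init__(self, chatter):
        self.chatter = chatter
        self.premises = []
        self.logical_conclusion = None

    def log(self, message, level='info'):
        # Simplified: the real class writes to a verbose audit log.
        print(f"[{level}] {message}")

    def add_premise(self, premise):
        self.premises.append(premise)

    def validate_conclusion(self):
        # Placeholder: the real method checks the conclusion with truth tables.
        return bool(self.logical_conclusion)

    def draw_conclusion(self):
        if not self.premises:
            self.log('No premises available for logic as conclusion.', level='error')
            return "No premises available for logic as conclusion."

        premise_text = "\n".join(f"- {premise}" for premise in self.premises)
        prompt = f"Premises:\n{premise_text}\nConclusion?"

        self.logical_conclusion = self.chatter.generate_response(prompt)
        self.log(f"{self.logical_conclusion}")  # Log the conclusion directly

        if not self.validate_conclusion():
            self.log('Invalid conclusion. Please revise.', level='error')

        return self.logical_conclusion


reasoner = SocraticReasoning(EchoChatter())
reasoner.add_premise("All humans are mortal.")
reasoner.add_premise("Socrates is a human.")
print(reasoner.draw_conclusion())
```

Calling draw_conclusion with no premises returns the error string from step 1; with premises added, it builds the prompt, queries the chatter, logs, validates, and returns the conclusion.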
+-----------------+
| draw_conclusion |
+-----------------+
          |
          v
+-----------------------------+
| Check if premises are empty |
+-----------------------------+
          |
          v
+--------------------------------------------+
| Construct premise_text by joining premises |
+--------------------------------------------+
          |
          v
+------------------------------------+
| Formulate the prompt with premises |
+------------------------------------+
          |
          v
+--------------------------------------------------+
| Generate response from chatter (external AI API) |
+--------------------------------------------------+
          |
          v
+--------------------+
| Log the conclusion |
+--------------------+
          |
          v
+-----------------------------------+
| Validate the generated conclusion |
+-----------------------------------+
          |
          v
+-------------------------------+
| Return the logical conclusion |
+-------------------------------+
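The validation step relies on truth tables via logic.py, which is not shown here. The snippet below is a hypothetical sketch of classical truth-table entailment (a conclusion is valid if it holds in every row where all premises hold); the function name and representation are illustrative, not the project's actual API.

```python
from itertools import product


def truth_table_valid(variables, premise_fns, conclusion_fn):
    """Return True if the conclusion holds in every truth-table row
    where all premises hold (classical entailment)."""
    for values in product([False, True], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premise_fns) and not conclusion_fn(env):
            return False  # counterexample row found
    return True


# Modus ponens: (P -> Q) and P entail Q
valid = truth_table_valid(
    ["P", "Q"],
    [lambda e: (not e["P"]) or e["Q"],  # P -> Q
     lambda e: e["P"]],                 # P
    lambda e: e["Q"],                   # Q
)
print(valid)  # True
```

The same checker rejects fallacies: with premises (P -> Q) and Q and conclusion P (affirming the consequent), the row P=False, Q=True is a counterexample and the function returns False.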

From the input premises:

  • “Premise 1: All humans are mortal.”
  • “Premise 2: Socrates is a human.”

The prompt created would be:

  Premises:
  - All humans are mortal.
  - Socrates is a human.
  Conclusion?

The model's response:

"Socrates is mortal."
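The exact prompt string for this example can be reproduced with the same join expression used in draw_conclusion:

```python
premises = ["All humans are mortal.", "Socrates is a human."]
premise_text = "\n".join(f"- {premise}" for premise in premises)
prompt = f"Premises:\n{premise_text}\nConclusion?"
print(prompt)
# Premises:
# - All humans are mortal.
# - Socrates is a human.
# Conclusion?
```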

Welcome to the funAGI project. The funAGI project was designed to create a solid fundamental understanding of AGI reasoning from SocraticReasoning and logic. More information about funAGI can be found at

logic.py truth_tables
