Our Agents
Last updated
The team building our agents has over 15 years of combined experience building artificial intelligence solutions, including traditional methods that have informed our approach such as reinforcement learning and generative adversarial networks (GANs). We have applied these learnings to our core offering, Penseur: an agent that ingests artifacts from a wide variety of sources and builds a knowledge graph from that information. The resulting analyses range from simple tasks such as sentiment analysis to complex behavioral solutions.
Our agents rely on Penseur for knowledge, as they remain largely stateless large language model (LLM) and reinforcement learning (RL) constructs. At the core of our solutions is a knowledge graph representing an analysis of everything Penseur has come across. In some ways, this is modeled on how people store and catalog knowledge themselves: as entities and the relationships between those entities. This enables Penseur to make decisions not just about a particular fact that has arrived in the system, but also about other bits of knowledge it has received over time. Such a graph can, of course, become highly biased if it is not regularly updated from a variety of data sources, so our training data (no, we won't share what we use, just like everybody else) comes from a spectrum of content that we have purchased, licensed, or scraped with the content owners' permission.
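As a rough illustration of the entity-and-relationship model described above, here is a minimal sketch of a knowledge graph in Python. The class and entity names are hypothetical and do not reflect Penseur's actual internals:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Entity:
    """A node in the graph, e.g. a company, person, or product."""
    name: str
    kind: str

@dataclass
class KnowledgeGraph:
    # Edges map a (subject, relation) pair to the set of related entities.
    edges: dict = field(default_factory=dict)

    def relate(self, subject: Entity, relation: str, obj: Entity) -> None:
        self.edges.setdefault((subject, relation), set()).add(obj)

    def objects(self, subject: Entity, relation: str) -> set:
        return self.edges.get((subject, relation), set())

kg = KnowledgeGraph()
acme = Entity("Acme Corp", "organization")
widget = Entity("Widget X", "product")
kg.relate(acme, "manufactures", widget)
print(kg.objects(acme, "manufactures"))
```

Storing edges keyed by (subject, relation) means a new fact can immediately be queried alongside every earlier fact about the same entity, which is the property that lets decisions draw on knowledge accumulated over time.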
If you are interested in donating data so that we can incorporate it into our systems, please let us know. We are considering mechanisms to properly compensate those who donate data to us.
Agentic workflows are places where agents (which may be single-use or multi-use) come together and share data with one another to accomplish some larger goal. The agent with the primary goal is referred to as the "orchestrator" and is responsible for gathering the other agents and tools necessary to construct a series of tasks (run by other agents) that will yield a result.
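The orchestrator pattern above can be sketched in a few lines of Python. The agent names, plan steps, and shared-context shape here are illustrative assumptions, not our actual workflow API:

```python
from typing import Callable, Dict, List

# An agent is modeled as a function that reads shared context and
# returns new facts to merge back in. Names below are hypothetical.
Agent = Callable[[dict], dict]

def orchestrate(goal: dict, plan: List[str], agents: Dict[str, Agent]) -> dict:
    """Run the plan's tasks in order, threading shared context between agents."""
    context = dict(goal)
    for task in plan:
        context.update(agents[task](context))
    return context

agents = {
    "fetch": lambda ctx: {"text": f"raw data about {ctx['topic']}"},
    "analyze": lambda ctx: {"sentiment": "positive" if "data" in ctx["text"] else "neutral"},
}
result = orchestrate({"topic": "widgets"}, ["fetch", "analyze"], agents)
print(result["sentiment"])  # positive
```

The key design choice is that agents communicate only through the shared context the orchestrator threads between them, so individual agents stay stateless and reusable across workflows.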
We use a variety of attention methods to prompt or schedule the agents to run within our workflows and access the tools that we build and publish to deliver results to users. These outputs aren't necessarily in the form of something that fits into a chatbot. In fact, we find that the chatbot interface is inherently limiting for agentic workflows as it requires an active conversation with a user in order to function. We do not believe that scalable enterprise systems will be able to function in this manner.
Internally, we use Penseur itself to identify when it should make a request for content and then, within a sandbox, allow it to make queries into the real world to get up-to-date information. It is not possible for Penseur to "escape the sandbox" or do any of the other scary Lawnmower Man stuff that people suggest is possible. At best it can make changes to its sandbox or call existing web APIs that have been whitelisted for its use. Since all outbound calls from the sandbox go through a whitelisted request firewall and are logged, we maintain a full understanding of what Penseur is calling, have logs that describe the reason it is doing so, and do not provide any mechanisms for it to create new constructs that might bypass our control (i.e., there are no dev or build tools within the sandbox).
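The whitelisted, logged egress check can be sketched as follows. The hostnames, logger name, and function are hypothetical placeholders, a minimal sketch of the pattern rather than our actual firewall:

```python
import logging
from urllib.parse import urlparse

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("sandbox-egress")

# Hypothetical allowlist; a real deployment would load this from config.
ALLOWED_HOSTS = {"api.example.com", "data.example.org"}

def check_outbound(url: str, reason: str) -> bool:
    """Allow the call only if the host is whitelisted, and log every
    attempt together with the stated reason for making it."""
    host = urlparse(url).hostname
    allowed = host in ALLOWED_HOSTS
    log.info("outbound host=%s allowed=%s reason=%s", host, allowed, reason)
    return allowed

check_outbound("https://api.example.com/v1/news", "refresh stale entity")   # True
check_outbound("https://evil.example.net/exfil", "unexpected destination")  # False
```

Because every attempt is logged with its reason regardless of outcome, the audit trail captures not only what was called but why, which is what makes the "full understanding" claim above verifiable.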