How is SELF different from other AI?

AI platforms are commonly based on Large Language Models (LLMs) that can comprehend and generate human language. They work by analyzing massive datasets of text. Models such as ChatGPT, Bard and Llama are trained on huge crawls of the web, using large networks of AI chips to improve their capabilities with each new training run. When a user submits a query, the LLM draws on the information it was trained on, up to its last update.

SELF works differently: it generates a Personal Language Model for each individual, which remains under that user’s control and instruction. When responding to a query, SELF can draw on a number of sources, including search platforms, databases and LLMs. However, no personally identifiable data is used or shared in the process.
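As a rough illustration of that privacy boundary, here is a minimal sketch, assuming hypothetical field names and a hypothetical anonymize_profile helper that are ours, not SELF’s: identifying fields stay local, and only anonymized preference signals are ever included in an outbound request.

```python
# Hypothetical illustration: these field names and this helper are our own
# assumptions for the example, not SELF's actual code.

PII_FIELDS = {"name", "email", "phone", "address", "user_id"}

def anonymize_profile(profile: dict) -> dict:
    """Return a copy of the profile with directly identifying fields removed."""
    return {key: value for key, value in profile.items() if key not in PII_FIELDS}

profile = {
    "name": "Alice Example",        # identifying: stays local, never sent
    "email": "alice@example.com",   # identifying: stays local, never sent
    "dietary": "vegetarian",        # anonymized preference signal
    "price_sensitivity": "medium",  # anonymized preference signal
}

outbound = anonymize_profile(profile)
assert "email" not in outbound  # only preference signals leave the device
print(outbound)  # {'dietary': 'vegetarian', 'price_sensitivity': 'medium'}
```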

A user query triggers SELF to compile a range of anonymized preference information. SELF takes the specific wording of the query, combines it with that preference information, and sends requests to the other services as required. One or more services return results, SELF processes them through the user’s preference filters, and once quality and relevance checks are complete, it presents the user with a hyper-personalized result.
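The flow described above can be pictured as a small pipeline. The sketch below is illustrative only: the stub services, the scoring helpers and the 0.5 quality threshold are assumptions we made for the example, not SELF’s actual API.

```python
# A minimal sketch of the request flow: combine the query with anonymized
# signals, fan out to external services, filter, quality-check, and rank.
# Every name here is a stand-in, not SELF's real implementation.

def search_service(request: dict) -> list[dict]:
    """Stand-in for an external search platform."""
    return [{"text": f"search hit for {request['query']!r}", "score": 0.8}]

def llm_service(request: dict) -> list[dict]:
    """Stand-in for an external LLM, called only when required."""
    return [{"text": f"LLM answer for {request['query']!r}", "score": 0.6}]

def matches_preferences(result: dict, preferences: dict) -> bool:
    """Stand-in preference filter; real logic would use the user's signals."""
    return True

def passes_quality_check(result: dict) -> bool:
    """Stand-in quality/relevance gate with an illustrative threshold."""
    return result["score"] >= 0.5

def handle_query(query: str, preferences: dict) -> list[str]:
    # 1. Combine the query wording with anonymized preference signals.
    request = {"query": query, "signals": preferences}

    # 2. Fan out to one or more external services as required.
    raw = [r for service in (search_service, llm_service) for r in service(request)]

    # 3. Apply the user's preference filters and the quality checks.
    kept = [r for r in raw
            if matches_preferences(r, preferences) and passes_quality_check(r)]

    # 4. Return the hyper-personalized results, best first.
    kept.sort(key=lambda r: r["score"], reverse=True)
    return [r["text"] for r in kept]

print(handle_query("best hiking boots", {"budget": "mid-range"}))
```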

SELF manages its resources efficiently, using only the compute power necessary to answer each query, which keeps usage costs to a minimum. It can also take advantage of similar optimizations within the larger LLMs, reducing those costs further. In many cases, SELF draws information directly from a large number of free resources instead of going through an LLM at all.
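One way to picture this cost discipline is a simple router that tries free resources first and falls back to a metered LLM only when necessary. The source names and stand-in functions below are assumptions for illustration, not SELF’s actual accounting model.

```python
# Illustrative cost-aware routing: free sources first, metered LLM last.
# All names and behavior here are hypothetical.

def free_lookup(source: str, query: str) -> str | None:
    """Stand-in for a free resource; returns None when it has no answer."""
    return f"{source} answer for {query!r}" if source == "open_database" else None

def llm_lookup(query: str) -> str:
    """Stand-in for a metered LLM call, used only as a fallback."""
    return f"LLM answer for {query!r}"

def route_query(query: str) -> str:
    # Try free resources first: they add no usage cost.
    for source in ("public_search", "open_database"):
        answer = free_lookup(source, query)
        if answer is not None:
            return answer
    # Spend LLM compute only when no free resource can answer.
    return llm_lookup(query)

print(route_query("weekend weather"))  # served by the free database stub
```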

Over time, SELF will gradually transition to training on its own AI chip infrastructure. In the initial stages, SELF specializes mainly in being easy to interact with, providing assistance and recommendations, and summarizing product/service reviews. Later, it will add productivity and life optimization to the mix. During these stages it is advantageous to piggyback on existing infrastructure for information retrieval. Technology keeps advancing at an exponential rate, which is likely to reduce compute power requirements, and we are committed to continually updating SELF so that it runs on the fastest yet most secure infrastructure.

During the later stages of its development, SELF will add education and the sharing of information and news to its mix of specialties, and at that point it will be vitally important to run those algorithms on SELF’s own AI chip infrastructure. The reasons are many, not least to avoid the bias, censorship and sophisticated information weaponization that we believe are inevitable among 'Big Tech' LLMs. As far as we can tell, these prioritize what is technically possible over ethical considerations; in other words, they are developed with a scientific focus rather than a human-first ethos.

SELF won’t initially specialize in writing and proofreading advanced texts, writing and editing code, or generating images and videos. To provide the best possible experience in the areas SELF does specialize in, these capabilities are a lower priority during the initial period. SELF will certainly have some capability in these areas, but it won’t match larger LLMs that focus more on these specific abilities and less on what we believe is the ultimate utility of AI: hyper-personalization.


© Next Ideas SEZC 2024