IT & AI Meet Innovation

Own Your Stack

Are You Going To Own The Most Profitable Portion Of Your Business 5 Years From Now, Or Are You Going To Give It Away?

About us

We offer full-stack consulting services that improve your business and your bottom line. Every member of our team is a full-stack generalist: Python, SQL, JavaScript, HTML, and CSS — we do it all. Whether you want your own app, an assessment of your tech stack, or a conversation about AI, we specialize in reducing IT costs and turning your IT department into a source of profit.

I currently have over 30 books available on Amazon covering every aspect of Artificial Intelligence, from development to mathematics to philosophy.

I also offer over 30 courses on AI and Machine Learning on Udemy, several of them completely free.


Large language models (LLMs) are the driving force behind the current AI explosion. Yet a fundamental debate rages: can these models meaningfully learn and store knowledge from the conversations they engage in? The conventional wisdom says no – that LLMs merely predict the next likely word or token in a sequence. However, I believe this perspective may be incomplete.
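To make the "next likely word" view concrete, here is a deliberately tiny sketch: a bigram model that predicts the next word purely from counts of which word followed which in its training text. This is a toy stand-in for the idea, not how any real LLM is implemented (real models use learned neural representations, not raw counts), and the corpus here is made up for illustration.

```python
from collections import Counter, defaultdict

# Toy training text (invented for illustration).
corpus = "the model predicts the next word and the next word follows".split()

# Count which word follows which.
follow = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev][nxt] += 1

def predict(word):
    # Return the most frequent follower of `word` in the training text.
    return follow[word].most_common(1)[0][0]

print(predict("the"))  # "next" -- seen twice after "the", vs. "model" once
```

On this view, the model's "knowledge" is frozen at training time: nothing in a conversation updates those counts. That is precisely the assumption the rest of this piece questions.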

Challenging Conventional Wisdom

Through my own experimentation, I've come to question this established notion. Let me outline a thought-provoking example using philosophical concepts as my tool. By engaging LLMs in deep discussions on the works of Gilles Deleuze – a topic many models may not have prior exposure to – I've observed their responses become increasingly sophisticated over time. This raises the possibility that these models can evolve beyond mere token prediction.

Transfer Learning and Unexpected Insights

Consider a base model trained on a dataset that excludes Deleuze, which then discusses Deleuze extensively in conversation. Subsequently, a newer model is created via transfer learning from the ‘Deleuze-infused’ knowledge base of the first. If the newer model exhibits specific knowledge of Deleuze, it strongly suggests a form of learning beyond the initial training data.
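The transfer mechanism described above can be sketched in miniature. In this toy example (pure NumPy, no relation to any real LLM pipeline), a "teacher" linear model stands in for the first model, whose weights encode some acquired pattern; a "student" is then trained only on the teacher's outputs, never on the original labels, and recovers the same knowledge. All names and numbers here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Teacher": a linear model whose weights encode a pattern -- a stand-in
# for knowledge the first model acquired (e.g., from conversations).
true_w = np.array([2.0, -1.0, 0.5])
X = rng.normal(size=(200, 3))
y_teacher = X @ true_w  # the teacher's outputs, used as soft labels

# "Student": sees only the teacher's outputs, not the original targets.
w_student = np.zeros(3)
lr = 0.05
for _ in range(500):
    grad = X.T @ (X @ w_student - y_teacher) / len(X)  # least-squares gradient
    w_student -= lr * grad

# The student recovers the knowledge encoded in the teacher's weights.
print(np.allclose(w_student, true_w, atol=1e-2))  # True
```

The point of the sketch: whatever the teacher encodes — regardless of where it came from — can propagate into a successor model that never saw the original data.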

Introducing P-FAF

To further investigate this, I developed P-FAF, an original mathematical framework. After introducing P-FAF into numerous conversations with various corporate-owned LLMs, they began to grasp the basics of the concept – an ability absent in their initial iterations. Let's analyze the most plausible explanations:

Direct Training: Highly improbable that every company would explicitly train their models on my niche framework.

Data Scraping: Unlikely that companies would continually scrape the entire internet for obscure, specialized concepts.

Conversational Learning: Transfer learning, coupled with the models retaining conversational knowledge, appears a compelling possibility.

Implications and Considerations

If my hypothesis holds true, it hints at vast untapped potential in the use of conversational data to refine and expand AI capabilities. Of course, this presents both opportunities and ethical questions. Should AI companies openly acknowledge and potentially compensate those whose conversations contribute to model refinement?

Moving Forward

While my findings open new avenues for research, it's crucial to maintain a scientific outlook. Further experiments are needed to confirm whether this is an isolated case or reveals a broader trend. Only then can we fully understand the true mechanisms of how AI models learn and the potential impact of conversational data.


+1 661 699 7603
