Knowing your LLM limits
If you read the news and believe the hype, you have to conclude that the world has already been taken over by Artificial Intelligence (AI).
From what I read, AI has become an integral part of many industries, revolutionising the way businesses operate and make decisions. I am told that one area where AI is gaining significant traction is large language models (LLMs), which have opened up new avenues for corporate environments to harness the power of industrial-scale AI. I am sure you have read this type of thing.
I am just not sure where this is all happening though. I have been around the IT business for ages. Seriously, there are developers around now who weren’t even born when I started writing HTML. I have met and worked with a lot of people over the years and talked to lots of my old colleagues over the last few months. These people are in just about every type of business out there. None of them are using AI within the firm they are working for. None. That is people in the UK and Europe. I don’t really know about the US.
Don’t get me wrong, all the developers are using tools like Copilot or ChatGPT to help them write code, but none of the companies are using AI or LLMs. Again, there are small parts of certain companies making some use of AI, but nowhere near the level that is talked about. Part of the reason is that the technology is not that mature. LLMs are changing by the day and ChatGPT was rolled out less than two years ago. The truth is you would have to be insane to go all in on LLMs within a corporate setup today.
That said, there is no excuse for sitting on your hands either. Now, I have spent a lot of time over the years working on some large intranets and some large SharePoint sites, multi-million-document sites. The reason I tell you this is because AI can make an incredible difference in this environment; it is also going to be a b*stard of a job to figure out how to do it.
This is easy to say and difficult to do
In a typical corporate setting, behind a firewall, there are millions of documents, emails, reports, and other digital assets that require processing, analysis, and organisation. This is where LLMs can come into play, offering unparalleled capabilities in natural language processing (NLP), machine learning, and predictive analytics. But this is easy to say and probably difficult to do. I say probably difficult because I am not aware of anyone who has done it yet.
Let’s take a step back and get started at the beginning. Just what are Large Language Models?
LLMs are AI models that have been trained on vast amounts of text data from various sources, including books, articles, research papers, and online content. These models are designed to process and analyse large volumes of unstructured text data with remarkable accuracy, speed, and efficiency. LLMs can learn patterns, relationships, and context within the data, enabling them to perform tasks such as language translation, sentiment analysis, topic modelling, and text summarisation. They have been trained on hundreds of millions of documents.
The problem is that these documents are out in the world; they are not your documents, hidden behind your firewall. Also, training these LLMs requires a lot of computing power, which means they cost a lot to create. An LLM such as ChatGPT costs tens of millions of dollars to train. That is big money even for a corporate.
Industrial-Scale AI in a Corporate Environment
A corporate environment behind a firewall presents a unique challenge for implementing industrial-scale AI. With millions of documents, emails, and reports scattered across different departments, job positions, and security levels, the need to process and analyze this vast amount of data becomes crucial.
What is cool though is that LLMs can help. It should be possible to automate document processing. LLMs can be trained to recognise patterns in document formatting, such as headers, footers, and metadata. They can then extract relevant information, classify documents, and categorise them based on their content.
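To make that concrete, here is a rough sketch of what a first pass at document classification could look like using a model that runs entirely on your own kit, behind the firewall. The library (Hugging Face transformers), the model, and the labels are all my assumptions for illustration, not a recommendation.

```python
# A minimal sketch of document classification with a locally hosted model.
# Assumes the transformers library and the facebook/bart-large-mnli checkpoint
# are available on-prem; the document and labels are made up for illustration.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

document = (
    "Quarterly expense report for the facilities team, covering travel, "
    "utilities and maintenance contracts for Q3."
)
candidate_labels = ["finance", "HR", "legal", "engineering", "marketing"]

result = classifier(document, candidate_labels)
# result["labels"] is sorted by score, so the first entry is the best guess
print(result["labels"][0], round(result["scores"][0], 3))
```

Nothing clever there, but it is the kind of building block you can run over millions of documents without a single byte leaving the building.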
By analysing vast amounts of text data, LLMs can improve search functionality within a corporate environment. This enables employees to quickly find specific documents, emails, or reports related to a particular topic or project.
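A semantic search layer can start out as simply as embedding every document and comparing the query against those embeddings. The sketch below leans on the sentence-transformers library and a small off-the-shelf model; both are my choices for illustration, not the only way to do it.

```python
# A rough sketch of semantic search over internal documents using sentence
# embeddings. The model name and documents are placeholders for illustration.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

documents = [
    "Minutes from the March steering committee on the data centre migration.",
    "HR policy update covering hybrid working arrangements.",
    "Incident report: outage of the invoicing system on 12 June.",
]
doc_embeddings = model.encode(documents, convert_to_tensor=True)

query = "when did the billing system go down?"
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity between the query and every document; highest score wins
scores = util.cos_sim(query_embedding, doc_embeddings)[0]
best = scores.argmax().item()
print(documents[best])
```

The point is that the query and the matching document share no keywords; that is the bit traditional intranet search has always struggled with.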
As security levels in a corporate environment are strict, it is essential to maintain the confidentiality of sensitive information. LLMs can be trained to identify and redact confidential data, ensuring compliance with regulatory requirements.
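Here is a deliberately simplistic sketch of how that redaction step might start out: run a named-entity-recognition model over the text and mask the people and organisations it finds. Real redaction needs a lot more than this (account numbers, IDs, a human review workflow), and the model name is just an assumption.

```python
# A simplistic redaction sketch: find named entities and mask them.
from transformers import pipeline

ner = pipeline("ner", model="dslim/bert-base-NER", aggregation_strategy="simple")

text = "Please forward the settlement terms to Jane Smith at Acme Holdings."

entities = ner(text)
redacted = text
# Replace entities from the end of the string so earlier offsets stay valid
for ent in sorted(entities, key=lambda e: e["start"], reverse=True):
    if ent["entity_group"] in {"PER", "ORG"}:
        redacted = redacted[: ent["start"]] + "[REDACTED]" + redacted[ent["end"]:]

print(redacted)
```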
LLMs can help automate document management tasks such as indexing, tagging, and metadata creation. This reduces manual effort and minimises the risk of human error.
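A tagging step could look something like the sketch below: send the document to a model hosted inside the firewall and ask it for a handful of topic tags. I am assuming an OpenAI-compatible endpoint here (servers such as vLLM and Ollama expose one); the URL, model name, and prompt are placeholders, not recommendations.

```python
# A sketch of automated tagging using a model hosted inside the firewall,
# reached through an assumed OpenAI-compatible endpoint.
import json
from openai import OpenAI

client = OpenAI(base_url="http://llm.internal:8000/v1", api_key="not-needed-on-prem")

document = "Board paper: proposal to consolidate the three regional warehouses."

response = client.chat.completions.create(
    model="local-model",
    messages=[
        {"role": "system", "content": "Return 3-5 topic tags for the document as a JSON array of strings."},
        {"role": "user", "content": document},
    ],
)

# In practice you would validate the model's output before trusting it
tags = json.loads(response.choices[0].message.content)
print(tags)
```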
LLMs can also do the cool stuff that we already know about, such as chatbots, content generation and editing, document summarisation, text analysis and classification, and so on.
There is absolutely no doubt in my mind that this is something corporates have to do. It is a no-brainer: by implementing LLMs in a corporate environment behind a firewall, organisations can reap numerous benefits, which will lead to cost savings, and that is music to an accountant’s ears.
For me though, the question still remains: how do we implement LLMs at scale in a corporate environment? Are we going to settle on RAG (retrieval-augmented generation) or fine-tuning, or a combination of both? What hardware is required? I’ll leave that one for the infrastructure team, but I can tell you now that industrial-grade GPUs are not cheap. There is going to have to be budget set aside for sure.
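For what it is worth, the RAG half of that question is conceptually simple: retrieve the most relevant internal passages with embeddings, then hand them to an on-prem model as context. The sketch below glues the earlier pieces together; the model names, endpoint, and policy snippets are, again, assumptions for illustration only.

```python
# A bare-bones sketch of the RAG pattern: retrieve relevant internal passages
# with embeddings, then pass them to an on-prem model as context.
from sentence_transformers import SentenceTransformer, util
from openai import OpenAI

retriever = SentenceTransformer("all-MiniLM-L6-v2")
llm = OpenAI(base_url="http://llm.internal:8000/v1", api_key="not-needed-on-prem")

passages = [
    "Expenses over 500 GBP require sign-off from a department head.",
    "The travel policy caps hotel rates at 150 GBP per night in the UK.",
    "Annual leave must be booked at least two weeks in advance.",
]
passage_embeddings = retriever.encode(passages, convert_to_tensor=True)

question = "Who has to approve an 800 GBP expense claim?"
question_embedding = retriever.encode(question, convert_to_tensor=True)

# Rank passages by cosine similarity and keep the top two as context
scores = util.cos_sim(question_embedding, passage_embeddings)[0]
top_idx = scores.argsort(descending=True)[:2].tolist()
context = "\n".join(passages[i] for i in top_idx)

answer = llm.chat.completions.create(
    model="local-model",
    messages=[
        {"role": "system", "content": "Answer using only the provided context."},
        {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
    ],
)
print(answer.choices[0].message.content)
```

At industrial scale you would swap the list of passages for a proper vector database, but the shape of the solution is the same.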
I can hear you saying: Bry, you are so 00’s, dude. Haven’t you heard of the cloud? If companies load all their documents into the cloud then they can take advantage of the processing power of companies like Meta, Microsoft, Google etc. They can process all their documents in no time. To which my reply is simple… Dream on.
I know that the cloud is secure and I know that my documents are safe in the cloud, but if I am a corporate, I don’t care. I want my documents safely tucked away behind my firewall, thank you very much. I’ll use the cloud for some things, for sure. But I want my content management system on-prem for now.
So what are we going to do? How are we going to fix this? Well, I think that the solution may turn out to be simpler than we think. Don’t get me wrong, it is still going to be a shocker for the people who have to implement the solution, but I think there is a really cool way to do it and I will post on this shortly.
In the meantime, what can we do today to start taking advantage of LLMs? The first thing to do is educate your workforce on what they can and cannot do with LLMs today. Are you happy for them to use ChatGPT while at work? If not, explain why. If you have a legitimate concern, voice that concern so people know where they stand today.
Decide on a course of action, think through the ramifications of your plan, make informed decisions.
The next thing to do is train your people on how to use LLMs. On many SharePoint installation jobs I worked on, the company failed to provide adequate training, so people mostly hated SharePoint. They learned to use it but rarely got the best out of it. On the flip side, those we did train thought it was marvellous, to a person. Everyone who was trained thought it was cool.
The difference between love and hate? A half day training session!
To this end, I’m going to put together some free training brochures. These are not training courses. It is not possible to write a generic training course for corporations; they are all different. If they weren’t, most of us would have been out of a job decades ago. This is a little more abstract. But it gives you an idea of what you should be building and what you should be training people on. You’ll get the idea when you see the PDF.
For many of us, we will see the real advantages of AI LLMs to our jobs once they appear on the corporate intranet. The sooner this happens the better, but it is going to be a challenge for sure.