The Boost
When Prompting Isn’t Enough: How To Fine Tune AI Models
Learn how to fine-tune GPT models like GPT-3.5 for better results
Hi Friends 👋
There’s never a dull moment in AI.
But this week’s drama is all about people. More on that in the news section.
Also in this week’s email:
When prompting isn’t enough: how to fine-tune AI models
OpenAI: It’s not about AI anymore
Try out different LLMs quickly using Poe
Let’s get started…
💬 OpenAI: It’s not about AI anymore…
Sam Altman was removed as CEO of OpenAI by the board. Then Greg Brockman and many other senior team members left to join Sam at Microsoft. It was even said that 90% of the company would quit if Sam didn't return. There are a lot of rumours going around, but I don't want to contribute to the speculation.
However, I believe there is a more important story than just the AI aspect.
I have always focused on how AI can greatly impact business growth and creativity.
In this situation, Sam Altman, leader of the most influential AI company of our time, shows us something even more important: a company's most valuable asset is its people.
Microsoft’s recent history since Satya Nadella took over as CEO backs this up…
Before Satya Nadella’s tenure, Microsoft was Silicon Valley's outcast. Its products were used more out of necessity than preference.
But Nadella transformed Microsoft.
Now, Silicon Valley's elite, such as Sam Altman, are happy to join Microsoft, even though it's a $2.7 trillion corporate giant.
People are still the most important thing.
Whether at Microsoft, OpenAI, or a small ten-person company, the leader sets the tone for ambition, standards, communication, training, strategy, leadership, the list goes on.
Will your team follow you into the fire?
🚀 Testing Other LLMs
Adam D’Angelo is one of the board members responsible for removing Sam Altman. He was also CTO at Facebook before founding Quora. Quora recently launched its AI chatbot platform, Poe. Poe lets you try out many different LLMs, including GPT, Google’s PaLM, and open-source models like Llama. It also has custom bots, which have been around for a while and look a lot like OpenAI’s GPTs [Link]
When Prompting Isn’t Enough: How To Fine Tune AI Models
Learn how to fine-tune GPT models like GPT-3.5
Why you should care: Do you sometimes find you just can’t prompt your way to the right result, even though you feel ChatGPT should be able to deliver it? Fine-tuning can solve this, and it’s easy to create a fine-tuned model without any technical experience.
What you need to know: When using ChatGPT, you tell it what you want. But with fine-tuning, you teach the AI what you want by showing it. You can create a new model specific to your use case by fine-tuning an LLM like GPT-3.5 and feeding it examples: input and output pairs of data.
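Concretely, here’s what those input/output pairs can look like. This is a minimal sketch using the JSONL chat format OpenAI’s fine-tuning accepts; the support-email classifier, its labels, and the example texts are all hypothetical, purely for illustration.

```python
import json

# Hypothetical dataset: teaching the model to categorise support emails.
# Each example pairs an input (the user message) with the exact output
# we want back (the assistant message).
EXAMPLES = [
    ("My invoice is wrong, I was charged twice.", "billing"),
    ("The app crashes every time I open settings.", "bug"),
    ("How do I export my data to CSV?", "how-to"),
]

SYSTEM = "Classify the support email as one of: billing, bug, how-to."

def to_jsonl(examples):
    """Serialise (input, output) pairs into fine-tuning JSONL lines."""
    lines = []
    for user_text, label in examples:
        record = {
            "messages": [
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": user_text},
                {"role": "assistant", "content": label},
            ]
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

if __name__ == "__main__":
    # This file is what you'd upload to start a fine-tuning job.
    with open("train.jsonl", "w") as f:
        f.write(to_jsonl(EXAMPLES))
```

Real training sets need far more than three examples, but the shape is the same: one JSON object per line, each showing the model an input and the output you want it to learn.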
Aug. 19, 2008. Beijing Olympics
American hurdler Lolo Jones was leading the 100-meter hurdle race.
Right until the second-to-last hurdle.
But as she jumped, she clipped the crossbar.
A split second later, the race was over, with Lolo finishing in seventh place.
Dropping to her knees in disbelief, Lolo symbolised the pain of broken Olympic dreams.
But that didn't stop Lolo.
Lolo sought new challenges, setting her sights on the Winter Olympics bobsled team.
Lolo was already an athletic machine, but bobsledding required different training.
Bobsled training is a regime of repetition:
pushing the sled over and over,
drilling the coordination with the team,
and memorising each curve of the bobsled track.
This relentless, repetitive training was Lolo’s pathway to success. In 2013 in St. Moritz, she became the bobsled world champion.
Lolo had fine-tuned her mind and body to that of a bobsled champion.
We all know that ChatGPT is the most athletic LLM out there.
And with good prompting, ChatGPT produces amazing results.
But sometimes, we just can’t tame the athletic ChatGPT beast. It goes off on a tangent, too powerful to steer.
Fine-tuning LLMs like ChatGPT works exactly the same way as Lolo’s repetitive training.
We fine-tune by giving examples, inputs and outputs, and the model is retrained on that data.
Training is easier than prompt engineering as all we have to do is give it our data.
The retrained model gets smart at doing a specific task, like categorising data or writing in a certain style.
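Once a fine-tuning job finishes, you get back a model ID (they start with `ft:`) and call it like any other model. Because the task was learned from your examples, the prompt can be just the raw input. A minimal sketch, assuming the `openai` Python package and a hypothetical fine-tuned model ID and classifier:

```python
# Hypothetical ID returned by an OpenAI fine-tuning job once it completes.
FT_MODEL = "ft:gpt-3.5-turbo:my-org::example123"

def build_request(email_text):
    """Build the chat-completion request for the fine-tuned classifier.

    The category scheme was learned during fine-tuning, so the prompt
    is just the raw input — no lengthy instructions needed.
    """
    return {
        "model": FT_MODEL,
        "messages": [{"role": "user", "content": email_text}],
        "temperature": 0,  # deterministic output suits classification
    }

def classify(client, email_text):
    """Send the request via an OpenAI-style client and return the label."""
    resp = client.chat.completions.create(**build_request(email_text))
    return resp.choices[0].message.content

if __name__ == "__main__":
    import os
    if os.environ.get("OPENAI_API_KEY"):
        from openai import OpenAI  # assumes the openai package is installed
        print(classify(OpenAI(), "I was charged twice this month."))
```

Compare that one-line prompt with the paragraph of instructions and examples you’d otherwise paste into ChatGPT every time.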
The big challenge is this: Fine-tuning requires programming skills.
That is, until you find Entrypoint.ai
Entrypoint is the no-code solution that allows anyone to fine-tune LLMs like ChatGPT.
Just like OpenAI’s new GPTs mean anyone can create a custom GPT, Entrypoint means anyone can create their own fine-tuned model.
I was lucky enough to interview Mark and Miha, the founders of Entrypoint.ai, to help us learn how.
In this video interview, you’ll learn:
What is fine-tuning
Where fine-tuning beats GPTs
Use cases for fine-tuning
How to create your own blog writing AI
Long ChatGPT prompts tend to miss the middle and focus on the beginning and end
With ChatGPT you tell it what you want
With fine-tuning, you show it what you want through examples: inputs and outputs
You can then access your own unique model to run very specific tasks
The outputs of these models are more consistent and cheaper than models like GPT-4
Entrypoint is a no-code solution to create fine-tuned models
In a crazy week for AI, why not stop reading the news and learn to make your own fine-tuned model, built specifically for the task at hand?
That’s a wrap - how was today's newsletter?
Help me deliver more value to you
If you enjoyed today's issue, please do reply (it helps with deliverability). If you didn't, you can unsubscribe at the bottom.
If someone forwarded you this email, you can subscribe here.
Thanks for reading,