When the Research team at GPT-Research published CamelGPT, we knew we had something special. CamelGPT-mini was a groundbreaking model that achieved state-of-the-art performance on a variety of tasks, including text generation, summarization, and question answering, all while remaining under 80k parameters.
The whole model fits in just 72MB, small enough to deploy on a variety of devices, including mobile phones, smartwatches, and even IoT hardware.
However, we knew that CamelGPT alone was not enough. We wanted to make it easy for developers and researchers to use CamelGPT in their own projects, so we published CamelGPT on The Hub and created Converters, a framework for running models on consumer devices.
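To give a flavor of what on-device deployment with Converters looks like, here is a minimal sketch. The package name, function names, and model identifier below are illustrative assumptions, not the actual Converters interface; the framework's documentation is the authoritative reference.

```python
# Hypothetical sketch of running CamelGPT-mini on-device with Converters.
# The import path, load_model(), generate(), and the model identifier are
# all assumptions made for illustration.
from converters import load_model

# Pull the 72MB CamelGPT-mini checkpoint from The Hub (identifier assumed).
model = load_model("gpt-research/camelgpt-mini")

# Run a prompt locally on the device.
print(model.generate("Summarize: Camels store fat, not water, in their humps."))
```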
We thought these tools would be enough to make CamelGPT accessible to the community, but we were wrong. The feedback we received made it clear that we needed to do more.
Today, we are excited to announce our new API, which will change the way you interact with our models. The API is built on top of CamelGPT-mini-B, a high-performance, fine-tuned version of CamelGPT-mini. CamelGPT-mini-B outperforms CamelGPT-mini across all benchmarks, and it is available on our API today.
We decided not to release CamelGPT-mini-B on The Hub due to safety concerns. As a compromise, we are making CamelGPT-mini-B available through our API, so you can use it in your own projects without having to host the model yourself.
And the best part? Our API is completely free to use. We believe that AI should be accessible to everyone, and we are committed to making that a reality.
You may be wondering how we manage to offer our API for free. Well, we have a few tricks up our sleeves.
First, CamelGPT models are extremely efficient by design. Because they are built to run on consumer devices, they are optimized for speed and memory usage, which means we can serve them from our own infrastructure without worrying about performance.
Second, we have a lot of experience running large-scale AI models. We have been serving CamelGPT models for a while now, so we know how to optimize them for performance and cost.
Finally, we worked with our friends on the Research team to optimize Converters, achieving a 100x speedup on our custom hardware. With that speedup, the best place to run CamelGPT is our API.
Getting started with our API is easy. Just head over to our documentation and follow the instructions.
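For a sense of what a request might look like, here is a minimal sketch using Python's requests library. The endpoint URL, model identifier, and request fields are assumptions chosen to follow common REST conventions; the documentation has the authoritative details.

```python
# Minimal sketch of calling the CamelGPT API over HTTP.
# The endpoint URL, request fields, and response shape are assumptions,
# not the documented interface.
import requests

API_URL = "https://api.gpt-research.org/v1/generate"  # hypothetical endpoint

response = requests.post(
    API_URL,
    json={
        "model": "camelgpt-mini-b",            # assumed model identifier
        "prompt": "Write a haiku about camels.",
        "max_tokens": 64,                      # assumed parameter name
    },
    timeout=30,
)
response.raise_for_status()
print(response.json())  # the actual response schema is defined in the docs
```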
If you have any questions or feedback, feel free to reach out to us at team@gpt-research.org.