The SDK that enables local AI execution
Let your users opt in to privacy while you save on infrastructure and inference costs.
Why Offload?
Many people are concerned about data privacy when using AI features, as these typically send their data to third-party inference APIs.
With the Offload SDK, your users can opt in to local AI execution, with no extra effort on your part.
This increases user privacy and also reduces your infrastructure and inference costs, since a significant amount of computation happens directly on the user's device.
The Offload widget
When you integrate Offload, our widget automatically appears for users whose devices have enough resources to perform inference locally.
Easy to add to any project
Offload replaces any SDK you are currently using - just change the inference calls.
AI tasks are processed on the user's device when possible, with automatic fallback to any API you configure in the dashboard.
How to install
<!-- Include the Offload library on your app -->
<script src="//unpkg.com/offload-ai" defer></script>
Simply add the library, either via the CDN script above or by importing it from npm.
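If you are using a bundler, a minimal sketch of the npm route (assuming the package is published as `offload-ai` and exposes the `Offload` object as its default export):

```javascript
// npm install offload-ai
// Import the SDK (package name and export shape assumed here)
import Offload from "offload-ai";
```

Either way, the global `Offload` object is then available for configuration and inference calls.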
How to run inference
// Configure the Offload instance, just once in your app
Offload.config({
  appUuid: "your-app-uuid-from-dashboard",
  promptUuids: {
    user_text: "your-prompt-uuid-from-dashboard"
  }
});

// Run inference. You can use streams, force JSON output, etc.
const { text } = await Offload.offload({
  promptKey: "user_text",
});
And you are done!
Frequently Asked Questions
Start offloading right now!
Get Started for free!
Offload © 2024