Running Models on VaikerAI Using Python
Learn how to integrate and run machine learning models on VaikerAI directly from your Python code, whether it's in an app, notebook, or script.
To interact with VaikerAI, you'll need to install our open-source Python client. Use pip to install it:
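A minimal install command, assuming the client is published on PyPI under the name `vaikerai` (the package name is an assumption):

```shell
# Install the VaikerAI Python client (package name assumed)
pip install vaikerai
```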
Before running models, you'll need to authenticate with VaikerAI. Generate an API token by visiting . Copy the token and set it as an environment variable in your shell:
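For example, assuming the client reads its token from an environment variable named `VAIKERAI_API_TOKEN` (the variable name is an assumption), you would set it like this:

```shell
# Replace the placeholder with your actual API token
export VAIKERAI_API_TOKEN=<paste-your-token-here>
```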
You can run any public model on VaikerAI with just a few lines of Python. Here’s an example that uses the stability-ai/sdxl model to generate an image based on a text prompt:
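A minimal sketch of what such a call might look like, assuming the client exposes a top-level `run()` function that takes a model identifier and an `input` dictionary (both the function and the prompt below are illustrative assumptions):

```python
import vaikerai  # hypothetical client package

# run() is assumed to block until the model finishes and return its output
output = vaikerai.run(
    "stability-ai/sdxl",
    input={"prompt": "an astronaut riding a unicorn, photorealistic"},
)
print(output)
```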
The output will be a URL pointing to the generated image.
Some models require files as input. You can use local files or provide a file's HTTPS URL.
Here’s an example using a local image file with the LLaVA vision model, which processes an image and a text prompt to generate a response:
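A sketch of passing a local file, assuming the client accepts an open file handle as an input value and uploads it for you (the model identifier and file path below are placeholders, not confirmed values):

```python
import vaikerai  # hypothetical client package

# Open the local file in binary mode and pass the handle as an input value
with open("path/to/photo.png", "rb") as image:
    output = vaikerai.run(
        "yorickvp/llava-13b",  # placeholder LLaVA model identifier
        input={
            "image": image,
            "prompt": "What is unusual about this image?",
        },
    )
print(output)
```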
The model responds with a short text description answering the prompt.
If your file is already hosted online or is large, using its URL as input is more efficient.
Here’s an example using a public HTTPS URL of an image:
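Under the same assumptions as above, passing a URL instead of a file handle might look like this (the URL and model identifier are placeholders):

```python
import vaikerai  # hypothetical client package

# When the input value is an HTTPS URL, the file is assumed to be
# fetched server-side rather than uploaded from your machine
output = vaikerai.run(
    "yorickvp/llava-13b",  # placeholder LLaVA model identifier
    input={
        "image": "https://example.com/photo.png",
        "prompt": "Describe this image in one sentence.",
    },
)
print(output)
```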
The model will return a text response, just as it does for a local file.
Some models stream their output as they process the input. These models return an iterator, allowing you to process each chunk of output as it becomes available.
Here’s how to handle streamed output from the mistralai/mixtral-8x7b-instruct-v0.1 model:
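One way this could look, assuming the client provides a `stream()` function that yields output chunks as the model produces them (the function name and prompt are assumptions for illustration):

```python
import vaikerai  # hypothetical client package

# stream() is assumed to return an iterator that yields chunks of
# output text as soon as the model produces them
for event in vaikerai.stream(
    "mistralai/mixtral-8x7b-instruct-v0.1",
    input={"prompt": "Write a haiku about streaming APIs."},
):
    # Print each chunk immediately, without waiting for the full response
    print(str(event), end="")
```

Iterating chunk by chunk lets you display partial results to users right away instead of waiting for the entire generation to complete.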
As the model runs, the text is printed incrementally, a few tokens at a time, until the generation completes.
For more detailed information and advanced usage, refer to the full Python client documentation available on .