Configure Different Models
Step-by-step guide to configure and use different AI models with AskUI
This guide shows you how to configure and use different AI models with AskUI to optimize performance for your specific automation tasks.
Overview
AskUI supports multiple AI model providers, each with different strengths:
- AskUI Models: Fast, production-ready models optimized for UI automation
- Anthropic Models: Advanced language models for complex reasoning tasks
- Hugging Face Models: Open-source models for experimentation
- Self-Hosted Models: Custom models you host yourself
Quick Reference
| Platform | Available Models |
|---|---|
| AskUI | `askui`, `askui-combo`, `askui-pta`, `askui-ocr`, `askui-ai-element` |
| Anthropic | `anthropic-claude-3-5-sonnet-20241022` |
| Hugging Face | `AskUI/PTA-1`, `OS-Copilot/OS-Atlas-Base-7B`, `showlab/ShowUI-2B`, `Qwen/Qwen2-VL-2B-Instruct`, `Qwen/Qwen2-VL-7B-Instruct` |
| Self-Hosted | `UI-Tars` |
For detailed specifications of all available AI models, see the AI Models Reference.
Step 1: Choose Your Model Provider
First, decide which model provider best fits your needs:
When to Use AskUI Models
- Production environments: Fast, reliable, and enterprise-ready
- Standard UI automation: Optimized for clicking, typing, and data extraction
- Cost-conscious projects: Lower cost per operation
When to Use Anthropic Models
- Complex reasoning tasks: Advanced decision-making capabilities
- Natural language interactions: Better understanding of complex instructions
- Experimental projects: Cutting-edge AI capabilities
When to Use Hugging Face Models
- Open-source requirements: Community-driven development
- Research and experimentation: Access to latest research models
- Budget constraints: Free tier available (rate-limited)
Step 2: Set Up Authentication
Configure authentication for your chosen model provider:
AskUI models require workspace credentials from your AskUI account.
Required Environment Variables:
- `ASKUI_WORKSPACE_ID`
- `ASKUI_TOKEN`
Get Your Credentials:
- Sign in to hub.askui.com
- Navigate to your workspace settings
- Copy your Workspace ID and generate an access token
Anthropic models require an API key from Anthropic.
Required Environment Variables:
- `ANTHROPIC_API_KEY`
Get Your API Key:
- Sign up at console.anthropic.com
- Navigate to API Keys
- Generate a new API key
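Once `ANTHROPIC_API_KEY` is set, you select the Anthropic model by name just like any other model (the `model` parameter is covered in Step 4). A minimal sketch, assuming the Python `askui` package's `VisionAgent`; the element description is a placeholder:

```python
from askui import VisionAgent

# Requires ANTHROPIC_API_KEY in the environment; adapt the element description to your UI.
with VisionAgent() as agent:
    agent.click("the 'Accept cookies' button", model="anthropic-claude-3-5-sonnet-20241022")
```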
Hugging Face models work without authentication but have rate limits.
No authentication is required; the models are accessible via the Hugging Face Spaces API.
Rate limits apply to unauthenticated requests. For higher limits, consider using Hugging Face Pro.
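To try one of the Hugging Face models, pass its identifier from the Quick Reference table. A minimal sketch under the same assumptions as the example above; keep in mind that unauthenticated requests are rate-limited:

```python
from askui import VisionAgent

# AskUI/PTA-1 is served via Hugging Face Spaces; expect rate limits without authentication.
with VisionAgent() as agent:
    agent.click("the search field", model="AskUI/PTA-1")
```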
Step 3: Configure Environment Variables
Set the required environment variables for your operating system:
- Linux/macOS: Add the exports to your `~/.bashrc` or `~/.zshrc` file so they persist across sessions.
- Windows: Use `[Environment]::SetEnvironmentVariable()` in PowerShell to set permanent system variables.
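If you cannot edit shell profiles, for example in a CI job, you can also set the variables for the current process only. A minimal sketch, assuming you launch your automation from the same Python process; the values are placeholders and should come from a secret store rather than source code:

```python
import os

# Process-local alternative to shell exports; placeholder values only.
os.environ["ASKUI_WORKSPACE_ID"] = "<your-workspace-id>"
os.environ["ASKUI_TOKEN"] = "<your-access-token>"
# Only needed if you use Anthropic models:
os.environ["ANTHROPIC_API_KEY"] = "<your-anthropic-api-key>"
```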
Step 4: Use Models in Your Code
Specify which model to use by adding the `model` parameter to your commands:
Basic Model Usage
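A minimal sketch, assuming the Python `askui` package's `VisionAgent`; the element descriptions are placeholders for your own UI:

```python
from askui import VisionAgent

with VisionAgent() as agent:
    # Without a model parameter, AskUI chooses a default model for the command.
    agent.click("the login button")

    # Explicit model selection per command (names from the Quick Reference table):
    agent.click("the login button", model="askui-combo")
    agent.click("the 'Sign up' link", model="askui-pta")
```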
Model Selection Strategy
Choose models based on your task requirements:
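One way to apply the guidance from Step 1 in code is to keep a small, explicit mapping from task type to model. The task categories below are hypothetical examples; adjust the mapping to your own workloads:

```python
from askui import VisionAgent

# Hypothetical task-to-model mapping based on the guidance in Step 1.
MODEL_FOR_TASK = {
    "ui_interaction": "askui-pta",      # fast, production-ready clicking and typing
    "text_lookup": "askui-ocr",         # locating elements by their visible text
    "complex_reasoning": "anthropic-claude-3-5-sonnet-20241022",  # multi-step decisions
}

with VisionAgent() as agent:
    agent.click("the 'Export' button", model=MODEL_FOR_TASK["ui_interaction"])
```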
Step 5: Verify Your Configuration
Test your model configuration with a simple script:
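A sketch of such a script, assuming the Python `askui` package: it first checks that the credentials from Step 2 are present, then runs one simple command per provider you have configured. Adapt the element description to whatever is visible on your screen:

```python
import os
from askui import VisionAgent

def verify_models() -> None:
    # 1. Credentials present?
    for name in ("ASKUI_WORKSPACE_ID", "ASKUI_TOKEN"):
        if not os.environ.get(name):
            raise SystemExit(f"{name} is not set - see Step 3.")

    # 2. One simple command per provider you want to verify.
    with VisionAgent() as agent:
        agent.click("the browser address bar", model="askui")
        if os.environ.get("ANTHROPIC_API_KEY"):
            agent.click("the browser address bar", model="anthropic-claude-3-5-sonnet-20241022")
    print("Model configuration verified.")

if __name__ == "__main__":
    verify_models()
```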
Troubleshooting
Authentication errors
- Verify environment variables are set correctly (a quick diagnostic is sketched below)
- Check for extra spaces in API keys
- Ensure API keys are valid and not expired
- Restart your terminal/IDE after setting variables
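A quick diagnostic for the first two checks, sketched in Python; it reports which variables are set and flags surrounding whitespace without printing the secret values:

```python
import os

# Prints the status of each credential without revealing its value.
for name in ("ASKUI_WORKSPACE_ID", "ASKUI_TOKEN", "ANTHROPIC_API_KEY"):
    value = os.environ.get(name)
    if value is None:
        print(f"{name}: NOT SET")
    else:
        note = " (has leading/trailing whitespace!)" if value != value.strip() else ""
        print(f"{name}: set, {len(value)} characters{note}")
```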
Model not found errors
- Check model name spelling (case-sensitive)
- Verify the model is supported for your command
- Some models may not support all AskUI commands
Rate limiting issues
- Hugging Face models have rate limits
- Consider upgrading to paid tiers for higher limits
- Implement retry logic with exponential backoff (see the sketch below)
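A minimal sketch of retry with exponential backoff and jitter; the exception type to catch depends on your client, so a broad `Exception` is used here as a placeholder and should be narrowed in real code:

```python
import random
import time

def with_backoff(action, max_attempts: int = 5, base_delay: float = 1.0):
    """Retry `action`, doubling the delay after each failed attempt."""
    for attempt in range(max_attempts):
        try:
            return action()
        except Exception:  # narrow this to your client's rate-limit error
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))

# Hypothetical usage:
# with_backoff(lambda: agent.click("Submit", model="AskUI/PTA-1"))
```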
Next Steps
Now that you have models configured:
- Optimize Performance: Learn about model selection best practices
- Advanced Usage: Explore agentic workflows
- Production Deployment: Review enterprise considerations