Configuring LLM Settings & Tooling (VTA Setup Page 6)
Written By Asad Jobanputra
Last updated 3 months ago
Overview
This guide covers the LLM Settings page, where you control the advanced technical features of your VTA. This includes selecting the AI model, granting the VTA access to external tools (like websearch or code generation), and configuring debugging options.
Who it's For
Instructors: To enable or disable specific functionalities (like Image Generation).
LMS Admins/IT Staff: To select preferred LLM models (e.g., GPT 4o) and vendors, and to configure traceability for debugging.
Why Use It
These controls define the VTA's core intelligence and capabilities. They allow you to:
Optimize Performance and Cost: Choose the model that balances speed, accuracy, and institutional cost.
Expand Utility: Grant the VTA specific tools (like Code Interpreter) to make it useful for specialized courses (like programming).
Ensure Oversight: Enable Save Chat History for pedagogical review and auditing.
Part 1: LLM Available Models
This section allows you to select which AI models the VTA will use to process student requests.
Locate the LLM Available Models section.
Choose which models the VTA can use by selecting the corresponding toggle.
Example: You can choose to run the VTA using either GPT 4o or GPT 4.1.
Note: The underlying vendor (e.g., azure_openai) is often displayed below the model name. Your available options depend on your institution's subscription and data governance policies.
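These settings are managed entirely through the toggles on the page, but it can help to picture what they represent: an enable/disable flag per model, with the vendor attached to each. The sketch below is illustrative only; the dict shape and field names are hypothetical, not an actual VTA export format (the model and vendor names mirror what this guide mentions).

```python
# Illustrative representation of the LLM Available Models toggles.
# The structure is hypothetical; only the model/vendor names come
# from the settings page described in this guide.
available_models = {
    "gpt-4o":  {"vendor": "azure_openai", "enabled": True},
    "gpt-4.1": {"vendor": "azure_openai", "enabled": False},
}

def enabled_models(models: dict) -> list[str]:
    """Return the names of models the VTA is allowed to use."""
    return [name for name, cfg in models.items() if cfg["enabled"]]

print(enabled_models(available_models))  # → ['gpt-4o']
```

Toggling a model off simply flips its flag; the VTA then routes student requests only to models that remain enabled.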
Part 2: LLM Tooling
This section grants the VTA access to external functionalities beyond its core knowledge base. Toggle ON a tool to enable its capability.
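Conceptually, each tool is an independent on/off switch. As a mental model (the structure is hypothetical; the tool names follow the ones mentioned in this guide: Websearch, Code Interpreter, Image Generation):

```python
# Hypothetical sketch of the LLM Tooling toggles: each external
# capability is switched on or off independently of the others.
tooling = {
    "websearch": False,        # often disabled to keep answers within course content
    "code_interpreter": True,  # useful for programming courses
    "image_generation": False,
}

def active_tools(tools: dict) -> list[str]:
    """Return the tools currently granted to the VTA, sorted by name."""
    return sorted(name for name, on in tools.items() if on)

print(active_tools(tooling))  # → ['code_interpreter']
```

Because the switches are independent, you can grant a course exactly the capabilities it needs, for example Code Interpreter for a programming course with Websearch left off.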
Part 3: Debugging Options
These controls are primarily used by administrators and instructors to troubleshoot issues and gather usage data.
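The same toggle model applies here. In the sketch below, "save_chat_history" is the option this guide names for pedagogical review and auditing; "trace_requests" is a hypothetical placeholder for the vendor traceability mentioned in the overview, and the structure itself is illustrative:

```python
# Illustrative debugging/traceability settings. "save_chat_history"
# is named in this guide; "trace_requests" is a hypothetical
# placeholder for per-request traceability.
debugging = {
    "save_chat_history": True,
    "trace_requests": False,
}

def audit_ready(settings: dict) -> bool:
    """Chat transcripts can only be reviewed if history is saved."""
    return settings.get("save_chat_history", False)

print(audit_ready(debugging))  # → True
```

If Save Chat History is off, there is nothing to review after the fact, so enable it before a course runs if you expect to audit student interactions.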
Best Practices / Notes
Limit External Tools: For most academic courses, it is recommended to disable Websearch and rely on the content provided in the Training Data to maintain the integrity of the course curriculum.
Performance vs. Cost: If you have multiple models available, consider routing simpler tasks to a lower-cost model to optimize resource usage, reserving your most capable model for high-stakes courses. Which model fills each role (e.g., GPT 4o vs. GPT 4.1) depends on your institution's subscription and pricing.
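The performance-vs-cost practice above amounts to a simple routing rule. This is a hedged sketch, not a VTA feature: the function name, the course identifiers, and the model placeholders are all hypothetical, and which concrete model counts as "low-cost" vs. "premium" is an institutional decision.

```python
# Sketch of the routing rule: routine courses get the lower-cost
# model, high-stakes courses get the premium one. Model names are
# placeholders, not recommendations.
def pick_model(course: str, high_stakes: set[str],
               low_cost_model: str = "model-low-cost",
               premium_model: str = "model-premium") -> str:
    """Choose a model based on whether the course is high-stakes."""
    return premium_model if course in high_stakes else low_cost_model

print(pick_model("CS101", {"MED500"}))   # → model-low-cost
print(pick_model("MED500", {"MED500"}))  # → model-premium
```

In practice you would apply this rule manually, by toggling the appropriate model on in each course's LLM Settings page.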