# Adds support for on-device LLMs with SpeziLLMLocal
## ♻️ Current situation & Problem

Currently, the app can only use OpenAI models such as GPT-4. However, users may prefer to run an LLM on-device for increased privacy. This is now supported via the [SpeziLLMLocal](https://swiftpackageindex.com/stanfordspezi/spezillm/documentation/spezillmlocal) target of the [SpeziLLM module](https://github.com/Stanfordspezi/spezillm) and can be enabled in HealthGPT.

## ⚙️ Release Notes

- Adds an onboarding step for downloading and storing the Llama 3 8B model
- Adds an `--llmLocal` feature flag for toggling local execution
- Adds an onboarding step that lets the user choose between OpenAI and local execution
- Updates the `HealthDataInterpreter` to use the local LLM if the flag is set (a sketch of how this switch might look follows below)
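As a rough illustration of the last item, the interpreter can select its LLM schema based on the flag. This is a minimal sketch, not the actual HealthGPT code: the `FeatureFlags.llmLocal` accessor and `makeLLMSchema()` helper are hypothetical, and the exact `LLMLocalSchema`/`LLMOpenAISchema` initializer arguments and model identifiers may differ from the shipped implementation:

```swift
import SpeziLLM
import SpeziLLMLocal
import SpeziLLMOpenAI

enum FeatureFlags {
    /// Hypothetical accessor for the `--llmLocal` launch argument.
    static let llmLocal = CommandLine.arguments.contains("--llmLocal")
}

/// Builds the LLM schema that the `HealthDataInterpreter` hands to the
/// SpeziLLM `LLMRunner`. The schema types come from SpeziLLM; the
/// initializer arguments shown here are assumptions for illustration.
func makeLLMSchema() -> any LLMSchema {
    if FeatureFlags.llmLocal {
        // Use the on-device Llama 3 8B model downloaded during onboarding.
        return LLMLocalSchema(model: .llama3_8B_4bit)
    } else {
        // Fall back to the OpenAI GPT-4 model used previously.
        return LLMOpenAISchema(parameters: .init(modelType: .gpt4))
    }
}
```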
## 📚 Documentation

Updated the documentation.

## ✅ Testing

- Adds UI tests for the local LLM configuration during onboarding

### Code of Conduct & Contributing Guidelines

By submitting this pull request, you agree to follow our [Code of Conduct](https://github.com/StanfordBDHG/.github/blob/main/CODE_OF_CONDUCT.md) and [Contributing Guidelines](https://github.com/StanfordBDHG/.github/blob/main/CONTRIBUTING.md):
- [X] I agree to follow the [Code of Conduct](https://github.com/StanfordBDHG/.github/blob/main/CODE_OF_CONDUCT.md) and [Contributing Guidelines](https://github.com/StanfordBDHG/.github/blob/main/CONTRIBUTING.md).

---------

Co-authored-by: Paul Schmiedmayer <PSchmiedmayer@users.noreply.github.com>