Chat with an LLM through Ollama

Usage

query(
  q,
  model = NULL,
  screen = TRUE,
  server = NULL,
  images = NULL,
  model_params = NULL,
  format = NULL,
  template = NULL,
  verbose = getOption("rollama_verbose", default = interactive())
)

chat(
  q,
  model = NULL,
  screen = TRUE,
  server = NULL,
  images = NULL,
  model_params = NULL,
  template = NULL
)

Arguments

q

the question as a character string or a conversation object.

model

which model(s) to use. See https://ollama.com/library for options. Default is "llama3.1". Set options(rollama_model = "modelname") to change the default for the current session (see the sketch after this argument list). See pull_model for more details.

screen

Logical. Should the answer be printed to the screen?

server

URL to an Ollama server (not the API). Defaults to "http://localhost:11434".

images

path(s) to images (for multimodal models such as llava).

model_params

a named list of additional model parameters listed in the Modelfile documentation, such as temperature. Set a seed and a temperature of zero to get reproducible results (see examples).

format

the format to return a response in. Currently the only accepted value is "json".

template

the prompt template to use (overrides what is defined in the Modelfile).

verbose

Whether to print status messages to the console (TRUE/FALSE). The default is to show status messages in interactive sessions only. Can be changed with options(rollama_verbose = FALSE).

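A minimal sketch of how the session options and the format argument can be used, assuming a running local Ollama server and an already pulled model (the model name is the default given above):

# set a session-wide default model and silence status messages
options(rollama_model = "llama3.1")
options(rollama_verbose = FALSE)

# ask for the answer in JSON format
query("Why is the sky blue? Answer in JSON.", format = "json")
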
Value

an httr2 response.

Details

query sends a single question to the API, without any knowledge of previous questions (only the configuration message is taken into account). chat treats new messages as part of the same conversation until new_chat is called, as illustrated in the sketch below.
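
A short sketch of the difference, assuming a running Ollama server and a pulled default model (question texts are illustrative):

# query() is stateless: the second call knows nothing about the first
query("What is the capital of France?")
query("And how many people live there?")  # lacks context

# chat() keeps the conversation going until new_chat() resets it
chat("What is the capital of France?")
chat("And how many people live there?")   # answered in context
new_chat()                                # start a fresh conversation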

Examples
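
A hedged sketch of typical calls, assuming a running local Ollama server; model names, file paths, and question texts are illustrative:

# make sure a model is available locally
pull_model("llama3.1")

# ask a single question and print the answer to the screen
query("What is the capital of Australia?")

# capture the httr2 response instead of printing it
resp <- query("What is the capital of Australia?", screen = FALSE)

# a fixed seed and a temperature of zero make the answer reproducible
query(
  "What is the capital of Australia?",
  model_params = list(seed = 42, temperature = 0)
)

# multimodal models such as llava can take images
query(
  "What do you see in this picture?",
  model = "llava",
  images = "path/to/picture.png"  # hypothetical path
)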