llamaR

AI Agent Runtime in R with Context Managed via Workspace

Last updated: 2026-03-25

llamaR logo

An AI agent runtime for R. Self-hosted, model-agnostic, tinyverse.

In Spanish, llamar (pronounced “Ya Mar”) means “to call.”


Quick Start

# Install the CLI
r -e 'llamaR::install_cli()'

# Start the agent
llamar

That’s it. You’re talking to an AI agent that can read files, run shell commands, query git, search the web, and execute R code in a persistent session.

llamar | claude-sonnet-4-20250514 @ anthropic | 24 tools

> What R packages are loaded?
  [run_r] Running: loadedNamespaces()
12 packages loaded: base, utils, stats, ...

> llamar --provider ollama --model llama3.2

Why R?

In the agent world, tools, skills, and the servers that expose them are three separate things you have to design, host, and wire together yourself. R packages bundle all three: the capability, the documentation that teaches you how to use it, and the mechanism that makes it available to your session. All of it versioned, tested, and delivered in a single install.packages() call. CRAN has been doing integrated tool distribution for 20 years. The agent ecosystem is just now discovering how hard that is.

  • install.packages() works the same on Windows, macOS, and Linux. No brew, no nvm, no Docker, no venv.
  • Packages are tools, skills, and servers in one artifact. The 20,000+ packages on CRAN are agent capabilities waiting to be wired up.
  • CRAN reviews packages by hand and tests on three operating systems. That’s a quality gate most tool registries don’t have.

Three Ways to Use llamaR

1. CLI agent (llamar)

The primary interface. A terminal agent with session management, voice mode, and context compaction. Uses the MCP server internally for tool execution.

llamar                    # Start agent
llamar --resume           # Resume last session
llamar --provider ollama  # Use local models

2. MCP server (serve())

Exposes a persistent R session to external MCP clients (Claude Code, Claude Desktop, VS Code). This is how other tools get stateful R access: objects persist across tool calls, packages stay loaded, and runtime inspection tools like mirar work on real session state.

{
  "mcpServers": {
    "llamaR": {
      "command": "Rscript",
      "args": ["-e", "llamaR::serve()"]
    }
  }
}

3. In-session agent (chat())

Runs inside your R console. Tools execute as direct function calls, no MCP server, no subprocess. Your .GlobalEnv objects are the agent’s objects.

chat()                           # Claude (default)
chat(provider = "openai")        # GPT-4o
chat(provider = "ollama",        # Local
     model = "llama3.2")

This is the most interesting mode architecturally (the agent lives in the same process as the data), but current LLMs are trained on bash-style CLIs, so they perform better through the terminal interface. The /r <code> command lets you evaluate R directly without an LLM round trip.
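As an illustrative sketch (not llamaR's actual internals), "tools execute as direct function calls" can be pictured like this: a tool is just a named R function, and executing it is an ordinary call in the same session that holds your data.

```r
# Illustrative sketch only -- not llamaR's real dispatch code.
# A "tool" is a named R function; executing it is a plain call,
# so it reads the same session that holds your objects.
tools <- list(
  run_r = function(code) eval(parse(text = code), envir = globalenv())
)

x <- 1:10                # an object in .GlobalEnv
tools$run_r("mean(x)")   # the "agent" sees it with no serialization step
```

No subprocess, no protocol: the result of the call and the objects it touched both live in your workspace afterward.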


Tools

Tool                 Description
bash                 Run shell commands
read_file            Read file contents
write_file           Write or create files
list_files           List directory contents
grep_files           Search file contents
run_r                Execute R code in the session
r_help               Query R documentation via fyi
installed_packages   List and search R packages
git_status           Git working tree status
git_diff             Git diff
git_log              Git commit history
web_search           Search the web (requires Tavily key)
fetch_url            Fetch web content

Installation

# llamaR (not yet on CRAN)
remotes::install_github("cornball-ai/llamaR")

# LLM provider abstraction (not yet on CRAN)
remotes::install_github("cornball-ai/llm.api")

API Keys

Set in ~/.Renviron:

ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
TAVILY_API_KEY=tvly-...   # Optional, for web search
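After editing ~/.Renviron, restart R and confirm the keys are visible to the session. This check uses base R only; the variable names are the ones listed above.

```r
# Sys.getenv() returns "" for unset variables, so nzchar() is FALSE
# for any key the session cannot see.
keys <- c("ANTHROPIC_API_KEY", "OPENAI_API_KEY", "TAVILY_API_KEY")
vapply(keys, function(k) nzchar(Sys.getenv(k)), logical(1))
```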

CLI (Optional)

# Install the llamar CLI to ~/bin
llamaR::install_cli()

Landscape

Anthropic’s Claude Agent SDK gives you Claude Code as a library, in Python and TypeScript. nanoclaw builds on it. There’s no R equivalent.

llamaR fills that gap. Not by wrapping Anthropic’s SDK, but by building an R-native agent runtime from scratch. Model-agnostic (Anthropic, OpenAI, Ollama), small enough to read in an afternoon.

Role              Posit (tidyverse)   cornyverse
LLM API client    ellmer              llm.api
Context tools     btw                 fyi
MCP bridge        mcptools            llamaR

mcptools integrates R into the broader MCP ecosystem (Claude Desktop, VS Code, Positron). It’s polished, on CRAN, and backed by Posit.

llamaR is a standalone agent runtime. chat() runs inside your R session. The CLI runs from your terminal. No external client required.


Design Philosophy

  • Small enough to understand: one package, one import (jsonlite), readable source
  • Model-agnostic: Anthropic, OpenAI, Ollama
  • R-native: packages are skills, install.packages() is the marketplace
  • Hackable: add tools by writing R functions
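For a sense of what "add tools by writing R functions" can look like, here is a hedged sketch: an ordinary R function paired with the JSON-schema description that tool-use chat APIs (Anthropic's, OpenAI's) expect a model to see. The word_count function and the spec layout are illustrative; the llamaR registration step is not shown.

```r
library(jsonlite)  # llamaR's single declared import

# The capability: an ordinary R function.
word_count <- function(path) {
  length(scan(path, what = character(), quiet = TRUE))
}

# The description a model needs in order to call it, in the
# JSON-schema shape used by tool-use chat APIs.
spec <- list(
  name = "word_count",
  description = "Count whitespace-separated words in a text file",
  input_schema = list(
    type = "object",
    properties = list(path = list(type = "string")),
    required = list("path")
  )
)
cat(toJSON(spec, auto_unbox = TRUE, pretty = TRUE))
</imports></imports>
```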

Platform Support

Platform   Status
Linux      Fully supported
macOS      Expected to work
Windows    Partial (stdin handling pending)

Status

Experimental. Interfaces may change.


License

MIT

Reference

See Function Reference for complete API documentation.
