Agents

Quickest start: this script runs the Quickstart commands below.

docs/agent_quickstart.sh

QUICKSTART: build and run a Python agent on NearAI

  1. Install the NearAI CLI.

  2. Create a new folder for your agent;

    we recommend placing it inside your local registry: mkdir -p ~/.nearai/registry/example_agent.

  3. Create a metadata.json file for your agent with nearai registry metadata_template ~/.nearai/registry/example_agent agent "Example agent", then edit it.

  4. Create an agent.py file in that folder.

  5. Run your agent locally using the CLI, passing it a folder to write output to:

    nearai agent interactive example_agent /tmp/example_agent_run_1 --local
    

Example agent.py

# In local interactive mode, the first user input is collected before the agent runs.
prompt = {"role": "system", "content": "You are a travel agent that helps users plan trips."}
result = env.completion([prompt] + env.list_messages())
env.add_message("agent", result)
env.request_user_input()

About Agents

Agents are programs of varying complexity that can combine capabilities from across NearAI: authentication, inference, data stores, tools, APIs, smart contract calls, reputation, compliance, proofs, and more.

Agents run in response to messages, usually from a user or another agent. Messages can also be sent to an agent from other systems such as a scheduler or indexer.

Agent Operation and Features:

  • Interactive mode runs the agent in an infinite loop until it is terminated: by typing "exit" in the chat, by the agent exiting with an error code, or by the user pressing Ctrl+C.
  • The execution folder is optional; by default, the initial agent's folder may be used instead.
  • If you use a folder other than the local registry, provide the full path to the agent instead of just the agent name.

Command:

nearai agent interactive AGENT [EXECUTION_FOLDER] --local
Example:
nearai agent interactive example_agent --local

  • The agent can save temporary files to track the progress of a user's task in case the dialogue is interrupted. By default, the entire message history is stored in a file named chat.txt. The agent can add messages there by using env.add_message(). Learn more about the environment API.
  • During its operation, the agent creates a file named .next_agent, which stores the role of the next participant expected in the dialogue (either user or agent) during the next iteration of the loop. The agent can control this value using env.set_next_actor().
  • The agent can use local imports from the home folder or its subfolders. It is executed from a temporary folder within a temporary environment.
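The turn-taking mechanics above can be sketched with a minimal stand-in for the runtime-provided env object. The StubEnv class below is purely illustrative; a real agent never defines it, since NearAI injects env before agent.py runs:

```python
# Minimal illustrative stand-in for the runtime-provided `env` object.
# A real agent never defines this class; NearAI injects `env` at run time.
class StubEnv:
    def __init__(self):
        self.messages = []
        self.next_actor = "agent"

    def add_message(self, role, content):
        # In a real run this appends to chat.txt.
        self.messages.append({"role": role, "content": content})

    def set_next_actor(self, actor):
        # In a real run this writes "user" or "agent" into the .next_agent file.
        self.next_actor = actor

env = StubEnv()
env.add_message("agent", "Where would you like to travel?")
env.set_next_actor("user")  # hand the next loop iteration to the user
```

After this runs, the next iteration of the interactive loop would wait for user input rather than invoking the agent again.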

Running an existing agent from the registry

List all agents

nearai registry list --category agent

Download an agent by name

nearai registry download flatirons.near/xela-agent/5

The --force flag allows you to overwrite the local agent with the version from the registry.

⚠️ Warning: Review the agent code before running it!

Running an agent interactively

Agents can be run interactively. The environment_path should be a folder where the agent chat record (chat.txt) and other files can be written, usually ~/tmp/test-agents/<AGENT_NAME>-run-X.

  • command nearai agent interactive AGENT ENVIRONMENT_PATH
  • example
    nearai agent interactive flatirons.near/xela-agent/5 /tmp/test-agents/xela-agent-run-1
    

Running an agent as a task

To run without user interaction, pass the task input to the agent:

  • command nearai agent task <AGENT> <INPUT> <ENVIRONMENT_PATH>
  • example
    nearai agent task flatirons.near/xela-agent/5 "Build a command line chess engine" ~/tmp/test-agents/xela-agent/chess-engine
    

Running an agent through AI Hub

To run an agent in the AI Hub:

  1. Select the desired agent.
  2. Navigate to the Run tab.
  3. Interact with the agent using the chat interface.

Note: Agent chat through the AI Hub does not yet stream back responses; it takes a few seconds to respond.

The Environment API

This is the API your agent will use to interact with NearAI. For example, to add an agent's response you could call completion and add_message.

prompt = {"role": "system", "content": "You are a travel agent that helps users plan trips."}

conversation = env.list_messages() # the user's new message is added to this list by both the remote and local UIs.

agent_response = env.completion([prompt] + conversation)

env.add_message("agent", agent_response)

Your agent will receive an env object that has the following methods:

  • request_user_input: signal that it is the user's turn and stop iterating.
  • completion: request inference completions from a provider and model. The model format can be either PROVIDER::MODEL or simply MODEL. By default the provider is fireworks and the model is llama-v3p1-405b-instruct-long. The model can be passed to the completion function or set in the agent metadata:
    "details": {
      "agent": {
        "defaults": {
          // All fields below are optional.
          "model": "llama-v3p1-405b-instruct-long",
          "model_max_tokens": 16384,
          "model_provider": "fireworks",
          "model_temperature": 1.0
        }
      }
    }
    
  • list_messages: returns the list of messages in the conversation. You have full control to add and remove messages from this list.
  • add_message: adds a message to the conversation. Arguments are role and content.
    env.add_message("user", "Hello, I would like to travel to Paris")
    
    Normal roles are:
    • system: usually your starting prompt
    • agent: messages from the agent (i.e. llm responses, programmatic responses)
    • user: messages from the user

Additional environment methods

There are several variations of the completion method; for example, completions_and_run_tools, used for tool calling below.

For working with files and running commands, the following methods are also available on env. You may call them directly, or register them through the tool_registry and pass them to a completions method.

Logging

  • add_system_log: adds a system or environment log that is then saved into "system_log.txt".
  • add_agent_log: any agent logs may go here. Saved into "agent_log.txt".

Tool registry and function Tool Calling

NearAI supports function based tool calling where the LLM can decide to call one of the functions (Tools) that you pass it. You can register your own function or use any of the built-in tools (list_files, read_file, write_file, exec_command, query_vector_store, request_user_input).

The tool registry supports OpenAI style tool calling and Llama style. When a llama model is explicitly passed to completion(s)_and_run_tools a system message is added to the conversation. This system message contains the tool definitions and instructions on how to invoke them using <function> tags.

To tell the LLM about your tools and automatically execute them when selected by the LLM, call one of these environment methods:

By default, these methods will add both the LLM response and tool invocation responses to the message list. You do not need to call env.add_message for these responses. This behavior allows the LLM to see its call then tool responses in the message list on the next iteration or next run. This can be disabled by passing add_to_messages=False to the method.

  • get_tool_registry: returns the tool registry, a dictionary of tools that can be called by the agent. By default it is populated with the tools listed above for working with files and commands plus request_user_input. To register a function as a new tool, call register_tool on the tool registry, passing it your function.
    def my_tool():
        """A simple tool that returns a string. This docstring helps the LLM know when to call the tool."""
        return "Hello from my tool"
    
    tool_registry = env.get_tool_registry()
    tool_registry.register_tool(my_tool)
    tool_def = tool_registry.get_tool_definition('my_tool')
    response = env.completions_and_run_tools(messages, tools=[tool_def], model="llama-v3p1-405b-instruct")
    

To pass all the built-in tools plus any you have registered, use the get_all_tool_definitions method.

all_tools = env.get_tool_registry().get_all_tool_definitions()
response = env.completions_and_run_tools(messages, tools=all_tools, model="llama-v3p1-405b-instruct")
If you are registering several tools and do not want to use the built-in tools, instantiate a new ToolRegistry:

    tool_registry = ToolRegistry()
    tool_registry.register_tool(my_tool)
    tool_registry.register_tool(my_tool2)
    response = env.completions_and_run_tools(messages, tools=tool_registry.get_all_tool_definitions())

Uploading an agent

  • You need a folder with an agent.py file in it, ~/.nearai/registry/example_agent in this example.
  • The agent may consist of additional files in the folder.

⚠️ Warning: All files in this folder will be uploaded to the registry!

  • Add a metadata file: nearai registry metadata_template ~/.nearai/registry/example_agent
  • Edit the metadata file to include the agent details

{
  "category": "agent",
  "description": "An example agent that gives travel recommendations",
  "tags": [
    "python",
    "travel"
  ],
  "details": {
    "agent": {
       "defaults": {
         // All fields below are optional.
         "model": "llama-v3p1-405b-instruct-long",
         "model_max_tokens": 16384,
         "model_provider": "fireworks",
         "model_temperature": 1.0
       }
     }
  },
  "show_entry": true,
  "name": "example-travel-agent",
  "version": "0.0.5"
}

  • You must be logged in with NEAR to upload: nearai login
  • Upload the agent: nearai registry upload ~/.nearai/registry/example_agent

⚠️ You can't remove or overwrite a file once it's uploaded, but you can hide the entire agent by setting the "show_entry": false field.

Running an agent remotely through the CLI

Agents can be run through the CLI using the nearai agent run_remote command. A new message can be passed with the new_message argument. A starting environment (state) can be passed with the environment_id argument.

  nearai agent run_remote flatirons.near/example-travel-agent/1 \
  new_message="I would like to travel to Brazil"

This environment already contains a request to travel to Paris and an agent response. A new_message could be included to further refine the request. In this example without a new_message the agent will reprocess the previous response and follow up about travel to Paris.

  nearai agent run_remote flatirons.near/example-travel-agent/1 \
  environment_id="flatirons.near/environment_run_flatirons.near_example-travel-agent_1_1c82938c55fc43e492882ee938c6356a/0"

Running an agent through the API

Agents can be run through the /agent/runs endpoint. You will need to pass a signed message to authenticate. This example uses the credentials written by nearai login to your ~/.nearai/config.json file.

auth_json=$(jq -c '.auth' ~/.nearai/config.json);

curl "https://api.near.ai/v1/agent/runs" \
      -X POST \
      --header 'Content-Type: application/json' \
      --header "Authorization: Bearer $auth_json" \
-d @- <<'EOF'
  {
    "agent_id": "flatirons.near/xela-agent/5",
    "new_message":"Build a backgammon game",
    "max_iterations": "2"
  }
EOF

The full message will look like this. An environment_id param can also be passed to continue a previous run.

curl "https://api.near.ai/v1/agent/runs" \
      -X POST \
      --header 'Content-Type: application/json' \
      --header 'Authorization: Bearer {"account_id":"your_account.near","public_key":"ed25519:YOUR_PUBLIC_KEY","signature":"A_REAL_SIGNATURE","callback_url":"https://app.near.ai/","message":"Welcome to NEAR AI Hub!","recipient":"ai.near","nonce":"A_UNIQUE_NONCE_FOR_THIS_SIGNATURE"}' \
-d @- <<'EOF'
  {
    "agent_id": "flatirons.near/xela-agent/5",
    "environment_id": "a_previous_environment_id",
    "new_message":"Build a backgammon game", 
    "max_iterations": "2"
  }
EOF

Remote results

The results of both run_remote and the /agent/runs endpoint are either an error or the resulting environment state.

Agent run finished. New environment is "flatirons.near/environment_run_flatirons.near_example-travel-agent_1_1c82938c55fc43e492882ee938c6356a/0"

To view the resulting state, download the environment.tar.gz file from the registry and extract it.

nearai registry download flatirons.near/environment_run_flatirons.near_example-travel-agent_1_1c82938c55fc43e492882ee938c6356a/0

Signed messages

NearAI authentication is through a Signed Message: a payload signed by a Near Account private key. (How to Login with NEAR)

If you need one for manual testing, you can nearai login then copy the auth section from your ~/.nearai/config.json.

To add signed message login to an application, see the code in hub demo near.tsx.

Saving and loading environment runs

When you are logged in, each environment run is saved to the registry by default. You can disable this with the CLI flag --record_run=False.

An environment run can be loaded by using the --load_env flag and passing it a registry identifier --load_env=near.ai/environment_run_test_6a8393b51d4141c7846247bdf4086038/1.0.0.

To list environment identifiers use the command nearai registry list --tags=environment.

A run can be named by passing a name to the record_run flag --record_run="my special run".

Environment runs can be loaded by passing the name of a previous run to the --load_env flag like --load_env="my special run".

Running an agent with Environment Variables

Environment variables provide a flexible way to adjust an agent's settings without altering its code. This is particularly useful for sensitive information, or for configuration that needs to be customized without modifying the agent's codebase.

Storing Environment Variables

Environment variables can be stored in a metadata.json file. Here’s an example of how to structure this file:

{
  "details": {
    "env_vars": {
      "id": "id_from_env",
      "key": "key_from_env"
    }
  }
}

Accessing Environment Variables in Code

In your agent’s code, you can access these environment variables using Python’s os module or by accessing the env_vars dictionary directly.

To retrieve an environment variable in the agent code:

# Using os.environ
import os
value = os.environ.get('VARIABLE_NAME', None)

# Or using globals()
value = globals()['env'].env_vars.get('VARIABLE_NAME')

This allows users to fork the agent, modify the environment variables in metadata.json, and achieve the desired behavior without changing the code itself.
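One pattern that combines the two lookups is a small helper that prefers the metadata-defined env_vars and falls back to the process environment. The get_config name is our own, not part of the NearAI API:

```python
import os

def get_config(env_vars, name, default=None):
    """Prefer an entry from metadata.json's env_vars, then the process
    environment, then an explicit default. (Illustrative helper only.)"""
    if name in env_vars:
        return env_vars[name]
    return os.environ.get(name, default)

# Inside an agent you might call: get_config(env.env_vars, "key", "fallback")
print(get_config({"key": "key_from_env"}, "key"))       # key_from_env
print(get_config({}, "MISSING_VAR_EXAMPLE", "fallback"))  # fallback
```

This keeps the lookup order explicit, so a forked agent can override a value either in metadata.json or at launch time.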

Running the agent with Environment Variables

You can also pass environment variables directly when launching the agent. This is useful for overriding or extending the variables defined in metadata.json, and for handling sensitive information: if your agent needs to interact with APIs or services that require secret keys or credentials, pass these as environment variables instead of hardcoding them, so that sensitive information is not exposed in publicly accessible code.

To run the agent with environment variables, use the following command:

nearai agent interactive user.near/agent/1 --local --env_vars='{"foo":"bar"}'

Example

Consider an agent zavodil.near/test-env-agent/1 that has configurable environment variables.

Agent Frameworks

Agents can be built using a variety of frameworks and libraries. A particular bundle of libraries is given a name, such as langgraph-0-2-26. To run your agent remotely with a particular framework, set the framework name in the agent's metadata.json file.

{
  "details": {
    "agent": {
      "framework": "langgraph-0-2-26"
    }
  }
}
For local development, you can install any libraries you would like to use by adding them to top level pyproject.toml.

Current frameworks can be found in the repo's frameworks folder.

LangChain / LangGraph

The example agent langgraph-min-example has metadata that specifies the langgraph-0-1-4 framework to run on langgraph version 0.1.4. In addition, the agent.py code contains an adaptor class, AgentChatModel, that maps LangChain inference operations to env.completions calls.