<!--
SPDX-FileCopyrightText: 2021 Jeff Epler

SPDX-License-Identifier: MIT
-->
[![Test](https://github.com/jepler/chap/actions/workflows/test.yml/badge.svg)](https://github.com/jepler/chap/actions/workflows/test.yml)
[![Release](https://github.com/jepler/chap/actions/workflows/release.yml/badge.svg)](https://github.com/jepler/chap/actions/workflows/release.yml)
[![PyPI](https://img.shields.io/pypi/v/chap)](https://pypi.org/project/chap/)

# chap - A Python interface to chatgpt and other LLMs, including a terminal user interface (tui)

## System requirements

Chap is primarily developed on Linux with Python 3.11. Moderate effort will be made to support versions back to Python 3.9 (Debian oldstable).

## Installation

If you want `chap` available as a command, just install with `pipx install chap` or `pip install chap`.

Use a virtual environment unless you want it installed globally.

## Installation for development

Use one of the following two methods to run `chap` as a command, with the ability to edit the source files. You are welcome to submit valuable changes as [a pull request](https://github.com/jepler/chap/pulls).

### Via `pip install --editable .`

This is an "editable install", as [recommended by the Python Packaging Authority](https://setuptools.pypa.io/en/latest/userguide/development_mode.html).

Change directory to the root of the `chap` project.

Activate your virtual environment, then install `chap` in development mode:

```shell
pip install --editable .
```

In this mode, you get the `chap` command-line program installed, but you are able to edit the source files in the `src` directory in place.

### Via `chap-dev.py`

A simple shim script called `chap-dev.py` is included to demonstrate how to load and run the `chap` library without installing `chap` in development mode. This method may be more familiar to some developers.

Change directory to the root of the `chap` project.
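For instance, starting from scratch (a sketch; it assumes you are cloning the repository fresh, so the project root is the `chap/` checkout):

```shell
git clone https://github.com/jepler/chap
cd chap
```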
Activate your virtual environment, then install requirements:

```shell
pip install -r requirements.txt
```

Run the shim script (with optional command flags as appropriate):

```shell
./chap-dev.py
```

In this mode, you can edit the source files in the `src` directory in place, and the shim script will pick up the changes when it imports the library.

## Contributing

See [CONTRIBUTING.md](CONTRIBUTING.md).

## Code of Conduct

See [CODE\_OF\_CONDUCT.md](CODE_OF_CONDUCT.md).

## Configuration

Put your OpenAI API key in the platform configuration directory for chap, e.g., on Linux/Unix systems at `~/.config/chap/openai_api_key`.

## Command-line usage

* `chap ask "What advice would you give a 20th century human visiting the 21st century for the first time?"`
* `chap render --last` / `chap cat --last`
* `chap import chatgpt-style-chatlog.json` (for files from pionxzh/chatgpt-exporter)
* `chap grep needle`

## `@FILE` arguments

It's useful to set a bunch of related arguments together, for instance to fully configure a back-end. This functionality is implemented via `@FILE` arguments.

Before any other command-line argument parsing is performed, `@FILE` arguments are expanded:

* An `@FILE` argument is searched relative to the current directory
* An `@:FILE` argument is searched relative to the configuration directory (e.g., `$HOME/.config/chap`)
* If an argument starts with a literal `@`, double it: `@@`
* `@.` stops processing any further `@FILE` arguments and leaves them unchanged.

The contents of an `@FILE` are parsed according to `shlex.split(comments=True)`, so comments are supported.
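The expansion rules above can be pictured with a short Python sketch (a simplified, hypothetical re-implementation, not chap's actual code; the `@:FILE` configuration-directory lookup is omitted for brevity):

```python
import shlex

def expand_at_args(argv):
    """Sketch of @FILE expansion; not chap's actual implementation."""
    out = []
    literal = False  # set once @. is seen; later @FILE args pass through unchanged
    for arg in argv:
        if literal or not arg.startswith("@"):
            out.append(arg)
        elif arg == "@.":
            literal = True  # stop expanding further @FILE arguments
        elif arg.startswith("@@"):
            out.append(arg[1:])  # @@foo means a literal @foo argument
        else:
            # @FILE: splice in arguments read from FILE (relative to cwd),
            # parsed with shlex so quoting and # comments work
            with open(arg[1:]) as f:
                out.extend(shlex.split(f.read(), comments=True))
    return out
```

For example, `expand_at_args(["@flags.txt", "ask", "hi"])` would splice the flags from `flags.txt` in ahead of `ask`.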
A typical content might look like this:

```
# gpt-3.5.txt: Use cheaper gpt 3.5 and custom prompt
--backend openai-chatgpt
-B model:gpt-3.5-turbo
-s my-custom-system-message.txt
```

and you might use it with

```
chap @:gpt-3.5.txt ask what version of gpt is this
```

## Interactive terminal usage

The interactive terminal mode is accessed via `chap tui`.

There are a variety of keyboard shortcuts to be aware of:

* tab/shift-tab to move between the entry field and the conversation, or between conversation items
* While in the text box, F9 or (if supported by your terminal) alt+enter to submit multiline text
* While on a conversation item:
  * ctrl+x to re-draft the message. This
    * saves a copy of the session in an auto-named file in the conversations folder
    * removes the conversation from this message to the end
    * puts the user's message in the text box to edit
  * ctrl+r to re-submit the message. This
    * saves a copy of the session in an auto-named file in the conversations folder
    * removes the conversation from this message to the end
    * puts the user's message in the text box
    * and submits it immediately
  * ctrl+y to yank the message. This places the response part of the current interaction in the operating system clipboard to be pasted (e.g., with ctrl+v or command+v in other software)
  * ctrl+q to toggle whether this message may be included in the contextual history for a future query.
    The exact way history is submitted is determined by the back-end, often by counting messages or tokens, but the ctrl+q toggle ensures this message (both the user and assistant message parts) is not considered.

## Sessions & Command-line Parameters

Details of session handling & command-line arguments are in flux.

By default, a new session is created.
It is saved to the user's state directory (e.g., `~/.local/state/chap` on Linux/Unix systems).

You can specify the session filename for a new session with `-n` or re-open an existing session with `-s`. Or, you can continue the last session with `--last`.

You can set the "system message" with the `-S` flag.

You can select the text-generating backend with the `-b` flag:

* openai-chatgpt: the default; a paid API with the best-quality results
* llama-cpp: works with [llama.cpp's http server](https://github.com/ggerganov/llama.cpp/blob/master/examples/server/README.md) and can run locally with various models, though it is [optimized for models that use the llama2-style prompting](https://huggingface.co/blog/llama2#how-to-prompt-llama-2). Set the server URL with `-B url:...`.
* textgen: works with https://github.com/oobabooga/text-generation-webui and can run locally with various models. Needs the server URL in *$configuration_directory/textgen\_url*.
* lorem: a local non-AI lorem generator for testing

## Environment variables

The backend can be set with the `CHAP_BACKEND` environment variable.

Backend settings can be set with `CHAP_<backend_name>_<parameter_name>`, with `backend_name` and `parameter_name` all in caps.

For instance, `CHAP_LLAMA_CPP_URL=http://server.local:8080/completion` changes the default server URL for the llama-cpp back-end.

## Importing from ChatGPT

The userscript https://github.com/pionxzh/chatgpt-exporter can export chat logs from chat.openai.com in a JSON format.

This format is different from chap's, especially since `chap` currently only represents a single branch of conversation in one log.

You can use the `chap import` command to import all the branches of a chatgpt-style chatlog in JSON format into a series of `chap`-style chat logs.

## Plug-ins

Chap supports back-end and command plug-ins.
"Back-ends" add additional text generators.

"Commands" add new ways to interact with text generators, session data, and so forth.

Install a plugin with `pip install` or `pipx inject` (depending on how you installed chap) and then use it as normal.

[chap-backend-replay](https://pypi.org/project/chap-backend-replay/) is an example back-end plug-in. It replays answers from a previous session.

[chap-command-explain](https://pypi.org/project/chap-command-explain/) is an example command plug-in. It is similar to `chap ask`.

At this time, there is no stability guarantee for the API of commands or backends.
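For example, to install the replay back-end plug-in (commands shown for both install styles; use whichever matches how you installed chap):

```shell
# If chap was installed with pip:
pip install chap-backend-replay

# If chap was installed with pipx:
pipx inject chap chap-backend-replay
```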