Compare commits

6 Commits

| Author | SHA1 | Date |
|---|---|---|
| | 820d4aed21 | |
| | eac93cb667 | |
| | 4af43bccbb | |
| | dd661f822b | |
| | 16a1229bb0 | |
| | e6cf9984e4 | |
@@ -46,32 +46,32 @@ cmm [global options] command [command options]
The `question` command is used to ask, create, and process questions.

```bash
cmm question [-t OTAGS]... [-k ATAGS]... [-x XTAGS]... [-o OUTTAGS]... [-A AI_ID] [-M MODEL] [-n NUM] [-m MAX] [-T TEMP] (-a QUESTION | -c QUESTION | -r [MESSAGE ...] | -p [MESSAGE ...]) [-O] [-s FILE]... [-S FILE]...
cmm question [-t OTAGS]... [-k ATAGS]... [-x XTAGS]... [-o OUTTAGS]... [-A AI] [-M MODEL] [-n NUM] [-m MAX] [-T TEMP] (-a ASK | -c CREATE | -r REPEAT | -p PROCESS) [-O] [-s SOURCE]... [-S SOURCE]...
```

* `-t, --or-tags OTAGS` : List of tags (one must match)
* `-k, --and-tags ATAGS` : List of tags (all must match)
* `-x, --exclude-tags XTAGS` : List of tags to exclude
* `-o, --output-tags OUTTAGS` : List of output tags (default: use input tags)
* `-A, --AI AI_ID`: AI ID to use
* `-A, --AI AI` : AI ID to use
* `-M, --model MODEL` : Model to use
* `-n, --num-answers NUM` : Number of answers to request
* `-m, --max-tokens MAX` : Max. number of tokens
* `-T, --temperature TEMP` : Temperature value
* `-a, --ask QUESTION`: Ask a question
* `-c, --create QUESTION`: Create a question
* `-r, --repeat [MESSAGE ...]`: Repeat a question
* `-p, --process [MESSAGE ...]`: Process existing questions
* `-a, --ask ASK` : Ask a question
* `-c, --create CREATE` : Create a question
* `-r, --repeat REPEAT` : Repeat a question
* `-p, --process PROCESS` : Process existing questions
* `-O, --overwrite` : Overwrite existing messages when repeating them
* `-s, --source-text FILE`: Add content of a file to the query
* `-S, --source-code FILE`: Add source code file content to the chat history
* `-s, --source-text SOURCE` : Add content of a file to the query
* `-S, --source-code SOURCE` : Add source code file content to the chat history
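
As an illustration, a hypothetical invocation (the question text and tag names are invented for this example) that asks a question, restricts the chat history sent along to messages tagged `python`, and tags the resulting message with `python` as well:

```bash
cmm question -t python -o python -a "How do I parse a YAML file in Python?"
```
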
#### Hist

The `hist` command is used to print the chat history.

```bash
cmm hist [-t OTAGS]... [-k ATAGS]... [-x XTAGS]... [-w] [-W] [-S] [-A SUBSTRING] [-Q SUBSTRING]
cmm hist [-t OTAGS]... [-k ATAGS]... [-x XTAGS]... [-w] [-W] [-S] [-A ANSWER] [-Q QUESTION]
```

* `-t, --or-tags OTAGS` : List of tags (one must match)
@@ -79,47 +79,46 @@ cmm hist [-t OTAGS]... [-k ATAGS]... [-x XTAGS]... [-w] [-W] [-S] [-A SUBSTRING]
* `-x, --exclude-tags XTAGS` : List of tags to exclude
* `-w, --with-tags` : Print chat history with tags
* `-W, --with-files` : Print chat history with filenames
* `-S, --source-code-only`: Only print embedded source code
* `-A, --answer SUBSTRING`: Search for answer substring
* `-Q, --question SUBSTRING`: Search for question substring
* `-S, --source-code-only` : Print only source code
* `-A, --answer ANSWER` : Search for answer substring
* `-Q, --question QUESTION` : Search for question substring
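
For example, a hypothetical call (tag name invented) that prints all messages tagged `python` together with their tags:

```bash
cmm hist -t python -w
```
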
#### Tags

The `tags` command is used to manage tags.

```bash
cmm tags (-l | -p PREFIX | -c SUBSTRING)
cmm tags (-l | -p PREFIX | -c CONTENT)
```

* `-l, --list` : List all tags and their frequency
* `-p, --prefix PREFIX` : Filter tags by prefix
* `-c, --contain SUBSTRING`: Filter tags by contained substring
* `-c, --contain CONTENT` : Filter tags by contained substring
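
For example, listing every tag with its frequency, or narrowing the list to tags that start with a given prefix (the prefix here is made up):

```bash
cmm tags -l
cmm tags -p py
```
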
#### Config

The `config` command is used to manage the configuration.

```bash
cmm config (-l | -m | -c FILE)
cmm config (-l | -m | -c CREATE)
```

* `-l, --list-models` : List all available models
* `-m, --print-model` : Print the currently configured model
* `-c, --create FILE`: Create config with default settings in the given file
* `-c, --create CREATE` : Create config with default settings in the given file
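
A short sketch of typical usage, assuming the default configuration filename `.config.yaml` described in the Configuration section below:

```bash
cmm config -c .config.yaml   # create a default configuration file
cmm config -l                # list all available models
cmm config -m                # print the currently configured model
```
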
#### Print

The `print` command is used to print message files.

```bash
cmm print (-f FILE | -l) [-q | -a | -S]
cmm print -f FILE [-q | -a | -S]
```

* `-f, --file FILE`: Print given file
* `-l, --latest`: Print latest message
* `-q, --question`: Only print the question
* `-a, --answer`: Only print the answer
* `-S, --only-source-code`: Only print embedded source code
* `-f, --file FILE` : File to print
* `-q, --question` : Print only question
* `-a, --answer` : Print only answer
* `-S, --only-source-code` : Print only source code
### Examples

@@ -161,27 +160,18 @@ cmm print -f example.yaml

## Configuration

The default configuration filename is `.config.yaml` (it is searched in the current working directory).
Use the command `cmm config --create <FILENAME>` to create a default configuration:
The configuration file (`.config.yaml`) should contain the following fields:

```
cache: .
db: ./db/
ais:
  myopenai:
    name: openai
    model: gpt-3.5-turbo-16k
    api_key: 0123456789
    temperature: 1.0
    max_tokens: 4000
    top_p: 1.0
    frequency_penalty: 0.0
    presence_penalty: 0.0
    system: You are an assistant
```

Each AI has its own section and the name of that section is called the 'AI ID' (in the example above it is `myopenai`).
The AI ID can be any string, as long as it's unique within the `ais` section. The AI ID is used for all commands that support the `AI` parameter and it's also stored within each message file.
- `openai`:
  - `api_key`: Your OpenAI API key.
  - `model`: The name of the OpenAI model to use (e.g. "text-davinci-002").
  - `temperature`: The temperature value for the model.
  - `max_tokens`: The maximum number of tokens for the model.
  - `top_p`: The top P value for the model.
  - `frequency_penalty`: The frequency penalty value.
  - `presence_penalty`: The presence penalty value.
  - `system`: The system message used to set the behavior of the AI.
- `db`: The directory where the question-answer pairs are stored in YAML files.
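
To use one of the configured AIs for a single invocation, pass its AI ID via the `-A/--AI` option; a hypothetical call that selects the `myopenai` section from the example above:

```bash
cmm question -A myopenai -a "Summarize the configuration format"
```
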
## Autocompletion

@@ -196,33 +186,33 @@ After adding this line, restart your shell or run `source <your-shell-config-fil
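
The exact registration line is elided in this hunk; since the CLI parser calls `argcomplete.autocomplete()`, completion is typically enabled with an `argcomplete` registration line such as the following, assuming the executable is installed as `cmm`:

```bash
eval "$(register-python-argcomplete cmm)"
```
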
## Contributing

### Enable commit hooks
```bash
```
pip install pre-commit
pre-commit install
```
### Execute tests before opening a PR
```bash
```
pytest
```
### Consider using `pyenv` / `pyenv-virtualenv`
Short installation instructions:
* install `pyenv`:
```bash
```
cd ~
git clone https://github.com/pyenv/pyenv .pyenv
cd ~/.pyenv && src/configure && make -C src
```
* make sure that `~/.pyenv/shims` and `~/.pyenv/bin` are the first entries in your `PATH`, e.g., by setting it in `~/.bashrc`
* make sure that `~/.pyenv/shims` and `~/.pyenv/bin` are the first entries in your `PATH`, e. g. by setting it in `~/.bashrc`
* add the following to your `~/.bashrc` (after setting `PATH`): `eval "$(pyenv init -)"`
* create a new terminal or source the changes (e.g., `source ~/.bashrc`)
* create a new terminal or source the changes (e. g. `source ~/.bashrc`)
* install `virtualenv`
```bash
```
git clone https://github.com/pyenv/pyenv-virtualenv.git $(pyenv root)/plugins/pyenv-virtualenv
```
* add the following to your `~/.bashrc` (after the commands above): `eval "$(pyenv virtualenv-init -)"`
* create a new terminal or source the changes (e.g., `source ~/.bashrc`)
* go back to the `ChatMasterMind` repo and create a virtual environment with the latest `Python`, e.g., `3.11.4`:
```bash
* create a new terminal or source the changes (e. g. `source ~/.bashrc`)
* go back to the `ChatMasterMind` repo and create a virtual environment with the latest `Python`, e. g. `3.11.4`:
```
cd <CMM_REPO_PATH>
pyenv install 3.11.4
pyenv virtualenv 3.11.4 py311
@@ -233,3 +223,5 @@ pyenv activate py311
## License

This project is licensed under the terms of the WTFPL License.
@@ -3,20 +3,18 @@ Creates different AI instances, based on the given configuration.
"""

import argparse
from typing import cast, Optional
from typing import cast
from .configuration import Config, AIConfig, OpenAIConfig
from .ai import AI, AIError
from .ais.openai import OpenAI


def create_ai(args: argparse.Namespace, config: Config, # noqa: 11
def_ai: Optional[str] = None,
def_model: Optional[str] = None) -> AI:
def create_ai(args: argparse.Namespace, config: Config) -> AI: # noqa: 11
"""
Creates an AI subclass instance from the given arguments and configuration file.
If AI has not been set in the arguments, it searches for the ID 'default'. If
that is not found, it uses the first AI in the list. It's also possible to
specify a default AI and model using 'def_ai' and 'def_model'.
Creates an AI subclass instance from the given arguments
and configuration file. If AI has not been set in the
arguments, it searches for the ID 'default'. If that
is not found, it uses the first AI in the list.
"""
ai_conf: AIConfig
if hasattr(args, 'AI') and args.AI:
@@ -24,8 +22,6 @@ def create_ai(args: argparse.Namespace, config: Config, # noqa: 11
ai_conf = config.ais[args.AI]
except KeyError:
raise AIError(f"AI ID '{args.AI}' does not exist in this configuration")
elif def_ai:
ai_conf = config.ais[def_ai]
elif 'default' in config.ais:
ai_conf = config.ais['default']
else:
@@ -38,8 +34,6 @@ def create_ai(args: argparse.Namespace, config: Config, # noqa: 11
ai = OpenAI(cast(OpenAIConfig, ai_conf))
if hasattr(args, 'model') and args.model:
ai.config.model = args.model
elif def_model:
ai.config.model = def_model
if hasattr(args, 'max_tokens') and args.max_tokens:
ai.config.max_tokens = args.max_tokens
if hasattr(args, 'temperature') and args.temperature:
@@ -44,7 +44,7 @@ class OpenAI(AI):
frequency_penalty=self.config.frequency_penalty,
presence_penalty=self.config.presence_penalty)
question.answer = Answer(response['choices'][0]['message']['content'])
question.tags = set(otags) if otags is not None else None
question.tags = otags
question.ai = self.ID
question.model = self.config.model
answers: list[Message] = [question]
@@ -3,13 +3,16 @@ import argparse
from pathlib import Path
from ..configuration import Config
from ..message import Message, MessageError
from ..chat import ChatDB


def print_message(message: Message, args: argparse.Namespace) -> None:
def print_cmd(args: argparse.Namespace, config: Config) -> None:
"""
Print given message according to the given arguments.
Handler for the 'print' command.
"""
fname = Path(args.file)
try:
message = Message.from_file(fname)
if message:
if args.question:
print(message.question)
elif args.answer:
@@ -19,27 +22,6 @@ def print_message(message: Message, args: argparse.Namespace) -> None:
print(code)
else:
print(message.to_str())


def print_cmd(args: argparse.Namespace, config: Config) -> None:
"""
Handler for the 'print' command.
"""
# print given file
if args.file is not None:
fname = Path(args.file)
try:
message = Message.from_file(fname)
if message:
print_message(message, args)
except MessageError:
print(f"File is not a valid message: {args.file}")
sys.exit(1)
# print latest message
elif args.latest:
chat = ChatDB.from_dir(Path(config.cache), Path(config.db))
latest = chat.msg_latest(loc='disk')
if not latest:
print("No message found!")
sys.exit(1)
print_message(latest, args)
@@ -2,7 +2,6 @@ import sys
import argparse
from pathlib import Path
from itertools import zip_longest
from copy import deepcopy
from ..configuration import Config
from ..chat import ChatDB
from ..message import Message, MessageFilter, MessageError, Question, source_code
@@ -72,7 +71,7 @@ def create_message(chat: ChatDB, args: argparse.Namespace) -> Message:
full_question = '\n\n'.join(question_parts)

message = Message(question=Question(full_question),
tags=args.output_tags,
tags=args.output_tags, # FIXME
ai=args.AI,
model=args.model)
# only write the new message to the cache,
@@ -93,8 +92,8 @@ def make_request(ai: AI, chat: ChatDB, message: Message, args: argparse.Namespac
print(message.to_str())
response: AIResponse = ai.request(message,
chat,
args.num_answers,
args.output_tags)
args.num_answers, # FIXME
args.output_tags) # FIXME
# only write the response messages to the cache,
# don't add them to the internal list
chat.cache_write(response.messages)
@@ -106,75 +105,13 @@ def make_request(ai: AI, chat: ChatDB, message: Message, args: argparse.Namespac
print(response.tokens)


def create_msg_args(msg: Message, args: argparse.Namespace) -> argparse.Namespace:
"""
Takes an existing message and CLI arguments, and returns modified args based
on the members of the given message. Used e.g. when repeating messages, where
it's necessary to determine the correct AI, model and output tags to use
(either from the existing message or the given args).
"""
msg_args = args
# if AI, model or output tags have not been specified,
# use those from the original message
if (args.AI is None
or args.model is None # noqa: W503
or args.output_tags is None): # noqa: W503
msg_args = deepcopy(args)
if args.AI is None and msg.ai is not None:
msg_args.AI = msg.ai
if args.model is None and msg.model is not None:
msg_args.model = msg.model
if args.output_tags is None and msg.tags is not None:
msg_args.output_tags = msg.tags
return msg_args


def repeat_messages(messages: list[Message], chat: ChatDB, args: argparse.Namespace, config: Config) -> None:
"""
Repeat the given messages using the given arguments.
"""
ai: AI
for msg in messages:
msg_args = create_msg_args(msg, args)
ai = create_ai(msg_args, config)
print(f"--------- Repeating message '{msg.msg_id()}': ---------")
# overwrite the latest message if requested or empty
# -> but not if it's in the DB!
if ((msg.answer is None or msg_args.overwrite is True)
and (not chat.msg_in_db(msg))): # noqa: W503
msg.clear_answer()
make_request(ai, chat, msg, msg_args)
# otherwise create a new one
else:
msg_args.ask = [msg.question]
message = create_message(chat, msg_args)
make_request(ai, chat, message, msg_args)


def invert_input_tag_args(args: argparse.Namespace) -> None:
"""
Changes the semantics of the INPUT tags for this command:
* no tags specified on the CLI -> no tags are selected
* empty tags specified on the CLI -> all tags are selected
"""
if args.or_tags is None:
args.or_tags = set()
elif len(args.or_tags) == 0:
args.or_tags = None
if args.and_tags is None:
args.and_tags = set()
elif len(args.and_tags) == 0:
args.and_tags = None


def question_cmd(args: argparse.Namespace, config: Config) -> None:
"""
Handler for the 'question' command.
"""
invert_input_tag_args(args)
mfilter = MessageFilter(tags_or=args.or_tags,
tags_and=args.and_tags,
tags_not=args.exclude_tags)
mfilter = MessageFilter(tags_or=args.or_tags if args.or_tags is not None else set(),
tags_and=args.and_tags if args.and_tags is not None else set(),
tags_not=args.exclude_tags if args.exclude_tags is not None else set())
chat = ChatDB.from_dir(cache_path=Path(config.cache),
db_path=Path(config.db),
mfilter=mfilter)
@@ -184,24 +121,30 @@ def question_cmd(args: argparse.Namespace, config: Config) -> None:
if args.create:
return

# create the correct AI instance
ai: AI = create_ai(args, config)

# === ASK ===
if args.ask:
ai: AI = create_ai(args, config)
make_request(ai, chat, message, args)
# === REPEAT ===
elif args.repeat is not None:
repeat_msgs: list[Message] = []
# repeat latest message
if len(args.repeat) == 0:
lmessage = chat.msg_latest(loc='cache')
if lmessage is None:
print("No message found to repeat!")
sys.exit(1)
repeat_msgs.append(lmessage)
# repeat given message(s)
else:
repeat_msgs = chat.msg_find(args.repeat, loc='disk')
repeat_messages(repeat_msgs, chat, args, config)
print(f"Repeating message '{lmessage.msg_id()}':")
# overwrite the latest message if requested or empty
if lmessage.answer is None or args.overwrite is True:
lmessage.clear_answer()
make_request(ai, chat, lmessage, args)
# otherwise create a new one
else:
args.ask = [lmessage.question]
message = create_message(chat, args)
make_request(ai, chat, message, args)

# === PROCESS ===
elif args.process is not None:
# TODO: process either all questions without an
+22
-25
@@ -34,23 +34,23 @@ def create_parser() -> argparse.ArgumentParser:

# a parent parser for all commands that support tag selection
tag_parser = argparse.ArgumentParser(add_help=False)
tag_arg = tag_parser.add_argument('-t', '--or-tags', nargs='*',
tag_arg = tag_parser.add_argument('-t', '--or-tags', nargs='+',
help='List of tags (one must match)', metavar='OTAGS')
tag_arg.completer = tags_completer # type: ignore
atag_arg = tag_parser.add_argument('-k', '--and-tags', nargs='*',
atag_arg = tag_parser.add_argument('-k', '--and-tags', nargs='+',
help='List of tags (all must match)', metavar='ATAGS')
atag_arg.completer = tags_completer # type: ignore
etag_arg = tag_parser.add_argument('-x', '--exclude-tags', nargs='*',
etag_arg = tag_parser.add_argument('-x', '--exclude-tags', nargs='+',
help='List of tags to exclude', metavar='XTAGS')
etag_arg.completer = tags_completer # type: ignore
otag_arg = tag_parser.add_argument('-o', '--output-tags', nargs='+',
help='List of output tags (default: use input tags)', metavar='OUTAGS')
help='List of output tags (default: use input tags)', metavar='OUTTAGS')
otag_arg.completer = tags_completer # type: ignore

# a parent parser for all commands that support AI configuration
ai_parser = argparse.ArgumentParser(add_help=False)
ai_parser.add_argument('-A', '--AI', help='AI ID to use', metavar='AI_ID')
ai_parser.add_argument('-M', '--model', help='Model to use', metavar='MODEL')
ai_parser.add_argument('-A', '--AI', help='AI ID to use')
ai_parser.add_argument('-M', '--model', help='Model to use')
ai_parser.add_argument('-n', '--num-answers', help='Number of answers to request', type=int, default=1)
ai_parser.add_argument('-m', '--max-tokens', help='Max. nr. of tokens', type=int)
ai_parser.add_argument('-T', '--temperature', help='Temperature value', type=float)
@@ -61,15 +61,14 @@ def create_parser() -> argparse.ArgumentParser:
aliases=['q'])
question_cmd_parser.set_defaults(func=question_cmd)
question_group = question_cmd_parser.add_mutually_exclusive_group(required=True)
question_group.add_argument('-a', '--ask', nargs='+', help='Ask a question', metavar='QUESTION')
question_group.add_argument('-c', '--create', nargs='+', help='Create a question', metavar='QUESTION')
question_group.add_argument('-r', '--repeat', nargs='*', help='Repeat a question', metavar='MESSAGE')
question_group.add_argument('-p', '--process', nargs='*', help='Process existing questions', metavar='MESSAGE')
question_group.add_argument('-a', '--ask', nargs='+', help='Ask a question')
question_group.add_argument('-c', '--create', nargs='+', help='Create a question')
question_group.add_argument('-r', '--repeat', nargs='*', help='Repeat a question')
question_group.add_argument('-p', '--process', nargs='*', help='Process existing questions')
question_cmd_parser.add_argument('-O', '--overwrite', help='Overwrite existing messages when repeating them',
action='store_true')
question_cmd_parser.add_argument('-s', '--source-text', nargs='+', help='Add content of a file to the query', metavar='FILE')
question_cmd_parser.add_argument('-S', '--source-code', nargs='+', help='Add source code file content to the chat history',
metavar='FILE')
question_cmd_parser.add_argument('-s', '--source-text', nargs='+', help='Add content of a file to the query')
question_cmd_parser.add_argument('-S', '--source-code', nargs='+', help='Add source code file content to the chat history')

# 'hist' command parser
hist_cmd_parser = cmdparser.add_parser('hist', parents=[tag_parser],
@@ -80,10 +79,10 @@ def create_parser() -> argparse.ArgumentParser:
action='store_true')
hist_cmd_parser.add_argument('-W', '--with-files', help="Print chat history with filenames.",
action='store_true')
hist_cmd_parser.add_argument('-S', '--source-code-only', help='Only print embedded source code',
hist_cmd_parser.add_argument('-S', '--source-code-only', help='Print only source code',
action='store_true')
hist_cmd_parser.add_argument('-A', '--answer', help='Search for answer substring', metavar='SUBSTRING')
hist_cmd_parser.add_argument('-Q', '--question', help='Search for question substring', metavar='SUBSTRING')
hist_cmd_parser.add_argument('-A', '--answer', help='Search for answer substring')
hist_cmd_parser.add_argument('-Q', '--question', help='Search for question substring')

# 'tags' command parser
tags_cmd_parser = cmdparser.add_parser('tags',
@@ -93,8 +92,8 @@ def create_parser() -> argparse.ArgumentParser:
tags_group = tags_cmd_parser.add_mutually_exclusive_group(required=True)
tags_group.add_argument('-l', '--list', help="List all tags and their frequency",
action='store_true')
tags_cmd_parser.add_argument('-p', '--prefix', help="Filter tags by prefix", metavar='PREFIX')
tags_cmd_parser.add_argument('-c', '--contain', help="Filter tags by contained substring", metavar='SUBSTRING')
tags_cmd_parser.add_argument('-p', '--prefix', help="Filter tags by prefix")
tags_cmd_parser.add_argument('-c', '--contain', help="Filter tags by contained substring")

# 'config' command parser
config_cmd_parser = cmdparser.add_parser('config',
@@ -107,20 +106,18 @@ def create_parser() -> argparse.ArgumentParser:
action='store_true')
config_group.add_argument('-m', '--print-model', help="Print the currently configured model",
action='store_true')
config_group.add_argument('-c', '--create', help="Create config with default settings in the given file", metavar='FILE')
config_group.add_argument('-c', '--create', help="Create config with default settings in the given file")

# 'print' command parser
print_cmd_parser = cmdparser.add_parser('print',
help="Print message files.",
aliases=['p'])
print_cmd_parser.set_defaults(func=print_cmd)
print_group = print_cmd_parser.add_mutually_exclusive_group(required=True)
print_group.add_argument('-f', '--file', help='Print given message file', metavar='FILE')
print_group.add_argument('-l', '--latest', help='Print latest message', action='store_true')
print_cmd_parser.add_argument('-f', '--file', help='File to print', required=True)
print_cmd_modes = print_cmd_parser.add_mutually_exclusive_group()
print_cmd_modes.add_argument('-q', '--question', help='Only print the question', action='store_true')
print_cmd_modes.add_argument('-a', '--answer', help='Only print the answer', action='store_true')
print_cmd_modes.add_argument('-S', '--only-source-code', help='Only print embedded source code', action='store_true')
print_cmd_modes.add_argument('-q', '--question', help='Print only question', action='store_true')
print_cmd_modes.add_argument('-a', '--answer', help='Print only answer', action='store_true')
print_cmd_modes.add_argument('-S', '--only-source-code', help='Print only source code', action='store_true')

argcomplete.autocomplete(parser)
return parser
@@ -1,100 +0,0 @@
|
||||
import unittest
|
||||
import argparse
|
||||
from typing import Union, Optional
|
||||
from chatmastermind.configuration import Config, AIConfig
|
||||
from chatmastermind.tags import Tag
|
||||
from chatmastermind.message import Message, Answer
|
||||
from chatmastermind.chat import Chat
|
||||
from chatmastermind.ai import AI, AIResponse, Tokens, AIError
|
||||
|
||||
|
||||
class FakeAI(AI):
|
||||
"""
|
||||
A mocked version of the 'AI' class.
|
||||
"""
|
||||
ID: str
|
||||
name: str
|
||||
config: AIConfig
|
||||
|
||||
def models(self) -> list[str]:
|
||||
raise NotImplementedError
|
||||
|
||||
def tokens(self, data: Union[Message, Chat]) -> int:
|
||||
return 123
|
||||
|
||||
def print(self) -> None:
|
||||
pass
|
||||
|
||||
def print_models(self) -> None:
|
||||
pass
|
||||
|
||||
def __init__(self, ID: str, model: str, error: bool = False):
|
||||
self.ID = ID
|
||||
self.model = model
|
||||
self.error = error
|
||||
|
||||
def request(self,
|
||||
question: Message,
|
||||
chat: Chat,
|
||||
num_answers: int = 1,
|
||||
otags: Optional[set[Tag]] = None) -> AIResponse:
|
||||
"""
|
||||
Mock the 'ai.request()' function by either returning fake
|
||||
answers or raising an exception.
|
||||
"""
|
||||
if self.error:
|
||||
raise AIError
|
||||
question.answer = Answer("Answer 0")
|
||||
question.tags = set(otags) if otags is not None else None
|
||||
question.ai = self.ID
|
||||
question.model = self.model
|
||||
answers: list[Message] = [question]
|
||||
for n in range(1, num_answers):
|
||||
answers.append(Message(question=question.question,
|
||||
answer=Answer(f"Answer {n}"),
|
||||
tags=otags,
|
||||
ai=self.ID,
|
||||
model=self.model))
|
||||
return AIResponse(answers, Tokens(10, 10, 20))
|
||||
|
||||
|
||||
class TestWithFakeAI(unittest.TestCase):
|
||||
"""
|
||||
Base class for all tests that need to use the FakeAI.
|
||||
"""
|
||||
def assert_msgs_equal_except_file_path(self, msg1: list[Message], msg2: list[Message]) -> None:
|
||||
"""
|
||||
Compare messages using Question, Answer and all metadata except for the file_path.
|
||||
"""
|
||||
self.assertEqual(len(msg1), len(msg2))
|
||||
for m1, m2 in zip(msg1, msg2):
|
||||
# exclude the file_path, compare only Q, A and metadata
|
||||
self.assertTrue(m1.equals(m2, file_path=False, verbose=True))
|
||||
|
||||
def assert_msgs_all_equal(self, msg1: list[Message], msg2: list[Message]) -> None:
|
||||
"""
|
||||
Compare messages using Question, Answer and ALL metadata.
|
||||
"""
|
||||
self.assertEqual(len(msg1), len(msg2))
|
||||
for m1, m2 in zip(msg1, msg2):
|
||||
self.assertTrue(m1.equals(m2, verbose=True))
|
||||
|
||||
def assert_msgs_content_equal(self, msg1: list[Message], msg2: list[Message]) -> None:
|
||||
"""
|
||||
Compare messages using only Question and Answer.
|
||||
"""
|
||||
self.assertEqual(len(msg1), len(msg2))
|
||||
for m1, m2 in zip(msg1, msg2):
|
||||
self.assertEqual(m1, m2)
|
||||
|
||||
def mock_create_ai(self, args: argparse.Namespace, config: Config) -> AI:
|
||||
"""
|
||||
Mocked 'create_ai' that returns a 'FakeAI' instance.
|
||||
"""
|
||||
return FakeAI(args.AI, args.model)
|
||||
|
||||
def mock_create_ai_with_error(self, args: argparse.Namespace, config: Config) -> AI:
|
||||
"""
|
||||
Mocked 'create_ai' that returns a 'FakeAI' instance.
|
||||
"""
|
||||
return FakeAI(args.AI, args.model, error=True)
|
||||
+134
-233
@@ -1,20 +1,31 @@
|
||||
import os
|
||||
import unittest
|
||||
import argparse
|
||||
import tempfile
|
||||
from copy import copy
|
||||
from pathlib import Path
|
||||
from unittest import mock
|
||||
from unittest.mock import MagicMock, call
|
||||
from unittest.mock import MagicMock, call, ANY
|
||||
from typing import Optional
|
||||
from chatmastermind.configuration import Config
|
||||
from chatmastermind.commands.question import create_message, question_cmd
|
||||
from chatmastermind.tags import Tag
|
||||
from chatmastermind.message import Message, Question, Answer
|
||||
from chatmastermind.chat import Chat, ChatDB
|
||||
from chatmastermind.ai import AIError
|
||||
from .test_common import TestWithFakeAI
|
||||
from chatmastermind.ai import AI, AIResponse, Tokens, AIError
|
||||
|
||||
|
||||
class TestMessageCreate(TestWithFakeAI):
|
||||
class TestQuestionCmdBase(unittest.TestCase):
|
||||
def assert_messages_equal(self, msg1: list[Message], msg2: list[Message]) -> None:
|
||||
"""
|
||||
Compare messages using more than just Question and Answer.
|
||||
"""
|
||||
self.assertEqual(len(msg1), len(msg2))
|
||||
for m1, m2 in zip(msg1, msg2):
|
||||
# exclude the file_path, compare only Q, A and metadata
|
||||
self.assertTrue(m1.equals(m2, file_path=False, verbose=True))
|
||||
|
||||
|
||||
class TestMessageCreate(TestQuestionCmdBase):
|
||||
"""
|
||||
Test if messages created by the 'question' command have
|
||||
the correct format.
|
||||
@@ -201,7 +212,7 @@ It is embedded code
|
||||
"""))
|
||||
|
||||
|
||||
class TestQuestionCmd(TestWithFakeAI):
|
||||
class TestQuestionCmd(TestQuestionCmdBase):
|
||||
|
||||
def setUp(self) -> None:
|
||||
# create DB and cache
|
||||
@@ -216,8 +227,8 @@ class TestQuestionCmd(TestWithFakeAI):
|
||||
ask=['What is the meaning of life?'],
|
||||
num_answers=1,
|
||||
output_tags=['science'],
|
||||
AI='FakeAI',
|
||||
model='FakeModel',
|
||||
AI='openai',
|
||||
model='gpt-3.5-turbo',
|
||||
or_tags=None,
|
||||
and_tags=None,
|
||||
exclude_tags=None,
|
||||
@@ -228,27 +239,57 @@ class TestQuestionCmd(TestWithFakeAI):
|
||||
process=None,
|
||||
overwrite=None
|
||||
)
|
||||
# create a mock AI instance
|
||||
self.ai = MagicMock(spec=AI)
|
||||
self.ai.request.side_effect = self.mock_request
|
||||
|
||||
def input_message(self, args: argparse.Namespace) -> Message:
|
||||
"""
|
||||
Create the expected input message for a question using the
|
||||
given arguments.
|
||||
"""
|
||||
# NOTE: we only use the first question from the "ask" list
|
||||
# -> message creation using "question.create_message()" is
|
||||
# tested above
|
||||
# the answer is always empty for the input message
|
||||
return Message(Question(args.ask[0]),
|
||||
tags=args.output_tags,
|
||||
ai=args.AI,
|
||||
model=args.model)
|
||||
|
||||
def mock_request(self,
|
||||
question: Message,
|
||||
chat: Chat,
|
||||
num_answers: int = 1,
|
||||
otags: Optional[set[Tag]] = None) -> AIResponse:
|
||||
"""
|
||||
Mock the 'ai.request()' function
|
||||
"""
|
||||
question.answer = Answer("Answer 0")
|
||||
question.tags = set(otags) if otags else None
|
||||
question.ai = 'FakeAI'
|
||||
question.model = 'FakeModel'
|
||||
answers: list[Message] = [question]
|
||||
for n in range(1, num_answers):
|
||||
answers.append(Message(question=question.question,
|
||||
answer=Answer(f"Answer {n}"),
|
||||
tags=otags,
|
||||
ai='FakeAI',
|
||||
model='FakeModel'))
|
||||
return AIResponse(answers, Tokens(10, 10, 20))
|
||||
|
||||
def message_list(self, tmp_dir: tempfile.TemporaryDirectory) -> list[Path]:
|
||||
# exclude '.next'
|
||||
return sorted([f for f in Path(tmp_dir.name).glob('*.[ty]*')])
|
||||
|
||||
|
||||
class TestQuestionCmdAsk(TestQuestionCmd):
|
||||
|
||||
@mock.patch('chatmastermind.commands.question.create_ai')
|
||||
def test_ask_single_answer(self, mock_create_ai: MagicMock) -> None:
|
||||
"""
|
||||
Test single answer with no errors.
|
||||
"""
|
||||
mock_create_ai.side_effect = self.mock_create_ai
|
||||
expected_question = Message(Question(self.args.ask[0]),
|
||||
tags=set(self.args.output_tags),
|
||||
ai=self.args.AI,
|
||||
model=self.args.model,
|
||||
file_path=Path('<NOT COMPARED>'))
|
||||
fake_ai = self.mock_create_ai(self.args, self.config)
|
||||
expected_responses = fake_ai.request(expected_question,
|
||||
mock_create_ai.return_value = self.ai
|
||||
expected_question = self.input_message(self.args)
|
||||
expected_responses = self.mock_request(expected_question,
|
||||
Chat([]),
|
||||
self.args.num_answers,
|
||||
self.args.output_tags).messages
|
||||
@@ -256,12 +297,17 @@ class TestQuestionCmdAsk(TestQuestionCmd):
|
||||
# execute the command
|
||||
question_cmd(self.args, self.config)
|
||||
|
||||
# check for correct request call
|
||||
self.ai.request.assert_called_once_with(expected_question,
|
||||
ANY,
|
||||
self.args.num_answers,
|
||||
self.args.output_tags)
|
||||
# check for the expected message files
|
||||
chat = ChatDB.from_dir(Path(self.cache_dir.name),
|
||||
Path(self.db_dir.name))
|
||||
cached_msg = chat.msg_gather(loc='cache')
|
||||
self.assertEqual(len(self.message_list(self.cache_dir)), 1)
|
||||
self.assert_msgs_equal_except_file_path(cached_msg, expected_responses)
|
||||
self.assert_messages_equal(cached_msg, expected_responses)
|
||||
|
||||
@mock.patch('chatmastermind.commands.question.ChatDB.from_dir')
|
||||
@mock.patch('chatmastermind.commands.question.create_ai')
|
||||
@@ -272,14 +318,9 @@ class TestQuestionCmdAsk(TestQuestionCmd):
|
||||
chat = MagicMock(spec=ChatDB)
|
||||
mock_from_dir.return_value = chat
|
||||
|
||||
mock_create_ai.side_effect = self.mock_create_ai
|
||||
expected_question = Message(Question(self.args.ask[0]),
|
||||
tags=set(self.args.output_tags),
|
||||
ai=self.args.AI,
|
||||
model=self.args.model,
|
||||
file_path=Path('<NOT COMPARED>'))
|
||||
fake_ai = self.mock_create_ai(self.args, self.config)
|
||||
expected_responses = fake_ai.request(expected_question,
|
||||
mock_create_ai.return_value = self.ai
|
||||
expected_question = self.input_message(self.args)
|
||||
expected_responses = self.mock_request(expected_question,
|
||||
Chat([]),
|
||||
self.args.num_answers,
|
||||
self.args.output_tags).messages
|
||||
@@ -287,6 +328,12 @@ class TestQuestionCmdAsk(TestQuestionCmd):
|
||||
# execute the command
|
||||
question_cmd(self.args, self.config)
|
||||
|
||||
# check for correct request call
|
||||
self.ai.request.assert_called_once_with(expected_question,
|
||||
chat,
|
||||
self.args.num_answers,
|
||||
self.args.output_tags)
|
||||
|
||||
# check for the correct ChatDB calls:
|
||||
# - initial question has been written (prior to the actual request)
|
||||
# - responses have been written (after the request)
|
||||
@@ -303,98 +350,86 @@ class TestQuestionCmdAsk(TestQuestionCmd):
|
||||
Provoke an error during the AI request and verify that the question
|
||||
has been correctly stored in the cache.
|
||||
"""
|
||||
mock_create_ai.side_effect = self.mock_create_ai_with_error
|
||||
expected_question = Message(Question(self.args.ask[0]),
|
||||
tags=set(self.args.output_tags),
|
||||
ai=self.args.AI,
|
||||
model=self.args.model,
|
||||
file_path=Path('<NOT COMPARED>'))
|
||||
mock_create_ai.return_value = self.ai
|
||||
expected_question = self.input_message(self.args)
|
||||
self.ai.request.side_effect = AIError
|
||||
|
||||
# execute the command
|
||||
with self.assertRaises(AIError):
|
||||
question_cmd(self.args, self.config)
|
||||
|
||||
# check for correct request call
|
||||
self.ai.request.assert_called_once_with(expected_question,
|
||||
ANY,
|
||||
self.args.num_answers,
|
||||
self.args.output_tags)
|
||||
# check for the expected message files
|
||||
chat = ChatDB.from_dir(Path(self.cache_dir.name),
|
||||
Path(self.db_dir.name))
|
||||
cached_msg = chat.msg_gather(loc='cache')
|
||||
self.assertEqual(len(self.message_list(self.cache_dir)), 1)
|
||||
self.assert_msgs_equal_except_file_path(cached_msg, [expected_question])
|
||||
|
||||
|
||||
class TestQuestionCmdRepeat(TestQuestionCmd):
|
||||
self.assert_messages_equal(cached_msg, [expected_question])
|
||||
|
||||
@mock.patch('chatmastermind.commands.question.create_ai')
|
||||
def test_repeat_single_question(self, mock_create_ai: MagicMock) -> None:
|
||||
"""
|
||||
Repeat a single question.
|
||||
"""
|
||||
mock_create_ai.side_effect = self.mock_create_ai
|
||||
# create a message
|
||||
message = Message(Question(self.args.ask[0]),
|
||||
Answer('Old Answer'),
|
||||
tags=set(self.args.output_tags),
|
||||
ai=self.args.AI,
|
||||
model=self.args.model,
|
||||
file_path=Path(self.cache_dir.name) / '0001.txt')
|
||||
message.to_file()
|
||||
|
||||
# repeat the last question (without overwriting)
|
||||
# -> expect two identical messages (except for the file_path)
|
||||
self.args.ask = None
|
||||
self.args.repeat = []
|
||||
self.args.overwrite = False
|
||||
expected_response = Message(Question(message.question),
|
||||
Answer('Answer 0'),
|
||||
ai=message.ai,
|
||||
model=message.model,
|
||||
tags=message.tags,
|
||||
file_path=Path('<NOT COMPARED>'))
|
||||
# we expect the original message + the one with the new response
|
||||
expected_responses = [message] + [expected_response]
|
||||
# 1. ask a question
|
||||
mock_create_ai.return_value = self.ai
|
||||
expected_question = self.input_message(self.args)
|
||||
expected_responses = self.mock_request(expected_question,
|
||||
Chat([]),
|
||||
self.args.num_answers,
|
||||
self.args.output_tags).messages
|
||||
question_cmd(self.args, self.config)
|
||||
chat = ChatDB.from_dir(Path(self.cache_dir.name),
|
||||
Path(self.db_dir.name))
|
||||
cached_msg = chat.msg_gather(loc='cache')
|
||||
print(self.message_list(self.cache_dir))
|
||||
self.assertEqual(len(self.message_list(self.cache_dir)), 1)
|
||||
self.assert_messages_equal(cached_msg, expected_responses)
|
||||
|
||||
# 2. repeat the last question (without overwriting)
|
||||
# -> expect two identical messages (except for the file_path)
|
||||
self.args.ask = None
|
||||
self.args.repeat = []
|
||||
self.args.overwrite = False
|
||||
expected_responses += expected_responses
|
||||
question_cmd(self.args, self.config)
|
||||
cached_msg = chat.msg_gather(loc='cache')
|
||||
self.assertEqual(len(self.message_list(self.cache_dir)), 2)
|
||||
self.assert_msgs_equal_except_file_path(cached_msg, expected_responses)
|
||||
self.assert_messages_equal(cached_msg, expected_responses)
|
||||
|
||||
@mock.patch('chatmastermind.commands.question.create_ai')
|
||||
def test_repeat_single_question_overwrite(self, mock_create_ai: MagicMock) -> None:
|
||||
"""
|
||||
Repeat a single question and overwrite the old one.
|
||||
"""
|
||||
mock_create_ai.side_effect = self.mock_create_ai
|
||||
# create a message
|
||||
message = Message(Question(self.args.ask[0]),
|
||||
Answer('Old Answer'),
|
||||
tags=set(self.args.output_tags),
|
||||
ai=self.args.AI,
|
||||
model=self.args.model,
|
||||
file_path=Path(self.cache_dir.name) / '0001.txt')
|
||||
message.to_file()
|
||||
# 1. ask a question
|
||||
mock_create_ai.return_value = self.ai
|
||||
expected_question = self.input_message(self.args)
|
||||
expected_responses = self.mock_request(expected_question,
|
||||
Chat([]),
|
||||
self.args.num_answers,
|
||||
self.args.output_tags).messages
|
||||
question_cmd(self.args, self.config)
|
||||
chat = ChatDB.from_dir(Path(self.cache_dir.name),
|
||||
Path(self.db_dir.name))
|
||||
cached_msg = chat.msg_gather(loc='cache')
|
||||
assert cached_msg[0].file_path
|
||||
cached_msg_file_id = cached_msg[0].file_path.stem
|
||||
self.assertEqual(len(self.message_list(self.cache_dir)), 1)
|
||||
self.assert_messages_equal(cached_msg, expected_responses)
|
||||
|
||||
# repeat the last question (WITH overwriting)
|
||||
# -> expect a single message afterwards (with a new answer)
|
||||
# 2. repeat the last question (WITH overwriting)
|
||||
# -> expect a single message afterwards
|
||||
self.args.ask = None
|
||||
self.args.repeat = []
|
||||
self.args.overwrite = True
|
||||
expected_response = Message(Question(message.question),
|
||||
Answer('Answer 0'),
|
||||
ai=message.ai,
|
||||
model=message.model,
|
||||
tags=message.tags,
|
||||
file_path=Path('<NOT COMPARED>'))
|
||||
question_cmd(self.args, self.config)
|
||||
cached_msg = chat.msg_gather(loc='cache')
|
||||
self.assertEqual(len(self.message_list(self.cache_dir)), 1)
|
||||
self.assert_msgs_equal_except_file_path(cached_msg, [expected_response])
|
||||
self.assert_messages_equal(cached_msg, expected_responses)
|
||||
# also check that the file ID has not been changed
|
||||
assert cached_msg[0].file_path
|
||||
self.assertEqual(cached_msg_file_id, cached_msg[0].file_path.stem)
|
||||
@@ -404,172 +439,38 @@ class TestQuestionCmdRepeat(TestQuestionCmd):
|
||||
"""
|
||||
Repeat a single question after an error.
|
||||
"""
|
||||
mock_create_ai.side_effect = self.mock_create_ai
|
||||
# create a question WITHOUT an answer
|
||||
# -> just like after an error, which is tested above
|
||||
message = Message(Question(self.args.ask[0]),
|
||||
tags=set(self.args.output_tags),
|
||||
ai=self.args.AI,
|
||||
model=self.args.model,
|
||||
file_path=Path(self.cache_dir.name) / '0001.txt')
|
||||
message.to_file()
|
||||
# 1. ask a question
|
||||
mock_create_ai.return_value = self.ai
|
||||
expected_question = self.input_message(self.args)
|
||||
self.ai.request.side_effect = AIError
|
||||
|
||||
# execute the command
|
||||
with self.assertRaises(AIError):
|
||||
question_cmd(self.args, self.config)
|
||||
|
||||
chat = ChatDB.from_dir(Path(self.cache_dir.name),
|
||||
Path(self.db_dir.name))
|
||||
cached_msg = chat.msg_gather(loc='cache')
|
||||
assert cached_msg[0].file_path
|
||||
cached_msg_file_id = cached_msg[0].file_path.stem
|
||||
self.assertEqual(len(self.message_list(self.cache_dir)), 1)
|
||||
self.assert_messages_equal(cached_msg, [expected_question])
|
||||
|
||||
# repeat the last question (without overwriting)
|
||||
# 2. repeat the last question (without overwriting)
|
||||
# -> expect a single message because if the original has
|
||||
# no answer, it should be overwritten by default
|
||||
self.args.ask = None
|
||||
self.args.repeat = []
|
||||
self.args.overwrite = False
|
||||
expected_response = Message(Question(message.question),
|
||||
Answer('Answer 0'),
|
||||
ai=message.ai,
|
||||
model=message.model,
|
||||
tags=message.tags,
|
||||
file_path=Path('<NOT COMPARED>'))
|
||||
self.ai.request.side_effect = self.mock_request
|
||||
expected_responses = self.mock_request(expected_question,
|
||||
Chat([]),
|
||||
self.args.num_answers,
|
||||
self.args.output_tags).messages
|
||||
question_cmd(self.args, self.config)
|
||||
cached_msg = chat.msg_gather(loc='cache')
|
||||
self.assertEqual(len(self.message_list(self.cache_dir)), 1)
|
||||
self.assert_msgs_equal_except_file_path(cached_msg, [expected_response])
|
||||
self.assert_messages_equal(cached_msg, expected_responses)
|
||||
# also check that the file ID has not been changed
|
||||
assert cached_msg[0].file_path
|
||||
self.assertEqual(cached_msg_file_id, cached_msg[0].file_path.stem)
|
||||
|
||||
@mock.patch('chatmastermind.commands.question.create_ai')
|
||||
def test_repeat_single_question_new_args(self, mock_create_ai: MagicMock) -> None:
|
||||
"""
|
||||
Repeat a single question with new arguments.
|
||||
"""
|
||||
mock_create_ai.side_effect = self.mock_create_ai
|
||||
# create a message
|
||||
message = Message(Question(self.args.ask[0]),
|
||||
Answer('Old Answer'),
|
||||
tags=set(self.args.output_tags),
|
||||
ai=self.args.AI,
|
||||
model=self.args.model,
|
||||
file_path=Path(self.cache_dir.name) / '0001.txt')
|
||||
message.to_file()
|
||||
chat = ChatDB.from_dir(Path(self.cache_dir.name),
|
||||
Path(self.db_dir.name))
|
||||
cached_msg = chat.msg_gather(loc='cache')
|
||||
assert cached_msg[0].file_path
|
||||
|
||||
# repeat the last question with new arguments (without overwriting)
|
||||
# -> expect two messages with identical question but different metadata and new answer
|
||||
self.args.ask = None
|
||||
self.args.repeat = []
|
||||
self.args.overwrite = False
|
||||
self.args.output_tags = ['newtag']
|
||||
self.args.AI = 'newai'
|
||||
self.args.model = 'newmodel'
|
||||
new_expected_response = Message(Question(message.question),
|
||||
Answer('Answer 0'),
|
||||
ai='newai',
|
||||
model='newmodel',
|
||||
tags={Tag('newtag')},
|
||||
file_path=Path('<NOT COMPARED>'))
|
||||
question_cmd(self.args, self.config)
|
||||
cached_msg = chat.msg_gather(loc='cache')
|
||||
self.assertEqual(len(self.message_list(self.cache_dir)), 2)
|
||||
self.assert_msgs_equal_except_file_path(cached_msg, [message] + [new_expected_response])
|
||||
|
||||
@mock.patch('chatmastermind.commands.question.create_ai')
|
||||
def test_repeat_single_question_new_args_overwrite(self, mock_create_ai: MagicMock) -> None:
|
||||
"""
|
||||
Repeat a single question with new arguments, overwriting the old one.
|
||||
"""
|
||||
mock_create_ai.side_effect = self.mock_create_ai
|
||||
# create a message
|
||||
message = Message(Question(self.args.ask[0]),
|
||||
Answer('Old Answer'),
|
||||
tags=set(self.args.output_tags),
|
||||
ai=self.args.AI,
|
||||
model=self.args.model,
|
||||
file_path=Path(self.cache_dir.name) / '0001.txt')
|
||||
message.to_file()
|
||||
chat = ChatDB.from_dir(Path(self.cache_dir.name),
|
||||
Path(self.db_dir.name))
|
||||
cached_msg = chat.msg_gather(loc='cache')
|
||||
assert cached_msg[0].file_path
|
||||
|
||||
# repeat the last question with new arguments
|
||||
self.args.ask = None
|
||||
self.args.repeat = []
|
||||
self.args.overwrite = True
|
||||
self.args.output_tags = ['newtag']
|
||||
self.args.AI = 'newai'
|
||||
self.args.model = 'newmodel'
|
||||
new_expected_response = Message(Question(message.question),
|
||||
Answer('Answer 0'),
|
||||
ai='newai',
|
||||
model='newmodel',
|
||||
tags={Tag('newtag')},
|
||||
file_path=Path('<NOT COMPARED>'))
|
||||
question_cmd(self.args, self.config)
|
||||
cached_msg = chat.msg_gather(loc='cache')
|
||||
self.assertEqual(len(self.message_list(self.cache_dir)), 1)
|
||||
self.assert_msgs_equal_except_file_path(cached_msg, [new_expected_response])
|
||||
|
||||
@mock.patch('chatmastermind.commands.question.create_ai')
|
||||
def test_repeat_multiple_questions(self, mock_create_ai: MagicMock) -> None:
|
||||
"""
|
||||
Repeat multiple questions.
|
||||
"""
|
||||
mock_create_ai.side_effect = self.mock_create_ai
|
||||
# 1. === create three questions ===
|
||||
# cached message without an answer
|
||||
message1 = Message(Question(self.args.ask[0]),
|
||||
tags=self.args.output_tags,
|
||||
ai=self.args.AI,
|
||||
model=self.args.model,
|
||||
file_path=Path(self.cache_dir.name) / '0001.txt')
|
||||
# cached message with an answer
|
||||
message2 = Message(Question(self.args.ask[0]),
|
||||
Answer('Old Answer'),
|
||||
tags=self.args.output_tags,
|
||||
ai=self.args.AI,
|
||||
model=self.args.model,
|
||||
file_path=Path(self.cache_dir.name) / '0002.txt')
|
||||
# DB message without an answer
|
||||
message3 = Message(Question(self.args.ask[0]),
|
||||
tags=self.args.output_tags,
|
||||
ai=self.args.AI,
|
||||
model=self.args.model,
|
||||
file_path=Path(self.db_dir.name) / '0003.txt')
|
||||
message1.to_file()
|
||||
message2.to_file()
|
||||
message3.to_file()
|
||||
questions = [message1, message2, message3]
|
||||
expected_responses: list[Message] = []
|
||||
fake_ai = self.mock_create_ai(self.args, self.config)
|
||||
for question in questions:
|
||||
# since the message's answer is modified, we use a copy
|
||||
# -> the original is used for comparison below
|
||||
expected_responses += fake_ai.request(copy(question),
|
||||
Chat([]),
|
||||
self.args.num_answers,
|
||||
set(self.args.output_tags)).messages
|
||||
|
||||
# 2. === repeat all three questions (without overwriting) ===
|
||||
self.args.ask = None
|
||||
self.args.repeat = ['0001', '0002', '0003']
|
||||
self.args.overwrite = False
|
||||
question_cmd(self.args, self.config)
|
||||
# two new files should be in the cache directory
|
||||
# * the repeated cached message with answer
|
||||
# * the repeated DB message
|
||||
# -> the cached message without answer should be overwritten
|
||||
self.assertEqual(len(self.message_list(self.cache_dir)), 4)
|
||||
self.assertEqual(len(self.message_list(self.db_dir)), 1)
|
||||
expected_cache_messages = [expected_responses[0], message2, expected_responses[1], expected_responses[2]]
|
||||
chat = ChatDB.from_dir(Path(self.cache_dir.name),
|
||||
Path(self.db_dir.name))
|
||||
cached_msg = chat.msg_gather(loc='cache')
|
||||
self.assert_msgs_equal_except_file_path(cached_msg, expected_cache_messages)
|
||||
# check that the DB message has not been modified at all
|
||||
db_msg = chat.msg_gather(loc='db')
|
||||
self.assert_msgs_all_equal(db_msg, [message3])
|