Compare commits

2 Commits

| Author | SHA1 | Date |
|---|---|---|
| | 0e00d25ef3 | |
| | 69d916f0cc | |
````diff
@@ -46,80 +46,79 @@ cmm [global options] command [command options]
 The `question` command is used to ask, create, and process questions.
 
 ```bash
-cmm question [-t OTAGS]... [-k ATAGS]... [-x XTAGS]... [-o OUTTAGS]... [-A AI_ID] [-M MODEL] [-n NUM] [-m MAX] [-T TEMP] (-a QUESTION | -c QUESTION | -r [MESSAGE ...] | -p [MESSAGE ...]) [-O] [-s FILE]... [-S FILE]...
+cmm question [-t OTAGS]... [-k ATAGS]... [-x XTAGS]... [-o OUTTAGS]... [-A AI] [-M MODEL] [-n NUM] [-m MAX] [-T TEMP] (-a ASK | -c CREATE | -r REPEAT | -p PROCESS) [-O] [-s SOURCE]... [-S SOURCE]...
 ```
 
-* `-t, --or-tags OTAGS`: List of tags (one must match)
-* `-k, --and-tags ATAGS`: List of tags (all must match)
-* `-x, --exclude-tags XTAGS`: List of tags to exclude
-* `-o, --output-tags OUTTAGS`: List of output tags (default: use input tags)
-* `-A, --AI AI_ID`: AI ID to use
-* `-M, --model MODEL`: Model to use
-* `-n, --num-answers NUM`: Number of answers to request
-* `-m, --max-tokens MAX`: Max. number of tokens
-* `-T, --temperature TEMP`: Temperature value
-* `-a, --ask QUESTION`: Ask a question
-* `-c, --create QUESTION`: Create a question
-* `-r, --repeat [MESSAGE ...]`: Repeat a question
-* `-p, --process [MESSAGE ...]`: Process existing questions
-* `-O, --overwrite`: Overwrite existing messages when repeating them
-* `-s, --source-text FILE`: Add content of a file to the query
-* `-S, --source-code FILE`: Add source code file content to the chat history
+* `-t, --or-tags OTAGS` : List of tags (one must match)
+* `-k, --and-tags ATAGS` : List of tags (all must match)
+* `-x, --exclude-tags XTAGS` : List of tags to exclude
+* `-o, --output-tags OUTTAGS` : List of output tags (default: use input tags)
+* `-A, --AI AI` : AI ID to use
+* `-M, --model MODEL` : Model to use
+* `-n, --num-answers NUM` : Number of answers to request
+* `-m, --max-tokens MAX` : Max. number of tokens
+* `-T, --temperature TEMP` : Temperature value
+* `-a, --ask ASK` : Ask a question
+* `-c, --create CREATE` : Create a question
+* `-r, --repeat REPEAT` : Repeat a question
+* `-p, --process PROCESS` : Process existing questions
+* `-O, --overwrite` : Overwrite existing messages when repeating them
+* `-s, --source-text SOURCE` : Add content of a file to the query
+* `-S, --source-code SOURCE` : Add source code file content to the chat history
 
 #### Hist
 
 The `hist` command is used to print the chat history.
 
 ```bash
-cmm hist [-t OTAGS]... [-k ATAGS]... [-x XTAGS]... [-w] [-W] [-S] [-A SUBSTRING] [-Q SUBSTRING]
+cmm hist [-t OTAGS]... [-k ATAGS]... [-x XTAGS]... [-w] [-W] [-S] [-A ANSWER] [-Q QUESTION]
 ```
 
-* `-t, --or-tags OTAGS`: List of tags (one must match)
-* `-k, --and-tags ATAGS`: List of tags (all must match)
-* `-x, --exclude-tags XTAGS`: List of tags to exclude
-* `-w, --with-tags`: Print chat history with tags
-* `-W, --with-files`: Print chat history with filenames
-* `-S, --source-code-only`: Only print embedded source code
-* `-A, --answer SUBSTRING`: Search for answer substring
-* `-Q, --question SUBSTRING`: Search for question substring
+* `-t, --or-tags OTAGS` : List of tags (one must match)
+* `-k, --and-tags ATAGS` : List of tags (all must match)
+* `-x, --exclude-tags XTAGS` : List of tags to exclude
+* `-w, --with-tags` : Print chat history with tags
+* `-W, --with-files` : Print chat history with filenames
+* `-S, --source-code-only` : Print only source code
+* `-A, --answer ANSWER` : Search for answer substring
+* `-Q, --question QUESTION` : Search for question substring
 
 #### Tags
 
 The `tags` command is used to manage tags.
 
 ```bash
-cmm tags (-l | -p PREFIX | -c SUBSTRING)
+cmm tags (-l | -p PREFIX | -c CONTENT)
 ```
 
-* `-l, --list`: List all tags and their frequency
-* `-p, --prefix PREFIX`: Filter tags by prefix
-* `-c, --contain SUBSTRING`: Filter tags by contained substring
+* `-l, --list` : List all tags and their frequency
+* `-p, --prefix PREFIX` : Filter tags by prefix
+* `-c, --contain CONTENT` : Filter tags by contained substring
 
 #### Config
 
 The `config` command is used to manage the configuration.
 
 ```bash
-cmm config (-l | -m | -c FILE)
+cmm config (-l | -m | -c CREATE)
 ```
 
-* `-l, --list-models`: List all available models
-* `-m, --print-model`: Print the currently configured model
-* `-c, --create FILE`: Create config with default settings in the given file
+* `-l, --list-models` : List all available models
+* `-m, --print-model` : Print the currently configured model
+* `-c, --create CREATE` : Create config with default settings in the given file
 
 #### Print
 
 The `print` command is used to print message files.
 
 ```bash
-cmm print (-f FILE | -l) [-q | -a | -S]
+cmm print -f FILE [-q | -a | -S]
 ```
 
-* `-f, --file FILE`: Print given file
-* `-l, --latest`: Print latest message
-* `-q, --question`: Only print the question
-* `-a, --answer`: Only print the answer
-* `-S, --only-source-code`: Only print embedded source code
+* `-f, --file FILE` : File to print
+* `-q, --question` : Print only question
+* `-a, --answer` : Print only answer
+* `-S, --only-source-code` : Print only source code
 
 ### Examples
 
````
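The hunk above drops the explicit `metavar` placeholders, which is why the usage line changes from `-a QUESTION` to `-a ASK`. A minimal standalone sketch of that argparse behavior (the `prog` name and sample arguments here are illustrative, not taken from the repository):

```python
import argparse

# Without an explicit metavar, argparse derives the usage placeholder
# from the long option name, uppercased ('--ask' -> ASK, '--repeat' -> REPEAT).
parser = argparse.ArgumentParser(prog='cmm question', add_help=False)
group = parser.add_mutually_exclusive_group(required=True)
group.add_argument('-a', '--ask', nargs='+', help='Ask a question')
group.add_argument('-c', '--create', nargs='+', help='Create a question')
group.add_argument('-r', '--repeat', nargs='*', help='Repeat a question')
group.add_argument('-p', '--process', nargs='*', help='Process existing questions')

usage = parser.format_usage()   # contains '(-a ASK [ASK ...] | ... | -p [PROCESS ...])'
args = parser.parse_args(['-a', 'What', 'is', 'cmm?'])
```

Because the four options form a required mutually exclusive group, exactly one of `-a`/`-c`/`-r`/`-p` must be given, matching the `(... | ...)` notation in the README usage line.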
````diff
@@ -161,27 +160,18 @@ cmm print -f example.yaml
 
 ## Configuration
 
-The default configuration filename is `.config.yaml` (it is searched in the current working directory).
+The configuration file (`.config.yaml`) should contain the following fields:
 
-Use the command `cmm config --create <FILENAME>` to create a default configuration:
-
-```
-cache: .
-db: ./db/
-ais:
-  myopenai:
-    name: openai
-    model: gpt-3.5-turbo-16k
-    api_key: 0123456789
-    temperature: 1.0
-    max_tokens: 4000
-    top_p: 1.0
-    frequency_penalty: 0.0
-    presence_penalty: 0.0
-    system: You are an assistant
-```
-
-Each AI has its own section and the name of that section is called the 'AI ID' (in the example above it is `myopenai`).
-The AI ID can be any string, as long as it's unique within the `ais` section. The AI ID is used for all commands that support the `AI` parameter and it's also stored within each message file.
+- `openai`:
+- `api_key`: Your OpenAI API key.
+- `model`: The name of the OpenAI model to use (e.g. "text-davinci-002").
+- `temperature`: The temperature value for the model.
+- `max_tokens`: The maximum number of tokens for the model.
+- `top_p`: The top P value for the model.
+- `frequency_penalty`: The frequency penalty value.
+- `presence_penalty`: The presence penalty value.
+- `system`: The system message used to set the behavior of the AI.
+- `db`: The directory where the question-answer pairs are stored in YAML files.
 
 ## Autocompletion
 
````
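Both sides of the configuration hunk describe the same underlying structure. A rough sketch of how such a file could map onto typed config objects, using stand-in dataclasses (field names follow the YAML example; the class shapes are assumptions, not the project's real `Config`/`OpenAIConfig`):

```python
from dataclasses import dataclass, field

@dataclass
class OpenAIConfig:
    # Defaults mirror the values in the YAML example; all are illustrative.
    name: str = 'openai'
    model: str = 'gpt-3.5-turbo-16k'
    api_key: str = ''
    temperature: float = 1.0
    max_tokens: int = 4000
    top_p: float = 1.0
    frequency_penalty: float = 0.0
    presence_penalty: float = 0.0
    system: str = 'You are an assistant'

@dataclass
class Config:
    cache: str = '.'
    db: str = './db/'
    ais: dict = field(default_factory=dict)

# The parsed YAML would arrive as nested dicts, roughly like this:
source = {'cache': '.', 'db': './db/',
          'ais': {'myopenai': {'name': 'openai', 'model': 'gpt-3.5-turbo-16k'}}}
config = Config(cache=str(source['cache']) if 'cache' in source else '.',
                db=str(source['db']),
                ais={ID: OpenAIConfig(**conf) for ID, conf in source['ais'].items()})
```

The `ais` mapping is keyed by the AI ID (`myopenai` in the example), which is why the ID only has to be unique within that section.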
````diff
@@ -196,33 +186,33 @@ After adding this line, restart your shell or run `source <your-shell-config-fil
 ## Contributing
 
 ### Enable commit hooks
 
-```bash
+```
 pip install pre-commit
 pre-commit install
 ```
 
 ### Execute tests before opening a PR
 
-```bash
+```
 pytest
 ```
 
 ### Consider using `pyenv` / `pyenv-virtualenv`
 
 Short installation instructions:
 
 * install `pyenv`:
 
-```bash
+```
 cd ~
 git clone https://github.com/pyenv/pyenv .pyenv
 cd ~/.pyenv && src/configure && make -C src
 ```
 
-* make sure that `~/.pyenv/shims` and `~/.pyenv/bin` are the first entries in your `PATH`, e.g., by setting it in `~/.bashrc`
+* make sure that `~/.pyenv/shims` and `~/.pyenv/bin` are the first entries in your `PATH`, e. g. by setting it in `~/.bashrc`
 * add the following to your `~/.bashrc` (after setting `PATH`): `eval "$(pyenv init -)"`
-* create a new terminal or source the changes (e.g., `source ~/.bashrc`)
+* create a new terminal or source the changes (e. g. `source ~/.bashrc`)
 * install `virtualenv`
 
-```bash
+```
 git clone https://github.com/pyenv/pyenv-virtualenv.git $(pyenv root)/plugins/pyenv-virtualenv
 ```
 
 * add the following to your `~/.bashrc` (after the commands above): `eval "$(pyenv virtualenv-init -)`
-* create a new terminal or source the changes (e.g., `source ~/.bashrc`)
+* create a new terminal or source the changes (e. g. `source ~/.bashrc`)
-* go back to the `ChatMasterMind` repo and create a virtual environment with the latest `Python`, e.g., `3.11.4`:
+* go back to the `ChatMasterMind` repo and create a virtual environment with the latest `Python`, e. g. `3.11.4`:
 
-```bash
+```
 cd <CMM_REPO_PATH>
 pyenv install 3.11.4
 pyenv virtualenv 3.11.4 py311
@@ -233,3 +223,5 @@ pyenv activate py311
 
 ## License
 
 This project is licensed under the terms of the WTFPL License.
 
````
```diff
@@ -3,20 +3,18 @@ Creates different AI instances, based on the given configuration.
 """
 
 import argparse
-from typing import cast, Optional
+from typing import cast
 from .configuration import Config, AIConfig, OpenAIConfig
 from .ai import AI, AIError
 from .ais.openai import OpenAI
 
 
-def create_ai(args: argparse.Namespace, config: Config,  # noqa: 11
-              def_ai: Optional[str] = None,
-              def_model: Optional[str] = None) -> AI:
+def create_ai(args: argparse.Namespace, config: Config) -> AI:  # noqa: 11
     """
-    Creates an AI subclass instance from the given arguments and configuration file.
-    If AI has not been set in the arguments, it searches for the ID 'default'. If
-    that is not found, it uses the first AI in the list. It's also possible to
-    specify a default AI and model using 'def_ai' and 'def_model'.
+    Creates an AI subclass instance from the given arguments
+    and configuration file. If AI has not been set in the
+    arguments, it searches for the ID 'default'. If that
+    is not found, it uses the first AI in the list.
     """
     ai_conf: AIConfig
     if hasattr(args, 'AI') and args.AI:
```
```diff
@@ -24,8 +22,6 @@ def create_ai(args: argparse.Namespace, config: Config,  # noqa: 11
             ai_conf = config.ais[args.AI]
         except KeyError:
             raise AIError(f"AI ID '{args.AI}' does not exist in this configuration")
-    elif def_ai:
-        ai_conf = config.ais[def_ai]
     elif 'default' in config.ais:
         ai_conf = config.ais['default']
     else:
```
```diff
@@ -38,8 +34,6 @@ def create_ai(args: argparse.Namespace, config: Config,  # noqa: 11
     ai = OpenAI(cast(OpenAIConfig, ai_conf))
     if hasattr(args, 'model') and args.model:
         ai.config.model = args.model
-    elif def_model:
-        ai.config.model = def_model
     if hasattr(args, 'max_tokens') and args.max_tokens:
         ai.config.max_tokens = args.max_tokens
     if hasattr(args, 'temperature') and args.temperature:
```
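The selection order implemented by `create_ai`, reduced to a standalone sketch over a plain dict (the real code works on `Config`/`AIConfig` objects; the function name here is illustrative): an explicit ID wins, then the ID `'default'`, then the first configured AI.

```python
def resolve_ai_config(ais, ai_id=None):
    """Pick an AI config: explicit ID -> 'default' -> first entry."""
    if ai_id:
        if ai_id not in ais:
            raise KeyError(f"AI ID '{ai_id}' does not exist in this configuration")
        return ais[ai_id]
    if 'default' in ais:
        return ais['default']
    # Python dicts preserve insertion order, so this is the first configured AI
    return next(iter(ais.values()))
```

With the `def_ai`/`def_model` parameters removed in this commit, the CLI arguments and the configuration file are the only remaining sources for the choice.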
```diff
@@ -44,7 +44,7 @@ class OpenAI(AI):
                                  frequency_penalty=self.config.frequency_penalty,
                                  presence_penalty=self.config.presence_penalty)
         question.answer = Answer(response['choices'][0]['message']['content'])
-        question.tags = set(otags) if otags is not None else None
+        question.tags = otags
         question.ai = self.ID
         question.model = self.config.model
         answers: list[Message] = [question]
```
```diff
@@ -15,7 +15,7 @@ def hist_cmd(args: argparse.Namespace, config: Config) -> None:
                             tags_not=args.exclude_tags,
                             question_contains=args.question,
                             answer_contains=args.answer)
-    chat = ChatDB.from_dir(Path(config.cache),
+    chat = ChatDB.from_dir(Path('.'),
                            Path(config.db),
                            mfilter=mfilter)
     chat.print(args.source_code_only,
```
```diff
@@ -3,43 +3,25 @@ import argparse
 from pathlib import Path
 from ..configuration import Config
 from ..message import Message, MessageError
-from ..chat import ChatDB
-
-
-def print_message(message: Message, args: argparse.Namespace) -> None:
-    """
-    Print given message according to give arguments.
-    """
-    if args.question:
-        print(message.question)
-    elif args.answer:
-        print(message.answer)
-    elif message.answer and args.only_source_code:
-        for code in message.answer.source_code():
-            print(code)
-    else:
-        print(message.to_str())
 
 
 def print_cmd(args: argparse.Namespace, config: Config) -> None:
     """
     Handler for the 'print' command.
     """
-    # print given file
-    if args.file is not None:
-        fname = Path(args.file)
-        try:
-            message = Message.from_file(fname)
-            if message:
-                print_message(message, args)
-        except MessageError:
-            print(f"File is not a valid message: {args.file}")
-            sys.exit(1)
-    # print latest message
-    elif args.latest:
-        chat = ChatDB.from_dir(Path(config.cache), Path(config.db))
-        latest = chat.msg_latest(loc='disk')
-        if not latest:
-            print("No message found!")
-            sys.exit(1)
-        print_message(latest, args)
+    fname = Path(args.file)
+    try:
+        message = Message.from_file(fname)
+        if message:
+            if args.question:
+                print(message.question)
+            elif args.answer:
+                print(message.answer)
+            elif message.answer and args.only_source_code:
+                for code in message.answer.source_code():
+                    print(code)
+            else:
+                print(message.to_str())
+    except MessageError:
+        print(f"File is not a valid message: {args.file}")
+        sys.exit(1)
```
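The flag dispatch that this hunk moves inline can be exercised on its own. A self-contained sketch with a tiny stand-in message class (illustrative only; the real `Message` lives in the project's message module and also handles embedded source code):

```python
class DemoMessage:
    """Minimal stand-in for the project's Message class."""
    def __init__(self, question, answer):
        self.question = question
        self.answer = answer

    def to_str(self):
        return f"Q: {self.question}\nA: {self.answer}"


def render(message, question_only=False, answer_only=False):
    # Mirrors the -q / -a / default branches of the print command.
    if question_only:
        return str(message.question)
    if answer_only:
        return str(message.answer)
    return message.to_str()
```

Returning a string instead of printing makes the branching easy to test, while the real command prints directly.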
```diff
@@ -2,7 +2,6 @@ import sys
 import argparse
 from pathlib import Path
 from itertools import zip_longest
-from copy import deepcopy
 from ..configuration import Config
 from ..chat import ChatDB
 from ..message import Message, MessageFilter, MessageError, Question, source_code
```
```diff
@@ -72,7 +71,7 @@ def create_message(chat: ChatDB, args: argparse.Namespace) -> Message:
     full_question = '\n\n'.join(question_parts)
 
     message = Message(question=Question(full_question),
-                      tags=args.output_tags,
+                      tags=args.output_tags,  # FIXME
                       ai=args.AI,
                       model=args.model)
     # only write the new message to the cache,
```
```diff
@@ -93,8 +92,8 @@ def make_request(ai: AI, chat: ChatDB, message: Message, args: argparse.Namespac
         print(message.to_str())
     response: AIResponse = ai.request(message,
                                       chat,
-                                      args.num_answers,
-                                      args.output_tags)
+                                      args.num_answers,  # FIXME
+                                      args.output_tags)  # FIXME
     # only write the response messages to the cache,
     # don't add them to the internal list
     chat.cache_write(response.messages)
```
```diff
@@ -106,76 +105,14 @@
     print(response.tokens)
 
 
-def create_msg_args(msg: Message, args: argparse.Namespace) -> argparse.Namespace:
-    """
-    Takes an existing message and CLI arguments, and returns modified args based
-    on the members of the given message. Used e.g. when repeating messages, where
-    it's necessary to determine the correct AI, module and output tags to use
-    (either from the existing message or the given args).
-    """
-    msg_args = args
-    # if AI, model or output tags have not been specified,
-    # use those from the original message
-    if (args.AI is None
-            or args.model is None  # noqa: W503
-            or args.output_tags is None):  # noqa: W503
-        msg_args = deepcopy(args)
-        if args.AI is None and msg.ai is not None:
-            msg_args.AI = msg.ai
-        if args.model is None and msg.model is not None:
-            msg_args.model = msg.model
-        if args.output_tags is None and msg.tags is not None:
-            msg_args.output_tags = msg.tags
-    return msg_args
-
-
-def repeat_messages(messages: list[Message], chat: ChatDB, args: argparse.Namespace, config: Config) -> None:
-    """
-    Repeat the given messages using the given arguments.
-    """
-    ai: AI
-    for msg in messages:
-        msg_args = create_msg_args(msg, args)
-        ai = create_ai(msg_args, config)
-        print(f"--------- Repeating message '{msg.msg_id()}': ---------")
-        # overwrite the latest message if requested or empty
-        # -> but not if it's in the DB!
-        if ((msg.answer is None or msg_args.overwrite is True)
-                and (not chat.msg_in_db(msg))):  # noqa: W503
-            msg.clear_answer()
-            make_request(ai, chat, msg, msg_args)
-        # otherwise create a new one
-        else:
-            msg_args.ask = [msg.question]
-            message = create_message(chat, msg_args)
-            make_request(ai, chat, message, msg_args)
-
-
-def invert_input_tag_args(args: argparse.Namespace) -> None:
-    """
-    Changes the semantics of the INPUT tags for this command:
-    * not tags specified on the CLI -> no tags are selected
-    * empty tags specified on the CLI -> all tags are selected
-    """
-    if args.or_tags is None:
-        args.or_tags = set()
-    elif len(args.or_tags) == 0:
-        args.or_tags = None
-    if args.and_tags is None:
-        args.and_tags = set()
-    elif len(args.and_tags) == 0:
-        args.and_tags = None
-
-
 def question_cmd(args: argparse.Namespace, config: Config) -> None:
     """
     Handler for the 'question' command.
     """
-    invert_input_tag_args(args)
-    mfilter = MessageFilter(tags_or=args.or_tags,
-                            tags_and=args.and_tags,
-                            tags_not=args.exclude_tags)
-    chat = ChatDB.from_dir(cache_path=Path(config.cache),
+    mfilter = MessageFilter(tags_or=args.or_tags if args.or_tags is not None else set(),
+                            tags_and=args.and_tags if args.and_tags is not None else set(),
+                            tags_not=args.exclude_tags if args.exclude_tags is not None else set())
+    chat = ChatDB.from_dir(cache_path=Path('.'),
                            db_path=Path(config.db),
                            mfilter=mfilter)
     # if it's a new question, create and store it immediately
@@ -184,24 +121,30 @@ def question_cmd(args: argparse.Namespace, config: Config) -> None:
     if args.create:
         return
 
+    # create the correct AI instance
+    ai: AI = create_ai(args, config)
+
     # === ASK ===
     if args.ask:
-        ai: AI = create_ai(args, config)
         make_request(ai, chat, message, args)
     # === REPEAT ===
     elif args.repeat is not None:
-        repeat_msgs: list[Message] = []
-        # repeat latest message
-        if len(args.repeat) == 0:
-            lmessage = chat.msg_latest(loc='cache')
-            if lmessage is None:
-                print("No message found to repeat!")
-                sys.exit(1)
-            repeat_msgs.append(lmessage)
-        # repeat given message(s)
+        lmessage = chat.msg_latest(loc='cache')
+        if lmessage is None:
+            print("No message found to repeat!")
+            sys.exit(1)
         else:
-            repeat_msgs = chat.msg_find(args.repeat, loc='disk')
-        repeat_messages(repeat_msgs, chat, args, config)
+            print(f"Repeating message '{lmessage.msg_id()}':")
+            # overwrite the latest message if requested or empty
+            if lmessage.answer is None or args.overwrite is True:
+                lmessage.clear_answer()
+                make_request(ai, chat, lmessage, args)
+            # otherwise create a new one
+            else:
+                args.ask = [lmessage.question]
+                message = create_message(chat, args)
+                make_request(ai, chat, message, args)
 
     # === PROCESS ===
     elif args.process is not None:
         # TODO: process either all questions without an
```
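The removed `invert_input_tag_args` helper (old side of the hunk above) gave the input-tag options a twist worth noting: an option absent from the CLI selected nothing, while an option given without values selected everything. Its core logic, reduced to a single standalone function for illustration:

```python
def invert_tag_arg(tags):
    """Invert the semantics of one nargs='*' tag option."""
    if tags is None:       # option absent from the CLI
        return set()       # -> match no tags
    if len(tags) == 0:     # option given without values
        return None        # -> match all tags
    return tags            # explicit tags pass through unchanged
```

This only makes sense with `nargs='*'` (where a bare `-t` yields an empty list); with the switch to `nargs='+'` elsewhere in this compare, the empty-list case can no longer occur, which is consistent with dropping the helper.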
```diff
@@ -8,7 +8,7 @@ def tags_cmd(args: argparse.Namespace, config: Config) -> None:
     """
     Handler for the 'tags' command.
    """
-    chat = ChatDB.from_dir(cache_path=Path(config.cache),
+    chat = ChatDB.from_dir(cache_path=Path('.'),
                            db_path=Path(config.db))
     if args.list:
         tags_freq = chat.msg_tags_frequency(args.prefix, args.contain)
```
```diff
@@ -116,7 +116,6 @@ class Config:
     """
     # all members have default values, so we can easily create
     # a default configuration
-    cache: str = '.'
     db: str = './db/'
     ais: dict[str, AIConfig] = field(default_factory=create_default_ai_configs)
 
```
```diff
@@ -133,7 +132,6 @@ class Config:
             ai_conf = ai_config_instance(conf['name'], conf)
             ais[ID] = ai_conf
         return cls(
-            cache=str(source['cache']) if 'cache' in source else '.',
             db=str(source['db']),
             ais=ais
         )
```
+22 -25

```diff
@@ -34,23 +34,23 @@ def create_parser() -> argparse.ArgumentParser:
 
     # a parent parser for all commands that support tag selection
     tag_parser = argparse.ArgumentParser(add_help=False)
-    tag_arg = tag_parser.add_argument('-t', '--or-tags', nargs='*',
+    tag_arg = tag_parser.add_argument('-t', '--or-tags', nargs='+',
                                       help='List of tags (one must match)', metavar='OTAGS')
     tag_arg.completer = tags_completer  # type: ignore
-    atag_arg = tag_parser.add_argument('-k', '--and-tags', nargs='*',
+    atag_arg = tag_parser.add_argument('-k', '--and-tags', nargs='+',
                                        help='List of tags (all must match)', metavar='ATAGS')
     atag_arg.completer = tags_completer  # type: ignore
-    etag_arg = tag_parser.add_argument('-x', '--exclude-tags', nargs='*',
+    etag_arg = tag_parser.add_argument('-x', '--exclude-tags', nargs='+',
                                        help='List of tags to exclude', metavar='XTAGS')
     etag_arg.completer = tags_completer  # type: ignore
     otag_arg = tag_parser.add_argument('-o', '--output-tags', nargs='+',
-                                       help='List of output tags (default: use input tags)', metavar='OUTAGS')
+                                       help='List of output tags (default: use input tags)', metavar='OUTTAGS')
     otag_arg.completer = tags_completer  # type: ignore
 
     # a parent parser for all commands that support AI configuration
     ai_parser = argparse.ArgumentParser(add_help=False)
-    ai_parser.add_argument('-A', '--AI', help='AI ID to use', metavar='AI_ID')
-    ai_parser.add_argument('-M', '--model', help='Model to use', metavar='MODEL')
+    ai_parser.add_argument('-A', '--AI', help='AI ID to use')
+    ai_parser.add_argument('-M', '--model', help='Model to use')
     ai_parser.add_argument('-n', '--num-answers', help='Number of answers to request', type=int, default=1)
     ai_parser.add_argument('-m', '--max-tokens', help='Max. nr. of tokens', type=int)
     ai_parser.add_argument('-T', '--temperature', help='Temperature value', type=float)
```
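The `nargs` switch above changes what a bare tag option means. A minimal sketch of the difference (option names copied from the hunk; parser setup otherwise illustrative):

```python
import argparse

# New style: nargs='+' requires at least one value after '-t'.
new_style = argparse.ArgumentParser(add_help=False)
new_style.add_argument('-t', '--or-tags', nargs='+', metavar='OTAGS')

# Old style: nargs='*' also accepted a bare '-t' as an empty list.
old_style = argparse.ArgumentParser(add_help=False)
old_style.add_argument('-t', '--or-tags', nargs='*', metavar='OTAGS')
```

With `nargs='*'`, `-t` alone yields `[]`, which is exactly the "empty tags means all tags" case that the removed `invert_input_tag_args` helper relied on; with `nargs='+'` a bare `-t` is rejected at parse time.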
```diff
@@ -61,15 +61,14 @@ def create_parser() -> argparse.ArgumentParser:
                                                aliases=['q'])
     question_cmd_parser.set_defaults(func=question_cmd)
     question_group = question_cmd_parser.add_mutually_exclusive_group(required=True)
-    question_group.add_argument('-a', '--ask', nargs='+', help='Ask a question', metavar='QUESTION')
-    question_group.add_argument('-c', '--create', nargs='+', help='Create a question', metavar='QUESTION')
-    question_group.add_argument('-r', '--repeat', nargs='*', help='Repeat a question', metavar='MESSAGE')
-    question_group.add_argument('-p', '--process', nargs='*', help='Process existing questions', metavar='MESSAGE')
+    question_group.add_argument('-a', '--ask', nargs='+', help='Ask a question')
+    question_group.add_argument('-c', '--create', nargs='+', help='Create a question')
+    question_group.add_argument('-r', '--repeat', nargs='*', help='Repeat a question')
+    question_group.add_argument('-p', '--process', nargs='*', help='Process existing questions')
     question_cmd_parser.add_argument('-O', '--overwrite', help='Overwrite existing messages when repeating them',
                                      action='store_true')
-    question_cmd_parser.add_argument('-s', '--source-text', nargs='+', help='Add content of a file to the query', metavar='FILE')
-    question_cmd_parser.add_argument('-S', '--source-code', nargs='+', help='Add source code file content to the chat history',
-                                     metavar='FILE')
+    question_cmd_parser.add_argument('-s', '--source-text', nargs='+', help='Add content of a file to the query')
+    question_cmd_parser.add_argument('-S', '--source-code', nargs='+', help='Add source code file content to the chat history')
 
     # 'hist' command parser
     hist_cmd_parser = cmdparser.add_parser('hist', parents=[tag_parser],
```
```diff
@@ -80,10 +79,10 @@ def create_parser() -> argparse.ArgumentParser:
                                  action='store_true')
     hist_cmd_parser.add_argument('-W', '--with-files', help="Print chat history with filenames.",
                                  action='store_true')
-    hist_cmd_parser.add_argument('-S', '--source-code-only', help='Only print embedded source code',
+    hist_cmd_parser.add_argument('-S', '--source-code-only', help='Print only source code',
                                  action='store_true')
-    hist_cmd_parser.add_argument('-A', '--answer', help='Search for answer substring', metavar='SUBSTRING')
-    hist_cmd_parser.add_argument('-Q', '--question', help='Search for question substring', metavar='SUBSTRING')
+    hist_cmd_parser.add_argument('-A', '--answer', help='Search for answer substring')
+    hist_cmd_parser.add_argument('-Q', '--question', help='Search for question substring')
 
     # 'tags' command parser
     tags_cmd_parser = cmdparser.add_parser('tags',
```
@@ -93,8 +92,8 @@ def create_parser() -> argparse.ArgumentParser:
 tags_group = tags_cmd_parser.add_mutually_exclusive_group(required=True)
 tags_group.add_argument('-l', '--list', help="List all tags and their frequency",
 action='store_true')
-tags_cmd_parser.add_argument('-p', '--prefix', help="Filter tags by prefix", metavar='PREFIX')
+tags_cmd_parser.add_argument('-p', '--prefix', help="Filter tags by prefix")
-tags_cmd_parser.add_argument('-c', '--contain', help="Filter tags by contained substring", metavar='SUBSTRING')
+tags_cmd_parser.add_argument('-c', '--contain', help="Filter tags by contained substring")
 
 # 'config' command parser
 config_cmd_parser = cmdparser.add_parser('config',
@@ -107,20 +106,18 @@ def create_parser() -> argparse.ArgumentParser:
 action='store_true')
 config_group.add_argument('-m', '--print-model', help="Print the currently configured model",
 action='store_true')
-config_group.add_argument('-c', '--create', help="Create config with default settings in the given file", metavar='FILE')
+config_group.add_argument('-c', '--create', help="Create config with default settings in the given file")
 
 # 'print' command parser
 print_cmd_parser = cmdparser.add_parser('print',
 help="Print message files.",
 aliases=['p'])
 print_cmd_parser.set_defaults(func=print_cmd)
-print_group = print_cmd_parser.add_mutually_exclusive_group(required=True)
-print_group.add_argument('-f', '--file', help='Print given message file', metavar='FILE')
-print_group.add_argument('-l', '--latest', help='Print latest message', action='store_true')
+print_cmd_parser.add_argument('-f', '--file', help='File to print', required=True)
 print_cmd_modes = print_cmd_parser.add_mutually_exclusive_group()
-print_cmd_modes.add_argument('-q', '--question', help='Only print the question', action='store_true')
+print_cmd_modes.add_argument('-q', '--question', help='Print only question', action='store_true')
-print_cmd_modes.add_argument('-a', '--answer', help='Only print the answer', action='store_true')
+print_cmd_modes.add_argument('-a', '--answer', help='Print only answer', action='store_true')
-print_cmd_modes.add_argument('-S', '--only-source-code', help='Only print embedded source code', action='store_true')
+print_cmd_modes.add_argument('-S', '--only-source-code', help='Print only source code', action='store_true')
 
 argcomplete.autocomplete(parser)
 return parser
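For context on the hunk above: `metavar` only changes how an option is rendered in `--help`/usage text, while a mutually exclusive group rejects combined flags at parse time. A minimal standalone sketch (a toy parser mirroring the old `print` command's `-f`/`-l` pair, not the project's actual `create_parser()`):

```python
import argparse

parser = argparse.ArgumentParser(prog='cmm-print')
group = parser.add_mutually_exclusive_group(required=True)
# metavar='FILE' makes usage read "-f FILE" instead of "-f FILE_VALUE_FROM_DEST"
group.add_argument('-f', '--file', help='Print given message file', metavar='FILE')
group.add_argument('-l', '--latest', help='Print latest message', action='store_true')

args = parser.parse_args(['-f', '0001.yaml'])
print(args.file)    # 0001.yaml
print(args.latest)  # False

try:
    # combining both group members is rejected by argparse
    parser.parse_args(['-f', '0001.yaml', '-l'])
except SystemExit:
    print('options are mutually exclusive')
```

With `required=True` on the group, omitting both options is also a usage error.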
@@ -222,36 +222,12 @@ class Message():
 ai_yaml_key: ClassVar[str] = 'ai'
 model_yaml_key: ClassVar[str] = 'model'
 
-def __post_init__(self) -> None:
-# convert some types that are often set wrong
-if self.tags is not None and not isinstance(self.tags, set):
-self.tags = set(self.tags)
-if self.file_path is not None and not isinstance(self.file_path, pathlib.Path):
-self.file_path = pathlib.Path(self.file_path)
-
 def __hash__(self) -> int:
 """
 The hash value is computed based on immutable members.
 """
 return hash((self.question, self.answer))
 
-def equals(self, other: MessageInst, tags: bool = True, ai: bool = True,
-model: bool = True, file_path: bool = True, verbose: bool = False) -> bool:
-"""
-Compare this message with another one, including the metadata.
-Return True if everything is identical, False otherwise.
-"""
-equal: bool = ((not tags or (self.tags == other.tags))
-and (not ai or (self.ai == other.ai))  # noqa: W503
-and (not model or (self.model == other.model))  # noqa: W503
-and (not file_path or (self.file_path == other.file_path))  # noqa: W503
-and (self == other))  # noqa: W503
-if not equal and verbose:
-print("Messages not equal:")
-print(self)
-print(other)
-return equal
-
 @classmethod
 def from_dict(cls: Type[MessageInst], data: dict[str, Any]) -> MessageInst:
 """
@@ -442,6 +418,9 @@ class Message():
 output.append(self.answer)
 return '\n'.join(output)
 
+def __str__(self) -> str:
+return self.to_str(True, True, False)
+
 def to_file(self, file_path: Optional[pathlib.Path]=None) -> None: # noqa: 11
 """
 Write a Message to the given file. Type is determined based on the suffix.

(+36 additions, -47 deletions)
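The `Message` hunks above keep `__hash__` built on `(question, answer)` only, which is what makes content-based deduplication work even when metadata differs. A stripped-down stand-in (hypothetical field set, not the project's actual `Message`) illustrating the effect:

```python
class Msg:
    def __init__(self, question, answer=None, tags=None):
        self.question = question
        self.answer = answer
        self.tags = tags

    def __eq__(self, other):
        # content-only equality: metadata such as tags is ignored
        return (self.question, self.answer) == (other.question, other.answer)

    def __hash__(self):
        # hash on the same immutable members as __eq__
        return hash((self.question, self.answer))

a = Msg('Q1', 'A1', tags={'x'})
b = Msg('Q1', 'A1', tags={'y'})

# dict.fromkeys dedupes via __hash__/__eq__, so the two messages
# collapse to one despite their different tags
unique = list(dict.fromkeys([a, b]))
print(len(unique))  # 1
```

This is presumably why the removed `equals()` helper existed: `==` alone cannot distinguish messages that differ only in tags, AI, model, or file path.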
@@ -10,18 +10,7 @@ from chatmastermind.message import Message, Question, Answer, Tag, MessageFilter
 from chatmastermind.chat import Chat, ChatDB, ChatError
 
 
-class TestChatBase(unittest.TestCase):
+class TestChat(unittest.TestCase):
-def assert_messages_equal(self, msg1: list[Message], msg2: list[Message]) -> None:
-"""
-Compare messages using more than just Question and Answer.
-"""
-self.assertEqual(len(msg1), len(msg2))
-for m1, m2 in zip(msg1, msg2):
-# exclude the file_path, compare only Q, A and metadata
-self.assertTrue(m1.equals(m2, file_path=False, verbose=True))
-
-
-class TestChat(TestChatBase):
 def setUp(self) -> None:
 self.chat = Chat([])
 self.message1 = Message(Question('Question 1'),
@@ -37,24 +26,24 @@ class TestChat(TestChatBase):
 def test_unique_id(self) -> None:
 # test with two identical messages
 self.chat.msg_add([self.message1, self.message1])
-self.assert_messages_equal(self.chat.messages, [self.message1, self.message1])
+self.assertSequenceEqual(self.chat.messages, [self.message1, self.message1])
 self.chat.msg_unique_id()
-self.assert_messages_equal(self.chat.messages, [self.message1])
+self.assertSequenceEqual(self.chat.messages, [self.message1])
 # test with two different messages
 self.chat.msg_add([self.message2])
 self.chat.msg_unique_id()
-self.assert_messages_equal(self.chat.messages, [self.message1, self.message2])
+self.assertSequenceEqual(self.chat.messages, [self.message1, self.message2])
 
 def test_unique_content(self) -> None:
 # test with two identical messages
 self.chat.msg_add([self.message1, self.message1])
-self.assert_messages_equal(self.chat.messages, [self.message1, self.message1])
+self.assertSequenceEqual(self.chat.messages, [self.message1, self.message1])
 self.chat.msg_unique_content()
-self.assert_messages_equal(self.chat.messages, [self.message1])
+self.assertSequenceEqual(self.chat.messages, [self.message1])
 # test with two different messages
 self.chat.msg_add([self.message2])
 self.chat.msg_unique_content()
-self.assert_messages_equal(self.chat.messages, [self.message1, self.message2])
+self.assertSequenceEqual(self.chat.messages, [self.message1, self.message2])
 
 def test_filter(self) -> None:
 self.chat.msg_add([self.message1, self.message2])
@@ -161,7 +150,7 @@ Answer 2
 self.assertEqual(mock_stdout.getvalue(), expected_output)
 
 
-class TestChatDB(TestChatBase):
+class TestChatDB(unittest.TestCase):
 def setUp(self) -> None:
 self.db_path = tempfile.TemporaryDirectory()
 self.cache_path = tempfile.TemporaryDirectory()
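The `setUp()` above creates `tempfile.TemporaryDirectory()` objects for the DB and cache. Pairing each one with `addCleanup()` (sketched here on a hypothetical toy test case, not the project's actual suite) removes the directories deterministically after every test instead of relying on garbage collection:

```python
import tempfile
import unittest
from pathlib import Path

class ExampleTest(unittest.TestCase):
    def setUp(self) -> None:
        self.db_path = tempfile.TemporaryDirectory()
        # register an explicit cleanup; runs even if the test fails
        self.addCleanup(self.db_path.cleanup)

    def test_dir_exists(self) -> None:
        self.assertTrue(Path(self.db_path.name).is_dir())

result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(ExampleTest))
print(result.wasSuccessful())  # True
```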
@@ -580,7 +569,7 @@ class TestChatDB(TestChatBase):
 search_names = ['0001', '0002.yaml', self.message3.msg_id(), str(self.message3.file_path)]
 expected_result = [self.message1, self.message2, self.message3]
 result = chat_db.msg_find(search_names, loc='all')
-self.assert_messages_equal(result, expected_result)
+self.assertSequenceEqual(result, expected_result)
 
 def test_msg_latest(self) -> None:
 chat_db = ChatDB.from_dir(pathlib.Path(self.cache_path.name),
@@ -606,47 +595,47 @@ class TestChatDB(TestChatBase):
 chat_db = ChatDB.from_dir(pathlib.Path(self.cache_path.name),
 pathlib.Path(self.db_path.name))
 all_messages = [self.message1, self.message2, self.message3, self.message4]
-self.assert_messages_equal(chat_db.msg_gather(loc='all'), all_messages)
+self.assertSequenceEqual(chat_db.msg_gather(loc='all'), all_messages)
-self.assert_messages_equal(chat_db.msg_gather(loc='db'), all_messages)
+self.assertSequenceEqual(chat_db.msg_gather(loc='db'), all_messages)
-self.assert_messages_equal(chat_db.msg_gather(loc='mem'), all_messages)
+self.assertSequenceEqual(chat_db.msg_gather(loc='mem'), all_messages)
-self.assert_messages_equal(chat_db.msg_gather(loc='disk'), all_messages)
+self.assertSequenceEqual(chat_db.msg_gather(loc='disk'), all_messages)
-self.assert_messages_equal(chat_db.msg_gather(loc='cache'), [])
+self.assertSequenceEqual(chat_db.msg_gather(loc='cache'), [])
 # add a new message, but only to the internal list
 new_message = Message(Question("What?"))
 all_messages_mem = all_messages + [new_message]
 chat_db.msg_add([new_message])
-self.assert_messages_equal(chat_db.msg_gather(loc='mem'), all_messages_mem)
+self.assertSequenceEqual(chat_db.msg_gather(loc='mem'), all_messages_mem)
-self.assert_messages_equal(chat_db.msg_gather(loc='all'), all_messages_mem)
+self.assertSequenceEqual(chat_db.msg_gather(loc='all'), all_messages_mem)
 # the nr. of messages on disk did not change -> expect old result
-self.assert_messages_equal(chat_db.msg_gather(loc='db'), all_messages)
+self.assertSequenceEqual(chat_db.msg_gather(loc='db'), all_messages)
-self.assert_messages_equal(chat_db.msg_gather(loc='disk'), all_messages)
+self.assertSequenceEqual(chat_db.msg_gather(loc='disk'), all_messages)
-self.assert_messages_equal(chat_db.msg_gather(loc='cache'), [])
+self.assertSequenceEqual(chat_db.msg_gather(loc='cache'), [])
 # test with MessageFilter
-self.assert_messages_equal(chat_db.msg_gather(loc='all', mfilter=MessageFilter(tags_or={Tag('tag1')})),
+self.assertSequenceEqual(chat_db.msg_gather(loc='all', mfilter=MessageFilter(tags_or={Tag('tag1')})),
 [self.message1])
-self.assert_messages_equal(chat_db.msg_gather(loc='disk', mfilter=MessageFilter(tags_or={Tag('tag2')})),
+self.assertSequenceEqual(chat_db.msg_gather(loc='disk', mfilter=MessageFilter(tags_or={Tag('tag2')})),
 [self.message2])
-self.assert_messages_equal(chat_db.msg_gather(loc='cache', mfilter=MessageFilter(tags_or={Tag('tag3')})),
+self.assertSequenceEqual(chat_db.msg_gather(loc='cache', mfilter=MessageFilter(tags_or={Tag('tag3')})),
 [])
-self.assert_messages_equal(chat_db.msg_gather(loc='mem', mfilter=MessageFilter(question_contains="What")),
+self.assertSequenceEqual(chat_db.msg_gather(loc='mem', mfilter=MessageFilter(question_contains="What")),
 [new_message])
 
 def test_msg_move_and_gather(self) -> None:
 chat_db = ChatDB.from_dir(pathlib.Path(self.cache_path.name),
 pathlib.Path(self.db_path.name))
 all_messages = [self.message1, self.message2, self.message3, self.message4]
-self.assert_messages_equal(chat_db.msg_gather(loc='db'), all_messages)
+self.assertSequenceEqual(chat_db.msg_gather(loc='db'), all_messages)
-self.assert_messages_equal(chat_db.msg_gather(loc='cache'), [])
+self.assertSequenceEqual(chat_db.msg_gather(loc='cache'), [])
 # move first message to the cache
 chat_db.cache_move(self.message1)
-self.assert_messages_equal(chat_db.msg_gather(loc='cache'), [self.message1])
+self.assertSequenceEqual(chat_db.msg_gather(loc='cache'), [self.message1])
 self.assertEqual(self.message1.file_path.parent, pathlib.Path(self.cache_path.name))  # type: ignore [union-attr]
-self.assert_messages_equal(chat_db.msg_gather(loc='db'), [self.message2, self.message3, self.message4])
+self.assertSequenceEqual(chat_db.msg_gather(loc='db'), [self.message2, self.message3, self.message4])
-self.assert_messages_equal(chat_db.msg_gather(loc='all'), all_messages)
+self.assertSequenceEqual(chat_db.msg_gather(loc='all'), all_messages)
-self.assert_messages_equal(chat_db.msg_gather(loc='disk'), all_messages)
+self.assertSequenceEqual(chat_db.msg_gather(loc='disk'), all_messages)
-self.assert_messages_equal(chat_db.msg_gather(loc='mem'), all_messages)
+self.assertSequenceEqual(chat_db.msg_gather(loc='mem'), all_messages)
 # now move first message back to the DB
 chat_db.db_move(self.message1)
-self.assert_messages_equal(chat_db.msg_gather(loc='cache'), [])
+self.assertSequenceEqual(chat_db.msg_gather(loc='cache'), [])
 self.assertEqual(self.message1.file_path.parent, pathlib.Path(self.db_path.name))  # type: ignore [union-attr]
-self.assert_messages_equal(chat_db.msg_gather(loc='db'), all_messages)
+self.assertSequenceEqual(chat_db.msg_gather(loc='db'), all_messages)
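The hunks above replace the metadata-aware `assert_messages_equal()` helper with `assertSequenceEqual()`, which compares elements via `==`. If the message type's `__eq__` only looks at question and answer, differences in metadata (tags, AI, model) pass unnoticed. A minimal sketch with a hypothetical content-only message class (not the project's actual `Message`):

```python
import unittest

class Msg:
    def __init__(self, question, tags=None):
        self.question = question
        self.tags = tags

    def __eq__(self, other):
        return self.question == other.question  # metadata not compared

class Demo(unittest.TestCase):
    def test_metadata_invisible(self) -> None:
        left = [Msg('Q1', tags={'x'})]
        right = [Msg('Q1', tags={'y'})]
        # passes despite the tag mismatch, because element comparison
        # uses Msg.__eq__, which ignores tags
        self.assertSequenceEqual(left, right)

result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(Demo))
print(result.wasSuccessful())  # True
```

So the switch trades a stricter check for a simpler one; whether that matters depends on whether metadata regressions should fail these tests.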
@@ -1,100 +0,0 @@
-import unittest
-import argparse
-from typing import Union, Optional
-from chatmastermind.configuration import Config, AIConfig
-from chatmastermind.tags import Tag
-from chatmastermind.message import Message, Answer
-from chatmastermind.chat import Chat
-from chatmastermind.ai import AI, AIResponse, Tokens, AIError
-
-
-class FakeAI(AI):
-"""
-A mocked version of the 'AI' class.
-"""
-ID: str
-name: str
-config: AIConfig
-
-def models(self) -> list[str]:
-raise NotImplementedError
-
-def tokens(self, data: Union[Message, Chat]) -> int:
-return 123
-
-def print(self) -> None:
-pass
-
-def print_models(self) -> None:
-pass
-
-def __init__(self, ID: str, model: str, error: bool = False):
-self.ID = ID
-self.model = model
-self.error = error
-
-def request(self,
-question: Message,
-chat: Chat,
-num_answers: int = 1,
-otags: Optional[set[Tag]] = None) -> AIResponse:
-"""
-Mock the 'ai.request()' function by either returning fake
-answers or raising an exception.
-"""
-if self.error:
-raise AIError
-question.answer = Answer("Answer 0")
-question.tags = set(otags) if otags is not None else None
-question.ai = self.ID
-question.model = self.model
-answers: list[Message] = [question]
-for n in range(1, num_answers):
-answers.append(Message(question=question.question,
-answer=Answer(f"Answer {n}"),
-tags=otags,
-ai=self.ID,
-model=self.model))
-return AIResponse(answers, Tokens(10, 10, 20))
-
-
-class TestWithFakeAI(unittest.TestCase):
-"""
-Base class for all tests that need to use the FakeAI.
-"""
-def assert_msgs_equal_except_file_path(self, msg1: list[Message], msg2: list[Message]) -> None:
-"""
-Compare messages using Question, Answer and all metadata excecot for the file_path.
-"""
-self.assertEqual(len(msg1), len(msg2))
-for m1, m2 in zip(msg1, msg2):
-# exclude the file_path, compare only Q, A and metadata
-self.assertTrue(m1.equals(m2, file_path=False, verbose=True))
-
-def assert_msgs_all_equal(self, msg1: list[Message], msg2: list[Message]) -> None:
-"""
-Compare messages using Question, Answer and ALL metadata.
-"""
-self.assertEqual(len(msg1), len(msg2))
-for m1, m2 in zip(msg1, msg2):
-self.assertTrue(m1.equals(m2, verbose=True))
-
-def assert_msgs_content_equal(self, msg1: list[Message], msg2: list[Message]) -> None:
-"""
-Compare messages using only Question and Answer.
-"""
-self.assertEqual(len(msg1), len(msg2))
-for m1, m2 in zip(msg1, msg2):
-self.assertEqual(m1, m2)
-
-def mock_create_ai(self, args: argparse.Namespace, config: Config) -> AI:
-"""
-Mocked 'create_ai' that returns a 'FakeAI' instance.
-"""
-return FakeAI(args.AI, args.model)
-
-def mock_create_ai_with_error(self, args: argparse.Namespace, config: Config) -> AI:
-"""
-Mocked 'create_ai' that returns a 'FakeAI' instance.
-"""
-return FakeAI(args.AI, args.model, error=True)
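The file deleted above implemented a hand-written `FakeAI` subclass; the remaining tests use `MagicMock(spec=AI)` instead. With `spec`, accessing an attribute that does not exist on the spec'd class raises `AttributeError`, which recovers part of the safety of a real subclass without the boilerplate. A sketch with a hypothetical stand-in for the `AI` base class:

```python
from unittest.mock import MagicMock

class AI:
    """Hypothetical stand-in for the project's AI base class."""
    def request(self, question, chat, num_answers=1):
        raise NotImplementedError

ai = MagicMock(spec=AI)
ai.request.return_value = ['Answer 0']
print(ai.request('Q', None))  # ['Answer 0']

try:
    # attribute not on the spec'd class -> rejected
    ai.no_such_method
except AttributeError:
    print('rejected unknown attribute')
```

The trade-off: a spec'd mock is quicker to set up per test, while a hand-written fake like `FakeAI` can carry behavior (e.g. raising `AIError` on demand) shared across many tests.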
@@ -57,7 +57,6 @@ class TestConfig(unittest.TestCase):
 
 def test_from_dict_should_create_config_from_dict(self) -> None:
 source_dict = {
-'cache': '.',
 'db': './test_db/',
 'ais': {
 'myopenai': {
@@ -74,7 +73,6 @@ class TestConfig(unittest.TestCase):
 }
 }
 config = Config.from_dict(source_dict)
-self.assertEqual(config.cache, '.')
 self.assertEqual(config.db, './test_db/')
 self.assertEqual(len(config.ais), 1)
 self.assertEqual(config.ais['myopenai'].name, 'openai')
@@ -91,7 +89,6 @@ class TestConfig(unittest.TestCase):
 
 def test_from_file_should_load_config_from_file(self) -> None:
 source_dict = {
-'cache': './test_cache/',
 'db': './test_db/',
 'ais': {
 'default': {
@@ -111,7 +108,6 @@ class TestConfig(unittest.TestCase):
 yaml.dump(source_dict, f)
 config = Config.from_file(self.test_file.name)
 self.assertIsInstance(config, Config)
-self.assertEqual(config.cache, './test_cache/')
 self.assertEqual(config.db, './test_db/')
 self.assertEqual(len(config.ais), 1)
 self.assertIsInstance(config.ais['default'], AIConfig)
@@ -119,7 +115,6 @@ class TestConfig(unittest.TestCase):
 
 def test_to_file_should_save_config_to_file(self) -> None:
 config = Config(
-cache='./test_cache/',
 db='./test_db/',
 ais={
 'myopenai': OpenAIConfig(
@@ -138,14 +133,12 @@ class TestConfig(unittest.TestCase):
 config.to_file(Path(self.test_file.name))
 with open(self.test_file.name, 'r') as f:
 saved_config = yaml.load(f, Loader=yaml.FullLoader)
-self.assertEqual(saved_config['cache'], './test_cache/')
 self.assertEqual(saved_config['db'], './test_db/')
 self.assertEqual(len(saved_config['ais']), 1)
 self.assertEqual(saved_config['ais']['myopenai']['system'], 'Custom system')
 
 def test_from_file_error_unknown_ai(self) -> None:
 source_dict = {
-'cache': './test_cache/',
 'db': './test_db/',
 'ais': {
 'default': {

(+57 additions, -343 deletions)
@@ -1,30 +1,28 @@
 import os
+import unittest
 import argparse
 import tempfile
-from copy import copy
 from pathlib import Path
 from unittest import mock
 from unittest.mock import MagicMock, call
 from chatmastermind.configuration import Config
 from chatmastermind.commands.question import create_message, question_cmd
-from chatmastermind.tags import Tag
 from chatmastermind.message import Message, Question, Answer
-from chatmastermind.chat import Chat, ChatDB
+from chatmastermind.chat import ChatDB
-from chatmastermind.ai import AIError
+from chatmastermind.ai import AI, AIResponse, Tokens
-from .test_common import TestWithFakeAI
 
 
-class TestMessageCreate(TestWithFakeAI):
+class TestMessageCreate(unittest.TestCase):
 """
 Test if messages created by the 'question' command have
 the correct format.
 """
 def setUp(self) -> None:
 # create ChatDB structure
-self.db_dir = tempfile.TemporaryDirectory()
+self.db_path = tempfile.TemporaryDirectory()
-self.cache_dir = tempfile.TemporaryDirectory()
+self.cache_path = tempfile.TemporaryDirectory()
-self.chat = ChatDB.from_dir(cache_path=Path(self.cache_dir.name),
+self.chat = ChatDB.from_dir(cache_path=Path(self.cache_path.name),
-db_path=Path(self.db_dir.name))
+db_path=Path(self.db_path.name))
 # create some messages
 self.message_text = Message(Question("What is this?"),
 Answer("It is pure text"))
@@ -87,10 +85,10 @@ Aaaand again some text."""
 
 def test_message_file_created(self) -> None:
 self.args.ask = ["What is this?"]
-cache_dir_files = self.message_list(self.cache_dir)
+cache_dir_files = self.message_list(self.cache_path)
 self.assertEqual(len(cache_dir_files), 0)
 create_message(self.chat, self.args)
-cache_dir_files = self.message_list(self.cache_dir)
+cache_dir_files = self.message_list(self.cache_path)
 self.assertEqual(len(cache_dir_files), 1)
 message = Message.from_file(cache_dir_files[0])
 self.assertIsInstance(message, Message)
@@ -201,23 +199,21 @@ It is embedded code
 """))
 
 
-class TestQuestionCmd(TestWithFakeAI):
+class TestQuestionCmd(unittest.TestCase):
 
 def setUp(self) -> None:
 # create DB and cache
-self.db_dir = tempfile.TemporaryDirectory()
+self.db_path = tempfile.TemporaryDirectory()
-self.cache_dir = tempfile.TemporaryDirectory()
+self.cache_path = tempfile.TemporaryDirectory()
 # create configuration
 self.config = Config()
-self.config.cache = self.cache_dir.name
-self.config.db = self.db_dir.name
 # create a mock argparse.Namespace
 self.args = argparse.Namespace(
 ask=['What is the meaning of life?'],
 num_answers=1,
 output_tags=['science'],
-AI='FakeAI',
+AI='openai',
-model='FakeModel',
+model='gpt-3.5-turbo',
 or_tags=None,
 and_tags=None,
 exclude_tags=None,
@@ -225,68 +221,63 @@ class TestQuestionCmd(TestWithFakeAI):
 source_code=None,
 create=None,
 repeat=None,
-process=None,
+process=None
-overwrite=None
 )
 
-def message_list(self, tmp_dir: tempfile.TemporaryDirectory) -> list[Path]:
+def input_message(self, args: argparse.Namespace) -> Message:
-# exclude '.next'
-return sorted([f for f in Path(tmp_dir.name).glob('*.[ty]*')])
 
 
-class TestQuestionCmdAsk(TestQuestionCmd):
-
-@mock.patch('chatmastermind.commands.question.create_ai')
-def test_ask_single_answer(self, mock_create_ai: MagicMock) -> None:
 """
-Test single answer with no errors.
+Create the expected input message for a question using the
+given arguments.
 """
-mock_create_ai.side_effect = self.mock_create_ai
+# NOTE: we only use the first question from the "ask" list
-expected_question = Message(Question(self.args.ask[0]),
+# -> message creation using "question.create_message()" is
-tags=set(self.args.output_tags),
+# tested above
-ai=self.args.AI,
+# the answer is always empty for the input message
-model=self.args.model,
+return Message(Question(args.ask[0]),
-file_path=Path('<NOT COMPARED>'))
+tags=args.output_tags,
-fake_ai = self.mock_create_ai(self.args, self.config)
+ai=args.AI,
-expected_responses = fake_ai.request(expected_question,
+model=args.model)
-Chat([]),
-self.args.num_answers,
-self.args.output_tags).messages
 
-# execute the command
+def response(self, args: argparse.Namespace) -> AIResponse:
-question_cmd(self.args, self.config)
+"""
+Create the expected AI response from the give arguments.
-# check for the expected message files
+"""
-chat = ChatDB.from_dir(Path(self.cache_dir.name),
+input_msg = self.input_message(args)
-Path(self.db_dir.name))
+response = AIResponse(messages=[], tokens=Tokens(10, 10, 20))
-cached_msg = chat.msg_gather(loc='cache')
+for n in range(args.num_answers):
-self.assertEqual(len(self.message_list(self.cache_dir)), 1)
+response_msg = Message(input_msg.question,
-self.assert_msgs_equal_except_file_path(cached_msg, expected_responses)
+Answer(f"Answer {n}"),
+tags=input_msg.tags,
+ai=input_msg.ai,
+model=input_msg.model)
+response.messages.append(response_msg)
+return response
 
 @mock.patch('chatmastermind.commands.question.ChatDB.from_dir')
 @mock.patch('chatmastermind.commands.question.create_ai')
-def test_ask_single_answer_mocked(self, mock_create_ai: MagicMock, mock_from_dir: MagicMock) -> None:
+def test_ask_single_answer(self, mock_create_ai: MagicMock, mock_from_dir: MagicMock) -> None:
-"""
-Test single answer with no errors (mocked ChatDB version).
+# FIXME: this mock is only neccessary because the cache dir is not
-"""
+# configurable in the configuration file
 chat = MagicMock(spec=ChatDB)
 mock_from_dir.return_value = chat
 
-mock_create_ai.side_effect = self.mock_create_ai
+# create a mock AI instance
-expected_question = Message(Question(self.args.ask[0]),
+ai = MagicMock(spec=AI)
-tags=set(self.args.output_tags),
+ai.request.return_value = self.response(self.args)
-ai=self.args.AI,
+mock_create_ai.return_value = ai
-model=self.args.model,
+expected_question = self.input_message(self.args)
-file_path=Path('<NOT COMPARED>'))
+expected_responses = ai.request.return_value.messages
-fake_ai = self.mock_create_ai(self.args, self.config)
-expected_responses = fake_ai.request(expected_question,
-Chat([]),
|
|
||||||
self.args.num_answers,
|
|
||||||
self.args.output_tags).messages
|
|
||||||
|
|
||||||
# execute the command
|
# execute the command
|
||||||
question_cmd(self.args, self.config)
|
question_cmd(self.args, self.config)
|
||||||
|
|
||||||
|
# check for correct request call
|
||||||
|
ai.request.assert_called_once_with(expected_question,
|
||||||
|
chat,
|
||||||
|
self.args.num_answers,
|
||||||
|
self.args.output_tags)
|
||||||
|
|
||||||
# check for the correct ChatDB calls:
|
# check for the correct ChatDB calls:
|
||||||
# - initial question has been written (prior to the actual request)
|
# - initial question has been written (prior to the actual request)
|
||||||
# - responses have been written (after the request)
|
# - responses have been written (after the request)
|
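The refactor above replaces file-based assertions with a fully mocked `ChatDB` and `AI` (`MagicMock(spec=...)` plus `assert_called_once_with`). A minimal, self-contained sketch of that testing pattern, using illustrative `Store`/`Engine` stand-ins rather than the actual chatmastermind classes:

```python
import unittest
from unittest.mock import MagicMock


class Store:
    """Illustrative stand-in for a persistence layer like ChatDB."""
    def add(self, item: str) -> None: ...


class Engine:
    """Illustrative stand-in for an AI backend."""
    def request(self, question: str, n: int) -> list[str]: ...


def run(store: Store, engine: Engine, question: str, n: int) -> list[str]:
    # code under test: ask the engine, persist every answer
    answers = engine.request(question, n)
    for answer in answers:
        store.add(answer)
    return answers


class TestRun(unittest.TestCase):
    def test_run(self) -> None:
        # spec=... makes the mock reject attributes the real class lacks
        store = MagicMock(spec=Store)
        engine = MagicMock(spec=Engine)
        engine.request.return_value = ['a0', 'a1']

        result = run(store, engine, 'q', 2)

        # verify the exact request call and the persisted answers
        engine.request.assert_called_once_with('q', 2)
        self.assertEqual(store.add.call_count, 2)
        self.assertEqual(result, ['a0', 'a1'])


if __name__ == '__main__':
    unittest.main()
```

Because the collaborators are injected mocks, the test exercises only the orchestration logic and never touches the filesystem, which is the same trade-off the commit makes for `question_cmd`.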
```diff
@@ -296,280 +287,3 @@ class TestQuestionCmdAsk(TestQuestionCmd):
         # check that the messages have not been added to the internal message list
         chat.cache_add.assert_not_called()
 
-    @mock.patch('chatmastermind.commands.question.create_ai')
-    def test_ask_with_error(self, mock_create_ai: MagicMock) -> None:
-        """
-        Provoke an error during the AI request and verify that the question
-        has been correctly stored in the cache.
-        """
-        mock_create_ai.side_effect = self.mock_create_ai_with_error
-        expected_question = Message(Question(self.args.ask[0]),
-                                    tags=set(self.args.output_tags),
-                                    ai=self.args.AI,
-                                    model=self.args.model,
-                                    file_path=Path('<NOT COMPARED>'))
-
-        # execute the command
-        with self.assertRaises(AIError):
-            question_cmd(self.args, self.config)
-
-        # check for the expected message files
-        chat = ChatDB.from_dir(Path(self.cache_dir.name),
-                               Path(self.db_dir.name))
-        cached_msg = chat.msg_gather(loc='cache')
-        self.assertEqual(len(self.message_list(self.cache_dir)), 1)
-        self.assert_msgs_equal_except_file_path(cached_msg, [expected_question])
-
-
-class TestQuestionCmdRepeat(TestQuestionCmd):
-
-    @mock.patch('chatmastermind.commands.question.create_ai')
-    def test_repeat_single_question(self, mock_create_ai: MagicMock) -> None:
-        """
-        Repeat a single question.
-        """
-        mock_create_ai.side_effect = self.mock_create_ai
-        # create a message
-        message = Message(Question(self.args.ask[0]),
-                          Answer('Old Answer'),
-                          tags=set(self.args.output_tags),
-                          ai=self.args.AI,
-                          model=self.args.model,
-                          file_path=Path(self.cache_dir.name) / '0001.txt')
-        message.to_file()
-
-        # repeat the last question (without overwriting)
-        # -> expect two identical messages (except for the file_path)
-        self.args.ask = None
-        self.args.repeat = []
-        self.args.overwrite = False
-        expected_response = Message(Question(message.question),
-                                    Answer('Answer 0'),
-                                    ai=message.ai,
-                                    model=message.model,
-                                    tags=message.tags,
-                                    file_path=Path('<NOT COMPARED>'))
-        # we expect the original message + the one with the new response
-        expected_responses = [message] + [expected_response]
-        question_cmd(self.args, self.config)
-        chat = ChatDB.from_dir(Path(self.cache_dir.name),
-                               Path(self.db_dir.name))
-        cached_msg = chat.msg_gather(loc='cache')
-        print(self.message_list(self.cache_dir))
-        self.assertEqual(len(self.message_list(self.cache_dir)), 2)
-        self.assert_msgs_equal_except_file_path(cached_msg, expected_responses)
-
-    @mock.patch('chatmastermind.commands.question.create_ai')
-    def test_repeat_single_question_overwrite(self, mock_create_ai: MagicMock) -> None:
-        """
-        Repeat a single question and overwrite the old one.
-        """
-        mock_create_ai.side_effect = self.mock_create_ai
-        # create a message
-        message = Message(Question(self.args.ask[0]),
-                          Answer('Old Answer'),
-                          tags=set(self.args.output_tags),
-                          ai=self.args.AI,
-                          model=self.args.model,
-                          file_path=Path(self.cache_dir.name) / '0001.txt')
-        message.to_file()
-        chat = ChatDB.from_dir(Path(self.cache_dir.name),
-                               Path(self.db_dir.name))
-        cached_msg = chat.msg_gather(loc='cache')
-        assert cached_msg[0].file_path
-        cached_msg_file_id = cached_msg[0].file_path.stem
-
-        # repeat the last question (WITH overwriting)
-        # -> expect a single message afterwards (with a new answer)
-        self.args.ask = None
-        self.args.repeat = []
-        self.args.overwrite = True
-        expected_response = Message(Question(message.question),
-                                    Answer('Answer 0'),
-                                    ai=message.ai,
-                                    model=message.model,
-                                    tags=message.tags,
-                                    file_path=Path('<NOT COMPARED>'))
-        question_cmd(self.args, self.config)
-        cached_msg = chat.msg_gather(loc='cache')
-        self.assertEqual(len(self.message_list(self.cache_dir)), 1)
-        self.assert_msgs_equal_except_file_path(cached_msg, [expected_response])
-        # also check that the file ID has not been changed
-        assert cached_msg[0].file_path
-        self.assertEqual(cached_msg_file_id, cached_msg[0].file_path.stem)
-
-    @mock.patch('chatmastermind.commands.question.create_ai')
-    def test_repeat_single_question_after_error(self, mock_create_ai: MagicMock) -> None:
-        """
-        Repeat a single question after an error.
-        """
-        mock_create_ai.side_effect = self.mock_create_ai
-        # create a question WITHOUT an answer
-        # -> just like after an error, which is tested above
-        message = Message(Question(self.args.ask[0]),
-                          tags=set(self.args.output_tags),
-                          ai=self.args.AI,
-                          model=self.args.model,
-                          file_path=Path(self.cache_dir.name) / '0001.txt')
-        message.to_file()
-        chat = ChatDB.from_dir(Path(self.cache_dir.name),
-                               Path(self.db_dir.name))
-        cached_msg = chat.msg_gather(loc='cache')
-        assert cached_msg[0].file_path
-        cached_msg_file_id = cached_msg[0].file_path.stem
-
-        # repeat the last question (without overwriting)
-        # -> expect a single message because if the original has
-        #    no answer, it should be overwritten by default
-        self.args.ask = None
-        self.args.repeat = []
-        self.args.overwrite = False
-        expected_response = Message(Question(message.question),
-                                    Answer('Answer 0'),
-                                    ai=message.ai,
-                                    model=message.model,
-                                    tags=message.tags,
-                                    file_path=Path('<NOT COMPARED>'))
-        question_cmd(self.args, self.config)
-        cached_msg = chat.msg_gather(loc='cache')
-        self.assertEqual(len(self.message_list(self.cache_dir)), 1)
-        self.assert_msgs_equal_except_file_path(cached_msg, [expected_response])
-        # also check that the file ID has not been changed
-        assert cached_msg[0].file_path
-        self.assertEqual(cached_msg_file_id, cached_msg[0].file_path.stem)
-
-    @mock.patch('chatmastermind.commands.question.create_ai')
-    def test_repeat_single_question_new_args(self, mock_create_ai: MagicMock) -> None:
-        """
-        Repeat a single question with new arguments.
-        """
-        mock_create_ai.side_effect = self.mock_create_ai
-        # create a message
-        message = Message(Question(self.args.ask[0]),
-                          Answer('Old Answer'),
-                          tags=set(self.args.output_tags),
-                          ai=self.args.AI,
-                          model=self.args.model,
-                          file_path=Path(self.cache_dir.name) / '0001.txt')
-        message.to_file()
-        chat = ChatDB.from_dir(Path(self.cache_dir.name),
-                               Path(self.db_dir.name))
-        cached_msg = chat.msg_gather(loc='cache')
-        assert cached_msg[0].file_path
-
-        # repeat the last question with new arguments (without overwriting)
-        # -> expect two messages with identical question but different metadata and new answer
-        self.args.ask = None
-        self.args.repeat = []
-        self.args.overwrite = False
-        self.args.output_tags = ['newtag']
-        self.args.AI = 'newai'
-        self.args.model = 'newmodel'
-        new_expected_response = Message(Question(message.question),
-                                        Answer('Answer 0'),
-                                        ai='newai',
-                                        model='newmodel',
-                                        tags={Tag('newtag')},
-                                        file_path=Path('<NOT COMPARED>'))
-        question_cmd(self.args, self.config)
-        cached_msg = chat.msg_gather(loc='cache')
-        self.assertEqual(len(self.message_list(self.cache_dir)), 2)
-        self.assert_msgs_equal_except_file_path(cached_msg, [message] + [new_expected_response])
-
-    @mock.patch('chatmastermind.commands.question.create_ai')
-    def test_repeat_single_question_new_args_overwrite(self, mock_create_ai: MagicMock) -> None:
-        """
-        Repeat a single question with new arguments, overwriting the old one.
-        """
-        mock_create_ai.side_effect = self.mock_create_ai
-        # create a message
-        message = Message(Question(self.args.ask[0]),
-                          Answer('Old Answer'),
-                          tags=set(self.args.output_tags),
-                          ai=self.args.AI,
-                          model=self.args.model,
-                          file_path=Path(self.cache_dir.name) / '0001.txt')
-        message.to_file()
-        chat = ChatDB.from_dir(Path(self.cache_dir.name),
-                               Path(self.db_dir.name))
-        cached_msg = chat.msg_gather(loc='cache')
-        assert cached_msg[0].file_path
-
-        # repeat the last question with new arguments
-        self.args.ask = None
-        self.args.repeat = []
-        self.args.overwrite = True
-        self.args.output_tags = ['newtag']
-        self.args.AI = 'newai'
-        self.args.model = 'newmodel'
-        new_expected_response = Message(Question(message.question),
-                                        Answer('Answer 0'),
-                                        ai='newai',
-                                        model='newmodel',
-                                        tags={Tag('newtag')},
-                                        file_path=Path('<NOT COMPARED>'))
-        question_cmd(self.args, self.config)
-        cached_msg = chat.msg_gather(loc='cache')
-        self.assertEqual(len(self.message_list(self.cache_dir)), 1)
-        self.assert_msgs_equal_except_file_path(cached_msg, [new_expected_response])
-
-    @mock.patch('chatmastermind.commands.question.create_ai')
-    def test_repeat_multiple_questions(self, mock_create_ai: MagicMock) -> None:
-        """
-        Repeat multiple questions.
-        """
-        mock_create_ai.side_effect = self.mock_create_ai
-        # 1. === create three questions ===
-        # cached message without an answer
-        message1 = Message(Question(self.args.ask[0]),
-                           tags=self.args.output_tags,
-                           ai=self.args.AI,
-                           model=self.args.model,
-                           file_path=Path(self.cache_dir.name) / '0001.txt')
-        # cached message with an answer
-        message2 = Message(Question(self.args.ask[0]),
-                           Answer('Old Answer'),
-                           tags=self.args.output_tags,
-                           ai=self.args.AI,
-                           model=self.args.model,
-                           file_path=Path(self.cache_dir.name) / '0002.txt')
-        # DB message without an answer
-        message3 = Message(Question(self.args.ask[0]),
-                           tags=self.args.output_tags,
-                           ai=self.args.AI,
-                           model=self.args.model,
-                           file_path=Path(self.db_dir.name) / '0003.txt')
-        message1.to_file()
-        message2.to_file()
-        message3.to_file()
-        questions = [message1, message2, message3]
-        expected_responses: list[Message] = []
-        fake_ai = self.mock_create_ai(self.args, self.config)
-        for question in questions:
-            # since the message's answer is modified, we use a copy
-            # -> the original is used for comparison below
-            expected_responses += fake_ai.request(copy(question),
-                                                  Chat([]),
-                                                  self.args.num_answers,
-                                                  set(self.args.output_tags)).messages
-
-        # 2. === repeat all three questions (without overwriting) ===
-        self.args.ask = None
-        self.args.repeat = ['0001', '0002', '0003']
-        self.args.overwrite = False
-        question_cmd(self.args, self.config)
-        # two new files should be in the cache directory
-        # * the repeated cached message with answer
-        # * the repeated DB message
-        # -> the cached message without answer should be overwritten
-        self.assertEqual(len(self.message_list(self.cache_dir)), 4)
-        self.assertEqual(len(self.message_list(self.db_dir)), 1)
-        expected_cache_messages = [expected_responses[0], message2, expected_responses[1], expected_responses[2]]
-        chat = ChatDB.from_dir(Path(self.cache_dir.name),
-                               Path(self.db_dir.name))
-        cached_msg = chat.msg_gather(loc='cache')
-        self.assert_msgs_equal_except_file_path(cached_msg, expected_cache_messages)
-        # check that the DB message has not been modified at all
-        db_msg = chat.msg_gather(loc='db')
-        self.assert_msgs_all_equal(db_msg, [message3])
```