oai

About

These tools use the OpenAI API to perform chat completion requests.

They are tested using llama.cpp running on a separate Windows machine.

For testing, you can run this command on the llama machine:

llama-server -m "models\gemma-3-4b-it-Q4_K_M.gguf" --ctx-size 0 --host 0.0.0.0 -n 200 --batch-size 8 --threads 8 --mlock --n-gpu-layers 20 --tensor-split 0.7,0.3

(play around with the parameter values until you get a stable setup)


TOOLS:

- oai: simple shell-like chat between user and assistant
- ocomplete: acme interface


USAGE:

oai [-q] [-k apikey] [-m model] [-u baseurl] [-s sysprompt]
ocomplete [-k apikey] [-m model] [-u baseurl]

baseurl is the HTTP URL without the v1/... path; with llama-server this is usually just http://server:8080.
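
For example, pointing oai at the llama-server instance above (the model name is an assumption; llama-server typically answers with whatever model it loaded, regardless of the name requested):

oai -u http://server:8080 -m gemma-3-4b-it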

The apikey and the baseurl are optional if you set them as environment variables ($oaikey and $oaiurl).
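
Using rc, for example (the key value is a placeholder; servers like llama-server only enforce a key when started with one):

oaiurl=http://server:8080
oaikey=dummy
oai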

After starting oai, you get a user: prompt for your messages.
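
A session then looks roughly like this (the reply text is illustrative):

user: say hello in five words
assistant: Hello, nice to meet you!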

Ocomplete: Call the program from within an acme window with some selected text. The whole window contents will be sent to the API as context, and the LLM response will be appended to the selected text.
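
For example, with a question selected in the window body, executing this from the tag (flags or environment variables as above) inserts the reply after the selection:

ocomplete -u http://server:8080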

Oai: -q suppresses all prompts, printing only LLM responses. -s sets the sysprompt.
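
Because -q prints only the model output, oai is usable in pipelines; assuming user messages are read from standard input (which the user: prompt implies), something like:

echo 'what is acme?' | oai -q -s 'Answer in one short sentence.'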


LIBRARY:

oai.h and oailib.c expose a simple data structure and a function for making requests against the chat completions API. They are intended to be reused by different tools.
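
As a rough sketch of the intended usage pattern (the struct and function names below are assumptions for illustration, not the actual oai.h interface):

/* hypothetical sketch; Oai and oaichat are assumed names, not the real API */
#include <u.h>
#include <libc.h>
#include "oai.h"

void
main(void)
{
	Oai o;
	char *reply;

	memset(&o, 0, sizeof o);
	o.url = getenv("oaiurl");	/* base url, e.g. http://server:8080 */
	o.key = getenv("oaikey");
	o.model = "gemma-3-4b-it";	/* assumed model name */

	/* send one user message, get the assistant reply back */
	reply = oaichat(&o, "say hello");
	if(reply == nil)
		sysfatal("oaichat: %r");
	print("%s\n", reply);
	exits(nil);
}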