shithub: oai

ref: 3b49941153a3ca965ba574d9e64f0325383316fd
parent: 68bce3543b2b1e9461b06cc9622e12e0d6a47ac1
author: sirjofri <sirjofri@sirjofri.de>
date: Tue Dec 30 08:56:51 EST 2025

s/Readme/README

--- /dev/null
+++ b/README
@@ -1,0 +1,52 @@
+These tools use the OpenAI API to make chat completions requests.
+
+They are tested using llama.cpp on a separate Windows machine.
+
+For testing, you can run this command on the llama machine:
+
+llama-server -m "models\gemma-3-4b-it-Q4_K_M.gguf" --ctx-size 0 --host 0.0.0.0 -n 200 --batch-size 8 --threads 8 --mlock --n-gpu-layers 20 --tensor-split 0.7,0.3
+
+(Play around with the parameter values until you get a stable environment.)
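+
+To check from the Plan 9 side that the server is up, you can fetch llama-server's health endpoint, e.g.:
+
+hget http://server:8080/health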
+
+
+USAGE:
+
+oai [-k apikey] [-m model] baseurl
+
+baseurl is the HTTP URL without the v1/... suffix; with llama-server this is usually just http://server:8080.
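+
+For example, with the server and model from above (the hostname is a placeholder for your llama machine):
+
+oai -m gemma-3-4b-it http://server:8080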
+
+After that, you get a user: prompt for entering your messages.
+
+
+LIBRARY:
+
+oai.h and oailib.c expose a simple data structure, along with a function that makes requests against the chat completions API easy. These are intended to be reused by different tools.
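+
+A minimal usage sketch of the idea (Oai, oaichat and the field names are hypothetical, not the real declarations; check oai.h for those):
+
+#include <u.h>
+#include <libc.h>
+#include "oai.h"
+
+/* hypothetical names throughout; the real API lives in oai.h */
+void
+main(int, char**)
+{
+	Oai o;
+
+	memset(&o, 0, sizeof o);
+	o.baseurl = "http://server:8080";	/* as passed on the command line */
+	o.model = "gemma-3-4b-it";		/* optional model name */
+
+	/* send one user message and print the assistant reply */
+	print("%s\n", oaichat(&o, "hello"));
+	exits(nil);
+}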
--- a/Readme
+++ /dev/null
@@ -1,23 +1,0 @@
-These tools use the OpenAI API to make chat completions requests.
-
-They are tested using llama.cpp on a separate Windows machine.
-
-For testing, you can run this command on the llama machine:
-
-llama-server -m "models\gemma-3-4b-it-Q4_K_M.gguf" --ctx-size 0 --host 0.0.0.0 -n 200 --batch-size 8 --threads 8 --mlock --n-gpu-layers 20 --tensor-split 0.7,0.3
-
-(Play around with the parameter values until you get a stable environment.)
-
-
-USAGE:
-
-oai [-k apikey] [-m model] baseurl
-
-baseurl is the HTTP URL without the v1/... suffix; with llama-server this is usually just http://server:8080.
-
-After that, you get a user: prompt for entering your messages.
-
-
-LIBRARY:
-
-oai.h and oailib.c expose a simple data structure, along with a function that makes requests against the chat completions API easy. These are intended to be reused by different tools.
--