Replies: 3 comments · 1 reply
- What we tried (Describe our experience here)
- 🔎 Existing solutions (Do some research / discussion)
- 11:11
Language| models| generate| one| token| at| a| time.
The official SDKs return or stream the result as a whole by default. With OpenAI you can cap the number of tokens (`max_tokens`) and/or enable JSON mode, but that's it.
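For reference, this is roughly the most control the stock OpenAI SDK gives you today (the model name here is just an example):

```python
# Baseline: a token cap plus JSON mode, with no hook into
# individual token choices.
from openai import OpenAI

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # example model
    messages=[{"role": "user", "content": "Describe this idea as JSON."}],
    max_tokens=100,                           # cap on generated tokens
    response_format={"type": "json_object"},  # JSON mode
)
print(resp.choices[0].message.content)
```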
But if we could control the flow ourselves, picking tokens one by one and backtracking on each wrong step, we could enforce result expectations (#30) at a much more granular level, a kind of "SuperJSON" mode.
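A minimal sketch of that loop with a local model via Hugging Face transformers; `is_valid_prefix` is a hypothetical stand-in for a real incremental JSON validator:

```python
# Token-by-token decoding with backtracking: pick the best next token,
# test the resulting prefix, and ban the token and retry if it fails.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def is_valid_prefix(text: str) -> bool:
    # Stand-in validator; a real one would incrementally parse JSON
    # against the expected structure.
    return text.count("}") <= text.count("{")

@torch.no_grad()
def constrained_generate(prompt: str, max_new_tokens: int = 50) -> str:
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        logits = model(ids).logits[0, -1]   # scores for the next token
        banned = []                         # tokens rejected at this step
        while True:
            masked = logits.clone()
            if banned:
                masked[banned] = float("-inf")
            token = int(masked.argmax())    # greedy pick among allowed tokens
            candidate = torch.cat([ids, torch.tensor([[token]])], dim=-1)
            if is_valid_prefix(tokenizer.decode(candidate[0])):
                ids = candidate             # accept and move on
                break
            banned.append(token)            # "return back" and try another
    return tokenizer.decode(ids[0])
```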
Constraints this could enforce, for example:
- Counts
- Formats
- Format + Schema
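For the "Formats" case, the per-step validator can be as simple as a prefix regex. A hedged sketch (hypothetical helper; it plugs into `constrained_generate` above as the validator):

```python
# Example "Formats" constraint: accept only strings that are still a
# prefix of an ISO date (YYYY-MM-DD). Deliberately loose; a real
# implementation would compile the format into a proper prefix matcher.
import re

DATE_PREFIX = re.compile(r"\d{0,4}(-\d{0,2}(-\d{0,2})?)?")

def is_valid_date_prefix(text: str) -> bool:
    return DATE_PREFIX.fullmatch(text) is not None
```

"Format + Schema" is the same idea with the regex replaced by a schema-aware parser or grammar; constrained-decoding libraries such as Outlines and llama.cpp's GBNF grammars work this way.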
See also ⏳ Just-in-time fine-tuning