Commit dca1b96: self hosted models (#63899)
Author: Stephen Gutekanst
This PR is stacked on top of all the prior work @chrsmith has done for
shuffling configuration data around; it implements the new "Self hosted
models" functionality.
## Configuration
Configuring a Sourcegraph instance to use self-hosted models involves
adding configuration like this to the site config (setting
`modelConfiguration` opts you into the new system, which is in early
access):
```
// Setting this field means we are opting into the new Cody model configuration system.
"modelConfiguration": {
  // Disable use of Sourcegraph's servers for model discovery.
  "sourcegraph": null,
  // Create two model providers.
  "providerOverrides": [
    {
      // Our first model provider, "mistral", will be a Huggingface TGI deployment which
      // hosts our mistral model for chat functionality.
      "id": "mistral",
      "displayName": "Mistral",
      "serverSideConfig": {
        "type": "huggingface-tgi",
        "endpoints": [{"url": "https://mistral.example.com/v1"}]
      }
    },
    {
      // Our second model provider, "bigcode", will be a Huggingface TGI deployment which
      // hosts our bigcode/starcoder model for code completion functionality.
      "id": "bigcode",
      "displayName": "Bigcode",
      "serverSideConfig": {
        "type": "huggingface-tgi",
        "endpoints": [{"url": "http://starcoder.example.com/v1"}]
      }
    }
  ],
  // Make these two models available to Cody users.
  "modelOverridesRecommendedSettings": [
    "mistral::v1::mixtral-8x7b-instruct",
    "bigcode::v1::starcoder2-7b"
  ],
  // Configure which models Cody will use by default.
  "defaultModels": {
    "chat": "mistral::v1::mixtral-8x7b-instruct",
    "fastChat": "mistral::v1::mixtral-8x7b-instruct",
    "codeCompletion": "bigcode::v1::starcoder2-7b"
  }
}
```
More advanced configurations are possible; the above is our blessed
configuration for today.
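A note on the model reference strings above: they follow a
`<provider-id>::<api-version>::<model-id>` syntax, where the provider ID
matches a provider defined in `providerOverrides`. Here is a minimal Go
sketch of that decomposition; the `ModelRef` type and `parseModelRef`
helper are hypothetical, for illustration only, not types from this PR:

```go
package main

import (
	"fmt"
	"strings"
)

// ModelRef is an illustrative decomposition of the "<provider>::<api-version>::<model>"
// reference syntax used in the configuration above. Hypothetical names, for the sketch only.
type ModelRef struct {
	ProviderID string // matches a provider "id" from providerOverrides, e.g. "mistral"
	APIVersion string // e.g. "v1"
	ModelID    string // the provider-specific model name
}

func parseModelRef(ref string) (ModelRef, error) {
	parts := strings.Split(ref, "::")
	if len(parts) != 3 {
		return ModelRef{}, fmt.Errorf("expected <provider>::<api-version>::<model>, got %q", ref)
	}
	return ModelRef{ProviderID: parts[0], APIVersion: parts[1], ModelID: parts[2]}, nil
}

func main() {
	ref, err := parseModelRef("mistral::v1::mixtral-8x7b-instruct")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", ref) // {ProviderID:mistral APIVersion:v1 ModelID:mixtral-8x7b-instruct}
}
```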
## Hosting models
Another major component of this work is starting to build up
recommendations around how to self-host models, which ones to use, how
to configure them, etc.
For now, we've been testing with this stack on a machine with dual A100s:
* Huggingface TGI (a widely popular Docker container for model inference
which provides an OpenAI-compatible API)
* Two models:
  * Starcoder2 for code completion; specifically `bigcode/starcoder2-15b`
with `eetq` 8-bit quantization.
  * Mixtral 8x7b instruct for chat; specifically
`casperhansen/mixtral-instruct-awq`, which uses `awq` 4-bit quantization.
This is our 'starter' configuration. Other models, specifically other
Starcoder2 and Mixtral instruct variants, certainly work too, and
higher-parameter versions may of course provide better results.
Documentation for how to deploy Huggingface TGI, suggested configuration,
and debugging tips is coming soon.
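In the meantime, a quick way to sanity-check a deployment is to hit its
OpenAI-compatible API directly. A minimal Go sketch, assuming your TGI
endpoint exposes the standard OpenAI-style `/chat/completions` route (the
URL below is the placeholder from the example config, not a real server):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// A minimal OpenAI-style chat request; model name matches the example above.
	body := []byte(`{
		"model": "mixtral-8x7b-instruct",
		"messages": [{"role": "user", "content": "Say hello."}],
		"max_tokens": 32
	}`)
	// Placeholder endpoint from the example config; point this at your deployment.
	resp, err := http.Post(
		"https://mistral.example.com/v1/chat/completions",
		"application/json",
		bytes.NewReader(body),
	)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(out))
}
```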
## Advanced configuration
As part of this effort, I have added a quite extensive set of
configuration knobs to the client-side model configuration (see `type
ClientSideModelConfigOpenAICompatible` in this PR).
Some of these configuration options are needed for things to work at a
basic level, while others (e.g. prompt customization) are not needed for
basic functionality but are very important for customers interested in
self-hosting their own models.
Today, Cody clients have a number of different _autocomplete provider
implementations_ which tie the model-specific logic that enables
autocomplete to a provider. For example, if you use a GPT model through
Azure OpenAI, the autocomplete provider for that is entirely different
from what you'd get if you used a GPT model through OpenAI officially.
This can lead to subtle issues for us, so it is worth exploring ways to
have a _generalized autocomplete provider_. Since we _must_ address this
problem for self-hosted models anyway, these configuration knobs fed to
the client from the server are a pathway to doing that: initially just
for self-hosted models, but in the future possibly generalized to other
providers.
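To make the shape of these knobs concrete, here is a hypothetical Go
sketch of what such server-fed, client-side configuration could look
like. The field names are illustrative assumptions only; see `type
ClientSideModelConfigOpenAICompatible` in this PR for the real
definition:

```go
package sketch

// clientSideOpenAICompatibleSketch is a hypothetical illustration of
// server-fed, client-side configuration knobs for an OpenAI-compatible
// provider. Field names are assumptions, not the actual PR types.
type clientSideOpenAICompatibleSketch struct {
	// Needed for things to work at a basic level.
	StopSequences         []string `json:"stopSequences,omitempty"`
	ContextSizeHintTokens int      `json:"contextSizeHintTokens,omitempty"`

	// Not needed for basic functionality, but important for customers
	// self-hosting their own models (prompt customization).
	ChatPreInstruction       string `json:"chatPreInstruction,omitempty"`
	AutocompleteUserTemplate string `json:"autocompleteUserTemplate,omitempty"`
}
```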
## Debugging facilities
Working with customers in the past to adopt OpenAI-compatible APIs, we've
learned that debugging can be quite a pain: if you can't see what
requests the Sourcegraph backend is making and what it is getting back,
tracking down problems is painful.
This PR implements quite extensive logging, and a `debugConnections`
flag which can be turned on to enable logging of the actual request
payloads and responses. This is critical when a customer is trying to
add support for a new model, their own custom OpenAI API service, etc.
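Conceptually, this kind of debug logging amounts to wrapping the HTTP
transport so that full request and response payloads are logged. A
minimal Go sketch of that general idea (illustrative only, not the
implementation in this PR):

```go
package main

import (
	"bytes"
	"io"
	"log"
	"net/http"
	"net/http/httputil"
)

// debugTransport logs full request and response payloads, conceptually what a
// debugConnections-style flag enables. Buffering the whole body like this is
// fine for debugging non-streaming calls; a real implementation would log SSE
// frames incrementally instead.
type debugTransport struct{ base http.RoundTripper }

func (t debugTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	if dump, err := httputil.DumpRequestOut(req, true); err == nil {
		log.Printf("request:\n%s", dump)
	}
	resp, err := t.base.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	body, _ := io.ReadAll(resp.Body)
	resp.Body.Close()
	log.Printf("response %s:\n%s", resp.Status, body)
	resp.Body = io.NopCloser(bytes.NewReader(body)) // restore the body for the caller
	return resp, nil
}

func main() {
	client := &http.Client{Transport: debugTransport{base: http.DefaultTransport}}
	// Placeholder endpoint from the example config above.
	resp, err := client.Get("https://mistral.example.com/v1/models")
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
}
```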
## Robustness
Working with customers in the past, we also learned that various parts
of our backend `openai` provider were not very robust. For example, [if
more than one message was present it was a fatal
error](https://github.com/sourcegraph/sourcegraph/blob/main/internal/completions/client/openai/openai.go#L305),
and if the SSE stream yielded `{"error"}` payloads, they were ignored.
Similarly, the SSE event stream parser we use is heavily tailored
towards [the exact response
structure](https://github.com/sourcegraph/sourcegraph/blob/main/internal/completions/client/openai/decoder.go#L15-L19)
which OpenAI's official API returns, and is therefore quite brittle when
connecting to a different SSE stream.
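To illustrate the error-payload problem, here is a minimal Go sketch of a
more tolerant way to interpret a decoded SSE data payload, surfacing
`{"error": ...}` objects instead of dropping them. Types and field names
are simplified for illustration, not taken from this PR:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// streamEvent accepts either an OpenAI-style streaming chunk or an error
// object, so error payloads are surfaced rather than silently ignored.
type streamEvent struct {
	Error *struct {
		Message string `json:"message"`
	} `json:"error"`
	Choices []struct {
		Delta struct {
			Content string `json:"content"`
		} `json:"delta"`
	} `json:"choices"`
}

func handlePayload(data []byte) (string, error) {
	var ev streamEvent
	if err := json.Unmarshal(data, &ev); err != nil {
		return "", fmt.Errorf("malformed SSE payload %q: %w", data, err)
	}
	if ev.Error != nil {
		return "", fmt.Errorf("upstream error: %s", ev.Error.Message)
	}
	if len(ev.Choices) > 0 {
		return ev.Choices[0].Delta.Content, nil
	}
	return "", nil // e.g. a keep-alive or harmless unrecognized payload
}

func main() {
	for _, payload := range []string{
		`{"choices":[{"delta":{"content":"Hello"}}]}`,
		`{"error":{"message":"model overloaded"}}`,
	} {
		text, err := handlePayload([]byte(payload))
		fmt.Printf("text=%q err=%v\n", text, err)
	}
}
```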
For this work, I have _started by forking_ our
`internal/completions/client/openai` provider, and made a number of major
improvements to make it more robust, handle errors better, etc.
I have also replaced the usage of a custom SSE event stream parser,
which was not spec compliant and was brittle, with a proper SSE event
stream parser that recently popped up in the Go community:
https://github.com/tmaxmax/go-sse
My intention is that after more extensive testing, this new
`internal/completions/client/openaicompatible` provider will be more
robust, more correct, and all around better than
`internal/completions/client/openai` (and possibly the azure one) so
that we can just supersede those with this new `openaicompatible` one
entirely.
## Client implementation
Much of the work done in this PR is just "let the site admin configure
things, and broadcast that config to the client through the new model
config system."
Actually getting the clients to respect the new configuration is a task
I am tackling in future `sourcegraph/cody` PRs.
## Test plan
1. This change currently lacks any unit/regression tests; that is a
major noteworthy point. I will follow up with those in a future PR.
   * However, these changes are **incredibly** isolated, clearly only
affecting customers who opt in to this new self-hosted models
configuration.
   * Most of the heavy lifting (SSE streaming, shuffling data around) is
done in other well-tested codebases.
2. Manual testing has played a big role here, specifically:
   * Running a dev instance with the new configuration, actually connected
to Huggingface TGI deployed on a remote server.
   * Using the new `debugConnections` mechanism (which customers would use)
to directly confirm requests are going to the right places, with the
right data and payloads.
   * Confirming with a new client (changes not yet landed) that
autocomplete and chat functionality work.
Can we use more testing? Hell yeah, and I'm going to add it soon. Does
it work quite well, leaving little room for error? Also yes.
## Changelog
Cody Enterprise: added a new configuration for self-hosting models.
Reach out to support if you would like to use this feature as it is in
early access.
---------
Signed-off-by: Stephen Gutekanst <stephen@sourcegraph.com>
File tree (18 files changed: +1070 lines, −76 lines)
- cmd/frontend/internal/modelconfig
- internal
  - completions
    - client
      - openaicompatible
    - tokenizer
    - tokenusage
  - conf/conftypes
  - modelconfig
  - types
- schema