- `convertSystemMessageToHumanContent` (Optional)
- `maxOutputTokens` (Optional): Maximum number of tokens to generate in the completion.
- `model` (Optional): Model to use.
- `modelName` (Optional): Model to use. Alias for `model`.
- `responseMimeType` (Optional): The output format of the generated candidate text. Available for gemini-1.5-pro. Supported MIME types:
  - `text/plain`: Text output.
  - `application/json`: JSON response in the candidates.
- `safetyHandler` (Optional)
- `safetySettings` (Optional)
- `stopSequences` (Optional)
- `streamUsage` (Optional): Whether or not to include usage data, like token counts, in the streamed response chunks.
- `streaming` (Optional): Whether or not to stream.
- `temperature` (Optional): Sampling temperature to use.
- `tools` (Optional)
- `topK` (Optional): Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature).
- `topP` (Optional): Top-p changes how the model selects tokens for output. Tokens are selected from most probable to least until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model will select either A or B as the next token (using temperature).
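The top-k and top-p behavior described above can be sketched as token-filtering steps. This is an illustrative sketch, not the model's actual sampler; `topKFilter` and `topPFilter` are hypothetical helpers:

```typescript
// Top-k: keep only the k most probable tokens; the next token is then
// sampled among them. k = 1 reduces to greedy decoding.
function topKFilter(probs: Record<string, number>, k: number): string[] {
  return Object.entries(probs)
    .sort((a, b) => b[1] - a[1]) // most probable first
    .slice(0, k)
    .map(([token]) => token);
}

// Top-p (nucleus): walk tokens from most to least probable, stopping once
// the cumulative probability reaches the top-p value.
function topPFilter(probs: Record<string, number>, topP: number): string[] {
  const sorted = Object.entries(probs).sort((a, b) => b[1] - a[1]);
  const kept: string[] = [];
  let cumulative = 0;
  for (const [token, p] of sorted) {
    if (cumulative >= topP) break; // nucleus already covers topP mass
    kept.push(token);
    cumulative += p;
  }
  return kept;
}

const probs = { A: 0.3, B: 0.2, C: 0.1 };
console.log(topKFilter(probs, 1)); // greedy: only the single most probable token, A
// The top-p example from the description: 0.3 + 0.2 = 0.5, so with
// topP = 0.5 only A and B remain as candidates for the next token.
console.log(topPFilter(probs, 0.5));
```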
Together, these are the params which can be passed to the API at request time.
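As a rough sketch, the fields above might be assembled into a request-time params object like this. The `ModelRequestParams` interface here is illustrative, not the library's actual type:

```typescript
// Illustrative shape of a request-time params object; the interface name
// and exact field set are assumptions based on the fields documented above.
interface ModelRequestParams {
  model?: string;
  maxOutputTokens?: number;
  temperature?: number;
  topK?: number;
  topP?: number;
  stopSequences?: string[];
  streaming?: boolean;
  streamUsage?: boolean;
  responseMimeType?: "text/plain" | "application/json";
}

// Ask for JSON output, capped at 1024 tokens, sampling from the
// 3 most probable tokens at each step.
const params: ModelRequestParams = {
  model: "gemini-1.5-pro",
  maxOutputTokens: 1024,
  temperature: 0.7,
  topK: 3,
  responseMimeType: "application/json",
};
console.log(params.responseMimeType); // prints application/json
```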