    Class OpenAIModelConfiguration

    Inheritance
    object
    OpenAIModelConfiguration
    ChatGptModelConfiguration
    Implements
    IEquatable<OpenAIModelConfiguration>
    Inherited Members
    object.Equals(object)
    object.Equals(object, object)
    object.GetHashCode()
    object.GetType()
    object.MemberwiseClone()
    object.ReferenceEquals(object, object)
    object.ToString()
    Namespace: DotnetPrompt.LLM.OpenAI
    Assembly: DotnetPrompt.LLM.OpenAI.dll
    Syntax
    public record OpenAIModelConfiguration : IEquatable<OpenAIModelConfiguration>
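
Examples

A minimal usage sketch: because OpenAIModelConfiguration is a record with init-only properties, a customized configuration can be derived from the Default field with a C# with-expression. The model name below is illustrative, not prescribed by this library:

var config = OpenAIModelConfiguration.Default with
{
    // Illustrative model name; substitute the model your deployment exposes.
    Model = "text-davinci-003",
    Temperature = 0.7f,
    MaxTokens = 256
};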

    Fields


    Default

The default model configuration.

    Declaration
    public static OpenAIModelConfiguration Default
    Field Value
    Type Description
    OpenAIModelConfiguration

    Properties


    CacheLevel

Controls server-side caching: 0 = no cache, 1 = prompt prefix enabled, 2 = full cache.

    Declaration
    [JsonPropertyName("cache_level")]
    public int? CacheLevel { get; init; }
    Property Value
    Type Description
    int?

    CompletionConfig

    Completion configuration.

    Declaration
    [JsonPropertyName("completion_config")]
    public string CompletionConfig { get; init; }
    Property Value
    Type Description
    string

    Echo

    Echo back the prompt in addition to the completion.

    Declaration
    [JsonPropertyName("echo")]
    public bool? Echo { get; init; }
    Property Value
    Type Description
    bool?

    FrequencyPenalty

How much to penalize new tokens based on their existing frequency in the text so far. Decreases the model's likelihood to repeat the same line verbatim.

    Declaration
    [JsonPropertyName("frequency_penalty")]
    public float? FrequencyPenalty { get; init; }
    Property Value
    Type Description
    float?

    GenerationSampleCount

How many generations to create server-side, returning only the best. Intermediate progress will not be streamed if best_of > 1. Maximum value of 128 allowed.

    Declaration
    [JsonPropertyName("best_of")]
    public int? GenerationSampleCount { get; init; }
    Property Value
    Type Description
    int?
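
Examples

A sketch of requesting several server-side candidates; as the description notes, intermediate progress is not streamed when best_of > 1:

var config = OpenAIModelConfiguration.Default with
{
    // Generate five completions server-side; only the best one is returned.
    GenerationSampleCount = 5
};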

    LogitBias

Defaults to null. Modifies the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the GPT tokenizer) to an associated bias value from -100 to 100. You can use the OpenAI tokenizer tool (which works for both GPT-2 and GPT-3) to convert text to token IDs. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect varies per model, but values between -1 and 1 should decrease or increase the likelihood of selection, while values like -100 or 100 should result in a ban or exclusive selection of the relevant token. For example, you can pass {"50256": -100} to prevent the <|endoftext|> token from being generated.

    Declaration
    [JsonPropertyName("logit_bias")]
    public IDictionary<string, int> LogitBias { get; init; }
    Property Value
    Type Description
    IDictionary<string, int>
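
Examples

A sketch of the ban example from the description above; token ID 50256 is <|endoftext|> in the GPT-2/GPT-3 tokenizer:

using System.Collections.Generic;

var config = OpenAIModelConfiguration.Default with
{
    // A bias of -100 effectively bans the <|endoftext|> token.
    LogitBias = new Dictionary<string, int> { ["50256"] = -100 }
};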

    LogProbability

Include the log probabilities on the logprobs most likely tokens, as well as the chosen tokens. For example, if logprobs is 10, the API will return a list of the 10 most likely tokens; if logprobs is 0, only the chosen tokens will have logprobs returned. Minimum of 0 and maximum of 100 allowed.

    Declaration
    [JsonPropertyName("logprobs")]
    public int? LogProbability { get; init; }
    Property Value
    Type Description
    int?

    MaxTokens

The maximum number of tokens to generate. Minimum value of 0 allowed.

    Declaration
    [JsonPropertyName("max_tokens")]
    public int? MaxTokens { get; set; }
    Property Value
    Type Description
    int?
    Remarks

Can be set to -1 when there is a single prompt, to return as many tokens as possible given the prompt and the model's maximum context size.
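
Examples

A sketch of the remark above, assuming a single prompt:

var config = OpenAIModelConfiguration.Default with
{
    // -1: return as many tokens as the model's context size allows.
    MaxTokens = -1
};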


    Model

    The name of the model to use.

    Declaration
    [JsonPropertyName("model")]
    public string Model { get; init; }
    Property Value
    Type Description
    string

    NucleusSamplingFactor

    An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend using this or temperature but not both. Minimum of 0 and maximum of 1 allowed.

    Declaration
    [JsonPropertyName("top_p")]
    public float? NucleusSamplingFactor { get; init; }
    Property Value
    Type Description
    float?
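
Examples

A sketch of nucleus sampling; Temperature is deliberately left at its default because the description recommends tuning one of the two, not both:

var config = OpenAIModelConfiguration.Default with
{
    // Only tokens in the top 10% of probability mass are considered.
    NucleusSamplingFactor = 0.1f
};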

    PresencePenalty

How much to penalize new tokens based on whether they appear in the text so far. Increases the model's likelihood to talk about new topics. Minimum of -2 and maximum of 2 allowed.

    Declaration
    [JsonPropertyName("presence_penalty")]
    public float? PresencePenalty { get; init; }
    Property Value
    Type Description
    float?

    Prompt

An optional prompt to complete from, encoded as a string, a list of strings, or a list of token lists. Defaults to <|endoftext|>. Note that <|endoftext|> is the document separator that the model sees during training, so if a prompt is not specified the model will generate as if from the beginning of a new document. Maximum allowed size of the string list is 2048.

    Declaration
    [JsonPropertyName("prompt")]
    public IList<string> Prompt { get; init; }
    Property Value
    Type Description
    IList<string>
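
Examples

A sketch of supplying several prompts in one configuration; the prompt texts are illustrative:

using System.Collections.Generic;

var config = OpenAIModelConfiguration.Default with
{
    Prompt = new List<string>
    {
        "Translate to French: cheese",
        "Translate to French: bread"
    }
};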

    SnippetCount

    How many snippets to generate for each prompt. Minimum of 1 and maximum of 128 allowed.

    Declaration
    [JsonPropertyName("n")]
    public int? SnippetCount { get; init; }
    Property Value
    Type Description
    int?

    Stop

Sequences which indicate the end of the current document; generation stops when one of them is encountered.

    Declaration
    [JsonPropertyName("stop")]
    public IList<string> Stop { get; set; }
    Property Value
    Type Description
    IList<string>
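
Examples

A sketch of configuring stop sequences; the sequences shown are illustrative:

using System.Collections.Generic;

var config = OpenAIModelConfiguration.Default with
{
    // Generation halts as soon as either sequence is produced.
    Stop = new List<string> { "\n\n", "END" }
};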

    Temperature

What sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer. We generally recommend using this or top_p but not both. Minimum of 0 and maximum of 2 allowed.

    Declaration
    [JsonPropertyName("temperature")]
    public float? Temperature { get; init; }
    Property Value
    Type Description
    float?
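
Examples

A sketch contrasting the two settings suggested in the description:

// Riskier, more creative completions.
var creative = OpenAIModelConfiguration.Default with { Temperature = 0.9f };

// Argmax-style sampling for prompts with a well-defined answer.
var precise = OpenAIModelConfiguration.Default with { Temperature = 0f };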

    User

    The ID of the end-user, for use in tracking and rate-limiting.

    Declaration
    [JsonPropertyName("user")]
    public string User { get; init; }
    Property Value
    Type Description
    string

    Implements

    System.IEquatable<T>
Copyright 2023 © Pavel «xakpc» Osadchuk for DotnetPrompt