Each model exposes the same three chat settings:

  • mistral-large
  • codellama-70b-instruct
  • claude-3-opus
  • gpt-4o

    Max tokens: Limit for generated tokens in a chat, constrained by the model's context length (a value of 0 means no limit).
    Temperature: Sets the randomness level (0 to 1). Higher values increase randomness; lower values make output more focused. (Default: 1)
    System prompt: Sets the initial context and direction for the model's responses. It should be chosen to align with the specific topic and desired style of the conversation.
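The constraints above (temperature between 0 and 1, a token limit of 0 meaning no limit, an optional system prompt) can be sketched as a small validation helper. This is a hypothetical illustration, not the actual client code: the parameter names `max_tokens`, `temperature`, and `system_prompt` are assumptions, and the real settings keys may differ.

```python
def build_chat_settings(model, max_tokens=0, temperature=1.0, system_prompt=""):
    """Return a settings dict for `model`, enforcing the documented constraints.

    Hypothetical sketch: key names are assumptions, not a real API.
    """
    if not 0.0 <= temperature <= 1.0:
        raise ValueError("temperature must be between 0 and 1")
    if max_tokens < 0:
        raise ValueError("max_tokens must be >= 0 (0 means no limit)")

    settings = {"model": model, "temperature": temperature}
    if max_tokens:  # 0 means no limit, so omit the cap entirely
        settings["max_tokens"] = max_tokens
    if system_prompt:
        settings["system_prompt"] = system_prompt
    return settings


# Example: a focused, length-capped chat with an explicit system prompt.
print(build_chat_settings("gpt-4o", max_tokens=512, temperature=0.3,
                          system_prompt="You are a concise assistant."))
```

Leaving `max_tokens` at its default of 0 simply omits the cap, matching the "0 means no limit" behavior described above.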