Phi-3.5-Mini-Instruct

A state‑of‑the‑art large language model for a variety of language understanding and generation tasks.

Phi‑3.5‑mini is a lightweight, state‑of‑the‑art open model built upon the datasets used for Phi‑3 (synthetic data and filtered publicly available websites), with a focus on very high‑quality, reasoning‑dense data. The model belongs to the Phi‑3 model family and supports a 128K‑token context length. It underwent a rigorous enhancement process incorporating supervised fine‑tuning, proximal policy optimization, and direct preference optimization to ensure precise instruction adherence and robust safety measures.

Technical Details

Minimum QAIRT SDK: 2.43.1
Input sequence length for Prompt Processor: 128
Supported context lengths: 512, 1024, 2048, 3072, 4096
Number of parameters: 3.8B
Quantization Type: w4a16 + w8a16 (a few layers)
Measurement details: The model was benchmarked using a short prompt only. Because the model asset supports multiple context lengths, the short prompt may have caused a lower context length to be selected. The upper bound of the TTFT is approximated from the model's performance on the short prompt, under the assumption that TTFT scales linearly with context length.
TTFT: Time To First Token is the time it takes to generate the first response token. It is expressed as a range because it varies with prompt length: the lower bound is for a short prompt (up to 128 tokens, i.e., one iteration of the prompt processor) and the upper bound is for a prompt using the full context length (4096 tokens).
Response Rate: Rate of response generation after the first response token.
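The figures above imply a simple latency model: prompts are processed in 128‑token prompt‑processor iterations, the runtime picks a supported context length that fits the request, and the TTFT upper bound is extrapolated linearly from the short‑prompt measurement. A minimal sketch of that arithmetic, using the chunk size and context lengths quoted in this card (the example TTFT value is a placeholder, not a measured number):

```python
import math

PROMPT_CHUNK = 128  # input sequence length of the prompt processor (from this card)
CONTEXT_LENGTHS = [512, 1024, 2048, 3072, 4096]  # supported context lengths (from this card)

def prompt_iterations(prompt_tokens: int) -> int:
    """Number of prompt-processor passes needed to consume the prompt."""
    return math.ceil(prompt_tokens / PROMPT_CHUNK)

def pick_context_length(prompt_tokens: int, max_new_tokens: int) -> int:
    """Smallest supported context length that fits prompt + generated tokens."""
    needed = prompt_tokens + max_new_tokens
    for ctx in CONTEXT_LENGTHS:
        if ctx >= needed:
            return ctx
    raise ValueError(f"{needed} tokens exceeds the 4096-token maximum")

def ttft_upper_bound(ttft_short_s: float, ctx_len: int = 4096) -> float:
    """Upper-bound TTFT, assuming (as the card does) linear scaling from a
    short prompt (one 128-token iteration) up to the full context length."""
    return ttft_short_s * ctx_len / PROMPT_CHUNK

# Placeholder example: a 300-token prompt generating up to 500 tokens,
# with an assumed (not measured) 0.2 s TTFT on a short prompt.
print(prompt_iterations(300))           # → 3 iterations
print(pick_context_length(300, 500))    # → 1024
print(round(ttft_upper_bound(0.2), 1))  # → 6.4 seconds at the full 4096 context
```

Total response latency under this model would then be roughly TTFT plus (number of generated tokens − 1) divided by the response rate.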

Applicable Scenarios

  • Dialogue
  • Content Generation
  • Customer Support

Tags

  • llm
  • generative-ai

Supported Compute Devices

  • Snapdragon X Elite CRD
  • Snapdragon X2 Elite CRD

Supported Compute Chipsets

  • Snapdragon® X Elite
  • Snapdragon® X2 Elite
