IndusQ-1.1B
A state-of-the-art large language model for a variety of language understanding and generation tasks.
Indus is currently a 1.2-billion-parameter model that has been supervised fine-tuned for Hindi and its dialects.
Technical Details
Input sequence length for Prompt Processor: 128
Max context length: 1024
Number of parameters: 1B
Precision: w4a16 + w8a16 (few layers)
Use: Start the conversation with the prompt processor, then run the token generator for each subsequent token (see the sketch after these details).
Minimum QNN SDK version required: 2.27.7
Supported languages: Hindi and English.
TTFT: Time To First Token is the time it takes to generate the first response token. This is expressed as a range because it varies based on the length of the prompt. The lower bound is for a short prompt (up to 128 tokens, i.e., one iteration of the prompt processor) and the upper bound is for a prompt using the full context length (1024 tokens).
Response Rate: Rate of response generation after the first response token.
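As a rough illustration of the two-stage flow noted under "Use" above, the sketch below shows how a prompt could be consumed in 128-token chunks by the prompt processor before the token generator produces the response one token at a time. This is a minimal sketch under assumed names: the runtime object and its new_kv_cache, prompt_processor, token_generator, sample, and eos_token members are hypothetical placeholders, not part of any actual SDK API.

```python
# Hypothetical sketch of prompt-processor + token-generator inference.
PROMPT_CHUNK = 128   # input sequence length of the prompt processor
MAX_CONTEXT = 1024   # maximum context length

def generate(runtime, prompt_tokens, max_new_tokens=256):
    """Run the prompt processor over the prompt, then the token generator."""
    assert 0 < len(prompt_tokens) <= MAX_CONTEXT, "prompt exceeds max context length"

    # Stage 1: prompt processing. The prompt is consumed in 128-token chunks,
    # so longer prompts take more iterations before the first token appears.
    kv_cache = runtime.new_kv_cache()
    for start in range(0, len(prompt_tokens), PROMPT_CHUNK):
        chunk = prompt_tokens[start:start + PROMPT_CHUNK]
        logits, kv_cache = runtime.prompt_processor(chunk, kv_cache)

    # Stage 2: token generation. Each iteration feeds back the previously
    # sampled token and reuses the KV cache built in stage 1.
    output = []
    next_token = runtime.sample(logits)
    while len(output) < max_new_tokens and next_token != runtime.eos_token:
        output.append(next_token)
        logits, kv_cache = runtime.token_generator(next_token, kv_cache)
        next_token = runtime.sample(logits)
    return output
```

With these parameters, a full 1024-token prompt would take 1024 / 128 = 8 prompt-processor iterations, which is why TTFT is quoted as a range; the Response Rate corresponds to the speed of the token-generator loop after the first token.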
Applicable Scenarios
- Dialogue
- Content Generation
- Customer Support
Supported Form Factors
- Phone
- Tablet
Licenses
Terms of Use: Qualcomm® Generative AI usage and limitations
Tags
- llm
- generative-ai
- quantized
Supported Devices
- Snapdragon 8 Elite QRD
Supported Chipsets
- Snapdragon® 8 Elite Mobile