
google/flan-t5-xxl

Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.

Public
$0.0005/sec

Input

- text to generate from
- max_length: maximum length of the generated text (default: 200, 1 ≤ max_length ≤ 2048)
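A minimal sketch of calling this model over an HTTP inference endpoint with the parameters above. The endpoint URL, the `input` field name, and the auth header format are assumptions for illustration; only the `max_length` bounds come from the listing, so check the API documentation for the exact request shape.

```python
import json
import urllib.request

# Assumed endpoint path and placeholder key -- verify against the API docs.
API_URL = "https://api.deepinfra.com/v1/inference/google/flan-t5-xxl"
API_KEY = "YOUR_API_KEY"

def generate(prompt: str, max_length: int = 200) -> dict:
    """Request a completion; max_length must stay within [1, 2048]."""
    if not 1 <= max_length <= 2048:
        raise ValueError("max_length must be between 1 and 2048")
    payload = {"input": prompt, "max_length": max_length}  # field names assumed
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

The range check mirrors the listed constraint, so an out-of-bounds `max_length` fails locally before any request is sent.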

Output

Haiku is a Japanese poem that is around 108 characters long. A tweet is ...

© 2023 Deep Infra. All rights reserved.
