
cognitivecomputations/dolphin-2.9.1-llama-3-70b

Dolphin 2.9.1 is a fine-tuned Llama-3-70b model. The new model, trained on filtered data, is more compliant yet remains uncensored, and it demonstrates improved instruction-following, conversational, coding, and function-calling abilities.

Dolphin 2.9.1 Llama 3 70b 🐬

Curated and trained by Eric Hartford, Lucas Atkins, Fernando Fernandes, and Cognitive Computations

We have retrained our Llama-3-70b fine-tune to address behavioral issues in the initial 2.9 dataset. Specifically, Systemchat was making the model too reliant on the system prompt and occasionally caused it to reference the system prompt excessively. We also found that generation length was at times insufficient for the task at hand, and identified Ultrachat as the culprit. Accounting for these concerns, we removed Systemchat and Ultrachat from the dataset; it is otherwise identical to dolphin-2.9.

This model is based on Llama-3-70b and is governed by the META LLAMA 3 COMMUNITY LICENSE AGREEMENT.

The base model has an 8k context window, and the full-weight fine-tuning used a 4k sequence length.

Training took 3 days on 8x H100 GPUs provided by Crusoe Cloud.

This model was trained with full fine-tuning (FFT) on parameters selected by Laser Scanner, using the ChatML prompt template format.

example:

<|im_start|>system
You are Dolphin, a helpful AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
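
For reference, here is a minimal sketch of building this prompt programmatically with the transformers tokenizer; it assumes the repository ships a ChatML chat template matching the format above.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "cognitivecomputations/dolphin-2.9.1-llama-3-70b"
)

messages = [
    {"role": "system", "content": "You are Dolphin, a helpful AI assistant."},
    {"role": "user", "content": "Write a haiku about the sea."},
]

# add_generation_prompt=True appends the trailing `<|im_start|>assistant`
# turn so the model continues as the assistant.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```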

Dolphin-2.9.1 has a variety of instruction, conversational, and coding skills. It also has initial agentic abilities and supports function calling.
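
The card does not document a specific function-calling schema, so the following is only an illustrative sketch of exposing tools through the ChatML system prompt; the `get_weather` tool and the JSON reply convention are assumptions for illustration, not part of the model card.

```python
import json

# Hypothetical tool definition; this JSON layout is an assumption.
tools = [{
    "name": "get_weather",
    "description": "Get the current weather for a city.",
    "parameters": {"city": {"type": "string"}},
}]

system_prompt = (
    "You are Dolphin, a helpful AI assistant with access to these tools:\n"
    + json.dumps(tools, indent=2)
    + '\nWhen a tool is needed, reply with a JSON object: '
      '{"name": <tool name>, "arguments": <arguments>}.'
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "What's the weather in Lisbon?"},
]
# These messages can then be serialized with the ChatML template shown above.
```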

Dolphin is uncensored. We have filtered the dataset to remove alignment and bias, which makes the model more compliant. You are advised to implement your own alignment layer before exposing the model as a service; it will be highly compliant with any requests, even unethical ones. Please read my blog post about uncensored models: https://erichartford.com/uncensored-models. You are responsible for any content you create using this model. Enjoy responsibly.
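
As one example of such an alignment layer, here is a minimal sketch that screens both prompts and completions; `generate` and `is_allowed` are hypothetical stand-ins for your inference call and your own moderation policy, not anything shipped with the model.

```python
REFUSAL = "I can't help with that request."

def is_allowed(text: str) -> bool:
    # Stand-in policy check; replace with a real moderation model or rules.
    banned = ("how to make a bomb",)
    return not any(b in text.lower() for b in banned)

def guarded_generate(generate, prompt: str) -> str:
    # Screen the user prompt before it reaches the model.
    if not is_allowed(prompt):
        return REFUSAL
    reply = generate(prompt)
    # Screen the model's output before it reaches the user.
    return reply if is_allowed(reply) else REFUSAL
```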

Dolphin is licensed according to Meta's Llama license. We grant permission for any use, including commercial, that complies with Meta's Llama-3 license. Dolphin was trained on data generated from GPT-4, among other models.

Evals

(Evaluation results chart from the original model card)

Training hyperparameters

The following hyperparameters were used during training (a hedged Trainer-style sketch follows the list):

  • learning_rate: 1e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 8
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 64
  • total_eval_batch_size: 8
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_steps: 5
  • num_epochs: 3
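
The card does not name the training framework, but as an illustration these settings map onto Hugging Face TrainingArguments roughly as follows; note the effective batch size of 64 = 1 per device x 8 GPUs x 8 accumulation steps. Treat this as a sketch, not a reproduction recipe.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="dolphin-2.9.1-llama-3-70b",  # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=1,   # x 8 GPUs x 8 accumulation steps = 64
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=8,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=5,
    num_train_epochs=3,
    bf16=True,  # matches the bfloat16 weights noted above
)
```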

Training results

Training Loss | Epoch  | Step | Validation Loss
------------- | ------ | ---- | ---------------
0.7659        | 0.0004 |    1 | 0.7454
0.5006        | 0.2501 |  587 | 0.4817
0.4807        | 0.5002 | 1174 | 0.4698
0.4758        | 0.7503 | 1761 | 0.4627
0.4969        | 1.0004 | 2348 | 0.4558
0.3604        | 1.2346 | 2935 | 0.4635
0.3817        | 1.4847 | 3522 | 0.4572
0.377         | 1.7348 | 4109 | 0.4533
0.3695        | 1.9849 | 4696 | 0.4487
0.2676        | 2.2187 | 5283 | 0.4825
0.255         | 2.4688 | 5870 | 0.4814
0.2851        | 2.7189 | 6457 | 0.4808

Framework versions

  • Transformers 4.40.2
  • Pytorch 2.2.2+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1