
Nous: Hermes 2 Mixtral 8x7B DPO

nousresearch/nous-hermes-2-mixtral-8x7b-dpo

Created Jan 16, 2024 · 32,768 context

Nous Hermes 2 Mixtral 8x7B DPO is the flagship Nous Research model, trained on top of the Mixtral 8x7B mixture-of-experts (MoE) LLM.

The model was trained on over 1,000,000 entries of primarily GPT-4-generated data, along with other high-quality data from open datasets across the AI landscape, achieving state-of-the-art performance on a variety of tasks.

#moe
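
The model can be queried through OpenRouter's OpenAI-compatible chat completions endpoint using the slug above. A minimal sketch in Python, assuming an `OPENROUTER_API_KEY` environment variable and the `requests` library; the prompt is illustrative:

```python
import os
import requests

# Call OpenRouter's OpenAI-compatible chat completions endpoint
# with this model's slug. Assumes OPENROUTER_API_KEY is set.
response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "nousresearch/nous-hermes-2-mixtral-8x7b-dpo",
        "messages": [
            {
                "role": "user",
                "content": "Explain mixture-of-experts models in two sentences.",
            }
        ],
    },
    timeout=60,
)
response.raise_for_status()

# Standard OpenAI-style response shape: first choice's message content.
print(response.json()["choices"][0]["message"]["content"])
```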

Recent activity on Hermes 2 Mixtral 8x7B DPO

Total usage per day on OpenRouter

[Chart: daily usage from Jun 20 to Aug 24; y-axis 3.5M to 14M]
