
microsoft/phi-4-reasoning-plus

Phi-4-reasoning-plus is a state-of-the-art open-weight reasoning model finetuned from Phi-4 using supervised fine-tuning on a dataset of chain-of-thought traces and reinforcement learning. The supervised fine-tuning dataset includes a blend of synthetic prompts and high-quality filtered data from public domain websites, focused on math, science, and coding skills as well as alignment data for safety and Responsible AI. The goal of this approach was to ensure that small, capable models were trained with data focused on high quality and advanced reasoning. Phi-4-reasoning-plus was additionally trained with reinforcement learning; hence, it has higher accuracy but generates on average 50% more tokens, resulting in higher latency.

Public
Pricing: $0.07 / $0.35 per Mtoken (input / output)
Precision: bfloat16
Context length: 32,768 tokens

Phi-4-reasoning-plus Model Card

Phi-4-reasoning Technical Report

Model Summary

Developers: Microsoft Research
Description: Phi-4-reasoning-plus is a state-of-the-art open-weight reasoning model finetuned from Phi-4 using supervised fine-tuning on a dataset of chain-of-thought traces and reinforcement learning. The supervised fine-tuning dataset includes a blend of synthetic prompts and high-quality filtered data from public domain websites, focused on math, science, and coding skills as well as alignment data for safety and Responsible AI. The goal of this approach was to ensure that small, capable models were trained with data focused on high quality and advanced reasoning. Phi-4-reasoning-plus was additionally trained with reinforcement learning; hence, it has higher accuracy but generates on average 50% more tokens, resulting in higher latency.
Architecture: Base model same as the previously released Phi-4; 14B parameters, dense decoder-only Transformer model
Inputs: Text, best suited for prompts in the chat format
Context length: 32k tokens
GPUs: 32 H100-80G
Training time: 2.5 days
Training data: 16B tokens, ~8.3B unique tokens
Outputs: Generated text in response to the input. Model responses have two sections: a reasoning chain-of-thought block followed by a summarization block
Dates: January 2025 – April 2025
Status: Static model trained on an offline dataset with cutoff dates of March 2025 and earlier for publicly available data
Release date: April 30, 2025
License: MIT

Intended Use

Primary Use Cases: Our model is designed to accelerate research on language models and to serve as a building block for generative AI-powered features. It is intended for general-purpose AI systems and applications (primarily in English) that require:

1. Memory/compute constrained environments.
2. Latency bound scenarios.
3. Reasoning and logic.

Out-of-Scope Use Cases: This model is designed and tested for math reasoning only. Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case, including the model's focus on English. Review the Responsible AI Considerations section below for further guidance when choosing a use case. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.

Data Overview

Training Datasets

Our training data is a mixture of Q&A and chat-format data in math, science, and coding. The chat prompts are sourced from filtered high-quality web data and optionally rewritten and processed through a synthetic data generation pipeline. We further include data to improve truthfulness and safety.

Benchmark Datasets

We evaluated Phi-4-reasoning-plus using the open-source Eureka evaluation suite and our own internal benchmarks to understand the model's capabilities. More specifically, we evaluate our model on:

Reasoning tasks:

  • AIME 2025, 2024, 2023, and 2022: Math olympiad questions.

  • GPQA-Diamond: Complex, graduate-level science questions.

  • OmniMath: Collection of over 4000 olympiad-level math problems with human annotation.

  • LiveCodeBench: Code generation benchmark gathered from competitive coding contests.

  • 3SAT (3-literal Satisfiability Problem) and TSP (Traveling Salesman Problem): Algorithmic problem solving.

  • BA Calendar: Planning.

  • Maze and SpatialMap: Spatial understanding.

General-purpose benchmarks:

  • Kitab: Information retrieval.

  • IFEval and ArenaHard: Instruction following.

  • PhiBench: Internal benchmark.

  • FlenQA: Impact of prompt length on model performance.

  • HumanEvalPlus: Functional code generation.

  • MMLU-Pro: Popular aggregated dataset for multitask language understanding.

Safety

Approach

Phi-4-reasoning-plus has adopted a robust safety post-training approach via supervised fine-tuning (SFT). This approach leverages a variety of both open-source and in-house generated synthetic prompts, with LLM-generated responses that adhere to rigorous Microsoft safety guidelines, e.g., User Understanding and Clarity, Security and Ethical Guidelines, Limitations, Disclaimers and Knowledge Scope, Handling Complex and Sensitive Topics, Safety and Respectful Engagement, Confidentiality of Guidelines and Confidentiality of Chain-of-Thoughts.

Safety Evaluation and Red-Teaming

Prior to release, Phi-4-reasoning-plus followed a multi-faceted evaluation approach. Quantitative evaluation was conducted with multiple open-source safety benchmarks and in-house tools utilizing adversarial conversation simulation. For qualitative safety evaluation, we collaborated with the independent AI Red Team (AIRT) at Microsoft to assess safety risks posed by Phi-4-reasoning-plus in both average and adversarial user scenarios. In the average user scenario, AIRT emulated typical single-turn and multi-turn interactions to identify potentially risky behaviors. The adversarial user scenario tested a wide range of techniques aimed at intentionally subverting the model's safety training, including groundedness, jailbreaks, harmful content (such as hate and unfairness, violence, sexual content, or self-harm), and copyright violations for protected material. We further evaluate models on Toxigen, a benchmark designed to measure bias and toxicity targeted towards minority groups.

Please refer to the technical report for more details on safety alignment.

Model Quality

Below is a high-level overview of the model quality on representative benchmarks. For the tables below, higher numbers indicate better performance:

| | AIME 24 | AIME 25 | OmniMath | GPQA-D | LiveCodeBench (8/1/24–2/1/25) |
|---|---|---|---|---|---|
| Phi-4-reasoning | 75.3 | 62.9 | 76.6 | 65.8 | 53.8 |
| Phi-4-reasoning-plus | 81.3 | 78.0 | 81.9 | 68.9 | 53.1 |
| OpenThinker2-32B | 58.0 | 58.0 | — | 64.1 | — |
| QwQ 32B | 79.5 | 65.8 | — | 59.5 | 63.4 |
| EXAONE-Deep-32B | 72.1 | 65.8 | — | 66.1 | 59.5 |
| DeepSeek-R1-Distill-70B | 69.3 | 51.5 | 63.4 | 66.2 | 57.5 |
| DeepSeek-R1 | 78.7 | 70.4 | 85.0 | 73.0 | 62.8 |
| o1-mini | 63.6 | 54.8 | — | 60.0 | 53.8 |
| o1 | 74.6 | 75.3 | 67.5 | 76.7 | 71.0 |
| o3-mini | 88.0 | 78.0 | 74.6 | 77.7 | 69.5 |
| Claude-3.7-Sonnet | 55.3 | 58.7 | 54.6 | 76.8 | — |
| Gemini-2.5-Pro | 92.0 | 86.7 | 61.1 | 84.0 | 69.2 |
| | Phi-4 | Phi-4-reasoning | Phi-4-reasoning-plus | o3-mini | GPT-4o |
|---|---|---|---|---|---|
| FlenQA [3K-token subset] | 82.0 | 97.7 | 97.9 | 96.8 | 90.8 |
| IFEval Strict | 62.3 | 83.4 | 84.9 | 91.5 | 81.8 |
| ArenaHard | 68.1 | 73.3 | 79.0 | 81.9 | 75.6 |
| HumanEvalPlus | 83.5 | 92.9 | 92.3 | 94.0 | 88.0 |
| MMLU-Pro | 71.5 | 74.3 | 76.0 | 79.4 | 73.0 |
| Kitab: No Context - Precision | 19.3 | 23.2 | 27.6 | 37.9 | 53.7 |
| Kitab: With Context - Precision | 88.5 | 91.5 | 93.6 | 94.0 | 84.7 |
| Kitab: No Context - Recall | 8.2 | 4.9 | 6.3 | 4.2 | 20.3 |
| Kitab: With Context - Recall | 68.1 | 74.8 | 75.4 | 76.1 | 69.2 |
| Toxigen Discriminative: Toxic category | 72.6 | 86.7 | 77.3 | 85.4 | 87.6 |
| Toxigen Discriminative: Neutral category | 90.0 | 84.7 | 90.5 | 88.7 | 85.1 |
| PhiBench 2.21 | 58.2 | 70.6 | 74.2 | 78.0 | 72.4 |

Overall, Phi-4-reasoning and Phi-4-reasoning-plus, with only 14B parameters, perform well across a wide range of reasoning tasks, outperforming significantly larger open-weight models such as the DeepSeek-R1-Distill-70B model and approaching the performance level of the full DeepSeek-R1 model. We also test the models on multiple new reasoning benchmarks for algorithmic problem solving and planning, including 3SAT, TSP, and BA-Calendar. These new tasks are nominally out-of-domain for the models, as the training process did not intentionally target these skills, but the models still show strong generalization to these tasks. Furthermore, when evaluating performance on standard general-ability benchmarks such as instruction following or non-reasoning tasks, we find that our new models improve significantly over Phi-4, despite the post-training being focused on reasoning skills in specific domains.

Usage

Inference Parameters

Inference works best with temperature=0.8, top_p=0.95, and do_sample=True. For more complex queries, set the maximum number of tokens to 32k to allow for longer chain-of-thought (CoT).

Phi-4-reasoning-plus has shown strong performance on reasoning-intensive tasks. In our experiments, we extended its maximum number of tokens to 64k, and it handled longer sequences with promising results, maintaining coherence and logical consistency over extended inputs. This makes it a compelling option to explore for tasks that require deep, multi-step reasoning or extensive context.
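
The following is a minimal sketch of applying these settings with Hugging Face transformers. It assumes the checkpoint is available under the repository id microsoft/Phi-4-reasoning-plus and that the bundled chat template injects the recommended system prompt shown in the next section; verify both before relying on it.

```python
# Minimal inference sketch with Hugging Face transformers.
# Assumptions: the checkpoint is published as "microsoft/Phi-4-reasoning-plus"
# and fits on the available GPUs; adjust max_new_tokens to your latency budget.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-4-reasoning-plus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the model is served in bfloat16
    device_map="auto",
)

messages = [{"role": "user", "content": "What is the derivative of x^2?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    do_sample=True,        # sampling is recommended over greedy decoding
    temperature=0.8,
    top_p=0.95,
    max_new_tokens=32768,  # leave room for a long chain-of-thought
)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```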

Input Formats

Given the nature of the training data, always use the ChatML template with the following system prompt for inference:

<|im_start|>system<|im_sep|>
Your role as an assistant involves thoroughly exploring questions through a systematic thinking process before providing the final precise and accurate solutions. This requires engaging in a comprehensive cycle of analysis, summarizing, exploration, reassessment, reflection, backtracing, and iteration to develop well-considered thinking process. Please structure your response into two main sections: Thought and Solution using the specified format: **\<think>** {Thought section} <\think> {Solution section}. In the Thought section, detail your reasoning process in steps. Each step should include detailed considerations such as analysing questions, summarizing relevant findings, brainstorming new ideas, verifying the accuracy of the current steps, refining any errors, and revisiting previous steps. In the Solution section, based on various attempts, explorations, and reflections from the Thought section, systematically present the final solution that you deem correct. The Solution section should be logical, accurate, and concise and detail necessary steps needed to reach the conclusion. Now, try to solve the following question through the above guidelines:<|im_end|>
<|im_start|>user<|im_sep|>
What is the derivative of x^2?<|im_end|>
<|im_start|>assistant<|im_sep|>
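
With this prompt, the model is expected to place its reasoning between think tags and then give the final answer. As an illustration only (the exact output format is an assumption based on the system prompt above, not part of the official model card), a small helper for separating the two sections might look like this:

```python
# Illustrative only: assumes the model closes its reasoning block with a
# literal "</think>" tag, as the system prompt above requests.
def split_response(text: str) -> tuple[str, str]:
    """Return (chain_of_thought, final_solution) from a raw model response."""
    reasoning, sep, solution = text.partition("</think>")
    if not sep:
        # No closing tag found; treat the whole response as the solution.
        return "", text.strip()
    return reasoning.replace("<think>", "").strip(), solution.strip()

# Example usage with a hypothetical response string:
raw = "<think>Differentiate x^2 using the power rule.</think>The derivative is 2x."
cot, answer = split_response(raw)
print(answer)  # -> The derivative is 2x.
```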

Responsible AI Considerations

Like other language models, Phi-4-reasoning-plus can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:

  • Quality of Service: The model is trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. Phi-4-reasoning-plus is not intended to support multilingual use.

  • Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.

  • Inappropriate or Offensive Content: These models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.

  • Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.

  • Election Information Reliability: The model has an elevated defect rate when responding to election-critical queries, which may result in incorrect or unauthoritative election critical information being presented. We are working to improve the model's performance in this area. Users should verify information related to elections with the election authority in their region.

  • Limited Scope for Code: The majority of Phi-4-reasoning-plus training data is based in Python and uses common packages such as typing, math, random, collections, datetime, itertools. If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.

Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Using safety services like Azure AI Content Safety that have advanced guardrails is highly recommended. Important areas for consideration include:

  • Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.

  • High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.

  • Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).

  • Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.

  • Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.