## Model Details

This model is a mixed GGUF q2_k_s quantization of miromind-ai/MiroThinker-v1.5-30B generated by the intel/auto-round algorithm. The embedding and lm-head layers fall back to 8 bits, and non-expert layers fall back to 4 bits. Please refer to the *Generate the model* section for more details.
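The mixed-bit scheme described above can be illustrated with a small `layer_config` sketch. The layer names below are hypothetical placeholders chosen for illustration, not the model's actual module names:

```python
# Hypothetical layer names, illustrating the mixed-bit assignment only:
# embedding and lm-head fall back to 8 bits, non-expert linear layers
# to 4 bits, and routed expert layers are quantized to 2 bits.
layer_config = {
    "model.embed_tokens": {"bits": 8},
    "lm_head": {"bits": 8},
    "model.layers.0.self_attn.q_proj": {"bits": 4},
    "model.layers.0.mlp.experts.0.gate_proj": {"bits": 2},
}

# The distinct bit widths in the scheme:
print(sorted({cfg["bits"] for cfg in layer_config.values()}))  # [2, 4, 8]
```

The actual per-layer assignment used to produce this checkpoint is shown in the *Generate the model* section.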
Please follow the license of the original model.
## How To Use

### llama.cpp inference
./llama-cli -hf Intel/MiroThinker-v1.5-30B-gguf-q2ks-mixed-AutoRound
> Write a quick sort algorithm.
<think>
The user wants a quick sort algorithm. I should provide a clear, correct implementation. Let me think about the best way to do this.
Quick sort is a divide-and-conquer algorithm. The basic steps are:
1. Choose a pivot element (commonly the last element, but can be any)
2. Partition the array so that elements smaller than the pivot are on the left and elements greater are on the right
3. Recursively sort the left and right partitions
I should write a Python implementation since it's more readable and widely used. I'll use a clear, well-commented approach.
Let me structure the response:
- First, a brief explanation of the algorithm
- Then the actual code (both a quick sort function and a helper function for partitioning, or a single function with partition logic)
- Maybe include an example of how to use it
- Also mention the time complexity (average case O(n log n), worst case O(n^2))
Let me write the code:
```python
def quick_sort(arr, left=0, right=None):
    if right is None:
        right = len(arr) - 1
    if left < right:
        # Partition and get pivot index
        pivot_index = _partition(arr, left, right)
        # Recursively sort left half
        quick_sort(arr, left, pivot_index - 1)
        # Recursively sort right half
        quick_sort(arr, pivot_index + 1, right)

def _partition(arr, left, right):
    # Pivot is the last element
    pivot = arr[right]
    i = left - 1  # Index of smaller element
    for j in range(left, right):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    # Swap pivot to the correct position
    arr[i + 1], arr[right] = arr[right], arr[i + 1]
    return i + 1
```
This is a good, standard implementation. Let me make sure the explanation is clear. I should also mention that this is an in-place sort (modifies the original array). If the user wants a version that returns a new list, I should mention that too.
Let me add an example:
```python
# Example usage:
arr = [3, 6, 8, 10, 1, 2, 1]
quick_sort(arr)
print(arr) # [1, 1, 2, 3, 6, 8, 10]
```
I should also mention the pivot selection strategy - choosing the last element is simple but can lead to worst-case performance on already sorted arrays. Maybe I can add a note about that.
Let me refine the response to be comprehensive but concise.
</think>
Here's a Python implementation of the quick sort algorithm:
```python
def quick_sort(arr, left=0, right=None):
    """Sorts an array in place using the quick sort algorithm.

    Args:
        arr: List of comparable elements
        left: Starting index (default 0)
        right: Ending index (default len(arr)-1)
    """
    if right is None:
        right = len(arr) - 1
    if left < right:
        # Partition the array and get the pivot index
        pivot_index = _partition(arr, left, right)
        # Recursively sort elements before and after pivot
        quick_sort(arr, left, pivot_index - 1)
        quick_sort(arr, pivot_index + 1, right)

def _partition(arr, left, right):
    """Partitions the array around the pivot (last element)."""
    pivot = arr[right]
    i = left - 1  # Index of smaller element
    for j in range(left, right):
        if arr[j] <= pivot:
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    # Swap pivot to its correct position
    arr[i + 1], arr[right] = arr[right], arr[i + 1]
    return i + 1

# Example usage:
arr = [3, 6, 8, 10, 1, 2, 1]
quick_sort(arr)
print(arr)  # [1, 1, 2, 3, 6, 8, 10]
```
### Notes:
- **Time complexity**: Average case O(n log n), worst case O(n²) when the pivot is poorly chosen (e.g., already sorted array)
- **Space complexity**: O(log n) for recursion stack
- For a version that returns a new sorted list instead of modifying in-place, you can modify the function to return `sorted(arr)` using recursion or implement a separate function.
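A version that returns a new sorted list instead of modifying the input can be sketched as follows (same last-element pivot choice as above; `quick_sort_copy` is an illustrative name, not part of the model's output):

```python
def quick_sort_copy(arr):
    """Return a new sorted list, leaving the input list unchanged."""
    if len(arr) <= 1:
        return list(arr)
    pivot = arr[-1]  # last element as pivot, matching the in-place version
    smaller = [x for x in arr[:-1] if x <= pivot]
    larger = [x for x in arr[:-1] if x > pivot]
    return quick_sort_copy(smaller) + [pivot] + quick_sort_copy(larger)

data = [3, 6, 8, 10, 1, 2, 1]
print(quick_sort_copy(data))  # [1, 1, 2, 3, 6, 8, 10]
print(data)                   # input unchanged: [3, 6, 8, 10, 1, 2, 1]
```

This trades the in-place version's O(log n) auxiliary space for O(n) per recursion level, but avoids mutating the caller's data.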
## Generate the model

Please use auto-round 0.9.2 or >=0.9.5, as 0.9.3/0.9.4 have device bugs that make the quantization process very slow.
```python
import torch
from auto_round import AutoRound
from auto_round.utils import llm_load_model

model_name = "miromind-ai/MiroThinker-v1.5-30B"
model, tokenizer = llm_load_model(model_name, trust_remote_code=False, device="cpu")

# Embedding and lm-head layers fall back to 8 bits, non-expert linear
# layers to 4 bits; routed expert layers are quantized to 2 bits.
layer_config = {}
for n, m in model.named_modules():
    if isinstance(m, torch.nn.Embedding):
        layer_config[n] = {"bits": 8}
    if isinstance(m, torch.nn.Linear):
        if n == "lm_head":
            layer_config[n] = {"bits": 8}
            continue
        if "expert" in n and "shared_experts" not in n:
            layer_config[n] = {"bits": 2}
        else:
            layer_config[n] = {"bits": 4}

ar = AutoRound(model, tokenizer=tokenizer, iters=0, scheme="gguf:q2_k_s", layer_config=layer_config)
ar.quantize_and_save(format="gguf:q2_k_s", output_dir="./MiroThinker-v1.5-30B-gguf-q2ks-mixed")
```
## Ethical Considerations and Limitations
The model can produce factually incorrect output, and should not be relied on to produce factually accurate information. Because of the limitations of the pretrained model and the finetuning datasets, it is possible that this model could generate lewd, biased or otherwise offensive outputs.
Therefore, before deploying any applications of the model, developers should perform safety testing.
## Caveats and Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
Here are a couple of useful links to learn more about Intel's AI software:
- Intel Neural Compressor link
## Disclaimer
The license on this model does not constitute legal advice. We are not responsible for the actions of third parties who use this model. Please consult an attorney before using this model for commercial purposes.
## Cite

```bibtex
@article{cheng2025signroundv2,
  title={SignRoundV2: Closing the Performance Gap in Extremely Low-Bit Post-Training Quantization for LLMs},
  author={Cheng, Wenhua and Zhang, Weiwei and Guo, Heng and Shen, Haihao},
  journal={arXiv preprint arXiv:2512.04746},
  year={2025}
}
```