sbsa/cu130/: bitsandbytes-0.48.0 metadata and description

k-bit optimizers and matrix multiplication routines.

author_email Tim Dettmers <dettmers@cs.washington.edu>
classifiers
  • Development Status :: 4 - Beta
  • Environment :: GPU :: NVIDIA CUDA :: 11.8
  • Environment :: GPU :: NVIDIA CUDA :: 12
  • Environment :: GPU :: NVIDIA CUDA :: 13
  • Intended Audience :: Developers
  • Intended Audience :: Science/Research
  • Operating System :: POSIX :: Linux
  • Operating System :: Microsoft :: Windows
  • Programming Language :: C++
  • Programming Language :: Python :: Implementation :: CPython
  • Programming Language :: Python :: 3.9
  • Programming Language :: Python :: 3.10
  • Programming Language :: Python :: 3.11
  • Programming Language :: Python :: 3.12
  • Programming Language :: Python :: 3.13
  • Topic :: Scientific/Engineering :: Artificial Intelligence
description_content_type text/markdown
dynamic
  • license-file
keywords gpu,optimizers,optimization,8-bit,quantization,compression
license_expression MIT
license_file
  • LICENSE
maintainer_email Titus von Köller <titus@huggingface.co>, Matthew Douglas <matthew.douglas@huggingface.co>
project_urls
  • homepage, https://github.com/bitsandbytes-foundation/bitsandbytes
  • changelog, https://github.com/bitsandbytes-foundation/bitsandbytes/blob/main/CHANGELOG.md
  • docs, https://huggingface.co/docs/bitsandbytes/main
  • issues, https://github.com/bitsandbytes-foundation/bitsandbytes/issues
provides_extras
  • benchmark
  • docs
  • dev
  • test
requires_dist
  • torch<3,>=2.3
  • numpy>=1.17
  • packaging>=20.9
  • pandas; extra == "benchmark"
  • matplotlib; extra == "benchmark"
  • hf-doc-builder==0.5.0; extra == "docs"
  • bitsandbytes[test]; extra == "dev"
  • build<2,>=1.0.0; extra == "dev"
  • ruff==0.11.2; extra == "dev"
  • pre-commit<4,>=3.5.0; extra == "dev"
  • wheel<1,>=0.42; extra == "dev"
  • einops~=0.8.0; extra == "test"
  • lion-pytorch==0.2.3; extra == "test"
  • pytest~=8.3; extra == "test"
  • scipy<2,>=1.11.4; extra == "test"
  • transformers<5,>=4.30.1; extra == "test"
requires_python >=3.9
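
As a hedged illustration (not part of the metadata itself) of how the PEP 508 markers in requires_dist behave: dependencies tagged `extra == "test"` are only installed when that extra is requested, e.g. via `pip install "bitsandbytes[test]"`. The sketch below reads the same fields from an installed copy of the package using only the standard library:

```python
# Hedged sketch: inspect the installed distribution's metadata;
# requires bitsandbytes to be installed in the current environment.
from importlib.metadata import metadata, requires

md = metadata("bitsandbytes")
print(md["Requires-Python"])         # >=3.9
print(md.get_all("Provides-Extra"))  # ['benchmark', 'docs', 'dev', 'test']

# Requirements guarded by `extra == "test"` are skipped unless the
# extra is requested at install time.
for req in requires("bitsandbytes"):
    if 'extra == "test"' in req:
        print(req)
```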

Because this project isn't in the mirror_whitelist, no releases from root/pypi are included.

File: bitsandbytes-0.48.0-cp312-cp312-linux_aarch64.whl
Size: 2 MB
Type: Python Wheel
Python: 3.12

bitsandbytes


bitsandbytes enables accessible large language models via k-bit quantization for PyTorch. We provide three main features for dramatically reducing memory consumption during inference and training: LLM.int8() 8-bit inference, QLoRA-style 4-bit quantization, and 8-bit optimizers.

The library exposes quantization primitives for 8-bit and 4-bit operations through bitsandbytes.nn.Linear8bitLt and bitsandbytes.nn.Linear4bit, and 8-bit optimizers through the bitsandbytes.optim module.
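
As a minimal sketch of how these entry points fit together (layer sizes, dtypes, and hyperparameters here are illustrative assumptions, not values from this package):

```python
import torch
import bitsandbytes as bnb

# LLM.int8(): 8-bit linear layer; weights are quantized when the module
# is moved to a supported accelerator (e.g. .to("cuda")).
int8_layer = bnb.nn.Linear8bitLt(1024, 1024, has_fp16_weights=False)

# 4-bit linear layer of the kind used for QLoRA-style finetuning (NF4).
nf4_layer = bnb.nn.Linear4bit(
    1024, 1024, compute_dtype=torch.bfloat16, quant_type="nf4"
)

# 8-bit Adam from bitsandbytes.optim, a drop-in replacement
# for torch.optim.Adam.
model = torch.nn.Linear(1024, 1024)
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-4)
```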

System Requirements

bitsandbytes has the following minimum requirements for all platforms:

- Python 3.9+ (per `requires_python` above)
- PyTorch 2.3+ (per `requires_dist` above)

Accelerator support:

Note: this table reflects the status of the current development branch. For the latest stable release, see the document in the 0.48.0 tag.

Legend:

🚧 = In Development, 〰️ = Partially Supported, ✅ = Supported, ❌ = Not Supported

| Platform | Arch | Accelerator | Hardware Requirements | LLM.int8() | QLoRA 4-bit | 8-bit Optimizers |
|---|---|---|---|:---:|:---:|:---:|
| 🐧 Linux, glibc >= 2.24 | x86-64 | ◻️ CPU | AVX2 | ✅ | ✅ | ❌ |
| | | 🟩 NVIDIA GPU (`cuda`) | SM60+ minimum, SM75+ recommended | ✅ | ✅ | ✅ |
| | | 🟥 AMD GPU (`cuda`) | CDNA: gfx90a, gfx942; RDNA: gfx1100 | 🚧 | 🚧 | 🚧 |
| | | 🟦 Intel GPU (`xpu`) | Data Center GPU Max Series; Arc A-Series (Alchemist); Arc B-Series (Battlemage) | ✅ | ✅ | 〰️ |
| | | 🟪 Intel Gaudi (`hpu`) | Gaudi2, Gaudi3 | ✅ | 〰️ | ❌ |
| | aarch64 | ◻️ CPU | | ✅ | ✅ | ❌ |
| | | 🟩 NVIDIA GPU (`cuda`) | SM75+ | ✅ | ✅ | ✅ |
| 🪟 Windows 11 / Windows Server 2019+ | x86-64 | ◻️ CPU | AVX2 | ✅ | ✅ | ❌ |
| | | 🟩 NVIDIA GPU (`cuda`) | SM60+ minimum, SM75+ recommended | ✅ | ✅ | ✅ |
| | | 🟦 Intel GPU (`xpu`) | Arc A-Series (Alchemist); Arc B-Series (Battlemage) | ✅ | ✅ | 〰️ |
| 🍎 macOS 14+ | arm64 | ◻️ CPU | Apple M1+ | 🚧 | 🚧 | ❌ |
| | | ⬜ Metal (`mps`) | Apple M1+ | 🚧 | 🚧 | ❌ |
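
For the CUDA rows above, a quick hedged check (plain PyTorch, not a bitsandbytes API) of whether the local GPU meets the compute-capability thresholds from the table:

```python
import torch

# Compare the local device's compute capability against the table's
# SM60 minimum / SM75 recommended thresholds for CUDA builds.
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability()
    print(f"Detected compute capability: SM{major}{minor}")
    if (major, minor) >= (7, 5):
        print("Meets the recommended SM75+ target.")
    elif (major, minor) >= (6, 0):
        print("Meets the SM60 minimum.")
    else:
        print("Below SM60: not supported by the CUDA backend.")
else:
    print("No CUDA device visible to PyTorch.")
```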

:book: Documentation: https://huggingface.co/docs/bitsandbytes/main

:heart: Sponsors

The continued maintenance and development of bitsandbytes is made possible thanks to the generous support of our sponsors. Their contributions help ensure that we can keep improving the project and delivering valuable updates to the community.

Hugging Face · Intel

License

bitsandbytes is MIT licensed.

We thank Fabio Cannizzo for his work on FastBinarySearch, which we use for CPU quantization.

How to cite us

If you found this library useful, please consider citing our work:

QLoRA

@article{dettmers2023qlora,
  title={Qlora: Efficient finetuning of quantized llms},
  author={Dettmers, Tim and Pagnoni, Artidoro and Holtzman, Ari and Zettlemoyer, Luke},
  journal={arXiv preprint arXiv:2305.14314},
  year={2023}
}

LLM.int8()

@article{dettmers2022llmint8,
  title={LLM.int8(): 8-bit Matrix Multiplication for Transformers at Scale},
  author={Dettmers, Tim and Lewis, Mike and Belkada, Younes and Zettlemoyer, Luke},
  journal={arXiv preprint arXiv:2208.07339},
  year={2022}
}

8-bit Optimizers

@article{dettmers2022optimizers,
  title={8-bit Optimizers via Block-wise Quantization},
  author={Dettmers, Tim and Lewis, Mike and Shleifer, Sam and Zettlemoyer, Luke},
  journal={9th International Conference on Learning Representations, ICLR},
  year={2022}
}